Leveraging a data-driven approach for pedagogical decision making.
Renaissance Learning is a leading educational technology company whose products are used by K-12 schools nationwide. A few years ago, the company designed a built-in tool, the Goal-Setting Wizard, to help teachers use their data and research to set appropriate goals for students. However, the tool is rarely used.
Our CMU capstone team partnered with Renaissance to redefine the problem and create a solution that empowers teachers to use data-driven approaches when setting student goals. Over the six-month project, the solution was designed, prototyped, tested, and shipped.
A few years ago, Renaissance Learning designed a built-in tool, the Goal-Setting Wizard, in their education product to help teachers use their data and research to set appropriate goals for students. However, only 5% of their users use this tool, and even fewer use it correctly.
Setting goals is a fundamental aspect of assessing a student’s progress. By setting a goal for a student, a teacher creates a criterion against which to measure student growth. With this criterion, the teacher can measure the effectiveness of their interventions, decide how to allocate resources for students in need of support, and consider the types of interventions that the student needs. In addition to providing feedback about intervention efficacy, challenging yet reasonable goals can engage students and promote growth. Renaissance Learning’s psychometric research and years of student data are extremely valuable in helping teachers make these decisions. However, the tool’s low usage undermines its value for both the company and its users.
To direct this project, we used a human-centered research approach including contextual inquiry, interviews, and cognitive task analysis. We approached classroom teachers, administrators, and intervention specialists to understand goal-setting workflows and gain insights about goal-setting processes within a Multi-Tiered System of Supports (MTSS). These user research results helped us understand users’ pain points and frustrations, and the resulting insights gave us a whole new angle from which to view and reframe the problem.
We designed a product that empowers teachers to use data when setting goals for the students in their classes. Vittore’s solution pairs a tutorial on goal-setting best practices with an “in-vivo” pedagogical agent that guides users through their goal-setting processes.
The tutorial trains teachers on goal-setting best practices, how to use the goal-setting interface, and how the pedagogical agent can aid their decisions. This tutorial is designed to provide easily-understood, actionable information about goal-setting with data.
The pedagogical agent provides alerts when users deviate from best practices and visualizes student data to ease interpretation of these metrics. The alerts appear as unobtrusive recommendations at relevant times in teachers’ workflows, suggesting improvements to their practices and providing easily accessed reference materials.
Because users differ in their levels of data literacy, some may not fully understand the data metrics used in the product. Even so, nearly all users can apply those metrics to make an appropriate decision.
The majority of the research that informed the direction of this project stemmed from two human-centered research methods: contextual inquiries and think-alouds. We interviewed educators with a range of roles (administrators, classroom teachers, and intervention specialists), levels of teaching experience, geographic areas, and experience with data. More specifically, we conducted semi-structured interviews with each subject about their work and work tools in the context in which the subject completes that work. This method is well suited to understanding the work environment and culture that may influence the subject.
We used coding and affinity diagramming to analyze data from the contextual inquiries.
Coding is a process of categorizing quotes and notes from raw interview data into relevant and informative codes determined by the protocol and guiding research questions. Some examples of our codes include: GS (Goal Setting), IoR (Interpretation of Reports), and CwS (Communication with Students).
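As a minimal illustrative sketch (not the team’s actual tooling), coded interview data can be represented as tagged records and tallied per code. The code labels below (GS, IoR, CwS) come from the project’s codebook; the excerpt texts and the `tally_codes` function are hypothetical placeholders:

```python
from collections import Counter

# Hypothetical coded interview notes: (code, excerpt) pairs.
# Code labels GS (Goal Setting), IoR (Interpretation of Reports),
# and CwS (Communication with Students) are from the project;
# the excerpts themselves are invented for illustration.
notes = [
    ("GS", "I set reading goals at the start of each quarter."),
    ("IoR", "The growth report numbers are hard to interpret."),
    ("GS", "I rarely open the wizard when setting goals."),
    ("CwS", "I go over score trends with each student."),
]

def tally_codes(coded_notes):
    """Count how many excerpts fall under each code."""
    return Counter(code for code, _ in coded_notes)

counts = tally_codes(notes)
print(counts)  # frequency of each code across the notes
```

A tally like this is only a starting point; in practice the coded quotes themselves feed the affinity diagramming described below.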
Affinity diagramming involves grouping raw data from contextual inquiries into summative, hierarchical theme and insight categories. This is a bottom-up process in which categories emerge organically from the interview notes. It allowed us to consider how the findings fit into categories of instructional design, including educational goals, instruction, and assessment. We also marked frustrations and possible design ideas in the affinity diagram.
While building the affinity diagram, we used graphic models to organize interview findings and uncover user behavior patterns. In this project, we used cultural (value) models and flow models. The value model helped identify value points, expectations, and influences between stakeholders. Sequence flow models were used for a detailed view of an intervention specialist’s workflow.
The sequence flow model shows the intents and interactions between people and tools along their workflow in great detail. In this model, we narrowed our scope and analyzed the whole intervention and progress-monitoring process from an intervention teacher’s perspective.
A cultural model reveals the value points, expectations, and information flows between administrators, teachers, students, and other roles in school education. For this project, expectations of tools were also considered. In our cultural model, we found that school education culture influences the following aspects of progress monitoring and intervention.
Insights are behavior patterns, breakdowns, and pain points generated from the affinity diagrams and models, used to guide product development. We compared the models to best-practice workflows to identify breakdowns and clustered user interview notes to find pain points.
While the team generated ten key insights from the spring research, the four insights that most guided the product are highlighted in this section.
Personas are compiled from the behaviors, motivations, and needs of the many users whom we interviewed. Based on the work completed this spring, we identified four main personas that represent the key stakeholders’ use and views of data: Mr. Wisdom, Miss Rainey, Ms. Mitchell, and Mrs. Horan. Mr. Wisdom, our “data expert,” represents the teacher who understands data well, sees its value in classroom practices, and enjoys helping other teachers to use data. Ms. Mitchell, our “data enthusiast,” sees data as useful for predicting her students’ performance but feels pressured to get everyone to proficient levels. Miss Rainey, our “data skeptic,” prefers observational data about her students and believes numbers miss important information about students. Mrs. Horan, our “data advocate,” relies on data to demonstrate school performance and growth each year and establishes decision rules for using data.
We brainstormed product features based on those personas and tested those ideas with other users.
Rapid prototyping cycles were used to assess learning outcomes of the product, to evaluate usability, and to collect user preferences through user-centered design methods. Seven iterations of prototyping were conducted, with foci on content, data visualizations, actionability, usability, and visual preferences to make research-based decisions for the final product. We categorized those seven rounds into three phases.