When I visit schools as an RTI/MTSS consultant and talk with teachers about Tier 1/classroom academic interventions, I often hear frustration over the difficulty of collecting and interpreting data to monitor student progress. Yet data is critically important: it ‘tells the story’ of the academic or behavioral intervention, answering such central questions as:
- what specific skills or behaviors does the student find challenging?
- what is the student’s baseline or starting point?
- what outcome goal would define success for this student?
- has the student reached the goal?
If the information required to answer any of these questions is missing, the data story becomes garbled and teachers can find themselves unsure about the purpose and/or outcome of the intervention.
While following a guide does not eliminate all difficulties in tracking Tier 1/classroom interventions, these 7 steps will help the educators you work with ask the right questions, collect useful data and arrive at meaningful answers at Tier 1.
STEP 1: What skill or behavior is being measured?
The first step in setting up a plan to monitor a student is to choose the specific skill or behavior to measure. Your ‘problem-identification’ statement should define that skill or behavior in clear, specific terms.
Keep in mind that a clear problem definition is a necessary starting point for developing a monitoring plan[1]: “If you can’t name the problem, you can’t measure it.”
STEP 2: What data-collection method will best measure the target skill or behavior?
Next, select a valid, reliable and manageable way to collect data on the skill or behavior the instructor has targeted for intervention. Data sources used to track student progress on classroom interventions should be brief, valid measures of the target skill, and sensitive to short-term student gains.[2]
Teachers can choose from a range of teacher-friendly data-collection tools, such as rubrics, checklists, Daily Behavior Report Cards (DBRCs), Curriculum-Based Measures (CBMs), teacher logs, and student work products.
STEP 3: How long will the intervention last?
When planning a classroom intervention, the teacher should choose an end-date when he/she will review the progress-monitoring data and decide whether the intervention is successful.
A good practice is to run an academic intervention for at least 6-8 instructional weeks before evaluating its effectiveness. Student data can vary significantly from day to day[3]; allowing 6-8 weeks for data collection permits the teacher to gather enough data points to judge the intervention’s impact with greater confidence.
STEP 4: What is the student’s baseline performance?
Before launching the intervention, the teacher will use the selected data-collection tool to record baseline data reflecting the student’s current performance. Baseline data represents a starting point that allows the teacher to calculate precisely any progress the student makes during the intervention.
Because student data can be variable, the instructor should strive to collect at least 3 data points before starting the intervention and average them to calculate the baseline.
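The baseline arithmetic can be sketched in a few lines of Python. This is an illustrative helper, not part of the article; the function name and the sample scores (hypothetical CBM oral-reading-fluency counts) are assumptions.

```python
def compute_baseline(observations):
    """Average at least 3 pre-intervention data points into a baseline."""
    if len(observations) < 3:
        raise ValueError("Collect at least 3 data points before starting the intervention.")
    return sum(observations) / len(observations)

# Hypothetical example: three oral-reading-fluency scores
# (words read correctly per minute) collected before the intervention.
baseline = compute_baseline([42, 38, 46])
print(baseline)  # 42.0
```

Averaging smooths out day-to-day variability, so the starting point is not distorted by a single unusually good or bad session.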
STEP 5: What is the student’s outcome goal?
Next, the teacher sets a post-intervention outcome goal that defines the student’s expected performance on the target skill or behavior if the intervention is successful (e.g., after 6-8 weeks). Setting a specific outcome goal for the student is a critical step, as it allows educators to judge the intervention’s effectiveness.
Teachers can use several sources to calculate an outcome goal[4]:
- Benchmark Norms. When using academic CBMs with benchmark norms, those grade-level norms can help the instructor set a goal for the student.
- Classroom Norms. When measuring an academic skill for which no benchmark norms are available, the teacher might instead decide to compile classroom norms (i.e., sampling the entire class or a subgroup of the class) and use those group norms to set an outcome goal.
Real-world Example:
A teacher with a student who frequently writes incomplete sentences might collect writing samples from a small group of ‘typical’ student writers in the class, analyze those samples to calculate percentage of complete sentences, and use this peer norm (e.g., 90 percent complete sentences) to set a sentence-writing outcome goal for that struggling writer.
- Teacher-defined Performance Goal (Criterion Mastery). Sometimes, the instructor must write an outcome goal — but will have access to neither benchmark norms nor classroom norms for the skill or behavior being measured. In this case, the teacher may be able to use his or her own judgment to define a meaningful outcome goal.
Real-world Example:
A math instructor wishes to teach a student to follow a 7-step procedural checklist when solving math word problems. The data source in this example is the checklist, and the teacher sets as the outcome goal that — when given a word problem — the student will independently follow all steps in the teacher-supplied checklist in the correct order.
TIP: For a student with a large academic deficit, the teacher may not be able to close that skill gap entirely within one 6-8-week intervention cycle. In this instance, the instructor should instead set an ambitious ‘intermediate goal’ that, if accomplished, will demonstrate the student is clearly closing the academic gap with peers. It is not unusual for students with substantial academic delays to require several successive intervention cycles with intermediate goals before they close the skill gap sufficiently to catch up with their grade-level peers.
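The classroom-norms approach from the sentence-writing example above can be sketched as a short calculation. The peer sample counts below are hypothetical; the article specifies only the general method (sample typical peers, compute percentage of complete sentences, use that peer norm as the goal).

```python
# Hypothetical peer writing samples: (complete_sentences, total_sentences)
# drawn from a small group of 'typical' student writers in the class.
peer_samples = [(18, 20), (19, 20), (17, 20)]

# Percentage of complete sentences for each peer sample.
percentages = [100 * complete / total for complete, total in peer_samples]

# The peer norm (here, the group average) becomes the outcome goal.
outcome_goal = sum(percentages) / len(percentages)
print(round(outcome_goal))  # 90
```

With these illustrative numbers the peer norm works out to 90 percent complete sentences, matching the goal used in the article’s example.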
STEP 6: How often will data be collected?
The more frequently the teacher collects data, the more quickly she/he will be able to judge whether an intervention is effective.[5] This is because more data points make improvement easier to spot and increase the instructor’s confidence in the overall direction, or ‘trend,’ of the data.
Ideally, teachers should strive to collect data at least weekly for the duration of the intervention period. If that is not feasible, student progress should be monitored no less than twice per month.
STEP 7: How does the student’s actual performance compare with the outcome goal?
Once the teacher has created a progress-monitoring plan for the student, she/he puts that plan into action. At the end of the pre-determined intervention period (e.g., in 6 weeks), the teacher reviews the student’s cumulative progress-monitoring data, compares it to the outcome goal and judges the effectiveness of the intervention. Here are the decision rules:
- Outcome goal met. If the student meets the outcome goal, the intervention is a success. The teacher may decide that the intervention is no longer necessary and discontinue it. Or she/he may choose to continue the present intervention for an additional period because the student still appears to benefit from it.
- Clear progress but outcome goal not met. If the student fails to meet the outcome goal, but the teacher sees clear signs that the student is making progress, that educator might decide that the intervention shows promise. In this case, the next step would be to alter the existing intervention in some way(s) to intensify its effect. For example, the teacher could meet more frequently with the student, meet for longer sessions, shrink the group size (if the intervention is group-based), etc.
- Little or no progress observed. If the student fails to make meaningful progress on the intervention, the teacher’s logical next step will be to replace the current intervention plan with a new strategy. The instructor may also decide to refer the student to receive additional RTI/MTSS academic support.
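The three decision rules above amount to a simple comparison of the student’s final performance against the baseline and the outcome goal. This is a minimal sketch; the threshold for “clear progress” (here, any score above baseline) is an assumption, since the article leaves that judgment to the teacher.

```python
def intervention_decision(baseline, final, goal):
    """Apply the Step 7 decision rules to end-of-cycle data."""
    if final >= goal:
        # Outcome goal met: success.
        return "goal met: discontinue, or continue if still beneficial"
    if final > baseline:
        # Clear progress but goal not met: the plan shows promise.
        return "promising: intensify the current intervention"
    # Little or no progress: change course.
    return "no progress: replace the strategy or refer for added RTI/MTSS support"

# Hypothetical scores: baseline 40, outcome goal 90.
print(intervention_decision(40, 92, 90))
print(intervention_decision(40, 55, 90))
print(intervention_decision(40, 39, 90))
```

In practice the teacher would weigh the whole series of data points (and its trend), not just the final score, but the branching logic mirrors the three decision rules.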
Key Takeaway: Let Data Be Your Guide
The goal in monitoring any classroom intervention is to let the data guide you in understanding a learner’s unique story. When teachers can clearly define a student’s specific academic or behavioral challenge, collect data that accurately tracks progress, and calculate baseline level and outcome goal as points of reference to judge intervention success, the student’s story will be truly told.
[1] Upah, K. R. F. (2008). Best practices in designing, implementing, and evaluating quality interventions. In A. Thomas & J. Grimes (Eds.), Best practices in school psychology V (pp. 209-223). Bethesda, MD: National Association of School Psychologists.
[2] Howell, K. W., Hosp, J. L., & Kurns, S. (2008). Best practices in curriculum-based evaluation. In A. Thomas & J. Grimes (Eds.), Best practices in school psychology V (pp. 349-362). Bethesda, MD: National Association of School Psychologists.
[3] Hixson, M. D., Christ, T. J., & Bruni, T. (2014). Best practices in the analysis of progress monitoring data and decision making. In A. Thomas & P. Harris (Eds.), Best practices in school psychology VI (pp. 343-354). Silver Spring, MD: National Association of School Psychologists.
[4] Shapiro, E. S. (2008). Best practices in setting progress-monitoring goals for academic skill improvement. In A. Thomas & J. Grimes (Eds.), Best practices in school psychology V (pp. 141-157). Bethesda, MD: National Association of School Psychologists.
[5] Filderman, M. J., & Toste, J. R. (2018). Decisions, decisions, decisions: Using data to make instructional decisions for struggling readers. Teaching Exceptional Children, 50(3), 130-140.