By: Jessie Kember, Ph.D., NCSP
This blog is a continuation of previous blogs exploring the problem-solving process. The purpose of this blog is to dive deeper into the fifth and final step of the problem-solving process: plan evaluation. In addition, the differences between assessment and evaluation, how much data are needed to conduct plan evaluation, and examples of individual and group data will be considered. As a brief review, the problem-solving process is outlined below:
- Problem identification
- Problem analysis
- Plan development
- Plan implementation
- Plan evaluation
Plan evaluation occurs after an intervention is implemented with integrity. The primary purpose of plan evaluation is to make a decision. Plan evaluation can occur at the individual, classroom, school, or district level. Put simply, plan evaluation answers the following question: did the intervention work?
What is the Purpose of Plan Evaluation?
The primary purpose of plan evaluation is decision-making. In order to evaluate the effectiveness of an intervention, a progress-monitoring process must be put in place that is rigorous, precise, and comprehensive. Following progress monitoring, the problem identified in step 1 of the problem-solving process (i.e., the difference between what is happening and what is expected) is revisited. Plan evaluation involves comparing prior student performance with post-intervention performance. There are three possible outcomes of this comparison: the discrepancy has decreased, the discrepancy has increased, or the discrepancy has remained the same. If the discrepancy has increased or remained the same, it is important to confirm whether the intervention was implemented with fidelity. If it was, the team can decide whether the intervention should continue; if it was not, adjustments need to be made to ensure treatment integrity before the intervention's effectiveness can be judged. These outcomes determine the next steps following plan evaluation: continuing with the intervention as is, modifying the intervention, or implementing a different intervention.
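To make these decision rules concrete, here is a minimal sketch in Python, assuming the discrepancy is a simple number (expected performance minus observed performance); the function, inputs, and suggested actions are illustrative only, not part of any FastBridge Learning® tool.

```python
# A minimal sketch of plan-evaluation decision rules; all names and
# messages are hypothetical, not part of any FastBridge tool.
def evaluate_plan(baseline_discrepancy: float,
                  current_discrepancy: float,
                  implemented_with_fidelity: bool) -> str:
    """Suggest a next step from the change in the performance
    discrepancy (expected performance minus observed performance)."""
    if not implemented_with_fidelity:
        # Without treatment integrity, the data cannot show whether
        # the intervention itself worked.
        return "adjust implementation to ensure fidelity, then re-evaluate"
    if current_discrepancy < baseline_discrepancy:
        return "discrepancy decreased: continue the intervention as is"
    if current_discrepancy > baseline_discrepancy:
        return "discrepancy increased: implement a different intervention"
    return "discrepancy unchanged: modify the intervention"

# Example: expected 60 words per minute; the student read 35 at baseline
# (discrepancy 25) and 48 after the intervention (discrepancy 12).
print(evaluate_plan(25.0, 12.0, implemented_with_fidelity=True))
```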
What is the Difference between Assessment and Evaluation?
It is important to acknowledge that assessment and evaluation are not the same and that they serve different purposes. Assessment is the process of documenting knowledge and skills in measurable terms (i.e., collecting data), while evaluation is the process of making judgments about worth or quality based on criteria and evidence (e.g., data). For example, assessment is the collection of data on early reading skill development using a FastBridge Learning® tool; evaluation is the comparison of those data with a standard in order to judge whether students are on track in their early reading skill development or in need of more intensive intervention or instruction.
When Can Plan Evaluation Occur?
Regardless of the approach used for progress monitoring, effective plan evaluation is systematic and allows for decision-making. In addition, a plan to evaluate intervention effectiveness should be developed prior to intervention implementation. In other words, decision rules should be established before intervention effectiveness is evaluated. Depending on its timing, plan evaluation can be formative or summative. Formative plan evaluation occurs after initial implementation, while the intervention is still in place. Summative plan evaluation occurs after the intervention has been implemented and completed. Whether formative or summative, as mentioned in a previous blog, it is only possible to conclude that observed changes in student performance were the result of the intervention when the plan has been followed with integrity. More specifically, progress monitoring should have occurred at least monthly, but ideally weekly, and approximately 9-12 data points should have been collected before plan evaluation occurs.
Why 9-12 data points? The number of data points affects the reliability of the information. With only one or two data points it is possible to identify some limited features of student performance, but additional data improve understanding of the consistency and trend of that performance. As a general rule, the more data points available, the more reliable the information.
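As a rough illustration of a decision rule built on this guidance, the sketch below checks whether a progress-monitoring record contains enough data, collected frequently enough, to support plan evaluation; the 9-point minimum and roughly monthly spacing come from the guidance above, while the function and parameter names are hypothetical.

```python
from datetime import date, timedelta

def ready_for_evaluation(scores_by_date: dict,
                         min_points: int = 9,
                         max_gap_days: int = 31) -> bool:
    """Return True when at least min_points scores exist and no two
    consecutive collection dates are more than max_gap_days apart."""
    if len(scores_by_date) < min_points:
        return False
    dates = sorted(scores_by_date)
    # Monitoring should occur at least monthly; weekly is ideal.
    return all((b - a).days <= max_gap_days
               for a, b in zip(dates, dates[1:]))

# Example: nine weekly scores collected starting January 8, 2024.
start = date(2024, 1, 8)
scores = {start + timedelta(weeks=i): 30 + 2 * i for i in range(9)}
print(ready_for_evaluation(scores))  # True
```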
In addition to intervention data, other factors play an important role in evaluation. Given that the purpose of evaluation is decision-making, it is important to acknowledge that data cannot make a decision; instead, data should inform decisions. Evaluation decisions should therefore take into account the data as well as other sources of information, such as the expected performance of students at the same grade or skill level and the instructional setting.
How Can FastBridge Learning® Help Me with Plan Evaluation?
FastBridge Learning® offers multiple report options that allow users to evaluate program effectiveness (i.e., intervention and monitoring) at various levels within a Multi-Tiered System of Support (MTSS) framework. For example, at the classroom, grade, school, and district levels, the Group Growth Report displays students’ observed and predicted performance, allowing users to decide whether different instruction or additional intervention is needed. The Group Growth Report is particularly helpful in reviewing the effects of Tier 1 core instruction. FastBridge Learning® also offers reports at the individual student level (e.g., the Progress Monitoring and Student At-A-Glance Reports) that allow users to engage in plan evaluation. Here is a sample graph from a Progress Monitoring report.
The Progress Monitoring report provides the following information about an individual student’s progress:
- Trend Line: Shows the general direction of the student’s growth
- Goal Line: Shows the direction and amount of growth needed to reach the goal
The report shows the student’s assessment data, and the team uses those data to evaluate whether the plan was successful. In the above example, the student did not make much progress with the first intervention, and the team’s evaluation was that a different intervention was needed. A new intervention was put in place, but there are not yet enough data to evaluate whether it is working.
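To show how the trend-versus-goal comparison might work numerically, here is a minimal sketch that fits a least-squares trend line to weekly monitoring scores and compares its slope with the slope of the goal line; the scores, goal, and timeline are all hypothetical.

```python
def ols_slope(xs, ys):
    """Ordinary least-squares slope of ys regressed on xs."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

weeks = list(range(9))                         # nine weekly data points
scores = [32, 33, 35, 34, 36, 38, 37, 39, 41]  # e.g., words read correctly

# Goal line: from the baseline score to a goal of 55 by week 18.
goal_slope = (55 - scores[0]) / 18
trend_slope = ols_slope(weeks, scores)

if trend_slope >= goal_slope:
    print("Trend meets or exceeds the goal line: continue the plan as is.")
else:
    print("Trend falls below the goal line: modify or change the intervention.")
```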
FastBridge Learning® reports can also be used to evaluate group progress. The Group Growth Report includes information about improvements for all students in a class, grade, school, or district. At the top of this report is a bar graph showing group performance over time. Here is an example for one school:
This report does not include specific goal lines for group performance, but the team can evaluate the plan’s success by comparing the percentages of students in each category to the goals set for the school. Using data to evaluate group outcomes is as important as looking at individual progress. The most effective outcomes result from the combination of evidence-based Tier 1 core instruction plus strategic or intensive interventions based on individual student needs.
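As a small sketch of that group-level comparison, the code below checks observed benchmark-category percentages against school-set targets; the categories, percentages, and targets are all hypothetical.

```python
# Observed percentage of students in each benchmark category versus the
# school's targets; every number here is hypothetical.
observed = {"low risk": 62.0, "some risk": 25.0, "high risk": 13.0}
targets = {"low risk": 70.0, "some risk": 20.0, "high risk": 10.0}

for category, target in targets.items():
    actual = observed[category]
    # More students at low risk is good; fewer at some or high risk is good.
    met = actual >= target if category == "low risk" else actual <= target
    print(f"{category}: {actual:.0f}% observed vs. {target:.0f}% target "
          f"({'met' if met else 'not met'})")
```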
The FastBridge Learning® system is designed to be used as part of an MTSS that utilizes the problem-solving steps. Such problem solving is best done by teams of educators at the grade, school, or district level. Importantly, the problem-solving steps focus on understanding why students are struggling and what needs to happen so they can be successful. Plan evaluation involves the actual decisions that a team makes based on student data. Plan evaluation is the final step in the problem-solving process; however, the model is designed to be used continuously, such that once an initial problem is solved for students, different problems can be identified and addressed.