By: Yvette Arañas
With so many state assessments, screenings, progress monitoring measures, and classroom exams in place, there is no doubt that students go through a lot of testing. One question we need to ask ourselves is whether we are testing just for the sake of testing. Are we using the data from our assessments appropriately? It is easy to get so caught up in collecting data from students that we forget what the purpose of testing is. Testing shouldn’t be done for its own sake; instead, results from assessments should be used to make important educational decisions, such as identifying students with difficulties, allocating resources, determining what students have learned, and informing instruction. (See the "Too Much Testing?" blog post from January 7, 2016.)
Using Screening Data to Identify Struggling Students
Screening is done for all students in a school, usually at the beginning, middle, and end of a school year. The purpose of screening is to identify students who have difficulties in a particular area, such as reading, math, or behavior.
Using data from screening assessments is one way schools can determine how to allocate their resources for interventions. Soon after collecting the screening data, we suggest that schools hold data team meetings for each grade level. Such meetings should include classroom teachers, the school psychologist (or someone trained to interpret assessment data), interventionists, literacy and math coaches, and the building principal. During these meetings, the team typically sets a criterion to determine which students most need targeted services (e.g., students flagged as High Risk on CBMreading should be considered for small-group reading interventions). The team then examines the screening data and determines which students meet the criterion.
It is best practice to use as many sources of data as possible to get the most accurate picture of a student’s ability. Teachers might use more than one screening assessment to make their educational decisions. For example, a data team might decide to provide intensive services to students who were High Risk on CBMreading and scored at or below the 25th percentile on a computerized broad reading assessment, capturing both reading rate and general reading skills. In another example, a student who read fewer than 80% of the words correctly on the CBMreading screener might also be given a decoding screener to determine whether phonics interventions would be appropriate for the student.
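To make the combined criterion concrete, here is a minimal Python sketch of a two-measure screening rule. The data layout, field names, and cutoffs are illustrative assumptions for this example, not FastBridge Learning’s actual data format or decision logic.

```python
# Hypothetical sketch of a multi-measure screening rule. Each student
# record is assumed to hold a CBMreading risk level and a broad-reading
# percentile; these field names and the 25th-percentile cutoff are
# illustrative, not FastBridge's actual format.

def flag_for_intensive_services(students, percentile_cutoff=25):
    """Return students who are High Risk on CBMreading AND at or below
    the given percentile on a broad reading assessment."""
    return [
        s for s in students
        if s["cbm_reading_risk"] == "High Risk"
        and s["broad_reading_percentile"] <= percentile_cutoff
    ]

roster = [
    {"name": "Student A", "cbm_reading_risk": "High Risk", "broad_reading_percentile": 18},
    {"name": "Student B", "cbm_reading_risk": "Some Risk", "broad_reading_percentile": 22},
    {"name": "Student C", "cbm_reading_risk": "High Risk", "broad_reading_percentile": 40},
]

for student in flag_for_intensive_services(roster):
    print(student["name"], "-> consider small-group reading intervention")
# Only Student A meets both criteria.
```

Requiring a student to meet both criteria, rather than either one, reduces the chance of flagging students on the basis of a single weak test day.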
Using Progress Monitoring Data to Inform Instruction
While providing targeted interventions to struggling students, it is important to make sure that their progress is being monitored. Teachers can use progress monitoring graphs to determine whether an intervention is working for the student. Luckily, FastBridge Learning can graph progress monitoring data automatically. Teachers can see whether the student’s scores are trending upward (suggesting that the student is improving), trending downward (the student is still struggling and performing worse), or flat (the student is showing no change).
Graphs like the ones in FastBridge Learning also show where a student is performing relative to his or her individualized goal. If a student’s most recent scores consistently fall below the goal line, the teacher might want to consider trying another intervention. If the scores consistently fall above the goal line, the student might be ready for a higher goal, or for the next step in a series of interventions (e.g., moving on to phonics interventions after mastering phonemic awareness). If the scores are inconsistent (some above the goal, some below), it might be best to continue the same intervention for a few more weeks. A simple version of these decision rules is sketched below.
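Here is a hedged Python sketch of the three-way decision rule just described. Treating the last four weekly data points as "recent" is an assumption for illustration; it is not FastBridge Learning’s built-in logic, and a real data team would weigh the graph alongside other information.

```python
# Illustrative decision rule comparing recent progress monitoring
# scores to the goal-line value expected for each week. The number of
# "recent" points and the all-or-nothing thresholds are assumptions.

def intervention_decision(recent_scores, goal_line_values):
    """Suggest a next step based on how recent scores sit
    relative to the individualized goal line."""
    above = sum(s > g for s, g in zip(recent_scores, goal_line_values))
    below = sum(s < g for s, g in zip(recent_scores, goal_line_values))
    n = len(recent_scores)
    if below == n:
        return "consider trying a different intervention"
    if above == n:
        return "consider raising the goal or moving to the next intervention step"
    return "continue the current intervention and keep monitoring"

# Example: four weekly scores, all falling short of the goal line.
print(intervention_decision([22, 24, 23, 25], [28, 30, 32, 34]))
# -> consider trying a different intervention
```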
There are a few best practices to keep in mind when using progress monitoring data. We suggest collecting data weekly and maintaining an intervention for at least eight weeks of progress monitoring before making a change, as some interventions take time to work. It is also important to use progress monitoring tools that measure the same skill that is being taught. For instance, a student receiving small-group interventions for math fact fluency should be monitored with a math automaticity task (e.g., CBMmath Automaticity).
Using Data from Unit Tests to Determine What Was Learned
In addition to screening and progress monitoring, students often take tests at the end of a unit or chapter (e.g., SBmath). These tests are usually created by teachers or drawn from the adopted curriculum’s test banks. They are useful for determining whether students have learned the specific content that was taught in class, and they can also show whether students are meeting course objectives or learning standards. If many students did poorly on a particular question, the teacher might want to spend some time re-teaching the content that the question addressed.
Teachers can also learn from test data whether a question should be dropped or revised for future versions of the test. If all students answered a question incorrectly, the teacher might need to review the question for errors and/or decide that it should not count toward the overall test score. A teacher might also want to determine whether a test is too easy (e.g., more than 90% of the students answered almost all the questions correctly); if so, the teacher might consider changing the format of the test (e.g., switching from fill-in-the-blank to multiple-choice questions) or adding questions with a mix of answer formats. A basic item analysis along these lines is sketched below.
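For illustration, here is a minimal Python item analysis that computes the percent of students answering each question correctly and flags items worth a second look. The data layout and the 50% re-teaching cutoff are hypothetical; the zero-correct check mirrors the example in the text.

```python
# Minimal item-analysis sketch. `responses` is a list of per-student
# lists of booleans (True = answered correctly); this layout and the
# flagging cutoffs are assumptions for the example.

def item_analysis(responses):
    """Return the percent of students answering each question correctly."""
    n_students = len(responses)
    n_items = len(responses[0])
    return [
        sum(student[i] for student in responses) / n_students * 100
        for i in range(n_items)
    ]

responses = [
    [True, False, True, True],   # student 1
    [True, False, True, True],   # student 2
    [False, False, True, True],  # student 3
]

for i, pct in enumerate(item_analysis(responses), start=1):
    if pct == 0:
        print(f"Q{i}: no one answered correctly -- review the item for errors")
    elif pct < 50:
        print(f"Q{i}: {pct:.0f}% correct -- consider re-teaching this content")
    else:
        print(f"Q{i}: {pct:.0f}% correct")
```

In this example, question 2 is flagged for review because no student answered it correctly, which is exactly the pattern that suggests a flawed item rather than a learning gap.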
Assessments can help improve student outcomes by showing what students have learned so far. Teachers can learn from assessment outcomes as well, by examining patterns in students’ data and revising assessments as needed. Using assessment data appropriately can help educators use their resources wisely, support the students who struggle most, and implement interventions that are best suited to students’ needs.
Yvette Arañas is a doctoral student at the University of Minnesota. She was a part of FastBridge Learning’s research team for four years and contributed to developing the FAST™ reading assessments. Yvette is currently completing an internship in school psychology at a rural district in Minnesota.