
Test scores don't necessarily measure learning



The latest round of standardized test scores has just arrived, and my students performed beautifully. As a group, they scored better than last year's students, better than the district average, and better than the other grades in my school. Most importantly, they made big gains over their fall scores.

So why do I feel uncomfortable crowing over their achievements?

I haven't forgotten my reservations about standardized testing just because my students did well. The tests measure a limited number of skills. They are biased against minorities and the poor. And the data is easy to manipulate -- depending on which numbers you choose and your point of comparison, it can advance the rhetoric of almost any position.

The scores I just received come from District 11's DALT, or District Achievement Level Test. Each spring and fall, students in grades three through nine take DALTs in reading, language and mathematics. They are "level" tests because students take tests that match their current ability. When kids improve, the DALT computer assigns them a higher level for the next test. It calculates numeric scores by figuring in the current level and the number of correct answers. As students learn, their numbers should climb.
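The column doesn't publish the actual DALT formula, but the arithmetic of a level-adjusted score can be sketched with a toy model (every constant below is hypothetical, not the district's): if each level contributes base credit and each correct answer adds points, a student promoted to a harder level may answer fewer items correctly and post a nearly flat score even after a year of real learning.

```python
# Toy model of a level-adjusted test score.
# All constants are hypothetical; the real DALT formula is not given in the column.

def dalt_score(level: int, correct: int) -> int:
    """Score = base credit for the test level + credit per correct answer."""
    BASE_PER_LEVEL = 20      # hypothetical credit for sitting a higher-level test
    POINTS_PER_CORRECT = 2   # hypothetical credit per correct answer
    return level * BASE_PER_LEVEL + correct * POINTS_PER_CORRECT

# Fall: level 4, 35 of 40 correct on a test matched to the student's ability.
fall = dalt_score(4, 35)     # 150

# Spring: promoted to level 5, but harder items cut the correct count to 26.
spring = dalt_score(5, 26)   # 152

print(spring - fall)  # prints 2: a tiny "gain" despite genuine learning
```

Under this toy model, a one-point gain like Chrissy's could reflect the level promotion itself rather than a lack of progress — which is exactly the kind of ambiguity the column goes on to describe.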

Checking the scores of my seventh-graders, I scan down the list, subtracting the fall numbers from the spring to see how much they have gained in reading. Fifteen points, 13, 10, 11, 5, 8, 1, 1, 14, 1, 20. As I go, I look at the names and ask myself if the score matches what I know about the student's learning. I am often surprised.

Chrissy, Lisa, and Cal, for example, work hard and make good grades. One of them carries a gifted and talented appellation, and one made student of the month. All ought to show significant gains, yet all improved by only one point. Another thing they have in common: each advanced one level between fall and spring. Could there be something in the scoring calculation that punishes them for advancing a level?

The issue gets even more interesting when I look at students who jumped two levels, such as Bonnie, whose score dropped more than 10 points. Bonnie rarely does her homework and needs constant pressure to do her work in class. But I have applied that pressure, and I know she has learned. How could she have lost 10 points?

My guess is that the new level was too hard. Although I can't say for sure what the problem was for any of these kids, I have noticed that most students who skip a level show a decline.

More scores raise more questions. There's Becky, who was in tears last fall when the DALT ranked her in the first percentile in reading. My colleague pointed out her 90th percentiles in language and math. Maybe she skipped a question and was on the wrong line for the rest of the test. Or perhaps she was in the wrong section of the booklet. In any case, we all knew she could read well and that the score must be mistaken.

The Iowa Test of Basic Skills -- another one we give in seventh grade -- ranked her in the mid-60s, so we were all shocked when her spring DALT percentile came in, once again, at bedrock.

Of course, the high scores could be just as wacky. Twenty-six point gains, 24, 46. What's wrong?

If we take the scores together, probably nothing. No test is perfect. The anomalies I have been kvetching about all wash out in the pool of averages. I am reminded to look at each test as one piece in the puzzle that makes up my "global" body of evidence of student achievement. Look at my ITBS, my CSAP, my QRI, BRI, CAT, SAT, ACT ...

I am proud of my students this year. I'm proud of their parents, and I'm proud of us teachers. I do think that the high scores indicate that, globally -- or at least continentally -- we have learned. Nevertheless, I believe that a focus on raising scores is not necessarily a focus on learning. And sometimes the lessons of the test are not in the score.

Take Sydney, for example. A sensitive, high-strung writer of poetry, this girl did not have a good testing week. The first day found her crying in the hall, inconsolable. Given her emotional state, I'm not surprised that she scored about the same this spring as she did last fall. I don't think her score tells us what she has learned this year, or that it reflects her parents' effort, or shows what her teachers have taught. But the poem she showed me the day after the test -- the poem she wrote about her despair -- that was worth innumerable standard deviations above the norm.

Brian Mandabach is a middle school English teacher in a District 11 school.
