“Are Test Scores the Right Measuring Stick for Teachers?” is a good short piece from American RadioWorks.
Thanks to Greg Toppo for the tip.
I’m adding it to The Best Resources For Learning About The “Value-Added” Approach Towards Teacher Evaluation.
The thing that never seems to come up in any of these discussions about using test scores to judge, well, pretty much anything, is whether the students even take the darned tests seriously.
There was an interesting discussion on our public radio station a few months back in which a group of high school students were interviewed about, among other things, high-stakes testing. They were VERY candid. One thing they said: at least a quarter of the students they knew didn’t make an effort on standardized tests unless the score affected them, the STUDENT, personally. If a test only measured the school or the teacher, they couldn’t have cared less; they just guessed at random, sometimes without even reading the questions. The students chalked it up to “testing overload.” Their mindset: “Ugh. Not another test!? I’ll just fill in some bubbles and get it over with.” Tests that determined whether they could graduate or pass a class, and college entrance exams like the ACT or SAT – those they cared about. Others – not so much.
How can we honestly assess teachers and schools based on test results when a significant proportion of the students don’t even bother to read the questions?
We are trying to use data-driven practices, but the data may very well be garbage. Leaving aside the question of whether the tests measure real learning, they can’t even measure what they purport to measure if the students just randomly fill in bubbles on their answer sheets!