(Editor’s Note: As regular readers know, I’ve written a great deal about how I use student evaluations to improve my classes and my instruction (see The Best Posts On Students Evaluating Classes (And Teachers)). I’ve invited Dr. John Thompson to add his thoughts on the topic.)
Guest Post by Dr. John Thompson
Education consultant Craig Jerald deserves at least two cheers for his analysis of the Gates Foundation’s recent reports on the use of student survey data to improve instruction. Jerald explained that Gates researchers do not anticipate students giving a summative grade for their teacher. “Instead, they are asked a series of carefully worded questions about their classroom experiences that measure specific kinds of instructional practices and classroom conditions that are conducive to student learning.”
Jerald challenged the loose wording of Republican presidential candidate Mitt Romney, who said, “I would love to have the students grade the teachers at the end of the year as opposed to just the other way around so that teachers get feedback.” He then praised Amanda Ripley’s “engaging and informative” Atlantic article, “Why Kids Should Grade Teachers,” while adding, “Just skip the headline.”
Jerald argued that “nobody anywhere is really asking students to ‘grade teachers,’ and when journalists, pundits, and presidential candidates call it that, they risk undermining the very tool they seek to champion.” He worried that serious misperceptions about student surveys “could translate into a very serious problem when it comes time to ask even more teachers to buy into the process.”
The problem is that many of those misperceptions belong to the very people pushing tougher evaluations and/or the people who will be conducting those evaluations. Even though the increased use of student perceptions is one of the best things about the Gates Measuring Effective Teaching (MET) process, it would still be part of a risky scheme for evaluating teachers. It would be nice if the student survey component (like the use of videotaping for professional development, as opposed to high-stakes purposes) could be a corrective to the inherently dangerous aspects of the MET. It is just as likely, however, that the flaws in the rest of the MET’s policies will undermine the benefits of student surveys (just as videotaping for evaluations would likely compromise the constructive use of videotaping to improve instruction).
In other words, “to a man with a hammer, every problem looks like a nail.” Similarly, during an era of teacher-bashing school “reform,” every piece of data can look like a grade.
It is easy to see why the Gates Foundation sought to add teeth to student survey data by making it a part of teachers’ evaluations. The Atlantic’s Ripley began with the story of a senior, Nubia Baptiste, who “could have revealed things about her school that no adult could have known.” Ripley observed, “She knew which security guards to befriend and where to hide out to skip class (try the bleachers). She knew which teachers stayed late to write college recommendation letters for students; she knew which ones patrolled the halls like guards in a prison yard, barking at kids to disperse.” Even so, Nubia never had a chance to express her judgments about school until she filled out a survey during her senior year.
Ripley also cited the disappointing experience of Ronald Ferguson in persuading teachers to pay attention to survey results. Over a decade, “only a third of teachers even clicked on the link sent to their e-mail inboxes to see the results.”
I understand some sorts of recalcitrance on our part. I would never criticize teachers for turning off their brains and sleepwalking through the normative professional development quick fixes that are dumped on us. But when our students speak, we must listen. And if we and policy wonks were to heed the wisdom of students, we would finally be held to standards that make sense. For instance, the following are the types of survey items that correlate with increased student performance. They are fundamentally different from the narrowed test prep and scripted instruction that have been imposed by “reformers.”
1. Students in this class treat the teacher with respect.
2. My classmates behave the way my teacher wants them to.
3. Our class stays busy and doesn’t waste time.
4. In this class, we learn a lot almost every day.
5. In this class, we learn to correct our mistakes.
When offered a chance to build learning environments based on the above principles, we should take “yes” for an answer. If given a chance to create school cultures based on those goals, should we not do whatever it takes to build on them?
We teachers have the right to resent the predisposition to apply stakes to all sorts of data, while acknowledging that we are not blameless. If for no other reason than respect for our students, teachers should commit to a counter-proposal. We should embrace Ripley’s analysis of college student surveys. “Decades of research,” she concluded, “indicate that the surveys are only as valuable as the questions they include, the care with which they are administered—and the professors’ reactions to them.”
In return for transforming student survey results into diagnostic data for school improvement, we should commit to making student surveys a prime metric for building respectful learning environments. After all, Ripley closed with the wisdom of a principal who benefited from surveys in a pilot program where the data was not linked to teachers’ names, but “he still found the information more useful than what standardized tests provided.” The principal said, “’It’s very, very precious data for me.'”
At the same time, the Gates Foundation should consider its own evidence and earn three hearty cheers. The Foundation deserves one cheer for using value-added models, videotapes, and student surveys to identify effective teaching. It also deserves praise for using each of those measures to make the others more reliable.
The MET should take a bow for using multiple measures to make each of its efforts more valid. Then, there would be no shame in admitting that it had discovered that those measures are not valid enough for high-stakes purposes. The best service that the Gates Foundation could do for students would be to acknowledge that its evidence did not support its hypothesis that multiple measures should transform teacher evaluations, incentives, and professional development along the lines it originally anticipated. The Foundation would remain committed, however, to using each measure to improve teaching and learning. Especially when studied along with videotapes of instruction, student survey data could inform discussions that would be truly transformative.
John Thompson taught for 18 years in the inner city. He blogs regularly at This Week in Education, Anthony Cody’s Living in Dialogue, the Huffington Post and Schools Matter. He is completing a book, Getting Schooled, on his experiences in the Oklahoma City Public School System.
This is a very thoughtful post. I suspect you speak for many teachers who find great value in feedback from student surveys and classroom observations but balk at using the evidence thus collected to formally evaluate teachers. We have vigorously debated this before, but I do believe that if such evidence-capture strategies are implemented carefully with a lot of input from teachers, they can be used for both purposes. But you’ve given me lots to think about on that question, John, so I’m going to be seeking some examples to highlight soon. I anticipate continuing thoughtful debate!