Education Week Teacher has just published a report titled Next Up in Teacher Evaluations: Student Surveys (Learning First also has a good post on the same topic). It discusses the growing interest in using student survey results as a part of a teacher evaluation process and specifically talks about what’s happening in Pittsburgh. Here is how it ends:

But there was no denying that linking the Tripod results to evaluations is the logical and likely next step for Pittsburgh—and one that a few other places have already taken.

(ADDENDUM: I just remembered I wrote specifically about Tripod surveys earlier this year)

You will find few, if any, other teachers who have written as positively and as often about the value of student evaluations (see The Best Posts On Students Evaluating Classes (And Teachers)). I do them regularly, post results — warts and all — on this blog, voluntarily report them to my colleagues and supervisors, and always learn a great deal from them.

But I’m not convinced that tying them to a formal teacher evaluation is a good idea.

They are great tools for formative assessment (not summative) and can be key ways for me to be data-informed (not data-driven).

Formative assessments have been found to be much more effective in supporting student learning than high-stakes summative assessments like standardized tests, and the same can hold true for educators.

As I’ve written before, I just don’t understand why “reformers” like the Gates Foundation and others continue to take tools that have incredible potential for teacher development as formative assessments and destroy their many positive attributes in the name of summative assessment. They’re doing it with videotaping teachers, and now they’re doing it with student evaluations.

This is what I’ve written before about how and why I use student surveys:

I want to know more from students than what Gates is asking. I want to know if they think I’m patient and if they believe I care about their lives outside of school. Yes, I certainly want to know what they think I could do better, and I also want to know what they think they could do better. I want to learn if they think their reading habits have changed and, for example, when I’m teaching a history class, whether they are more interested in learning about history than they were before taking the class. I want to find out what they believe are the most important things they learned in the class and, for many, it might be learning life skills like the fact that their brain actually grows when they learn new things or the fact that they had in them the capacity to complete reading a book or writing an essay for the first time in their lives. And, in the discussion that follows (one thing I learned during my nineteen-year community organizing career is that a survey’s true use is as a spark for a conversation), we discuss all these things and many more, including the differences between what we might like to do best and what we learn the most from.

Just as I don’t believe standardized test scores give an accurate assessment of student learning (though I incorporate them as part of my being “data-informed”), and just as I doubt the value of feedback on a video of my teaching offered by someone a thousand miles away who knows nothing about me or my students, I question how helpful a standardized student evaluation form created by people completely disconnected from my students and me is going to be.

And, though I’m confident that the vast majority of my students will take it seriously and not view it as a grade on whether they were entertained or not (one of the reasons I know this to be the case is that we do a mini-unit on the qualities of a good lesson, and they have to incorporate them in lessons that they teach to their classmates), I’m not confident that this is the case everywhere.

Do we really need to ratchet up the pressure even more on teachers — many of whom leave within their first five years now?

What do you think?