The Wall Street Journal today ran a for/against op-ed on the question: Should Student Test Scores Be Used to Evaluate Teachers?

The “pro” side was written by Thomas Kane, who’s leading the massive Gates Foundation program on the subject (see The Best Posts On The Gates’ Funded Measures Of Effective Teaching Report). I’ve heard some decent things about him as a researcher, but was particularly disappointed that in his justification he emphasized the Chetty study (one of that study’s authors summarized it by saying “The message is to fire people sooner rather than later,” and you can read more about it at The Best Posts On The NY Times-Featured Teacher Effectiveness Study). In an additional disappointment, he spent more time talking about using student evaluations to evaluate teachers than he did about classroom observations. Plus, he used, in my opinion at least, the “cop-out” of saying test scores should be used as part of “multiple measures” without saying how much weight test scores should carry.

Linda Darling-Hammond wrote the “con” side and, as usual, it’s a very smart piece (you can see more at Hot Off The Press! The Best Piece Yet Published On Teacher Evaluation).

It’s unfortunate that people like Professor Kane and other advocates of value-added measurement don’t seem to publicly recognize the difference between formative and summative assessment and the difference between being data-informed and data-driven.

I use student test scores as one of many non-high-stakes formative assessments with my students; it’s one way of being data-informed. I also use student evaluations of my teaching as an important non-high-stakes formative assessment to help me become a better teacher.

We could have such a better conversation about teacher assessment if we applied some of the best practices we know about student formative assessment to us educators…