I thought that new – and veteran – readers might find it interesting if I began sharing my best posts from over the years. You can see the entire collection here.

I wrote this post in 2009 and subsequently revised it considerably; it was published in The Washington Post the next year under the title The best kind of teacher evaluation (unfortunately, that piece now seems to be offline).

NEW YORK — Mayor Michael Bloomberg has ordered the city’s public schools to start using student achievement data in the evaluations of teachers who are up for tenure this school year.

“It is an aggressive policy, but our obligation is to take care of our kids,” Bloomberg said last week in a speech in Washington.

“Nobody wants to promote and give lifetime employment to teachers who can’t teach,” Bloomberg told reporters after the speech. “Those days are gone.”

Bloomberg, like Education Secretary Arne Duncan and President Obama, has long pressed for merit-pay programs that reward teachers for gains in student achievement.

The state also should make it easier to fire ineffective teachers, he said.

Assuming that this is an accurate report on what the Mayor said (and since it comes from his own Bloomberg News Service, I’m guessing that this is a safe assumption to make), this is representative of an all-too-common context in which teacher evaluation is discussed — it needs to be done so teachers can be fired.

Given that common context, is it any wonder that many teachers, like me, get very, very cautious when the topic is broached?

I don’t pretend to have the answer to what is the best way to evaluate teachers. However, I do have a pretty good idea of how not to do it. And I have some very clear ideas on what has worked for me and made me a much better teacher.

HOW NOT TO DO IT

Jay Matthews at The Washington Post has recently written two posts about the new evaluation process implemented by Michelle Rhee at the Washington, D.C. schools.

You should read his two posts, but before I describe the process and share my reaction, I should preface my remarks by telling readers that it would be very difficult to find a teacher anywhere who is more open to critique than I am. I spent many years working as a community organizer for the Industrial Areas Foundation, which is known for many things, including its exceptional work and its strong emphasis on the value of critique (those two qualities are related).

Even with that background, I would find very little value in the D.C. program — at least as it’s described in those two columns (though I have to say that I don’t necessarily agree with all the teacher comments quoted by Matthews). Having a stranger with a checklist — someone with no real knowledge of my school, my students, or me — parachuted into my room for thirty minutes is unlikely to be received well by me, or to provide me with particularly helpful feedback.

However, I do relentlessly pursue evaluation and critique, and here are some of the things that I have found helpful:

* Being observed by administrators who know our school, our students, and me, and whose judgment and skills I, in turn, respect. I know they are genuinely concerned about my professional development for several reasons, including their knowledge that helping me improve my skills is the best thing they can do to help our students. Administrators typically come by for two thirty-minute formal observations each school year, and numerous short “drop-ins.” I’m very confident in my ability as a teacher, but I have received some very helpful hard critique this way that has made me an even better educator.

* Getting a clear message from administrators that asking for help — from either them or other teachers — is not a sign of weakness, as I wrote in Have You Ever Taught A Class That Got “Out Of Control”?. In addition, having administrators and teachers make data freely available and encourage us to reflect on it (individually and collectively), but in a culture of being data “informed” and not data “driven.” I’ve written more about that in “Data-Driven” Versus “Data-Informed.”

* Hearing regular feedback from students. I’ve written in-depth about how I use this process:

Results From Student Evaluation Of My Class And Me

Results From Student Evaluation Of My Class And Me (Part Two)

* Having colleagues observe me and provide feedback. The twenty teachers in our “Small Learning Community” (our school is divided into seven similar SLCs, where the same 300 students stay with the same group of teachers all four years) periodically do these observations on our own initiative, using a short checklist we created that includes questions like:

Are all students engaged? If so, how? If not, why?

Do you feel the expectations of the class are too much or not enough?

Is the work being given higher-order thinking or just task work (book work)?

In addition, these kinds of observations provide opportunities to see how our same students act in different classes, which can be very helpful to us as teachers. (Jay Matthews wrote an interesting column about the value of increasing peer collegiality for professional development.)

* Doing what Alice Mercer did: observing my class, then writing an open letter to my students asking them questions. Both Alice’s observations and the students’ answers were very insightful. You can read about it at What Alice Mercer Saw When She Observed My Class.

* Hearing from parents about what their children tell them about our class. I always try to find out — over the phone, during home visits, at parent-teacher conferences, or at open houses — what their children say about our class, good or bad.

I’m not sure how all those features could be incorporated into a formalized teacher evaluation system — or even if they should be.

But, come on, I can’t see how any of them wouldn’t be better than what they’re doing in D.C.