(UPDATE: Readers might also be interested in “The Best Posts About The LA Times Article On “Value-Added” Teacher Ratings”)
I’m part of a group of teachers working with The Center For Teaching Quality that’s preparing a policy report on Teacher Working Conditions and how they relate to student learning.
I’m learning a lot about many things during this research, and one of them is the “value-added” approach that’s being discussed widely for use in teacher evaluation. What I’m finding leaves me deeply concerned about it.
You might also be interested in The Best Resources For Learning About Effective Student & Teacher Assessments.
I thought readers here might find it useful to see what I think are The Best Resources For Learning About The “Value-Added” Approach Towards Teacher Evaluation (please feel free to leave your comments and other suggestions in the comments section, too):
The National Research Council and the National Academy of Education jointly issued a report on value-added approaches, and their report has been summarized in The Washington Post. Don’t rush to link teacher evaluation to student achievement is a must-read.
No Value Added: The Mismeasurement of Teaching Quality is a column by David M. Cohen that appeared in Teacher Magazine. By the way, be sure to check out the InterACT blog written by David and others who are members of Accomplished California Teachers (I’m a member, too!).
Pondering Legal Implications of Value-Added Teacher Evaluation raises some interesting points.
How NOT to Evaluate Teachers is by Daniel Willingham. It’s a couple of years old, but still definitely relevant.
Willingham refers to a post titled My Value-Added Bucket List by eduwonkette, who used to write at Education Week.
Using Value-Added Measures to Evaluate Teachers at Educational Leadership reviews some recent research.
The No-Stats All-Star raises a key point in evaluating teachers…and basketball players.
The Hechinger Ed Report has a nice summary of a major study that raises questions about using test scores to evaluate teachers:
In a typical rating system aimed at identifying poorly performing teachers, one in four teachers whose performance is fine could be misidentified as bad. At the same time, teachers whose students underperform had a one in four chance of being mislabeled as average performers.
School Finance 101 also has an analysis of the report.
Were some D.C. teachers fired based on flawed calculations? is the title of a piece in The Washington Post. It’s another cautionary tale about “value-added” teacher assessment.
Proceed with Caution: Using Standardized Test Scores in High-Stakes Decisions is the title of a good post by Anne O’Brien at The Learning First Alliance. It links to stories about recent problems with New York and Florida state standardized tests, and discusses how problems like these raise a caution flag about ideas like teacher merit pay.
LA Times Value-Added Release – Problems and Solutions at The Quick and The Ed (I’d recommend you skip down to the “Problems with Value Added Measures of Teacher Effectiveness” section and also read John Thompson’s comment)
Putting Teachers to the Test, published by The Wall Street Journal, is a good explanation of “value-added” measures for teachers, in which they are evaluated on their students’ growth in test scores.
Problems with the Use of Student Test Scores to Evaluate Teachers is a report from the Economic Policy Institute, and may be the best study out there. Ken Bernstein has written a good post about it.
“Are Test Scores the Right Measuring Stick for Teachers?” is a good short piece from American RadioWorks.
Formula to Grade Teachers’ Skill Gains in Use, and Critics is an article in The New York Times. It seems to me to be one of the better short, accessible pieces out there about the “value-added” approach.
Assessing A Teacher’s Value is the headline of a New York Times feature that highlights four supporters and four critics of the “value-added” approach to assessing teachers. Critics include Linda Darling-Hammond and Diane Ravitch.
Teacher Added-Value Scores: Publish and Perish is a very thoughtful analysis of the problems inherent in publishing the “value-added” assessments of teachers. It’s from the Albert Shanker Institute, and raises some issues I haven’t seen raised elsewhere.
Is D.C.’s teacher evaluation system rigged? is a guest post by Aaron Pallas at The Washington Post’s “Answer Sheet” blog. It makes some excellent points about the “value-added” assessment system for teachers, including some I hadn’t heard before.
Public Displays of Teacher Effectiveness is a column from Ed Week.
Hurdles Emerge in Rising Effort to Rate Teachers is the headline of a New York Times article that gives a fair-to-middlin’ overview on the issue of using the value-added approach in teacher assessment. It does have some good info.
Neither Fair Nor Accurate: Research-Based Reasons Why High-Stakes Tests Should Not Be Used to Evaluate Teachers comes from Rethinking Schools.
Mike Dwyer: Value Adders – the newest members of the Monday Morning Quarterback Club comes from Anthony Cody’s blog at Ed Week.
I was critical of the December 2010 Gates Foundation report supporting the value-added approach to teacher evaluation, and I wasn’t the only one. A well-regarded professor and economist, Jesse Rothstein, has come out with a thorough, and critical, analysis of that same report. In addition to (or instead of) reading his report, you can read summaries of it here:
Premises, Presentation And Predetermination In The Gates MET Study at the Shanker Blog.
New analysis challenges Gates study on value-added measures by Valerie Strauss at The Washington Post.
How About a Measures of Effective Reporting Project? by Sabrina Stevens Shupe at The Huffington Post.
“Beyond Value-Added Models…Getting the Mechanics of High-Stakes Teacher Effectiveness Policies Right” is a post at Ed Week by Dan Goldhaber. I’m not too thrilled by the article itself. However, the comments section is a must-read for anyone interested in teacher evaluation. The multiple comments by John Thompson are especially insightful.
The pitfalls of putting economists in charge of education is a useful post by Diane Ravitch.
The “three great teachers in a row” myth is a piece by Valerie Strauss at The Washington Post.
Evaluating New York Teachers, Perhaps the Numbers Do Lie is an article from The New York Times. Check out the equation above the headline!
Gates’ Measures of Effective Teaching Study: More Value-Added Madness is by Justin Baeder at Ed Week.
Student Test Score Based Measures of Teacher Effectiveness Won’t Improve NJ Schools is an excellent article on the problems of Value-Added Assessment.
‘Value-added’ teacher evaluations: L.A. Unified tackles a tough formula is from The Los Angeles Times.
Education writers from throughout the United States recently met in New Orleans, and I read their tweets about the conference. I was particularly interested in the session on the value-added approach to teacher evaluation, and found some excellent resources.
Douglas Harris is from the University of Wisconsin, and has written a book titled Value-Added Measures in Education: What Every Educator Needs to Know. He spoke at the conference, and I’ll include one related tweet a little later. Here are links to two pieces he’s written:
Matthew Nathan quoted Harris in a tweet:
It’s like publishing 10 politicians names as corrupt when you know the data tells you 6 of 10 are not
Student Test Scores: An Inaccurate Way to Judge Teachers is from Fair Test.
Mathematical Intimidation: Driven by the Data is by John Ewing, president of Math For America. He provides a good critique of value-added assessment. Here’s an excerpt:
Whether naïfs or experts, mathematicians need to confront people who misuse their subject to intimidate others into accepting conclusions simply because they are based on some mathematics. Unlike many policy makers, mathematicians are not bamboozled by the theory behind VAM, and they need to speak out forcefully. Mathematical models have limitations. They do not by themselves convey authority for their conclusions. They are tools, not magic. And using the mathematics to intimidate—to preempt debate about the goals of education and measures of success—is harmful not only to education but to mathematics itself.
Value-Added Evaluation & Those Pesky Collateralized Debt Obligations by Karl Hess appeared in Education Week. The comments are a “must-read,” too.
An excellent post appeared in The Washington Post’s “The Answer Sheet” titled NY regent: Why we shouldn’t link teacher evaluation to test scores. Here is the introduction to the post:
The post was written by Roger Tilles, a member of the New York State Board of Regents, which supervises all educational activities within the state. It refers to action taken on Monday by the board, which adopted regulations for a teacher and principal performance evaluation system in which 20 to 40 percent of the evaluation is linked to student standardized test scores.
The letter from assessment experts the N.Y. Regents ignored is from The Washington Post.
On False Dichotomies and Warped Reformy Logic is from School Finance 101.
Value-Added In Teacher Evaluations: Built To Fail comes from The Shanker Blog.
VAM Nauseum: Bleeding the Patient is a post by David B. Cohen.
Firing Line: The Grand Coalition Against Teachers comes from Dissent Magazine.
Heather Hill: Value-Added Assessment 101 is a good short video on Value Added Assessment.
Linda Darling-Hammond’s Getting teacher evaluation right at The Answer Sheet may be THE piece on teacher evaluation.
Christie misses the mark on grading teachers, author says is from The Star-Ledger in New Jersey.
Principals rebel against ‘value-added’ evaluation is from The Washington Post.
Turning the Tables: VAM on Trial is by David B. Cohen.
When The Legend Becomes Fact, Print The Fact Sheet is from The Shanker Blog.
What Value-Added Research Does And Does Not Show is by Matthew Di Carlo at The Shanker Blog.
Value-Added Evaluation Hurts Teaching is a very important commentary written by Linda Darling-Hammond for Education Week.
Teacher: I dare you to measure my ‘value’ is from The Washington Post.
The Problem with Value-Added Measurement is by Gary Rubinstein.
John Thompson’s Book Review: “VAM in Education” — Who has the Burden of Proof? appeared in Education Week.
Value-Added, For The Record is from The Shanker Blog.
How Do Value-Added Indicators Compare to Other Measures of Teacher Effectiveness? is by Douglas Harris.
A new study in Practical Assessment, Research and Evaluation raises questions about the use of Value-Added Measurement.
Ten Reasons Value-Added Measures Are a Bad Idea is by John Spencer.
Now We Know Why Obama Doesn’t Understand VAM is by Mercedes Schneider.
What Are Error Rates for Classifying Teacher and School Performance Using Value-Added Models? is from the American Educational Research Association.
Will value-added measurement survive the courts? is from The Hechinger Ed blog.
Proposal to refine state’s “value-added” formula elicits concerns is from Gotham Schools.
Value-Added Measures (VAM) is from Scott McLeod.
Thompson: False Positives & Value-Added Evaluation is from John Thompson.
Connecting test scores to teacher evaluations: Why not? is from Dangerously Irrelevant.
The Carnegie Knowledge Network has a series of very useful “briefs” on Value-Added Measurement.
Here’s a short video on Twitter by Arthur Goldstein showing Charlotte Danielson, the present “guru” of teacher evaluation for many districts, saying that student test results should not be used in teacher evaluations:
E.D. Hirsch, Jr. comes out against Value Added Assessment for teachers, at least for those who teach language arts.
Reliability and Validity of Inferences About Teachers Based on Student Test Scores by Edward H. Haertel from Stanford University was published by the Educational Testing Service (ETS). Though I’ve only had a chance to skim it, it appears to be an extraordinary critique of the use of Value-Added Measures in teacher evaluation.
Case study: The false promise of value-added teacher assessment is from The Washington Post.
The Value Added & Growth Score Train Wreck is Here is from School Finance 101.
AFT’s Weingarten Backtracks on Using Value-Added Measures for Evaluations is from Education Week and Breaking News: Weingarten Rejects VAM! is from Diane Ravitch’s blog.
The U.S. Department of Education came out with a new study on Value-Added Measurement. Here are some tweets about it:
Dog barking outside class window can impact teacher value-added scores, new federally funded study finds: https://t.co/XTmXLFKXAu
— Stephanie Simon (@StephanieSimon_) February 3, 2014
Study calculates 2 VAM scores for teachers whose students took 2 diff tests in the same year. Big gap btwn 2 VAM scores for 28% of teachers
— Stephanie Simon (@StephanieSimon_) February 3, 2014
… and moderate gap btwn the 2 VAM scores for another 39% of teachers. Study blames factors outside teacher control, like dog barking.
— Stephanie Simon (@StephanieSimon_) February 3, 2014
That VAM study is here, for all those interested… http://t.co/x9zpxWKu5u
— Stephanie Simon (@StephanieSimon_) February 3, 2014
Comparing Oak Trees’ “Apples to Apples,” by Stanford’s Edward Haertel is from VAMBoozled.
One of many nails in the VAM coffin…. is from Better Living Through Mathematics.
Here is a VAM mathematical formula from Florida.
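For readers who just want to see the general shape of such formulas, here is a generic, textbook-style value-added regression (an illustrative sketch only, not Florida’s actual formula):

```latex
y_{it} = \lambda\, y_{i,t-1} + \beta X_{it} + \theta_{j(i,t)} + \varepsilon_{it}
```

Here $y_{it}$ is student $i$’s test score in year $t$, $y_{i,t-1}$ is the prior-year score, $X_{it}$ stands for student characteristics included in the model, $\theta_{j(i,t)}$ is the effect attributed to the teacher $j$ who taught student $i$ that year, and $\varepsilon_{it}$ is everything the model cannot explain. Much of the criticism collected on this page comes down to how much of that unexplained $\varepsilon_{it}$ leaks into the teacher term $\theta_j$.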
The Houston teachers union has sued over the district’s use of Value-Added Measurement in teacher evaluations. You can read about it in Ed Week here and also watch this video of their news conference:
And here’s a tweet providing further info:
— Larry Ferlazzo (@Larryferlazzo) May 1, 2014
A Quick Look At The ASA Statement On Value-Added is from The Shanker Blog.
Do Evaluations Penalize Teachers of Needy Students? is from Ed Week.
Education Is Not ‘Moneyball’: Why Teachers Can’t Trust Value-Added Evaluations Yet is an excellent Ed Week piece by William Eger.
Value-Added True Believers Should Listen to Principals is by John Thompson.
High-achieving teacher sues state over evaluation labeling her ‘ineffective’ is from The Washington Post.
A Botched Study Raises Bigger Questions is from NPR.
Gates Scholar, Tom Kane, Continues the Fight to Prove He Is Right is by John Thompson.
Principals reject ‘value-added’ assessment that links test scores to educators’ jobs is from The Washington Post.
Contentious teacher-related policies moving from legislatures to the courts is from The Washington Post.
Dave Powell has written two good posts on Value-Added Measurement: The Declining Value of Value-Added Models, and Why They Persist Anyway and Does the Teacher Matter or Not?
The American Educational Research Association has come out with a special issue examining Value-Added Measurement for teacher evaluation.
What Does the Research Say About Value-Added Models and Teacher Evaluation? is from Dave Powell at Ed Week.
Can Value Added Add Value to Teacher Evaluation? is by Linda Darling-Hammond.
Example of a horribly-designed statistical formula used to evaluate teacher performance: http://t.co/yZ5cIjPzHX
— Nate Silver (@NateSilver538) May 9, 2015
Will Value-Added Reinforce The Walls Of The Egg-Crate School? is from The Shanker Blog.
Master teacher suing New York state over ‘ineffective’ rating is going to court is from The Washington Post.
Is VAM a Sham? Depends on the Question You’re Asking. is by Ben Spielberg.
A research group is latest to caution the use of “value added” models for teachers. https://t.co/1SnJYdt0XI
— Education Week (@educationweek) November 13, 2015
Beware of the VAM: Value-Added Measures for Teacher Accountability is from Colorín Colorado and specifically talks about teachers of English Language Learners.
Value-Added Models (VAMs): Caveat Emptor is a new report from The American Statistical Association.
Teacher Tests Test Teachers is from The American Prospect.
Value added measures frequently measure fall to fall–that includes summer, and summer learning loss varies across SES ($) https://t.co/HAffrF8a64
— Daniel Willingham (@DTWillingham) November 22, 2017
Feedback, as always, is welcome. What do you think of the value-added approach? What do you think are the best ways to evaluate teacher effectiveness?