Last year, two very talented educators — Ted Appel, the extraordinary principal at our school, and Kelly Young, creator through his Pebble Creek Labs of much of the engaging curriculum we use — brought up the same point in separate meetings with teachers at my school: the importance of not being “data-driven” and, instead, being “data-informed.”
These conversations took place in the context of discussing the results of state standardized tests. Here’s the point made by Ted:
If schools are data-driven, they might make decisions like keeping students who are “borderline” between algebra and a higher level of math in algebra so that they do well on the state algebra test. Or, in English, teachers might focus a lot of energy on teaching a “strand” that is heavy on the tests — even though it might not help the student become a lifelong reader. In other words, the school can tend to focus on its institutional self-interest instead of what’s best for the students.
In schools that are data-informed, test results are just one more piece of information that can be helpful in determining future directions.
Since that conversation took place, I’ve written several posts about the topic. I thought it might be useful to bring together several related resources.
Here are my choices for The Best Resources Showing Why We Need To Be “Data-Informed” & Not “Data-Driven”:
First, I’m going to list the post I wrote immediately after that conversation – “Data-Driven” Versus “Data-Informed”
Next, a Dilbert cartoon that Alexander Russo shared on his blog:
The cartoon reminded me of what the New York judge said earlier this month when he ruled that the school district can publicly release the names of teachers and their “Teacher Data Reports.” Here is what the judge said (and I kid you not):
“The UFT’s argument that the data reflected in the TDRs should not be released because the TDRs are so flawed and unreliable as to be subjective is without merit,” the judge wrote, citing legal precedent that “there is no requirement that data be reliable for it to be disclosed.”
Data-Driven…Off a Cliff is the title of an excellent post by Robert Pondiscio.
An article in Educational Leadership is a year old, but it’s new to me and certainly worth sharing. It’s called The New Stupid, and has the subtitle “Educators have made great strides in using data. But danger lies ahead for those who misunderstand what data can and can’t do.” It’s written by Frederick M. Hess.
It’s an article worth reading (though I do have concerns about some of its points), and relates to what I’ve written about being “Data-Driven” Versus “Data-Informed.”
Here are a couple of excerpts:
…the key is not to retreat from data but to truly embrace the data by asking hard questions, considering organizational realities, and contemplating unintended consequences. Absent sensible restraint, it is not difficult to envision a raft of poor judgments governing staffing, operations, and instruction—all in the name of “data-driven decision making.”
First, educators should be wary of allowing data or research to substitute for good judgment. When presented with persuasive findings or promising new programs, it is still vital to ask the simple questions: What are the presumed benefits of adopting this program or reform? What are the costs? How confident are we that the promised results are replicable? What contextual factors might complicate projections? Data-driven decision making does not simply require good data; it also requires good decisions.
The Truth Wears Off: Is there something wrong with the scientific method? by Jonah Lehrer is an exceptional article from The New Yorker. David Brooks from The New York Times wrote a nice summary of the article:
He describes a class of antipsychotic drugs, whose effectiveness was demonstrated by several large clinical trials. But in a subsequent batch of studies, the therapeutic power of the drugs appeared to wane precipitously.
This is not an isolated case. “But now all sorts of well-established, multiply confirmed findings have started to look increasingly uncertain,” Lehrer writes. “It’s as if our facts were losing their truth: claims that have been enshrined in textbooks are suddenly unprovable.”
The world is fluid. Bias and randomness can creep in from all directions. For example, between 1966 and 1995 there were 47 acupuncture studies conducted in Japan, Taiwan and China, and they all found it to be an effective therapy. There were 94 studies in the U.S., Sweden and Britain, and only 56 percent showed benefits. The lesson is not to throw out studies, but to never underestimate the complexity of the world around us.
Talking To Students About Their Reading (& Their Data) is a post I’ve written.
“Using data for progress, not punishment”
In a Data-Heavy Society, Being Defined by the Numbers is by Alina Tugend at The New York Times.
Data-Driven Instruction and the Practice of Teaching is by Larry Cuban.
The Obituaries for Data-Driven ‘Reform’ Are Being Written is by John Thompson.
California Governor Puts the Testing Juggernaut On Ice is by Anthony Cody at Education Week.
Making the wrong “Data-Driven Decisions” is by Carl Anderson (thanks to Dean Shareski for the tip).
Data-Driven To Distraction appeared on Larry Cuban’s blog.
Larry Cuban has written another interesting post titled Jazz, Basketball, and Teacher Decision-making. John Thompson relates it to school data at Thompson: Duncan Can Shoot — But Can He Rebound?
“Not everything that matters can be measured”
“You Are Not An Equation” (And Neither Are Your Students)
Policy by Algorithm is a nice post over at Ed Week.
Professional Judgment: Beyond Data Worship is by Justin Baeder at Education Week.
This Is Why Our School is “Data-Informed” & Not “Data-Driven”
Bias toward Numbers in Judging Teaching is by Larry Cuban.
The False Allure Of Statistics is by John Thompson.
‘Moneyball’ and making schools better is by John Thompson.
Here’s Another Reason Why We Need To Be Data-Informed & Not Data-Driven
Data Gone Wild
“Why Do Good Policy Makers Use Bad Indicators?” is by Larry Cuban.
New Hope for the Obama/Gates School of Reform is by John Thompson.
“It’s amazing how much it’s possible to figure out by analyzing the various kinds of data I’ve kept,” Stephen Wolfram says. To which I say, “I’m looking at your data, and you know what’s amazing to me? How much of you is missing.”
This is the last paragraph of Robert Krulwich’s article at NPR, titled Mirror, Mirror On The Wall, Does The Data Tell It All? In it, he compares two books — one by Stephen Wolfram, creator of the Wolfram search engine, and one by Bill Bryson, a biographical account of growing up in Iowa. The column, though not specifically about schools, hits a “bulls-eye” on our current data-driven madness.
What Does “Stop & Frisk” Have To Do With What’s Happening With Our Schools?
What Does The NYPD Have In Common With Many Data-Driven Schools?
Tired of the Tyranny of Data is by Dave Orphal.
Big Data Doesn’t Work if You Ignore the Small Things that Matter is from The Harvard Business Review.
Test Scores Often Misused In Policy Decisions is from The Huffington Post.
The Data-Driven Education Movement is from The Shanker Blog.
Invisible Data is from Stories From School.
Don’t Let Data Drive Your Dialogue is from The Canadian Education Association.
“The Goal Is The Goal”
On the Uses and Meaning of Data is by David B. Cohen.
Friday Thoughts on Data, Assessment & Informed Decision Making in Schools is from School Finance 101.
The New York Times Has Discovered The Perils Of Being Data-Driven — I Just Wish Arne Duncan Would, Too
Here’s a Part One and Part Two series of posts on the use of data in education, and they’re both from Larry Cuban’s blog.
Data: No deus ex machina is by Frederick M. Hess & Jal Mehta.
Bill Gates is naive, data is not objective is by Cathy O’Neil and is really good.
Bill Gates and the Cult of Measurement is by Anthony Cody.
Sure, Big Data Is Great. But So Is Intuition. is from The New York Times. Here’s an excerpt:
It’s encouraging that thoughtful data scientists like Ms. Perlich and Ms. Schutt recognize the limits and shortcomings of the Big Data technology that they are building. Listening to the data is important, they say, but so is experience and intuition. After all, what is intuition at its best but large amounts of data of all kinds filtered through a human brain rather than a math model?
At the M.I.T. conference, Ms. Schutt was asked what makes a good data scientist. Obviously, she replied, the requirements include computer science and math skills, but you also want someone who has a deep, wide-ranging curiosity, is innovative and is guided by experience as well as data.
“I don’t worship the machine,” she said.
Beware the Big Errors of ‘Big Data’ is from Wired.
The NYPD Probably Didn’t Stop All That Crime
Data-Informed Versus Data-Driven PLC Teams is from All Things PLC.
David Brooks, who generally loses all coherence when he writes explicitly about education issues, has just written an eloquent case for the importance of being data-informed, and not data-driven. Read his column titled What Data Can’t Do.
December 12, 2010
by Larry Ferlazzo
I support developing more effective ways to evaluate teachers — using multiple measures.
What I don’t support, however, is the present effort by the Gates Foundation that’s spending millions of dollars using student scores on standardized tests as THE MEASURE used to evaluate teachers.
I have no objection to scores from existing standardized tests being a part — a small part — of those multiple measures. If present efforts to create a “new generation” of state assessments actually invite teachers to work with them and develop more accurate performance-based assessments, I would have no objection to their proportional weight being increased — a little.
Accomplished California Teachers (of which I am a member) published a report earlier this year that I think accurately reflects my thinking on teacher evaluation:
To support collaboration and the sharing of expertise, teachers should be evaluated both on their success in their own classroom and their contributions to the success of their peers and the school as a whole. They should be evaluated with tools that assess professional standards of practice in the classroom, augmented with evidence of student outcomes. Beyond standardized test scores, those outcomes should include performance on authentic tasks that demonstrate learning of content; presentation of evidence from formative classroom assessments that show patterns of student improvement; the development of habits that lead to improved academic success (personal responsibility, homework completion, willingness and ability to revise work to meet standards), along with contributing indicators like attendance, enrollment and success in advanced courses, graduation rates, pursuit of higher education, and work place success.
I’ve written at the Washington Post what these ideas look like on the ground at our school (see The best kind of teacher evaluation).
I’m not going to spend a lot of time here reviewing the reams of research showing that evaluating teachers by student test results is unstable and inaccurate. You can find more than enough evidence for that at The Best Resources For Learning About The “Value-Added” Approach Towards Teacher Evaluation.
But right now my big concern about the Gates Foundation’s efforts is that they may be minimizing two key tools that can have a huge impact on improving teacher effectiveness — videotape and student surveys.
As I’ve previously written (There Are Some Right Ways & Some Wrong Ways To Videotape Teachers — And This Is A Wrong Way), Gates is funding a massive effort to videotape teacher lessons and then have them evaluated by people who have never visited the school and have no relationship with the teacher, rating the lessons with checklists and correlating the ratings with value-added scores.
Contrast that with how videotape is being used, to universal acclaim, at our school (led by principal Ted Appel), where a talented consultant (Kelly Young at Pebble Creek Labs), who has been working with us for years, meets with us to review an edited version of a taped lesson, with us initially giving our own critique and reflections, followed by his comments. This process is entirely outside of the official evaluation process and is focused on helping teachers improve their craft. It has been one of the most significant professional development experiences I’ve had. At my request, Kelly and I subsequently showed the video and shared our critique with my class, which was a transformative experience for all involved. Teacher Magazine will be publishing my account of that class period in early January.
As part of their massive project, Gates is also having thousands of students complete anonymous surveys evaluating their teachers and, you guessed it, correlating the answers to student test scores.
I’m a huge fan of getting student feedback. In fact, I’ve posted My Best Posts On Students Evaluating Classes (And Teachers). To help students see that I take their responses seriously, I always reprint the results in this blog (you can see them and the questions at that “The Best…” list) and email the results to teachers and administrators at my school.
But I want to know more from students than what Gates is asking. I want to know if they think I’m patient and if they believe I care about their lives outside of school. Yes, I certainly want to know what they think I could do better, and I also want to know what they think they could do better. I want to learn if they think their reading habits have changed and, when I’m teaching a history class, whether they are more interested in learning about history than they were before taking the class. I want to find out what they believe are the most important things they learned in the class; for many, those might be life skills, like discovering that their brain actually grows when they learn new things, or that they had in them the capacity to finish reading a book or write an essay for the first time in their lives. And in the discussion that follows (one thing I learned as an organizer is that a survey’s true use is as a spark for a conversation), we discuss all these things and many more, including the difference between what we like to do best and what we learn the most from.
By trying to connect videotaping teachers to anonymous checklist evaluators and test scores, and doing the same to student surveys, I fear the Gates Foundation may succeed in framing the public conversation about these tools as just a means to one end — better scores on assessments that don’t accurately measure learning.
This minimizes these potentially powerful tools, contributes toward seeing both teachers and students as replaceable widgets, and unfortunately reinforces a school reform debate in which many worship at the altar of multiple-choice test results.
Using videotaped teacher lessons and student surveys primarily to connect them to test-score-based teacher evaluation is like using a Stradivarius and a grand piano to play “Mary Had A Little Lamb” in order to evaluate the musician. In both cases, the tools have far more value to everyone when used in more expansive ways.
No, we all deserve better…
(Here’s a link to the article I wrote about my evaluation)
December 4, 2010
by Larry Ferlazzo
Today, The New York Times is running two articles on videotaping teachers for evaluation purposes. They are:
Teacher Ratings Get New Look, Pushed by a Rich Watcher
Video Eye Aimed at Teachers in 7 School Systems
They both talk about a Gates Foundation-funded effort to videotape teacher lessons and then have them evaluated by people who have never visited the school and have no relationship with the teacher, rating them using checklists.
Here’s a criticism voiced in the article that I agree with wholeheartedly:
Randi Weingarten, president of the American Federation of Teachers, which has several affiliates participating in the research, also expressed reservations. “Videotaped observations have their role but shouldn’t be used to substitute for in-person observations to evaluate teachers,” Ms. Weingarten said. “It would be hard to justify ratings by outsiders watching videotapes at a remote location who never visited the classroom and couldn’t see for themselves a teacher’s interaction and relationship with students.”
I’d call this a wrong way to use videotape of teachers.
I’ve previously written about what I think is a right way to use videotaped teachers (Now, This Is What A Useful & Effective Teacher Assessment Might Look Like).
Our school, led by principal Ted Appel, has begun having Kelly Young, an extraordinarily talented consultant on instructional strategies who we have been working with for years, videotape our lessons (I’ve written much about Kelly in this blog). He then meets with us to review an edited version of the tape, with us initially giving our own critique and reflections followed by his comments. This process is entirely outside of the official evaluation process, and is focused on helping teachers improve their craft.
This process has been universally acclaimed by teachers so far, and it has been one of the most significant professional development experiences I’ve had.
As I mentioned in that previous post on my videotaped lesson, I had suggested to Kelly that we show the video and discuss the critique with my class as an experiment.
We did this a few days ago, and it was truly an amazing hour.
I’ve written an article for Teacher Magazine about what happened, and they’ll be publishing it after the holidays. After reading it, I think you’ll agree that there are far better ways to use videotaped lessons than what the Gates Foundation is planning.