This week’s “Question Of The Week” at my Education Week Teacher blog relates to how we can tell the difference between good and bad education research. As a supplement to next week’s response on that issue, I wanted to bring together some helpful resources that might be understandable to other teachers and me.
You might also be interested in these related “The Best…” lists:
Here are my choices for The Best Resources For Understanding How To Interpret Education Research:
A primer on navigating education claims by Paul Thomas.
Matthew Di Carlo at the Shanker Blog has written quite a few good posts on the topic.
A Policymaker’s Primer on Education Research: How to Understand, Evaluate and Use It is from the Mid-continent Research for Education and Learning (McREL). Here’s a non-PDF version.
School Finance 101 often features great data analysis. Bruce Baker’s posts there tend to be a little more challenging for the layperson, but it’s still definitely a must-visit blog.
Here’s a related post:
What Counts as a Big Effect? (I) is by Aaron Pallas. I’m adding it to The Best Resources For Understanding How To Interpret Education Research. Thanks to Scott McLeod, who also wrote a related post, for the tip.
Why “Evidence-Based” Education Fails is by Paul Thomas.
How to Judge if Research is Trustworthy is by Audrey Watters.
The “Journal of Errology” has a very funny post titled What it means when it says …. Here’s a sample:
“It has long been known” means “I didn’t look up the original reference”
“It is believed that” means “I think”
“It is generally believed that” means “A couple of others think so, too”
Value-Added Versus Observations, Part One: Reliability is from The Shanker Blog.
Understand Uncertainty in Program Effects is a report by Sarah Sparks over at Education Week.
Limitations of Education Studies is by Walt Gardner at Education Week.
More Evidence of Statistical Dodginess in Psychology? is from The Wall Street Journal.
How To Tell Good Science From Bad is by Daniel Willingham.
When You Hear Claims That Policies Are Working, Read The Fine Print is from The Shanker Blog.
Beware Of “Breakthrough” Education Research is by Paul Bruno.
Why Nobody Wins In The Education “Research Wars” is from The Shanker Blog.
Why it’s caveat emptor when it comes to some educational research is by Tom Bennett.
Six Ways to Separate Lies From Statistics is from Bloomberg News.
Thinking (& Writing) About Education Research & Policy Implications is from Bruce Baker.
How to read and understand a scientific paper: a guide for non-scientists is from “Violent Metaphors.”
Word Attack: “Objective” is by Sabrina Joy Stevens.
How people argue with research they don’t like is a useful diagram from The Washington Post.
Twenty tips for interpreting scientific claims is from Nature.
5 key things to know about meta-analysis is from Scientific American.
Understanding Educational Research is by Walt Gardner at Ed Week.
Evaluation: A Revolt Against The “Randomistas”? is by Alexander Russo.
What Is A Standard Deviation? is from The Shanker Blog. I’m adding it to the same list.
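Several of the resources above report results in standard-deviation units, so a quick illustration may help. This is my own sketch with made-up scores, not an example from any of the linked pieces; it just shows how a standard deviation turns a raw score gain into an "effect size":

```python
import statistics

# A hypothetical set of class test scores (invented numbers, not real data).
scores = [61, 72, 75, 80, 83, 85, 90, 94]
mean = statistics.mean(scores)   # 80.0
sd = statistics.stdev(scores)    # sample standard deviation, roughly 10.6

# An intervention that raised the class average by 5 points would be an
# effect of about 5 / 10.6, or roughly 0.47 standard deviations. That is
# "moderate" by common rules of thumb, though context always matters.
effect_size = 5 / sd
```

The point is simply that the same 5-point gain is a huge effect when scores are tightly clustered and a small one when they are spread out, which is why researchers divide by the standard deviation.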
Here’s how much your high school grades predict your future salary is an article in The Washington Post about a recent study. It’s gotten quite a bit of media attention. How Well Do Teen Test Scores Predict Adult Income? is an article in the Pacific Standard that provides some cautions about reading too much into the study. It makes important points that are relevant to the interpretation of any kind of research.
How qualitative research contributes is by Daniel Willingham.
Why Statistically Significant Studies Aren’t Necessarily Significant is from Pacific Standard.
The Problem with Research Evidence in Education is from Hunting English.
The U.S. Department of Education has published a glossary of education research terms.
If the Research is Not Used, Does it Exist? is from The Teachers College Record.
How to Read Education Data Without Jumping to Conclusions is a good article in The Atlantic by Jessica Lahey & Tim Lahey.
A Draft Bill of Research Rights for Educators is by Daniel Willingham.
Which Education Research Is Worth the Hype? is from The Education Writers Association.
This Is Interesting & Depressing: Only .13% Of Education Research Experiments Are Replicated
Valerie Strauss at The Washington Post picked up my original post on the lack of replication in education research (This Is Interesting & Depressing: Only .13% Of Education Research Experiments Are Replicated) and wrote a much more complete piece on it. She titled it A shocking statistic about the quality of education research.
Usable Knowledge: Connecting Research To Practice is a new site from The Harvard School of Education that looks promising.
When researchers lie, here are the words they use is from The Boston Globe.
Education researchers don’t check for errors — dearth of replication studies is from The Hechinger Report.
How to Tell If You Should Trust Your Statistical Models is from The Harvard Business Review.
What You Need To Know About Misleading Education Graphs, In Two Graphs is from The Shanker Blog.
The one chart you need to understand any health study is from Vox. I think it has implications for ed research.
Small K-12 Interventions Can Be Powerful is from Ed Week.
Trust, But Verify is by David C. Berliner and Gene V Glass and provides a good analysis of how to interpret education research.
Frustrated with the pace of progress in education? Invest in better evidence is by Thomas Kane.
What Can Educators Learn From ‘Bunkum’ Research? is from Education Week.
Making Sense of Education Research is from The Education Writers Association.
The uses and abuses of evidence in education is not a research study, but a guide to evaluating research. It’s by Geoff Petty.
Education Studies Warrant Skepticism is by Walt Gardner.
Ten reasons for being skeptical about ‘ground-breaking’ educational research is from The Language Gym.
A Quick Guide to Spotting Graphics That Lie is from National Geographic (thanks to Bruce Baker for the tip).
A Trick For Higher SAT scores? Unfortunately no. is by Terry Burnham.
Be skeptical. “. . .research relevant to education policy is always fraught with . . .problems of generalizability” https://t.co/DBXBDZ16XH
— Regie Routman (@regieroutman) June 2, 2015
How Not to Be Misled by Data is from The Wall Street Journal.
The Politics of Education Research & “What Works” Randomised Controlled Trials http://t.co/lLCKypKzt9
— Carl Hendrick (@C_Hendrick) June 24, 2015
This is a good video (and here’s a nice written summary of it by Pedro De Bruyckere):
How I Studied the Teaching of History Then and Now is by Larry Cuban.
Unexpected Honey Study Shows Woes of Nutrition Research is from The New York Times and has obvious connections to ed research.
Nearly all of our medical research is wrong is from Quartz, and can be related to education research.
A Refresher on Statistical Significance is from The Harvard Business Review.
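Since “statistically significant” comes up in several of the pieces on this list, here is a toy illustration, with invented numbers, of what a p-value actually measures. It uses a simple permutation test rather than any particular study’s method:

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

# Invented scores for illustration only.
group_a = [78, 82, 85, 88, 90]  # hypothetical "new method" class
group_b = [70, 75, 77, 80, 84]  # hypothetical "old method" class
observed = sum(group_a) / len(group_a) - sum(group_b) / len(group_b)

# Permutation test: shuffle all the scores together many times and count
# how often random relabeling alone produces a gap at least as large as
# the observed one.
pooled = group_a + group_b
trials = 10_000
extreme = 0
for _ in range(trials):
    random.shuffle(pooled)
    diff = sum(pooled[:5]) / 5 - sum(pooled[5:]) / 5
    if diff >= observed:
        extreme += 1
p_value = extreme / trials

# A small p-value says the gap is unlikely to be pure chance. It does NOT
# say the gap is large, important, or caused by the method, which is the
# distinction the article above draws.
```

In this toy case the shuffled data rarely reproduces the observed 7.4-point gap, so the result would be called “significant,” but that label alone tells a reader nothing about whether a 7.4-point difference matters educationally.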
Five Simple Steps to Reading Policy Research is from The Great Lakes Center.
Digital Promise Puts Education Research All In One Place is from MindShift.
Anthony Bryk coins a new (at least, to me) term, “practice-based evidence,” in his piece, Accelerating How We Learn to Improve. Here’s how he describes it:
The choice of words practice-based evidence is deliberate. We aim to signal a key difference in the relationship between inquiry and improvement as compared to that typically assumed in the more commonly used expression evidence-based practice. Implicit in the latter is that evidence of efficacy exists somewhere outside of local practice and practitioners should simply implement these evidence-based practices. Improvement research, in contrast, is an ongoing, local learning activity.
This recognition of context was also raised in a different…context by FiveThirtyEight in an interesting article headlined Failure Is Moving Science Forward.
John Hattie’s Research Doesn’t Have to Be Complicated is by Peter DeWitt.
Daniel Willingham also tweeted on the same topic on October 7, 2016, and I’m adding those tweets to the same list.
Mathematica Policy Research has released a simple twelve-page guide titled Understanding Types of Evidence: A Guide for Educators.
It’s specifically designed to help educators analyze claims made by ed tech companies but, as the report itself notes, the guidance can be applied to any type of education research.
The Unit of Education is from Learning Spy.
Research “Proves” – Very Little appeared in Ed Week.
Additional suggestions are welcome.
If you found this post useful, you might want to consider subscribing to this blog for free.
You might also want to explore the 800 other “The Best…” lists I’ve compiled.