This week’s “Question Of The Week” at my Education Week Teacher blog asks how we can tell the difference between good and bad education research. As a supplement to next week’s response on that issue, I wanted to bring together some helpful resources that are understandable to teachers like me.
You might also be interested in my related “The Best…” lists.
Here are my choices for The Best Resources For Understanding How To Interpret Education Research:
A primer on navigating education claims is by Paul Thomas.
Matthew Di Carlo at the Shanker Blog has written quite a few good posts on the topic.
A Policymaker’s Primer on Education Research: How to Understand, Evaluate and Use It is from the Mid-continent Research for Education and Learning (McREL). Here’s a non-PDF version.
School Finance 101 often does great data analysis. Bruce Baker’s posts there tend to be a little more challenging for the layperson, but it’s still definitely a must-visit blog.
Here’s a related post:
What Counts as a Big Effect? (I) is by Aaron Pallas. I’m adding it to The Best Resources For Understanding How To Interpret Education Research. Thanks to Scott McLeod for the tip; he also wrote a related post.
Why “Evidence-Based” Education Fails is by Paul Thomas.
How to Judge if Research is Trustworthy is by Audrey Watters.
The “Journal of Errology” has a very funny post titled What it means when it says …. Here’s a sample:
“It has long been known” means “I didn’t look up the original reference”
“It is believed that” means “I think”
“It is generally believed that” means “A couple of others think so, too”
Value-Added Versus Observations, Part One: Reliability is from The Shanker Blog.
Understand Uncertainty in Program Effects is a report by Sarah Sparks over at Education Week.
Limitations of Education Studies is by Walt Gardner at Education Week.
More Evidence of Statistical Dodginess in Psychology? is from The Wall Street Journal.
How To Tell Good Science From Bad is by Daniel Willingham.
When You Hear Claims That Policies Are Working, Read The Fine Print is from The Shanker Blog.
Beware Of “Breakthrough” Education Research is by Paul Bruno.
Why Nobody Wins In The Education “Research Wars” is from The Shanker Blog.
Why it’s caveat emptor when it comes to some educational research is by Tom Bennett.
Six Ways to Separate Lies From Statistics is from Bloomberg News.
Thinking (& Writing) About Education Research & Policy Implications is from Bruce Baker.
How to read and understand a scientific paper: a guide for non-scientists is from “Violent Metaphors.”
Word Attack: “Objective” is by Sabrina Joy Stevens.
How people argue with research they don’t like is a useful diagram from The Washington Post.
Twenty tips for interpreting scientific claims is from Nature.
5 key things to know about meta-analysis is from Scientific American.
Understanding Educational Research is by Walt Gardner at Ed Week.
Evaluation: A Revolt Against The “Randomistas”? is by Alexander Russo.
What Is A Standard Deviation? is from The Shanker Blog. I’m adding it to the same list.
Here’s how much your high school grades predict your future salary is an article in The Washington Post about a recent study. It’s gotten quite a bit of media attention. How Well Do Teen Test Scores Predict Adult Income? is an article in the Pacific Standard that provides some cautions about reading too much into the study. It makes important points that are relevant to the interpretation of any kind of research.
How qualitative research contributes is by Daniel Willingham.
Why Statistically Significant Studies Aren’t Necessarily Significant is from Pacific Standard.
The Problem with Research Evidence in Education is from Hunting English.
The U.S. Department of Education has published a glossary of education research terms.
If the Research is Not Used, Does it Exist? is from The Teachers College Record.
How to Read Education Data Without Jumping to Conclusions is a good article in The Atlantic by Jessica Lahey & Tim Lahey.
A Draft Bill of Research Rights for Educators is by Daniel Willingham.
Which Education Research Is Worth the Hype? is from The Education Writers Association.
This Is Interesting & Depressing: Only 0.13% Of Education Research Experiments Are Replicated
Valerie Strauss at The Washington Post picked up my original post on the lack of replication in education research (This Is Interesting & Depressing: Only 0.13% Of Education Research Experiments Are Replicated) and wrote a much more complete piece on it. She titled it A shocking statistic about the quality of education research.
Usable Knowledge: Connecting Research To Practice is a new site from The Harvard Graduate School of Education that looks promising.
When researchers lie, here are the words they use is from The Boston Globe.
Education researchers don’t check for errors — dearth of replication studies is from The Hechinger Report.
How to Tell If You Should Trust Your Statistical Models is from The Harvard Business Review.
What You Need To Know About Misleading Education Graphs, In Two Graphs is from The Shanker Blog.
The one chart you need to understand any health study is from Vox. I think it has implications for ed research.
Small K-12 Interventions Can Be Powerful is from Ed Week.
Trust, But Verify is by David C. Berliner and Gene V Glass and provides a good analysis of how to interpret education research.
Frustrated with the pace of progress in education? Invest in better evidence is by Thomas Kane.
What Can Educators Learn From ‘Bunkum’ Research? is from Education Week.
Making Sense of Education Research is from The Education Writers Association.
The uses and abuses of evidence in education is not a research study, but a guide to evaluating research. It’s by Geoff Petty.
Education Studies Warrant Skepticism is by Walt Gardner.
Ten reasons for being skeptical about ‘ground-breaking’ educational research is from The Language Gym.
A Quick Guide to Spotting Graphics That Lie is from National Geographic (thanks to Bruce Baker for the tip).
A Trick For Higher SAT scores? Unfortunately no. is by Terry Burnham.
Be skeptical. “. . .research relevant to education policy is always fraught with . . .problems of generalizability” https://t.co/DBXBDZ16XH
— Regie Routman (@regieroutman) June 2, 2015
How Not to Be Misled by Data is from The Wall Street Journal.
The Politics of Education Research & “What Works” Randomised Controlled Trials http://t.co/lLCKypKzt9
— Carl Hendrick (@C_Hendrick) June 24, 2015
This is a good video (and here’s a nice written summary of it by Pedro De Bruyckere).
How I Studied the Teaching of History Then and Now is by Larry Cuban.
Unexpected Honey Study Shows Woes of Nutrition Research is from The New York Times and has obvious connections to ed research.
Nearly all of our medical research is wrong is from Quartz, and can be related to education research.
A Refresher on Statistical Significance is from The Harvard Business Review.
Five Simple Steps to Reading Policy Research is from The Great Lakes Center.
Digital Promise Puts Education Research All In One Place is from MindShift.
Anthony Bryk coins a new (at least to me) term, “practice-based evidence,” in his piece, Accelerating How We Learn to Improve. Here’s how he describes it:
The choice of words practice-based evidence is deliberate. We aim to signal a key difference in the relationship between inquiry and improvement as compared to that typically assumed in the more commonly used expression evidence-based practice. Implicit in the latter is that evidence of efficacy exists somewhere outside of local practice and practitioners should simply implement these evidence-based practices. Improvement research, in contrast, is an ongoing, local learning activity.
This recognition of context was also raised in a different…context by FiveThirtyEight in an interesting article headlined Failure Is Moving Science Forward.
John Hattie’s Research Doesn’t Have to Be Complicated is by Peter DeWitt.
Daniel Willingham also posted a couple of tweets on the same topic (October 7, 2016), and I’m adding them to the same list.
Mathematica Policy Research has released a simple twelve-page guide titled Understanding Types of Evidence: A Guide for Educators.
It’s specifically designed to help educators analyze claims made by ed tech companies but, as the report itself says, the guidance can be applied to any type of education research.
The Unit of Education is from Learning Spy.
Research “Proves” – Very Little appeared in Ed Week.
The Cookie Crumbles: A Retracted Study Points to a Larger Truth is a NY Times article that has implications for education research.
…But It Was The Very Best Butter! How Tests Can Be Reliable, Valid, and Worthless is by Robert Slavin.
Three Questions to Guide Your Evaluation of Educational Research is from Ed Week.
What We Mean When We Say Evidence-Based Medicine is from The NY Times, and has some relevance to education research.
Very brief introduction to types of research in education – what would you change? pic.twitter.com/KyFrRqX0sO
— Harry Fletcher-Wood (@HFletcherWood) January 26, 2017
The seven deadly sins of statistical misinterpretation, and how to avoid them is from The Conversation.
After I saw a tweet from Dan Meyer (@ddmeyer, May 4, 2017) on this topic, I asked for the source and received these replies:
See 2017 Wisconsin. https://t.co/575PeLsDvs
— Dan Meyer (@ddmeyer) May 4, 2017
Well, chapter 3 of my “Leadership for teacher learning” lays out the argument in detail…
— Dylan Wiliam (@dylanwiliam) May 4, 2017
Just to be clear I said that meta-analysis is inappropriate in education unless you have very similar interventions, which you usually don’t
— Dylan Wiliam (@dylanwiliam) May 9, 2017
When Real Life Exceeds Satire: Comments on ShankerBlog’s April Fools Post is from School Finance 101.
yeah I’m not a big fan of the weeks of learning conversion. My very, very rough, context-dependent rule of thumb for ed research effect sizes is
.01-.03 SDs = tiny
.04-.09 = small to modest
.1-.19 = moderate
.2 or greater = large
— Matt Barnum (@matt_barnum) January 3, 2018
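To make those rough cutoffs concrete, here is a minimal Python sketch; the function names and the example numbers are purely my own, illustrative inventions, and Barnum’s own caveat (very rough, context-dependent) applies. It computes a standardized effect size (Cohen’s d, the difference in group means divided by a pooled standard deviation) and labels it using the thresholds from the tweet above:

```python
import math

def cohens_d(mean_t, mean_c, sd_t, sd_c, n_t, n_c):
    """Standardized mean difference: (treatment mean - control mean) / pooled SD."""
    pooled_sd = math.sqrt(
        ((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2)
    )
    return (mean_t - mean_c) / pooled_sd

def rough_label(d):
    """Label an effect size using Barnum's rough, context-dependent cutoffs."""
    d = abs(d)
    if d < 0.01:
        return "negligible"
    if d < 0.04:      # .01-.03 SDs
        return "tiny"
    if d < 0.10:      # .04-.09
        return "small to modest"
    if d < 0.20:      # .1-.19
        return "moderate"
    return "large"    # .2 or greater

# Hypothetical example: a 3-point gain on a test where scores have an SD of ~20.
d = cohens_d(mean_t=503, mean_c=500, sd_t=20, sd_c=20, n_t=1000, n_c=1000)
print(f"effect size = {d:.2f} SDs -> {rough_label(d)}")
# effect size = 0.15 SDs -> moderate
```

In that made-up example, a 3-point gain against a 20-point standard deviation works out to 0.15 SDs, which lands in the “moderate” band even though 3 points sounds small. That’s exactly why standardizing effects matters before judging whether a result is big.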
Empirical Benchmarks for Interpreting Effect Sizes in Research is a useful report.
The statements of science are not of what is true and what is not true, but statements of what is known with different degrees of certainty. pic.twitter.com/D5VeDaMzwh
— Richard Feynman (@ProfFeynman) January 13, 2018
Here’s a good review of the common “pitfalls” in education research.
Effect Sizes: How Big is Big? is by Robert Slavin.
We Can’t Graph Our Way Out Of The Research On Education Spending is from the Shanker Blog.
Effect Sizes and the 10-Foot Man is by Robert Slavin.
The magic of meta-analysis is from Evidence For Learning.
The Effect Size Effect is from ROBIN_MACP.
Meta-analyses were supposed to end scientific debates. Often, they only cause more controversy is from Science Magazine.
What should we do about meta-analysis and effect size? is from the CEM Blog.
Congratulations. Your Study Went Nowhere. is from The NY Times.
The Problem with, “Show Me the Research” Thinking is by Rick Wormeli.
The Whys and Hows of Research and the Teaching of Reading is by Timothy Shanahan.
The practical meaning of studies that find ‘no effect’ is from The Mindset Scholars Network.
Effect Sizes, Robust or Bogus? Reflections from my discussions with Hattie and Simpson is from Ollie Lovell.
Why you must beware of enormous effect sizes appeared in Schools Week.
How to interpret effect sizes in education is from Schools Week.
Interpreting Effect Sizes In Education Research is from The Shanker Institute.
The Fabulous 20%: Programs Proven Effective in Rigorous Research is by Robert Slavin.
I have a few quibbles, including this description of effect sizes (in my book, .3 is usually quite large). For more, see this great resource from @MatthewAKraft https://t.co/MvGI69HrKm pic.twitter.com/pEdWtrBpAs
— Matt Barnum (@matt_barnum) April 27, 2019
Ed researchers might want to keep this in mind, too https://t.co/b5cDcRuAz5
— Larry Ferlazzo (@Larryferlazzo) May 10, 2019
Hands down my favorite line of his speech. https://t.co/ldCczr8Ypn
— Erica L. Green (@EricaLG) May 10, 2019
This analysis from @JohnFPane offers a real caution for journalists, researchers, and policymakers who describe effects sizes in terms of “years/days of learning” https://t.co/EKcOSo833K pic.twitter.com/cyzB4hqzH9
— Matt Barnum (@matt_barnum) June 12, 2019
Different researchers examining the same data to test the same hypothesis came to different conclusions. https://t.co/E1VDLT86Hi
— C. Kirabo Jackson (@KiraboJackson) June 7, 2019
Evidence For Revolution is from Robert Slavin.
#AcademicTwitter, we want our research to matter, to get used. The reality is that it often does not & this is largely our own fault. Here is sage wisdom from @RuthLTurley about why research does not get used & what we can do about it. #RPP @RPP_Network @FarleyRipple @wtgrantfdn pic.twitter.com/tVaWwkCOrB
— Matthew A. Kraft (@MatthewAKraft) July 16, 2019
Additional suggestions are welcome.
If you found this post useful, you might want to consider subscribing to this blog for free.
You might also want to explore the 800 other “The Best…” lists I’ve compiled.