“I’m a data scientist who is skeptical about data” is an interesting article at Quartz by Andrea Jones-Rooy.
She goes on to say:
When you encounter a study or dataset, I urge you to ask: What might be missing from this picture? What’s another way to consider what happened? And what does this particular measure rule in, rule out, or incentivize?
I’m adding it to The Best Resources Showing Why We Need To Be “Data-Informed” & Not “Data-Driven”.
Thanks Larry, the latest edition of “Educational Research and Evaluation” is devoted entirely to the “evidence-based” question, with particular emphasis on the meta-meta-analysis technique used by Hattie, Marzano, and the Education Endowment Foundation (EEF).
Most authors in this edition call the research using this method into question. Two of the authors, Wrigley & McCusker, detail a checklist:
Are the studies included relevant?
Are the effect sizes the results of interventions, or merely associations?
Are the effect sizes comparing the same things (e.g., comparison with an alternative intervention or “business as usual”, rather than just before and after)?
Are the effect sizes at the same level (e.g., individual, group)?
Are the interventions being compared of similar duration?
Are the outcome measures all measuring the same thing?
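The effect-size questions above hinge on what is actually being compared. As a minimal sketch (using made-up test scores, not data from any of the studies discussed), Cohen's d, the standardized mean difference that meta-analyses typically aggregate, can be computed like this:

```python
# Sketch of Cohen's d (standardized mean difference) with hypothetical data.
# The same formula gives very different numbers depending on whether you
# compare intervention vs. control or simply before vs. after, which is
# exactly the ambiguity the checklist warns about.
import statistics

def cohens_d(group_a, group_b):
    """Standardized mean difference between two independent samples."""
    mean_a, mean_b = statistics.mean(group_a), statistics.mean(group_b)
    var_a, var_b = statistics.variance(group_a), statistics.variance(group_b)
    n_a, n_b = len(group_a), len(group_b)
    # Pooled standard deviation across the two samples
    pooled_sd = (((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2)) ** 0.5
    return (mean_a - mean_b) / pooled_sd

# Hypothetical scores (illustration only)
control      = [60, 62, 58, 65, 61]
intervention = [66, 70, 64, 69, 68]

print(round(cohens_d(intervention, control), 2))  # → 2.48
```

Note that d says nothing about whether the comparison group received an alternative intervention or nothing at all, which is why two studies can report the "same" effect size for quite different quantities.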
I’ve started using the first point – are the studies relevant?
In looking at Hattie and the EEF, I’m amazed at how many studies they include that are NOT relevant. For example, I’ve looked at the studies Hattie used for feedback here – https://visablelearning.blogspot.com/p/feedback.html