I’m a big proponent of Carol Dweck’s research on a growth mindset (see The Best Resources On Helping Our Students Develop A “Growth Mindset”). I use it with my students and, in fact, it’s a concept we push heavily school-wide in our Social Emotional Learning initiative. I’ve seen a number of students positively affected by it, and it’s provided me with a positive tool to improve my classroom’s environment.
So I was very pleased to see a recent study by Dweck and her colleagues finding that teaching students about a growth mindset can be very effective on a larger scale.
However, almost simultaneous with the publication of that study, a detailed critique of it was also published, basically claiming that the data did not support the researchers’ conclusions.
Despite my continuing efforts to become more sophisticated in my understanding of the research behind these kinds of studies (see The Best Resources For Understanding How To Interpret Education Research), I don’t really understand the data and methodology of the study nor of the critique.
I had hoped that some of the 275 comments following the critique might provide me with some clarity, but I was amazed at how few of the comments actually related to the study itself. Most commented on topics as wide-ranging as global warming, El Niño, and speaker fees for academics. Reading those 275 comments is time I’ll never get back 🙂 .
I did, however, find three that seemed to provide some value, but I couldn’t understand them either.
I’m hoping that readers with far more knowledge of research data analysis might be able to enlighten me about these dueling claims, and will also be requesting the assistance of people who I know have greater knowledge in this arena.
The more I learn, the more I discover I don’t know.
What is your take on this research?
Larry,
I am not a researcher (yet), just a PhD student who is statistically minded (I teach AP Stats and have taken additional research classes in grad school). I preface my comments with this so no one is misled into thinking I am an ‘expert.’
With that caveat made, I think the criticism is too harsh. The author says, “Among ordinary students, the effect on the growth mindset group was completely indistinguishable from zero, and in fact they did nonsignificantly worse than the control group.” But the graph attending that statement shows the RESIDUALS, i.e. (observed − predicted), not the values themselves.
On that graph, all of the values are above zero, and the confidence interval does not contain zero, so there is reason to say the results are significant. This same criticism comes up several times in the critic’s response, and each time he or she fails to acknowledge that the data presented are the residuals, not the actual values. Residuals can be very small, but positive residuals mean the participants are outperforming expectations.
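To make the residuals point concrete, here is a minimal sketch using made-up numbers (not data from the actual study): the residuals are small, but if their mean’s confidence interval sits entirely above zero, the group is outperforming the model’s predictions.

```python
import math
import statistics

# Hypothetical GPA figures, purely for illustration: "predicted" comes from a
# baseline model, "observed" is what students actually earned afterward.
predicted = [2.80, 3.10, 2.50, 3.40, 2.90, 3.20, 2.70, 3.00]
observed  = [2.95, 3.20, 2.65, 3.45, 3.05, 3.30, 2.85, 3.10]

# Residual = observed - predicted; positive means beating the prediction.
residuals = [o - p for o, p in zip(observed, predicted)]

n = len(residuals)
mean_r = statistics.mean(residuals)
se = statistics.stdev(residuals) / math.sqrt(n)

# Rough 95% confidence interval for the mean residual (normal approximation).
lo, hi = mean_r - 1.96 * se, mean_r + 1.96 * se

print(f"mean residual = {mean_r:.3f}, 95% CI = ({lo:.3f}, {hi:.3f})")
# Each residual is a small number, yet if the whole interval lies above zero,
# the group is significantly outperforming expectations on average.
```

The takeaway is that a plot of residuals hovering just above zero is not evidence of “no effect”; it is the position of the interval relative to zero that matters.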
There is reason to question the sampling method. They used a convenience sample at best, because they asked teachers on social media and their website to volunteer their learners to participate. A teacher or department that ignores those electronic channels would not have been aware of the study or interested in joining the cohort. The authors can and do argue that they still ended up with a representative sample of learners, but the lack of random sampling in their study does weaken the results.
I do agree with the author of the criticism about the post hoc testing, but since they found a significant result, I don’t think the authors were doing post hoc testing so much as ad hoc testing to see which combinations of factors were significant.
All in all, I don’t find the criticism devastating, but there is reason to question all research. I would like to see a larger effect size for the treatment groups. A larger effect size lowers the probability of a type II error and indicates a greater difference between the null and alternative hypotheses, which would suggest a stronger impact of the treatments.
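The link between effect size and type II error can be sketched numerically. This toy calculation (a normal-approximation power formula for a two-sample test, with made-up sample sizes, not figures from the study) shows that at a fixed sample size, a larger standardized effect size d means higher power and thus a smaller type II error probability:

```python
import math

def norm_cdf(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2)))

def approx_power(d, n_per_group, alpha=0.05):
    # Rough power of a two-sided two-sample z-test for standardized effect
    # size d (normal approximation; the smaller rejection tail is ignored).
    z_alpha = 1.96  # critical value for alpha = 0.05, two-sided
    noncentrality = d * math.sqrt(n_per_group / 2)
    return 1.0 - norm_cdf(z_alpha - noncentrality)

# Type II error = 1 - power: with 100 students per group, a larger effect
# size sharply reduces the chance of missing a real effect.
for d in (0.1, 0.3, 0.5, 0.8):
    power = approx_power(d, n_per_group=100)
    print(f"d = {d}: power ~ {power:.2f}, type II error ~ {1 - power:.2f}")
```

With these assumed numbers, a “small” effect (d = 0.1) leaves the study very likely to miss a real difference, while a “medium-to-large” effect (d = 0.5 or more) makes a miss unlikely — which is why a larger observed effect size would make the study’s conclusions more convincing.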
Just my two cents.
I just wanted to note that the study author responded to the detailed critique, and both are on Scott’s Blog:
http://slatestarcodex.com/2015/05/07/growth-mindset-4-growth-of-office/
I’m not going to try to summarize the statistical discussion there, but I will say that I find statistics to be a useful way to show conclusions to people who trust that you are applying the techniques correctly. On the other hand, it’s almost useless as a method of creating that trust – there are too many ways to screw around that aren’t obvious, and it’s hard for even those with good backgrounds in statistics and math to fully understand the techniques and drawbacks used in this type of study.