In the next week or two, The Washington Post will be publishing a piece I’ve written about some recent examples of schools paying students cash for attendance and for performing academic work.

While I was writing it, I revisited a well-known study by Roland Fryer that I’ve previously posted about (see The Problem With “Bribing Students” and More On The Problem With “Bribing Students”). One finding of that study, often cited by supporters of this “cash incentive” idea, was that paying second graders in Dallas resulted in “significant” gains on standardized reading comprehension tests, and that a substantial portion of that gain remained a year later.

Something about that finding always sounded fishy to me, but I just didn’t have it in me to plow through a nearly 200-page scholarly research paper. It’s also questionable whether I would have understood what I was reading.

Fortunately, though, Dr. Stephen Krashen, the internationally known language and literacy scholar, was interested in the question and willing to analyze the research. Here is what he discovered in reviewing the section on the Dallas “success”:

Comments on Fryer, R. 2011. Financial Incentives and Student Achievement: Evidence from Randomized Trials. Quarterly Journal of Economics 126(4): 1755-1799.

S. Krashen, March 1, 2012

It is not correct to assume that this study demonstrated that incentives work and that the effect is lasting. Fryer (2011) paid second graders in Dallas $2 for each book they read, provided they then passed the Accelerated Reader (AR) test on that book. Children were generally allowed to take each test only once and had to score 80% correct or better to get credit. The duration of the study was one academic year. Students in the incentive program were compared to controls who were not in the program.

Incentives produced higher scores on the Iowa test for only one component, reading comprehension. Increases in vocabulary and language were not significant. When the students were tested one year later, the effect on reading comprehension was half the size of the original effect and no longer significant.

This is hardly an overwhelming victory for incentives.

There are three major problems with this study:

1. The students were second graders. Second graders are not always independent readers. The easiest Goosebumps books, for example, are at the third-grade level.

2. They didn’t read very much. The average student earned $13.81; at $2 a book, this means that the students who got incentives read and passed AR tests on fewer than seven books during the entire year. And these are books for second graders, so none of them were massive tomes. Is it possible that the comparisons read even less? (See point 3 below.)

3. MOST SERIOUS: The incentive group did better than the comparisons on one subtest, but we must ask, “compared to what?” What did the comparisons do? The real question is whether an AR program with financial rewards is better than a literature-based, print-rich program without incentives. Would children have done as well or better if they had just read the books without taking tests and getting paid?

This is the major flaw in all AR research, as I have argued in my reviews (see citations below).

AR has four components: (1) access to books, (2) time to read the books, (3) tests on the books read, and (4) rewards. The complete program is consistently compared to “traditional” instruction and is often (but not always) better. It is no surprise to see a program with all four components do better than one with none of them, but is this just because of the access to books and the time dedicated to reading? Did the tests and prizes add anything?

There has been no attempt to see whether components (3) and (4) add anything, that is, no attempt to compare (1, 2, 3, 4) with just (1, 2). There is overwhelming evidence that the combination of (1) and (2) is in fact enough to produce excellent results, superior to traditional programs (Krashen, 2004b), but the AR people have shown no interest in testing this simpler hypothesis.

Summary: Five of the six results (three subtests, each measured at the end of the program and again one year later) were statistically insignificant. Only one was significant, and that one significant result could have been due to more reading, not to the tests and financial rewards.

Krashen, S. 2002. Accelerated Reader: Does it work? If so, why? School Libraries in Canada 22(2): 24-26, 44.
Krashen, S. 2003. The (lack of) experimental evidence supporting the use of Accelerated Reader. Journal of Children’s Literature 29(2): 9, 16-30.
Krashen, S. 2004a. A comment on Accelerated Reader: The pot calls the kettle black. Journal of Adolescent and Adult Literacy 47(6): 444-445.
Krashen, S. 2004b. The Power of Reading. Portsmouth, NH: Heinemann, and Westport, CT: Libraries Unlimited.
Krashen, S. 2005. Accelerated Reader: Evidence still lacking. Knowledge Quest 33(3): 48-49.
Krashen, S. 2007. Accelerated Reader: Once again, evidence still lacking. Knowledge Quest 36(1): 11-17.

Thanks to Dr. Krashen for identifying the flaws in this report.