Larry Ferlazzo’s Websites of the Day…

…For Teaching ELL, ESL, & EFL

“let some of the players with lower batting averages go”


Yesterday, I wrote a post (see “The message is to fire people sooner rather than later”) commenting on the big (non-peer-reviewed) study featured in The New York Times about the long-term impact on students of having “high value added” teachers.

One of the researchers was interviewed on the PBS News Hour last night, and a comment seems to me to point out a huge blind spot in the study. He said:

I think — you know, let me make an analogy here. Suppose you are managing a baseball team, say, the Boston Red Sox, and you’re trying to do as well as you can. You have players with different batting averages. One approach you might take is to bring the hitting coach out and try to raise the batting averages of the players you have.

But I think it also makes a lot of sense — and this will make sense to sports fans — that, on occasion, you might decide to let some of the players with lower batting averages go, and try to get somebody else who might do better.

Of course, no one argues that teachers who don’t improve their craft should stay in the profession. The key question, though, is which indicators actually demonstrate a teacher’s effectiveness. In this passage, he compares the batting average of a baseball player to the test scores of a teacher’s students.

He doesn’t consider whether the tests are accurate gauges of what students learn, nor does he consider the countless other examples of “value” a teacher can provide to students — an appetite for being a lifelong learner, skills in enhancing self-control, the confidence to try new things.

Since he began the sports comparison, let’s run with it. A New York Times profile of Shane Battier, a basketball player with poor statistics whom every team nevertheless wants, included this passage:

Battier’s game is a weird combination of obvious weaknesses and nearly invisible strengths. When he is on the court, his teammates get better, often a lot better, and his opponents get worse — often a lot worse. He may not grab huge numbers of rebounds, but he has an uncanny ability to improve his teammates’ rebounding. He doesn’t shoot much, but when he does, he takes only the most efficient shots. He also has a knack for getting the ball to teammates who are in a position to do the same, and he commits few turnovers. On defense, although he routinely guards the N.B.A.’s most prolific scorers, he significantly reduces their shooting percentages. At the same time he somehow improves the defensive efficiency of his teammates — probably, Morey surmises, by helping them out in all sorts of subtle ways. “I call him Lego,” Morey says. “When he’s on the court, all the pieces start to fit together. And everything that leads to winning that you can get to through intellect instead of innate ability, Shane excels in. I’ll bet he’s in the hundredth percentile of every category.”

There are other things Morey has noticed too, but declines to discuss as there is right now in pro basketball real value to new information, and the Rockets feel they have some. What he will say, however, is that the big challenge on any basketball court is to measure the right things. The five players on any basketball team are far more than the sum of their parts; the Rockets devote a lot of energy to untangling subtle interactions among the team’s elements. To get at this they need something that basketball hasn’t historically supplied: meaningful statistics. For most of its history basketball has measured not so much what is important as what is easy to measure — points, rebounds, assists, steals, blocked shots — and these measurements have warped perceptions of the game. (“Someone created the box score,” Morey says, “and he should be shot.”) How many points a player scores, for example, is no true indication of how much he has helped his team. Another example: if you want to know a player’s value as a rebounder, you need to know not whether he got a rebound but the likelihood of the team getting the rebound when a missed shot enters that player’s zone.

This researcher is not alone in his mistaken belief in the primacy of test scores. It’s too bad, though, that he chose it as the lens through which to interpret his results.


Author: Larry Ferlazzo

I'm a high school teacher in Sacramento, CA.


  1. I love this piece! As a Duke alum, I cheerfully reference SB and his value to his teams all the time when I talk with students and parents about how everyone matters–and that being recognized for your value by your community matters more than anything else. You articulate so well something I’ve been thinking about in more vague terms with teachers and testing insanity. I’m sending everyone I know to read your post!

  2. Pingback: Sunday lists. « Fred Klonsky

  3. You missed the point of the Battier article. The point of the Battier article was that the old statistics in basketball (points, rebounds, blocked shots, etc.) were not good metrics to determine if a player made his team win. There is a new generation of statistics more like the Jamesian, moneyball, sabermetrics (whatever it is called nowadays) that help to determine which basketball player helps the team to win.

    If you don’t like the value added metric, which do you propose?

  4. Pingback: “Let some of the players with lower batting averages go” | ruralteacher

  5. My god. This is turning sabermetrics on its head. Judge a player on batting average only? What about On-Base Percentage, which is a more valuable indicator of success? Not to mention slugging percentage. So, no, you can’t evaluate a player or a teacher using one method.
