This is a quote from a very interesting article that appears in The New York Times this morning headlined Armed With Data, Fighting More Than Crime. When I first started reading it, I thought it was just going to be another paean to the glories of data. And, in many ways, it is. But it’s more than that — it raises the issue of measuring the right things, and looking at the “big picture.”
Here’s an excerpt:
Ten years later, the program has evolved, said Chad Kenney, a CitiStat analyst. “It used to be a very agency-specific conversation — about getting agencies used to data, collecting data and making decisions based on data. As that became ingrained we realized most things don’t just touch one agency.” Now there is CleanStat, which looks at not just whether the Solid Waste department is picking up trash on time, but whether Transportation maintains street medians, whether the Parks department keeps parks tidy and whether Housing enforces relevant building codes.
“It’s a learning process,” said Kenney. “What we were measuring in 2002 is very different than what we look at now — we learn that actually, this metric isn’t the best metric for what we’re trying to accomplish.”
Applying this kind of perspective to student, teacher, and school assessment would mean looking at many measures rather than focusing solely on test scores (see The Best Places To Learn What Impact A Teacher & Outside Factors Have On Student Achievement). And no matter how many times I hear the phrase “multiple measures,” the primary focus is still on test scores.
Another article in The Times last year by Robert Crease spoke to this point as well (see “Ontic” Versus “Ontological” Measurements). He viewed the idea of “measurement” in historical and philosophical terms and described two different kinds of measurement. One is “ontic,” which identifies “how big or small a thing is using a scale, beginning point and unit. Something is x feet long, weighs y pounds or takes z seconds.”
The other is “ontological.” He defined it as involving “less an act than an experience: we sense that things don’t ‘measure up’ to what they could be.” The article shared a number of examples, and also cautioned against the danger of turning “ontic” measurements into “ontological” ones, citing the practice of measuring teaching ability primarily through student test scores. Crease suggested that we:
… ask ourselves what is missing from our measurements.
I’ve written before about the dangers of data (see The Best Resources Showing Why We Need To Be “Data-Informed” & Not “Data-Driven”) and other options for teacher and student assessment (see The Best Resources For Learning About Effective Student & Teacher Assessments and The Best Articles Describing Alternatives To High-Stakes Testing).
What are your ideas on how we can measure the “right things?”