I’ve written often about the importance of being “data-informed” and not “data-driven” (see The Best Resources Showing Why We Need To Be “Data-Informed” & Not “Data-Driven”). I learned about the difference from our school’s exceptional principal, Ted Appel. And I’m still learning…

Today, the California Department of Education released statewide passing rates for students taking the California High School Exit Exam.

Ted had shared his thoughts about this topic after our school received our results, and I wanted to share them with readers:

Does Improved Outcome Data Reflect School Improvement?

Luther Burbank High School received results from the 10th grade administration of the California High School Exit Exam (CAHSEE). As is our custom, we compared the results to our two previous years' results. In our analysis we look at the percentage of students who passed in English and math and the percentage who met proficiency (a higher standard). We also look at the median scores and quartile scores. Since our school is organized into seven small learning communities (SLCs), we look at this same data broken down for each SLC. Our intent is to identify patterns that we can trace back to particular practices.

Our three-year comparison of 2011 scores to 2009 scores reflects a school-wide passing rate increase in English Language Arts from 62% to 72%, with proficiency rising from 27% to 39%. In math, the passing rate went from 76% to 82% and proficiency from 47% to 55%.
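For readers who want to see the mechanics, here is a minimal sketch, in Python with pandas, of the kind of comparison described above. The student records, SLC labels, and score cutoffs are all hypothetical placeholders; only the quantities tracked (pass rate, proficiency rate, median, and quartiles, broken down by year and SLC) come from the description above.

```python
import pandas as pd

# Hypothetical per-student records; a real analysis would load actual results.
df = pd.DataFrame({
    "year": [2009, 2009, 2009, 2011, 2011, 2011],
    "slc": ["A", "B", "A", "A", "B", "B"],        # small learning community
    "ela_score": [340, 365, 372, 351, 388, 360],  # illustrative scale scores
})

PASS_CUTOFF = 350        # illustrative cutoff, not the official one
PROFICIENT_CUTOFF = 380  # illustrative cutoff, not the official one

summary = (
    df.assign(passed=df.ela_score >= PASS_CUTOFF,
              proficient=df.ela_score >= PROFICIENT_CUTOFF)
      .groupby(["year", "slc"])
      .agg(pass_rate=("passed", "mean"),
           proficiency_rate=("proficient", "mean"),
           median_score=("ela_score", "median"),
           q1=("ela_score", lambda s: s.quantile(0.25)),
           q3=("ela_score", lambda s: s.quantile(0.75)))
)
print(summary)  # compare 2009 vs. 2011 rows, school-wide and per SLC
```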

Most people observing such scores would conclude that the school had improved. But can such a conclusion reasonably be drawn? Certainly other factors, besides school improvement, can have a significant impact on scores, making it impossible to infer a causal relationship between rising scores and school improvement. The following are just a few of those factors:

• Did the number and level of English learners change substantially?
• What percentage of the students actually attended the school for the full two years prior to the test?
• Did other significant demographic characteristics change?
• Did the school start, expand, or scale back any enrollment programs based on academic criteria?

Factors such as those mentioned above are difficult to track consistently. I have never seen them addressed in any discussion of schools that have made great jumps in outcome test scores. But to claim school success or failure without analyzing factors unrelated to program or practice is without merit. It is like conducting a scientific experiment without isolating a single variable and then arbitrarily drawing a conclusion.
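To make that point concrete, here is a toy sketch with entirely hypothetical numbers, showing how a school-wide pass rate can climb when nothing about instruction changes and only the enrollment mix shifts:

```python
# Two subgroups whose pass rates are identical in both years;
# only their share of total enrollment changes (hypothetical numbers).

def overall_rate(group_rates, group_shares):
    """Weighted average pass rate across subgroups."""
    return sum(r * s for r, s in zip(group_rates, group_shares))

# e.g., continuously enrolled students vs. newly arrived students
rates = [0.80, 0.40]

year1 = overall_rate(rates, [0.50, 0.50])  # 0.60 -> 60% pass
year2 = overall_rate(rates, [0.80, 0.20])  # 0.72 -> 72% pass

print(f"Year 1: {year1:.0%}, Year 2: {year2:.0%}")
# The aggregate rises 12 points even though no subgroup improved.
```

This is exactly why the questions above about enrollment and demographics matter: the aggregate can move while every subgroup stands still.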

We will continue to look at outcome data and analyze it for trends and ideas about what is working and what is not. But we will not draw quick conclusions about success or failure based on these numbers. They will serve as points of information on the spectrum of data that we review to help us mold and refine our programs, interactions, and instruction.

I wonder how many “school reformers” are as reflective and careful with data as Ted is?