Lara Hoekstra is a very talented colleague whose work has appeared frequently in this blog and in my books. Her previous guest post, “Instead of seeing students as Far Below Basic or Advanced, we see them as learners,” discussed the annual writing assessment process we use at our school and is also on The Best Posts On Writing Instruction list.

Here’s a new post on our school’s writing assessment process that she’s written:

This is my eleventh year at Luther Burbank High School.  For most of this time we have been administering a school-wide writing assessment in the fall and spring.  We developed this writing assessment process in conjunction with the Area 3 Writing Project, using the University of California’s writing assessment and an improvement rubric developed by the California Writing Project.  Each time we do it, I am amazed by my colleagues and where it leads our thinking, our conversations, and our teaching.  But, as many of us have been together for ten or more years, we had fallen into a routine with the grading process and had begun to stagnate as a department in terms of the writing assessment.

This year, under our new department chair, Antoine Germany, our norming/scoring process was different from before.  The focus was not on scoring all essays within the given time frame, but on setting the tone for the year and figuring out how we were going to use the information we gathered.

So often in the past we would get sidetracked by just trying to score all the essays, and only after scoring what felt like millions of them would our conversations turn to what we were seeing and not seeing.  The assessment became a diagnostic tool, and we were left with little time to focus our talk on how we would take this information into our classrooms and our teaching.  (And in case anyone thinks I’m criticizing the former department chair, I am, as I was the former chair.)

This year I began to see our assessment differently because of the changes Antoine brought to the two-day process.  It occurred to me that our fall assessment really is about setting a tone of inquiry for our year; it is not a diagnostic tool. So often, when we used it as a diagnostic tool, we would slowly slip into judgments.  We used terms like “students don’t know how to” or “they aren’t able to”; although we have always used an improvement rubric and have had multiple discussions about avoiding deficit language, we would still go there in our reflections.

It makes sense; scoring assessments is difficult work, and by the end everyone is mentally exhausted. So often in education, we get tunnel vision when looking at tools, processes, and strategies.  We want clear answers or trajectories.  Assessments become formative or summative, when in reality many can be used as both, but that can be a difficult conversation to have because it is complex and messy.

Because Antoine kept talking about how we needed to focus on figuring out how this would inform our teaching, I began thinking about it differently.  Our teaching becomes much richer when, after norming and scoring, we look at the results and begin to formulate questions.  Instead of thinking, “Our students aren’t doing x,” or, “Our 9th graders are doing y,” we develop questions around what we are noticing.  My thinking went much deeper when I looked at papers and stated observations as questions.  My questions ranged from “Why am I seeing x?” to “How could I teach y?”  For me, just stating it differently allowed me to think about our students’ writing, and my teaching, in a deeper way.  I left those two days with a sense of excitement about the year and a feeling of wonder about what will happen when our spring assessment rolls around.