Larry Ferlazzo’s Websites of the Day…

…For Teaching ELL, ESL, & EFL

March 1, 2011
by Larry Ferlazzo
2 Comments

Interview Of The Month: Ted Appel, An Exceptional Principal

As regular readers know, each month I interview people in the education world about whom I want to learn more. You can read those past interviews here.

Ted Appel is the exceptional principal at the school where I teach, Luther Burbank High School in Sacramento. It’s the largest inner-city high school in the city, and over half of our students are English Language Learners. Ted has also been interviewed by Learning First, and he and I have co-authored an article titled The Positive Impact Of English Language Learners At An Urban School.

What led you to teaching in the classroom, and what prompted your decision to become a principal?

I had been working in outdoor programs for youth at risk for a few years, which was very rewarding and fun, but I felt like school had an overpowering impact on a child’s feelings about being a successful person. I also became interested in the experiential education movement and wondered how it could be applied to classroom learning.

I went into administration because I believed I had received some good training in strong instructional practices and I thought I could have a broader impact by training other teachers in some of those strategies. I eventually became a principal because I realized it was important to have influence over the whole culture of the school in order to really impact the practices in the classroom.

What are the three best things you think you’ve done since you’ve become Burbank’s principal, and what might be three mistakes?

I think the best thing I’ve helped to do at Luther Burbank is create an environment where teachers who are committed to making a difference in students’ lives have an opportunity to do that work. We’ve created structures, in which everyone has a part, that have resulted in an environment that is orderly, consistent, respectful and dynamic. As a result, we’ve also been able to attract the kind of idealistic, talented, innovative, committed people an urban school needs in order to make a real difference in kids’ lives.

The other thing I try to do is talk to a lot of people, a lot. The decision making/improvement process is ongoing. I put a lot of ideas out into discussion, hear a lot of feedback and alternative ideas. I think this dynamic leads to a positive professional culture and results in good decisions and creative experiments.

The first big mistake I made when I started was to allow students to use cell phones in the halls during lunch and passing periods. There was an incredible outbreak of organized fights, including people from off campus. The hall monitors came to speak with me after three weeks and said, “Change the policy or we quit.” What I learned wasn’t just about cell phone rules. I learned that if I think it may be a good idea to make some kind of change, I need to involve the people who have different perspectives and/or would be affected by the decision.

I understand that I make a lot of decisions every day and so I make a lot of mistakes, or don’t do things as well as I could or should. I approach the job like a constant job interview. You try to anticipate issues or questions and prepare with the best approach you know. You often need to think on your feet for what you perhaps did not expect. And you constantly analyze what you said or did and realize how you could have approached it better.

For principals who want to spend some reflective time on their own practice, what are some important questions you’d recommend they ask themselves?

I think principals need to consider who they talk to. Are they sharing ideas and listening to teachers and staff or just other administrators at the site and central office?

Are the structures, rules, and customs of the school currently necessary and relevant or do they exist for reasons that have disappeared?

Do you believe in the programs and practices of your school, or are you just managing and complying with rules and regulations that have been handed to you?

In looking at the beliefs of those who often self-describe as “school reformers,” what do you think might be helpful ideas and unhelpful ones, and why?

It seems that the basis of the current school reform movement is the belief that teachers and schools are not sufficiently motivated to get better. Thus, competition, punishment and rewards geared to outcome goals are their “innovations” for change and improvement. I believe this creates perverse incentives to manipulate outcomes rather than encourage know-how and motivate sound practice.

I also don’t think it is helpful to refuse to acknowledge that some students come to school with intellectual, social, and cultural advantages to be successful in school environments. Acknowledging this fact is not a surrender to poor results. It is merely recognizing what anyone working in a classroom sees every day. It also helps when trying to honestly analyze what is needed, in terms of different approaches and resources, to help students to be successful. We have no problem acknowledging this in art, music or athletics. Why is there such fear in acknowledging it in academics?

I think it can be valuable to give students nationally normed tests. But these tests should not be used to label schools as good or bad. They should be used as a means to evaluate practice and examine ways schools can get better at helping students improve in the skills being assessed.

Is there anything I haven’t asked you about that you’d like to share?

People want school improvement to come from a simple fix. With variables as complex as society itself, there will be no simple solutions for all schools and all kids. We need to approach improvement in education not as a fix but as an ongoing dynamic that is achieved through consistent commitment to a common ideal: all children, through education, are entitled to the widest possible array of intellectual, cultural, social, political and economic opportunity. This goal is certainly not easy, nor can we ever really know if it is fully realized. That understanding, that we will never have the absolute answer, should not be a source of frustration but a source of energy and pride.

Thanks, Ted!

May 23, 2014
by Larry Ferlazzo
2 Comments

This Is One Of The Best Pieces I’ve Read On Teacher Evaluation: “The Problem with Outcome-Oriented Evaluations”

Thanks to Jack Schneider, I learned about a post by Ben Spielberg titled The Problem with Outcome-Oriented Evaluations.

It’s a great piece on teacher evaluation, and reflects important points that are seldom raised in discussions on the topic. He described the value of evaluating inputs, as opposed to outputs. In other words, most teacher evaluation discussion is focused on measuring student outcomes. But, as Ben points out, we often have far less control over those outcomes than is believed.

Interestingly, Ted Appel, the principal at our school, and I have been working on an article about this very same point.

With research showing that teachers really have so little impact on student achievement (see The Best Places To Learn What Impact A Teacher & Outside Factors Have On Student Achievement), how accurate can test scores be in assessing a teacher’s effectiveness?

What Ben (and Ted and I) suggest instead is to identify a list of best teaching practices (while, as Ben mentions, incorporating enough space for individual teaching styles) and evaluate teachers on whether they are implementing them.

Here’s an excerpt from Ben’s post:

[Image excerpt from Ben’s post: “Smart decisions and…”]

I am regularly frustrated by seeing teacher evaluation rubrics, including the much-ballyhooed one from The New Teacher Project, that focus almost entirely on what students are doing and very little on the actions taken by teachers.

Here is a comment from Ted about Ben’s post on a related and critical point:

As the drumbeat for using outcome data as part of teacher evaluation becomes louder and more prevalent, the research on its irrelevance as a measure of teacher quality couldn’t be more clear. As the article by Ben Spielberg points out, the correlation between test scores and good teaching does not exist. What Ben does not point out is that even when school systems use test scores as “only a part” of a holistic evaluation, it infects the entire process, as it becomes the piece that is most easily and simplistically viewed by the public and media. The result is a perverse incentive to find the easiest route to better outcome scores, often at the expense of the students most in need of great teaching input.

Ted’s comment is particularly timely in light of our local union and District’s announcement today that we are beginning a process to develop a new teacher evaluation system.

Both Ted and I have previously written about the dangers of including any standardized test scores in a multiple-measure system. There are many, including the fact that it quickly becomes the “tail that wags the dog.”

You can see our thoughts about this at two previous posts:

The Problem With Including Standardized Test Results As Part Of “Multiple Measures” For Teacher Evaluation

How Our Principal Thinks Using Test Scores To Evaluate Teachers Will Hurt Students

What do you think?

I’m adding this post to The Best Resources For Learning About Effective Student & Teacher Assessments.

April 29, 2014
by Larry Ferlazzo
0 comments

Want To Know Why Our School’s Drop-Out Rate Plummeted By 50%?


Here’s an excerpt from a Sacramento Bee story this morning:

High school dropout rates fell significantly last year across the Sacramento region, according to figures released Monday by the California Department of Education…

In Sacramento City Unified, Rosemont, Hiram Johnson and Burbank high schools saw dropout rates fall around 50 percent last year.

Burbank is the school where I teach.

I asked Ted Appel, our principal, two questions:

Is our reduction real? To what do you attribute it?

His succinct response was:

    1. Don’t know
    2. Don’t know

I jokingly said that his answers were not clarifying. He answered:

We can agree it isn’t bad news and we can attribute it to anything we want and no one will question it. That’s the real value of outcome data with so many variables.

How much other school-related data do you think should receive the same response?

March 22, 2014
by Larry Ferlazzo
0 comments

‘Start By Matching Student Interests, Then Build From There’

‘Start By Matching Student Interests, Then Build From There’ is my latest post at Education Week Teacher.

Educators Diana Laufenberg, Jeff Charbonneau, Ted Appel and special guest John Hattie share their suggestions for the five best teaching practices that educators can implement.

Here are some excerpts:

[Image excerpts: “The more times students…”, “Students that know you…”, “By focusing on broader…”, “High-impact passionate…”]

March 20, 2014
by Larry Ferlazzo
0 comments

Different Teachers, Different Classrooms, “but the thinking & learning going on inside students’ heads is the same”


Ted Appel, the principal at our school, made that comment when we were discussing that we can’t “fire our way to the Top or test our way to the Top” but, instead, we need to focus on “practicing to the Top.” In other words, we need to emphasize helping teachers hone their craft.

At the same time, we discussed how there is not necessarily universal agreement on what those “best practices” should be, and that’s when Ted commented on two of us who have very different teaching styles.

Coincidentally, the “question-of-the-week” at my Education Week Teacher column is “What Are Five Best Practices Teachers Can Implement?”

Many well-known and respected practitioners are contributing guest responses to that question, and I hope readers will also contribute. It will be interesting to see if there are any common denominators.

Also coincidentally, The Wall Street Journal published an article titled Two Economists on School Reform: We Know (A Few) Things That Work. It’s about a new book, Restoring Opportunity, in which two economists examine three different schools to see how they achieve success. They claim to have identified some common practices, but I haven’t gotten a chance to read the book yet, so I can’t say for sure what I think of their findings (though I’m certainly skeptical of their assertion that Common Core is a key one, and I do have a decidedly skeptical view of economists and education).

Interestingly, they created an infographic summarizing their book, which is embedded below using Pinterest (which means it won’t show up in an RSS Reader).



Let me know if you think that, despite there being different communities, different students, and different teaching styles, there are some universals to good teaching practice…..

November 6, 2013
by Larry Ferlazzo
0 comments

Student-Created Prompts As A Differentiation Strategy


I’m very good at differentiating instruction to make lessons more accessible to students facing learning challenges.

Differentiating the other way, however, is another story. And one of my goals this year is to get better at providing a more intellectually stimulating environment for some of my students who want it and/or who I think need it.

As our principal, Ted Appel, succinctly put it, these kinds of strategies might fall into two broad “camps” — one that might entail different materials or even a different location and, the other, having students do something different with the same materials everyone else is using.

One way I’ve done the former in the past and during this year is with the formation of independent book discussion groups, which I describe (with supporting materials) here.

A new strategy I’m trying is expanding on an idea suggested by my talented colleague Jeff Johnson, who has his students develop prompts to which they would respond.

After asking which students might be interested in doing more intellectually challenging assignments that are tied to the goals they have made for themselves (which, in my ninth-grade classes, is often “become a better writer”), I asked them to make a list of things they were interested in.

Next, I had individual conversations with them during our silent reading time, pointing out that they had identified they wanted to become better writers. I reminded them that we talked a lot about how good readers often ask questions of their reading, and also reminded them about discussions we had about Bloom’s Taxonomy.

I then gave them a copy of a list of question-starters from Bloom’s (it’s the third page) and told them that there might be times when, while other students are doing one thing, I might ask them to create their own writing prompt using one of the higher-order question-starters. One example I used was that if we were reading about tornadoes, they could choose the question-starter “How can you improve_________?” and they might fill in the blank with “tornado shelters.” They would then write a one-paragraph response to that prompt using the “ABC” outline (Answer the question; Back it up with a Quotation; Make a Comment or Connection — you can read more about it at My Best Posts On Writing Instruction). It would have to be something in which they had genuine interest. I also told them there might be times I’d ask them to create a prompt from the list of things they listed as interests.

I’ve just tried it a little so far, and it’s gone well.

I’d love to hear other ideas from readers about realistic differentiation strategies you’ve used to help your students who desire/need more of an intellectual challenge….

November 2, 2013
by Larry Ferlazzo
0 comments

A Good Quote Describing Our School’s IB Program

Today, The Sacramento Bee ran a lengthy article headlined In bid to keep students, Sacramento districts launch IB programs.

Though our inner-city school’s program is over ten years old, it only got a tiny mention, but that mention definitely shows how we are different:

Elsewhere in Sacramento City Unified, Luther Burbank High School began offering IB courses to students in the graduating class of 2006. Burbank Principal Ted Appel said the program has increased the number of students attending college and staying enrolled. But he said the campus has a different mind-set than other IB diploma schools that aim to attract far-flung students.

“This is one of the real urban IB programs and, unlike many others, we’re not trying to draw kids from other districts,” Appel said. Instead, the large majority of IB students there live in the district.

December 30, 2012
by Larry Ferlazzo
0 comments

The New York Times Has Discovered The Perils Of Being Data-Driven — I Just Wish Arne Duncan Would, Too

I have written regularly about the dangers of being “data-driven” instead of being “data-informed.” In fact, I have a lengthy, and popular, post titled The Best Resources Showing Why We Need To Be “Data-Informed” & Not “Data-Driven.”

Here’s my previously posted summary of how our school’s extraordinary principal, Ted Appel, explains the difference:

If schools are data-driven, they might make decisions like keeping students who are “borderline” between algebra and a higher-level of math in algebra so that they do well in the algebra state test. Or, in English, teachers might focus a lot of energy on teaching a “strand” that is heavy on the tests — even though it might not help the student become a life-long reader. In other words, the school can tend to focus on its institutional self-interest instead of what’s best for the students.

In schools that are data-informed, test results are just one more piece of information that can be helpful in determining future directions.

Today, The New York Times has published an extensive article headlined Sure, Big Data Is Great. But So Is Intuition. I’m just going to share a few excerpts here that can easily apply to some of the damage data-driven “school reform” efforts are doing to us:

Claudia Perlich, chief scientist at Media6Degrees, an online ad-targeting start-up in New York, puts the problem this way: “You can fool yourself with data like you can’t with anything else……

…..A major part of managing Big Data projects, he says, is asking the right questions: How do you define the problem? What data do you need? Where does it come from? What are the assumptions behind the model that the data is fed into? How is the model different from reality?

…..It’s encouraging that thoughtful data scientists like Ms. Perlich and Ms. Schutt recognize the limits and shortcomings of the Big Data technology that they are building. Listening to the data is important, they say, but so is experience and intuition. After all, what is intuition at its best but large amounts of data of all kinds filtered through a human brain rather than a math model?

September 30, 2012
by Larry Ferlazzo
3 Comments

How Our Principal Thinks Using Test Scores To Evaluate Teachers Will Hurt Students — What Are Your Thoughts?

Earlier today, I posted about New York Principal Carol Burris’ great article on how using test scores to evaluate teachers hurts students.

I sent it to our school’s principal, Ted Appel (who has shared his thoughts on educational policy in previous posts here) to get his reaction. He liked it a lot, and shared these ideas on other ways their use in teacher evaluation can hurt students:

Using test scores to evaluate teachers can also have a significant impact on the inclusion of Special Education students. Fully including students with learning and other disabilities is not only a civil right, but is also pedagogically supported by research. While many special education students may struggle academically in “regular” education classes and score poorly on standardized tests, there are significant social and emotional benefits to inclusion. Evaluating teachers based on test scores may create resistance to full inclusion for non-pedagogic reasons. And in a strange twist on the argument, some teachers may want special education students in their classes who take the CMA (California Modified Assessment). Students taking these tests may provide a boost to test scores and result in a better evaluation. [These teachers may or may not be the best placement for those students.] Either way, what is clear is that using test data for teacher evaluations distorts decision-making away from the best interest of the student.

Using test data to evaluate teachers has another significant impact on middle and high school course selection for students. Schools already rig the system by placing students in less challenging courses in the belief that they will score better in lower-level courses. This is particularly true in math and science, which is greatly detrimental to student preparation for college admission and success.

Do you have other ideas on how using test scores in teacher evaluations hurts students?

May 19, 2012
by Larry Ferlazzo
0 comments

Washington Post Ranks Our High School Among Top Ten Percent In U.S.

(UPDATE: Our principal, Ted Appel, has a pretty strong reaction to Jay’s list. Here is his response to it:

“These kinds of lists are meaningless, certainly not helpful, and quite possibly destructive to pushing schools towards practices which are focused on helping students get smarter and healthier.  Two years ago we were on a list of schools described as ‘dropout factories.’ And now, two years later, without doing anything substantially different, we are listed among the top nine percent of high schools in the country only because a different metric was used.  This seems to be a blatant example of how these types of quantitative evaluations lack substance.”)

Yes, rankings and schools are not a good mix. They can be easily manipulated in the worst ways and can seduce schools into doing things geared towards getting high rankings that might not be in the best interests of student learning (similar to the dangers of being data-driven).

Given all that, sometimes it can’t hurt to be on one of those lists, especially if you don’t do anything specifically to get on it.

And being named today on The Washington Post’s Jay Mathews list of America’s most challenging high schools is not a bad one for our school, the largest inner-city high school in Sacramento, to be on. Even more so since teaching to any kind of standardized test is not encouraged and, instead, thanks to the leadership of our principal, Ted Appel, we focus on developing life-long learners.

And we certainly didn’t solicit being named — Jay called our principal out of the blue yesterday asking for some more information.

I still don’t quite understand how Jay comes up with his list each year, which highlights ten percent of the high schools in the United States, though he does offer this explanation. Here is his post on this year’s list. He also did a Q & A about it last year.

And here is our school’s page on the list.

Of course, there’s some irony here since, because of our School Board’s refusal to respect a judge’s ruling, we’re in line to lose twenty-one of our teachers who received their final lay-off notice three days ago.

August 24, 2011
by Larry Ferlazzo
0 comments

This Is Why Our School is “Data-Informed” & Not “Data-Driven”

I’ve written often about the importance of being “data-informed” and not “data-driven” (see The Best Resources Showing Why We Need To Be “Data-Informed” & Not “Data-Driven”). I learned about the difference from our school’s exceptional principal, Ted Appel. And I’m still learning…

Today, the California State Department of Education released state-wide passing rates for students taking the California State High School Exit Exam.

Ted had shared his thoughts about this topic after our school received our results, and I wanted to share them with readers:

Does Improved Outcome Data Reflect School Improvement?

Luther Burbank High School received results back from the 10th grade administration of the California High School Exit Exam (CAHSEE). As is our custom, we compared the results to our two previous years’ results. In our analysis, we look at the percentage who passed in English and math and the percentage who met proficiency (a higher standard). We also look at the median scores and quartile scores. Since our school is broken down into seven small learning communities, we look at this same data broken down for each SLC. Our intent is to identify patterns that we can trace back to particular practice.

Our three-year comparison of 2011 scores to 2009 scores reflects a school-wide passing-rate increase from 62% to 72% in English Language Arts, with proficiency going from 27% to 39%. In math, passing went from 76% to 82% and proficiency from 47% to 55%.

Most people observing such scores would conclude that the school had improved. But can such a conclusion be reasonably drawn? Certainly other factors, besides school improvement, can have a significant impact on improved scores, making it impossible to draw a causal relationship between scores and school improvement. The following are just a few of those factors:

• Did the number and level of English learners change substantially?
• What percentage of the students actually attended the school for the full two years prior to the test?
• Did other significant demographic characteristics change?
• Did the school start, increase or decrease any academic-criteria-based enrollment programs?

Factors such as those mentioned above are somewhat difficult to consistently track. I have never seen them in any discussion of schools that have made great jumps in outcome test scores. But to claim school success or failure without an analysis of factors unrelated to program or practice is without merit. It’s like conducting a scientific experiment without isolating a single variable and then arbitrarily drawing a conclusion.

We will continue to look at outcome data and analyze it for trends and ideas about what is working and what is not. But we will not draw quick conclusions about success or failure based on these numbers. They will serve as points of information on the spectrum of data that we review to help us mold and refine our programs, interactions and instruction.

I wonder how many “school reformers” are as reflective and careful with data as Ted is?
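To make the point about shifting student populations concrete, here is a minimal, purely hypothetical sketch (the numbers are invented for illustration and are not our school’s actual data). It shows how a school-wide passing rate can rise between two test administrations even if no subgroup of students performs any better, simply because the mix of test-takers changes:

```python
# Hypothetical illustration: a school-wide passing rate can climb even when
# every subgroup's passing rate stays exactly the same, if the enrollment
# mix shifts between the two years being compared.

def overall_passing_rate(groups):
    """groups: list of (number_of_students, passing_rate) tuples."""
    total_students = sum(n for n, _ in groups)
    total_passed = sum(n * rate for n, rate in groups)
    return total_passed / total_students

# Year 1: 400 long-term students (70% pass) and 600 newer English learners (55% pass)
year_1 = [(400, 0.70), (600, 0.55)]

# Year 2: identical subgroup passing rates, but fewer newcomers enrolled
year_2 = [(550, 0.70), (450, 0.55)]

print(f"Year 1 school-wide passing rate: {overall_passing_rate(year_1):.0%}")  # 61%
print(f"Year 2 school-wide passing rate: {overall_passing_rate(year_2):.0%}")  # 63%
```

Neither group improved, yet the headline number went up — which is why Ted’s questions about changing demographics and enrollment have to be answered before anyone claims the school itself got better (or worse).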

April 1, 2011
by Larry Ferlazzo
0 comments

March’s Best Posts

I regularly highlight my picks for the most useful posts for each month — not including “The Best…” lists. I also use some of them in a more extensive monthly newsletter I send out. You can see back issues of those newsletters here and my previous Best Posts of the Month at Websites Of The Month.

These posts are different from the ones I list under the monthly “Most Popular Blog Posts.” Those are the posts the largest number of readers clicked on to read. I have to admit, though, that I’ve been a bit lax about writing those posts.

Here are some of the posts I personally think are the best, and most helpful, ones I’ve written during this past month (not in any order of preference):

February 5, 2011
by Larry Ferlazzo
1 Comment

“Two Steps Back” Is A This American Life Episode Everybody Should Listen to — Especially Superintendents & Principals

Two Steps Back is a several-year-old episode of “This American Life.” It tells the tale of an innovative Chicago school with a collaborative principal getting the life crushed out of it by the desire for uniformity and control by the central office (it includes an interview with then-Superintendent Arne Duncan) and a compliant new principal. Thanks to our school’s principal, Ted Appel, for letting me know about it.

And, while you’re at it, here’s a good piece about the importance of collaboration by Walt Gardner titled Principals as Saviors.

January 28, 2011
by Larry Ferlazzo
10 Comments

The Best Resources Showing Why We Need To Be “Data-Informed” & Not “Data-Driven”

'Marveling at the Data' photo (c) 2013, DoDEA - license: http://creativecommons.org/licenses/by/2.0/

Last year, two very talented educators — Ted Appel, the extraordinary principal we have at our school, and Kelly Young, creator of much of the engaging curriculum we use at our school through his Pebble Creek Labs — brought up the same point in separate meetings with teachers at my school: the importance of not being “data-driven” and, instead, being “data-informed.”

These conversations took place in the context of discussing the results of state standardized tests. Here’s the point made by Ted:

If schools are data-driven, they might make decisions like keeping students who are “borderline” between algebra and a higher-level of math in algebra so that they do well in the algebra state test. Or, in English, teachers might focus a lot of energy on teaching a “strand” that is heavy on the tests — even though it might not help the student become a life-long reader. In other words, the school can tend to focus on its institutional self-interest instead of what’s best for the students.

In schools that are data-informed, test results are just one more piece of information that can be helpful in determining future directions.

Since that conversation took place, I’ve written several posts about the topic. I thought it might be useful to bring together several related resources.

Here are my choices for The Best Resources Showing Why We Need To Be “Data-Informed” & Not “Data-Driven”:

First, I’m going to list the post I wrote immediately after that conversation – “Data-Driven” Versus “Data-Informed”

Next, a Dilbert cartoon that Alexander Russo shared on his blog:

[Embedded Dilbert cartoon from Dilbert.com]

The cartoon reminded me of what the New York judge said earlier this month when he ruled that the School District can publicly release the names of teachers and their “Teacher Data Reports.” Here is what the judge said (and I kid you not):

“The UFT’s argument that the data reflected in the TDRs should not be released because the TDRs are so flawed and unreliable as to be subjective is without merit,” the judge wrote, citing legal precedent that “there is no requirement that data be reliable for it to be disclosed.”

Data-Driven…Off a Cliff is the title of an excellent post by Robert Pondiscio.

An article in Educational Leadership is a year old, but it’s new to me and certainly worth sharing. It’s called The New Stupid, and has the subtitle “Educators have made great strides in using data. But danger lies ahead for those who misunderstand what data can and can’t do.” It’s written by Frederick M. Hess.

It’s an article worth reading (though I do have concerns about some of its points), and relates to what I’ve written about being “Data-Driven” Versus “Data-Informed.”

Here are a couple of excerpts:

…the key is not to retreat from data but to truly embrace the data by asking hard questions, considering organizational realities, and contemplating unintended consequences. Absent sensible restraint, it is not difficult to envision a raft of poor judgments governing staffing, operations, and instruction—all in the name of “data-driven decision making.”

and…

First, educators should be wary of allowing data or research to substitute for good judgment. When presented with persuasive findings or promising new programs, it is still vital to ask the simple questions: What are the presumed benefits of adopting this program or reform? What are the costs? How confident are we that the promised results are replicable? What contextual factors might complicate projections? Data-driven decision making does not simply require good data; it also requires good decisions.

The Truth Wears Off: Is there something wrong with the scientific method? by Jonah Lehrer is an exceptional article from The New Yorker. David Brooks from The New York Times wrote a nice summary of the article:

He describes a class of antipsychotic drugs, whose effectiveness was demonstrated by several large clinical trials. But in a subsequent batch of studies, the therapeutic power of the drugs appeared to wane precipitously.

This is not an isolated case. “But now all sorts of well-established, multiply confirmed findings have started to look increasingly uncertain,” Lehrer writes. “It’s as if our facts were losing their truth: claims that have been enshrined in textbooks are suddenly unprovable.”

 

 

The world is fluid. Bias and randomness can creep in from all directions. For example, between 1966 and 1995 there were 47 acupuncture studies conducted in Japan, Taiwan and China, and they all found it to be an effective therapy. There were 94 studies in the U.S., Sweden and Britain, and only 56 percent showed benefits. The lesson is not to throw out studies, but to never underestimate the complexity of the world around.

Talking To Students About Their Reading (& Their Data) is a post I’ve written.

“Using data for progress, not punishment”

In a Data-Heavy Society, Being Defined by the Numbers is by Alina Tugend at The New York Times.

Data-Driven Instruction and the Practice of Teaching is by Larry Cuban.

The Obituaries for Data-Driven ‘Reform’ Are Being Written is by John Thompson.

California Governor Puts the Testing Juggernaut On Ice is by Anthony Cody at Education Week.

Making the wrong “Data-Driven Decisions” is by Carl Anderson (thanks to Dean Shareski for the tip).

Data-Driven To Distraction appeared on Larry Cuban’s blog.

Larry Cuban has written another interesting post titled Jazz, Basketball, and Teacher Decision-making. John Thompson relates it to school data at Thompson: Duncan Can Shoot — But Can He Rebound?

“Not everything that matters can be measured”

“You Are Not An Equation” (And Neither Are Your Students)

Policy by Algorithm is a nice post over at Ed Week.

Professional Judgment: Beyond Data Worship is by Justin Baeder at Education Week.

This Is Why Our School is “Data-Informed” & Not “Data-Driven”

Bias toward Numbers in Judging Teaching is by Larry Cuban.

The False Allure Of Statistics is by John Thompson.

‘Moneyball’ and making schools better is by John Thompson.

Here’s Another Reason Why We Need To Be Data-Informed & Not Data-Driven

Data Gone Wild

“Why Do Good Policy Makers Use Bad Indicators?” is by Larry Cuban.

New Hope for the Obama/Gates School of Reform is by John Thompson.

“It’s amazing how much it’s possible to figure out by analyzing the various kinds of data I’ve kept,” Stephen Wolfram says. To which I say, “I’m looking at your data, and you know what’s amazing to me? How much of you is missing.”

This is the last paragraph of Robert Krulwich’s article at NPR, titled Mirror, Mirror On The Wall, Does The Data Tell It All? In it, he compares two authors: Stephen Wolfram, creator of the Wolfram search engine, and Bill Bryson, author of a biographical account of growing up in Iowa. The column, though not specifically about schools, hits a “bulls-eye” on our current data-driven madness.


What Does “Stop & Frisk” Have To Do With What’s Happening With Our Schools?

What Does The NYPD Have In Common With Many Data-Driven Schools?

Tired of the Tyranny of Data is by Dave Orphal.

Big Data Doesn’t Work if You Ignore the Small Things that Matter is from The Harvard Business Review.


Test Scores Often Misused In Policy Decisions is from The Huffington Post.

The Data-Driven Education Movement is from The Shanker Blog.

Data Overload

Invisible Data is from Stories From School.

Don’t Let Data Drive Your Dialogue is from The Canadian Education Association.

“The Goal Is The Goal”

On the Uses and Meaning of Data is by David B. Cohen.

Friday Thoughts on Data, Assessment & Informed Decision Making in Schools is from School Finance 101.

The New York Times Has Discovered The Perils Of Being Data-Driven — I Just Wish Arne Duncan Would, Too

Here’s a Part One and Part Two series of posts on the use of data in education, and they’re both from Larry Cuban’s blog.

Data: No deus ex machina is by Frederick M. Hess & Jal Mehta.

Bill Gates is naive, data is not objective is by Cathy O’Neil and is really good.

Bill Gates and the Cult of Measurement is by Anthony Cody.

Sure, Big Data Is Great. But So Is Intuition. is from The New York Times. Here’s an excerpt:

It’s encouraging that thoughtful data scientists like Ms. Perlich and Ms. Schutt recognize the limits and shortcomings of the Big Data technology that they are building. Listening to the data is important, they say, but so is experience and intuition. After all, what is intuition at its best but large amounts of data of all kinds filtered through a human brain rather than a math model?

At the M.I.T. conference, Ms. Schutt was asked what makes a good data scientist. Obviously, she replied, the requirements include computer science and math skills, but you also want someone who has a deep, wide-ranging curiosity, is innovative and is guided by experience as well as data.

“I don’t worship the machine,” she said.

Beware the Big Errors of ‘Big Data’ is from Wired.

The NYPD Probably Didn’t Stop All That Crime

Data-Informed Versus Data-Driven PLC Teams is from All Things PLC.

David Brooks, who generally loses all coherence when he writes explicitly about education issues, has just written an eloquent case for the importance of being data-informed, and not data-driven. Read his column titled What Data Can’t Do.

The Problem with Our Data Obsession is from MIT.

Data Without Context Tells a Misleading Story is from The New York Times.

Big Data is “not a replacement for the classic ways of understanding the world”

Quote Of The Day: “Data & data sets are not objective”

“Big (Dumb) Data” is by John Thompson.

Data are no good without theory is from The Washington Post.

The Perils of Economic Thinking about Human Behavior is from School Finance 101.

What You’ll Do Next is by David Brooks

At the risk of being accused of taking a “cheap shot,” I just can’t resist embedding two segments from The Colbert Report about the now well-known mistake by the two economists whose work has been cited endlessly to support austerity. And I can’t resist adding it to this list.

Quote Of The Day: “The Dictatorship of Data”

How The NBA Finals Taught A Lesson About Not Being “Data-Driven”

Second Quote Of The Day: The Dangers Of Being “Data-Driven”

The Great Lakes Center has released an excellent report on Data-driven Improvement and Accountability. The Washington Post published an excerpt, Six principles for using data to hold people accountable.

The Tyranny of the Datum is by John Kuhn.

Garbage In, Garbage Out: Or, How to Lie with Bad Data is from Medium.

How ‘data walls’ in classrooms can humiliate young kids is by Valerie Strauss.

Select Your Conclusions, Apply Data is from The Shanker Blog.

How ‘platooning’ and data walls are changing elementary school is from The Washington Post.

Big data: are we making a big mistake? is from The Financial Times.

Ainge: Analytics Sometimes Leads To Shortcuts is from RealGM Basketball.

Misusing Test Data is from Renee Moore’s blog.

Additional suggestions are welcome.

If you found this post useful, you might want to consider subscribing to this blog for free.

You might also want to explore the over 600 other “The Best…” lists I’ve compiled.

December 12, 2010
by Larry Ferlazzo
7 Comments

Why I’m Afraid The Gates Foundation Might Be Minimizing Great Tools For Helping Teachers Improve Their Craft

I support developing more effective ways to evaluate teachers — using multiple measures.

What I don’t support, however, is the present effort by the Gates Foundation that’s spending millions of dollars using student scores on standardized tests as THE MEASURE used to evaluate teachers.

I have no objection to scores from existing standardized tests being a part — a small part — of those multiple measures. If present efforts to create a “new generation” of state assessments actually invite teachers to work with them and develop more accurate performance-based assessments, I would have no objection to their proportional weight being increased — a little.

Accomplished California Teachers (of which I am a member) published a report earlier this year that I think accurately reflects my thinking on teacher evaluation:

To support collaboration and the sharing of expertise, teachers should be evaluated both on their success in their own classroom and their contributions to the success of their peers and the school as a whole. They should be evaluated with tools that assess professional standards of practice in the classroom, augmented with evidence of student outcomes. Beyond standardized test scores, those outcomes should include performance on authentic tasks that demonstrate learning of content; presentation of evidence from formative classroom assessments that show patterns of student improvement; the development of habits that lead to improved academic success (personal responsibility, homework completion, willingness and ability to revise work to meet standards), along with contributing indicators like attendance, enrollment and success in advanced courses, graduation rates, pursuit of higher education, and work place success.

I’ve written at the Washington Post what these ideas look like on the ground at our school (see The best kind of teacher evaluation).

I’m not going to spend a lot of time here reviewing the reams of research showing that evaluating teachers using student test results is unstable and inaccurate. You can find more than enough evidence for that at The Best Resources For Learning About The “Value-Added” Approach Towards Teacher Evaluation.

But right now my big concerns about the Gates Foundation efforts are how I fear they might be minimizing two key tools that can have a huge impact on improving teacher effectiveness — videotape and student surveys.

As I’ve previously written (There Are Some Right Ways & Some Wrong Ways To Videotape Teachers — And This Is A Wrong Way), Gates is funding a massive effort to videotape teacher lessons and then have them evaluated by people who have never visited the school and have no relationship with the teacher, rating them with checklists and correlating the results to value-added scores.

Contrast that way with how videotape is being used to universal acclaim at our school (led by principal Ted Appel) where a talented consultant (Kelly Young at Pebble Creek Labs), who has been working with us for years, meets with us to review an edited version of a taped lesson, with us initially giving our own critique and reflections followed by his comments. This process is entirely outside of the official evaluation process, and is focused on helping teachers improve their craft. It has been one of the most significant professional development experiences I’ve had. At my request, Kelly and I subsequently showed the video and shared our critique with my class, which was a transforming experience for all involved. Teacher Magazine will be publishing my account of that class period in early January.

As part of their massive project, Gates is also having thousands of students complete anonymous surveys evaluating their teachers and, you guessed it, correlating the answers to student test scores.

I’m a huge fan of getting student feedback. In fact, I’ve posted My Best Posts On Students Evaluating Classes (And Teachers). To help students see that I take their responses seriously, I always reprint the results in this blog (you can see them and the questions at that “The Best…” list) and email the results to teachers and administrators at my school.

But I want to know more from students than what Gates is asking. I want to know if they think I’m patient and if they believe I care about their lives outside of school. Yes, I certainly want to know what they think I could do better, and I also want to know what they think they could do better. I want to learn if they think their reading habits have changed and, for example, when I’m teaching a history class, whether they are more interested in learning about history than they were prior to taking the class. I want to find out what they believe are the most important things they learned in the class and, for many, it might be learning life skills like the fact that their brain actually grows when they learn new things, or the fact that they had in them the capacity to complete reading a book or writing an essay for the first time in their lives. And, in the discussion that follows (one thing I learned as an organizer is that a survey’s true use is as a spark for a conversation), we discuss all these things and many more, including the differences between what we like to do best and what we learn the most from.

By trying to connect videotaping teachers to anonymous checklist evaluators and test scores, and doing the same to student surveys, I fear the Gates Foundation may succeed in framing the public conversation about these tools as just a means to one end — better scores on assessments that don’t accurately measure learning.

This minimizes these potentially powerful tools, contributes toward seeing both teachers and students as replaceable widgets, and unfortunately reinforces a school reform debate where many worship at the altar of multiple-choice test results.

Using videotaped teacher lessons and student surveys for the primary purpose of connecting them to teacher evaluation by test scores is like using a Stradivarius and a Grand Piano to play “Mary Had A Little Lamb” to evaluate the musician.  In both instances, the tools have far more value to everyone if  used in more expansive ways.

No, we all deserve better…

(Here’s a link to the article I wrote about my evaluation)

December 4, 2010
by Larry Ferlazzo
11 Comments

There Are Some Right Ways & Some Wrong Ways To Videotape Teachers — And This Is A Wrong Way

Today, The New York Times is running two articles on videotaping teachers for evaluation purposes. They are:

Teacher Ratings Get New Look, Pushed by a Rich Watcher

Video Eye Aimed at Teachers in 7 School Systems

They both talk about a Gates Foundation-funded effort to videotape teacher lessons and then have them evaluated by people who have never visited the school nor have any kind of relationship with the teacher, and rate them using checklists.

Here’s a criticism voiced in the article that I agree with wholeheartedly:

Randi Weingarten, president of the American Federation of Teachers, which has several affiliates participating in the research, also expressed reservations. “Videotaped observations have their role but shouldn’t be used to substitute for in-person observations to evaluate teachers,” Ms. Weingarten said. “It would be hard to justify ratings by outsiders watching videotapes at a remote location who never visited the classroom and couldn’t see for themselves a teacher’s interaction and relationship with students.”

I’d call this a wrong way to use videotape of teachers.

I’ve previously written about what I think  is a right way to use videotaped teachers (Now, This Is What A Useful & Effective Teacher Assessment Might Look Like).

Our school, led by principal Ted Appel, has begun having Kelly Young, an extraordinarily talented consultant on instructional strategies who we have been working with for years, videotape our lessons (I’ve written much about Kelly in this blog). He then meets with us to review an edited version of the tape, with us initially giving our own critique and reflections followed by his comments. This process is entirely outside of the official evaluation process, and is focused on helping teachers improve their craft.

This process has been universally acclaimed by teachers so far, and it has been one of the most significant professional development experiences I’ve had.

As I mentioned in that previous post on my videotaped lesson, I had suggested to Kelly that we show the video and discuss the critique with my class as an experiment.

We did this a few days ago, and it was truly an amazing one hour.

I’ve written an article for Teacher Magazine about what happened, and they’ll be publishing it after the holidays. After reading it, I think you’ll agree that there are far better ways to use videotaped lessons than what the Gates Foundation is planning.

October 22, 2010
by Larry Ferlazzo
2 Comments

ELL Teaching Methods Can Help All Students

Our school’s principal, Ted Appel, and I co-wrote an article a couple of years ago on how English Language Learners were an asset to our entire school in many ways, including helping all our teachers become better instructors to everybody (see The Positive Impact Of English Language Learners At An Urban School).

It sounds like at least one other school feels the same way. Read today’s Washington Post story, At Sugarland Elementary, language lessons are key to all learning, to learn how they are effectively using ELL instructional methods with their entire student body:

“We see kids retaining the information better than they were before,” the school’s principal said. “We see them really connecting lessons to prior learning.”