There’s already plenty of chatter about Sunday’s LA Times article on the value-added scores of LAUSD teachers, and certainly more to come (comments blowing up here). With access to seven years of math and English scores for hundreds of thousands of 3rd through 5th grade students (obtained under the California Public Records Act), the Times hired RAND researcher Richard Buddin to conduct a value-added analysis of LAUSD teachers. Over the next few weeks, and likely beyond, the Times promises to publish the findings of this analysis in articles and via a full database. For thousands of LAUSD teachers, this means they should expect to see their names and scores in their morning paper. For parents and the rest of the public, it means more information about public school teachers’ performance than ever before.
The local teachers union is, not surprisingly, the first to say that this isn’t fair or right—what employee, public or private, would agree to having their performance data published in the local, or any, newspaper? Why should teachers? Others are now weighing in with a similar message, some making the reasonable points that value-added can only measure the performance of certain teachers (those whose students are tested, or about one-third) and that value-added isn’t (yet?) a reliable system for evaluating performance, given the tests it is based on. Others are saying the Times’ analysis and release of this data is one of the best things to happen to education in a long time—making data more transparent and useful for the public.
But hearing that teacher performance data will become public is different from actually seeing it on the front page of a newspaper. Having spent time studying Tennessee’s value-added system, and all of the privacy protections that surround it, I was pretty shocked to see that John Smith, one of the teachers profiled in the Times’ article, isn’t a pseudonym but a real teacher—and one of the “bad” ones. For all of the debate and antagonism that is sure to arise in LA (and nationally) over the release of this data, John Smith’s reaction to his poor scores is among the most important moments in this article: “Obviously what I need to do is to look at what I’m doing and take some steps to make sure something changes.” Another teacher with poor scores responded similarly, according to the Times article, saying that “for better or worse, testing and teacher effectiveness are going to be linked.… If my student test scores show I’m an ineffective teacher, I’d like to know what contributes to it. What do I need to do to bring my average up?”
Maybe John Smith is raging mad about this article. But his reported comments, and those of the other “bad apple,” show none of the resistance and fear of performance data that teachers are often said to have. There is, in fact, no debate that we should, but don’t, have a good way to evaluate the performance of the more than 3 million teachers who matter so much for student success. A big breakthrough story, and a database founded on millions of test scores, is sure to remind us, or force us to see, that not all teachers are alike: some are incredibly effective and others are not. To go public with this information is indeed unprecedented. A friend who asked, tongue in cheek, not to be named, since she teaches 4th grade in LAUSD and expects to see herself named soon enough, said she thought the published results would fuel a sudden surge of parents lobbying for their children to be placed with certain teachers and not with others. It’ll make waves for sure, she said, adding that she worries her principal won’t be able to handle all the added “drama” this will bring, particularly in the first weeks of school, when there’s supposed to be a fresh start and there is already so much to do.
Will this publicity help us get to better evaluation systems, ones that will last, that will not only help us understand why some teachers do better than others but also enable teachers to share and use performance data to improve? Maybe it’s a push, or a shove, in that direction. Or maybe it’s a distraction from the more difficult and tedious work already underway in some places: creating performance standards and rubrics, designing processes for regularly sharing performance data, and developing systems of management and professional development that align with this data.