Publishing Teacher Rankings
Why?

A few weeks ago something happened in education that may have a profound effect on the future of teaching.  The New York State Court of Appeals ruled that public school teachers’ individual performance assessments could be made public; subsequently, they were published in many papers.  I must agree with what software mogul and philanthropist Bill Gates said in a New York Times op-ed on February 22, 2012: “…it is a big mistake”.

According to the New York Times in their February 24, 2012 article, “City Teacher Data Reports Are Released”, Douglas N. Harris, one of the economists at the University of Wisconsin who designed the city’s ranking system, said that releasing the data right now “…strikes me as at best unwise, at worst absurd”.

I can agree that assessments for students are necessary, and I can agree with evaluations for teachers and administrators. I also agree that ineffective employees, whether they are educators, administrators, or other staff, should be terminated, but I also believe in due process in those terminations.

What greatly concerns me about the new teacher evaluations is something called value-added assessment (VA). A value-added assessment is a method of teacher evaluation that theoretically measures the teacher’s contribution in a given school year by comparing the current school year test scores of their students to the scores of those same students for the previous school year. Many states, like Florida, are incorporating value-added assessment estimates into teacher evaluations or are requiring it by law.  The Albert Shanker Institute (ASI) recently looked at the State of New York’s teacher rankings and has shown that most of the imprecision of value-added rankings stems from random error. No state is factoring these random errors into its calculations or algorithms. Many believe there is a great deal of inaccuracy in value-added ratings.

ASI explained that, like a political poll, the error margin tells us the range within which a teacher’s real effect falls, which I believe we cannot know, since many obstacles teachers deal with are not factored into the VA. Unlike political polls, which rely on random samples to get accurate estimates, VA error margins tend to be huge. In one example from New York City, the average margin of error was plus or minus 30 percentile points, meaning that a teacher rated at the 60th percentile may actually fall anywhere between the 30th and 90th percentiles. Using this VA student testing data, we cannot even know whether a teacher is above or below average.
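The New York City numbers above can be worked through directly. The sketch below is purely illustrative (the `rating_range` function and its numbers are hypothetical, not part of any state’s actual VA model); it simply applies the reported plus-or-minus-30-point margin to a teacher rated at the 60th percentile:

```python
# Illustrative only: hypothetical arithmetic showing why a wide margin of
# error makes a value-added (VA) percentile ranking uninformative.

def rating_range(percentile, margin):
    """Return the plausible range around a reported VA percentile,
    clamped to the 0-100 percentile scale."""
    low = max(0, percentile - margin)
    high = min(100, percentile + margin)
    return low, high

# The New York City example: a teacher reported at the 60th percentile,
# with the city's average margin of error of +/- 30 percentile points.
low, high = rating_range(60, 30)
print(low, high)          # 30 90

# The range straddles the 50th percentile, so this rating cannot tell
# us whether the teacher is above or below average.
print(low < 50 < high)    # True
```

Any reported rank whose range straddles the 50th percentile is, by this arithmetic, consistent with both an above-average and a below-average teacher.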

Diane Ravitch, author of the bestselling “The Death and Life of the Great American School System,” George Bush’s former educational policy analyst, and now research professor at New York University, shared in her Education Week blog post “The Problems With Value-Added Assessment” on October 5, 2010: “…value-added assessment should not be used at all. Never!  It has a wide margin of error. It is unstable. A teacher who is highly effective one year may get a different rating the next year depending on which students are assigned to his or her class. Ratings may differ if the tests differ. To the extent it is used, it will narrow the curriculum and promote teaching to tests. Teachers will be mislabeled and stigmatized. Many factors that influence student scores will not be counted at all”.

Dr. Ravitch explains that merit pay programs have been used and abandoned since the 1920s and are now back in vogue. A recent three-year trial by Vanderbilt University’s National Center for Performance Incentives found that teachers offered performance pay did not get better student results than those who were not in line for a bonus. Dr. Ravitch states in her Education Week blog post “Merit Pay Fails Another Test” on September 28, 2010, “Merit pay made no difference. Teachers were working as hard as they knew how, whether for a bonus or not”.

As a result of Florida being awarded the Race to the Top grant, Citrus County, along with the state of Florida, had been moving towards significant changes in the teacher evaluation process by incorporating value-added models. Then, in the 2011 Florida Legislative Session, SB 736 (now known as the “Student Success Act”) dramatically moved up the implementation and phase-in time from four years to less than four months.  Districts scrambled to develop, or finish developing, an entirely new teacher evaluation system required by SB 736. The Florida system now requires 50% of a teacher’s evaluation to be based on students’ performance/assessment tests, meaning the FCAT or other measurements approved by district teams. Sometimes these assessments linked to the teacher evaluation are not even in the content area that is part of the teacher’s area of certification, or related to courses taught by that teacher.  In addition, within four years every subject and course taught at every grade level, including pre-kindergarten, will have a specifically designed assessment that must include a value-added model or growth model component for measuring the teacher’s influence on learning.

All new teachers must receive an evaluation of “Effective” or “Highly Effective” each year, based on students’ test scores, in order to receive any pay increase.  That will become effective in 2014-2015.  On the surface this may seem appropriate to those outside of the education system, because performance-based pay is the way of the “real world,” but what was not understood by many, including some legislators, are the many factors that teachers and administrators have to deal with that they have NO control over.  For example, many teachers have less than 50 minutes a day with a student in a class setting.  Obviously, teachers cannot control the home environment that the student lives in or the support system they have at home.  Many students have learning challenges which have not been previously identified, and the state has made the identification of those students more difficult, so that testing modifications cannot be made to assist those students in being more successful on their tests.

In addition, many teachers’ specialty areas are not even tested. For example, if you are a Geometry teacher in high school this year, your students’ performance will be graded on those students’ reading FCAT scores, which in the end affects the Geometry teacher’s evaluation. Why? Because there are no specific Geometry EOC proficiency scales for this year. The current guidelines say you must use student performance data, and if that is not available, then you use the next best data available when evaluating the teacher.  Physical Education, Art, and Music teachers will have their performance based on the whole school’s value-added measurement of all FCAT and/or End-of-Course assessments. So for this year, even a top-performing teacher can never get a performance score that reflects their individual, direct impact, because they cannot get a score greater than the whole school’s grade.

As a result of all of this, teachers’ rankings are not based on clear data.  Excellent teachers and administrators could be labeled as ineffective, when in reality the data does not tell the whole story, and maybe not even the correct story.

It is my hope that Florida does not go down this same road as the state of New York.
