Here are a couple more pieces worth reading:
Great statistical analysis from Gary Rubinstein:
Then I ran the data again. This time, though, I used only the 707 teachers who were first-year teachers in 2008-2009 and who stayed for a second year in 2009-2010. Just looking at the numbers, I saw that they were similar to the numbers for the whole group. The median amount of change (one way or the other) was still 21 points. The average change was still 25 points. But here is the amazing thing, which definitely demonstrates how inaccurate these measures are: the percent of first-year teachers who ‘improved’ on this metric in their second year was just 52%, contrary to what every teacher in the world knows, that nearly every second-year teacher is better than she was in her first year. The scatter plot for teachers who were new in 2008-2009 has the same characteristics as the scatter plot for all 13,000 teachers. Just like the graph above, the x-axis is the value-added score for the first-year teacher in 2008-2009, while the y-axis is the value-added score for the same teacher in her second year, 2009-2010.
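To see why a 52% "improvement" rate suggests noise rather than measurement, consider a minimal simulation: if a teacher's year-to-year value-added scores were pure random draws with no real signal, roughly half of teachers would still appear to "improve" by chance alone. This sketch uses hypothetical uniform percentile-like scores, not the actual TDR data, and the cohort size of 707 from the post is the only number borrowed from it.

```python
import random

# Hypothetical sketch, NOT the actual TDR data: draw two independent
# percentile-like scores per teacher and see what fraction "improves".
random.seed(0)
n = 707  # cohort size from the post
year1 = [random.uniform(0, 100) for _ in range(n)]
year2 = [random.uniform(0, 100) for _ in range(n)]

# Fraction of teachers whose second-year score exceeds their first-year score.
improved = sum(1 for a, b in zip(year1, year2) if b > a) / n

# Median absolute year-to-year change, in points.
changes = sorted(abs(b - a) for a, b in zip(year1, year2))
median_change = changes[n // 2]

print(f"fraction 'improved': {improved:.2f}")   # close to 0.50
print(f"median |change|: {median_change:.1f} points")
```

Under pure noise the improvement rate hovers around 50%, which is essentially what the real 52% figure looks like: the metric cannot distinguish genuine second-year growth from random fluctuation.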
There are many other very serious problems with the TDRs. There is no question that as testing becomes more and more high-stakes, with teachers’ jobs dependent on student test scores, the curriculum will be narrowed in many, many schools. I believe this will widen the “achievement gap,” since it will be much easier for high-performing middle-class or upper-middle-class schools with very involved parents to resist the impulse to narrow the curriculum. In low-performing schools the temptation will be greater, as they face state and city sanctions that can result in school closure. In all elementary schools, it will be harder to get senior teachers to teach grades 4-5…until, of course, everyone is tested in every grade, which will just make it hard to get knowledgeable people to teach in public schools at all!