I didn’t realize how strongly news of Michelle Rhee’s firings resonated until several people who don’t even live around here asked this weekend what I thought of them. “Is this a big deal, or not?” they asked. I explained that in theory getting fired for performance reasons isn’t shocking, but in teaching it is. (Less so, though, than we make it out to be. While outright firing is rare, I know a lot of principals who are successful at “encouraging people to leave,” or whatever they call it.) Given that the D.C. goings-on are getting national attention, I figured I would offer up a few things worth considering:
1. The number of teachers didn’t faze me. But I would like to see more reported on the actual implementation of IMPACT and efforts like it. When you hear officials talk about them, they may make sense. But teachers can give us a clue as to whether the observations, evaluations and feedback are taking place as advertised. That’s important to know. I have heard D.C. officials concede bumps in implementation; knowing just how bumpy they were would help people judge whether the firings were hasty or not.
2. Do the IMPACT scores (most of which, for practical reasons, are not yet based on test scores) correlate with student outcomes? I believe the district is working on figuring this out in the cases where it can; officials have not yet figured out how to measure the performance of teachers and students in certain situations, classes and grades. When they answer this question, I hope they release the data publicly.
3. This is a hard nut to crack, harder than interviewing teachers to find out if their five observations and half-hour consultations with master educators happened, and harder even than correlating past outcomes with evaluations. Perhaps only the geeks among you care to continue with me here: It is important to ask whether IMPACT scores are predictive, that is, whether this is truly a valid and reliable measure. If IMPACT works as designed, it tells us how well a teacher did. But does it tell us how well he or she will do? This is worth asking not just because teaching quality improves with experience in the early years, but because I am not sure the value-added research tells us whether teachers can repeat their successes. Will the evaluation of IMPACT consider that, say, by randomly assigning teachers so that we can compare the performance of students taught by high-scoring teachers with the performance of those taught by low-scoring ones?
If so many eyes are on the program, they’ll need something solid to look at.