Is value added fair? Part One: What is VAM?

The Houston Federation of Teachers (HFT) has filed suit against the Houston Independent School District claiming that "good teachers are receiving poor evaluations because of a grossly flawed value-added methodology [VAM]."  At the heart of their complaint are several concerns about VAM methodology that pop up seemingly wherever VAM is employed and, if the courts find for HFT, threaten to undermine VAM's credibility nationwide.  The states and school districts using VAM deserve to know how fair the charges HFT levels against VAM are.  Over the next few posts, AEM will try to translate some of the more technical aspects of the charges against VAM into plain English.  Before we can do that, we should understand some basics about VAM.

The Education Value-Added Assessment System (EVAAS) used in Houston and other VAM techniques measure teacher performance as a function of student improvement on standardized tests, relative to the level of improvement of similar students.  For example, let's say Jimmy is a white, male fourth grader on a free- or reduced-price lunch who scores a 15 on his state's end-of-year (EOY) third-grade test.  Any VAM worth its salt will judge the performance of Jimmy's fourth-grade teacher by comparing Jimmy's score on his fourth-grade EOY test with the scores of every other white, male fourth grader on a free- or reduced-price lunch who scored a 15 on his third-grade test.  For simplicity, let's call this group Jimmy's peer group.

Let's say Jimmy's peer group, to whom Jimmy is identical in numerous demographic and previous achievement categories, gets an average score of 18 on their fourth grade tests.  EVAAS will judge Jimmy's teacher based on Jimmy's score relative to that average.   If Jimmy gets a 22 on his fourth grade test, EVAAS finds that Jimmy grew more than his peers.  If he gets a 14, EVAAS says he grew less.  EVAAS generates a score for each teacher based on the performance of all her students.  In most cases, teachers who can ensure students keep pace with their peer group will receive ratings of average. So if a teacher has 17 students who grew more than their peers, 8 who grew at the same rate, and 3 who grew less, EVAAS will likely rate that teacher as having done a good job.
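The peer-group comparison above can be sketched in a few lines of Python.  To be clear, this is a toy illustration only: the actual EVAAS model is a far more sophisticated statistical procedure, and the peer scores and the simple rating rule below are hypothetical assumptions invented for this example, not part of the real methodology.

```python
# Toy sketch of the peer-group comparison described above.
# The real EVAAS calculation is much more complex; all numbers
# and the rating rule here are illustrative assumptions.

def growth_vs_peers(student_score, peer_scores):
    """How far a student's new score sits above or below the
    average score of demographically similar peers."""
    peer_avg = sum(peer_scores) / len(peer_scores)
    return student_score - peer_avg

def rate_teacher(diffs):
    """Crude teacher-level summary: compare how many students
    outgrew their peers versus how many fell behind."""
    above = sum(d > 0 for d in diffs)
    below = sum(d < 0 for d in diffs)
    if above > below:
        return "above average"
    if below > above:
        return "below average"
    return "average"

# Hypothetical peer group averaging 18 on the fourth-grade test.
peers = [17, 18, 19, 18]
print(growth_vs_peers(22, peers))  # 4.0  -> Jimmy grew more than his peers
print(growth_vs_peers(14, peers))  # -4.0 -> Jimmy grew less

# The teacher from the example: 17 students above, 8 even, 3 below.
diffs = [1] * 17 + [0] * 8 + [-1] * 3
print(rate_teacher(diffs))  # above average
```

The point of the sketch is only that each student's result is scored against a peer average, and the teacher's rating aggregates those student-level differences.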

From even this simplified overview, we can see a couple of points that are critical to understanding VAM and its critics.  First, neither EVAAS nor any decent VAM punishes teachers who teach low-performing students.  Calculations of VAM necessarily include a measure of prior student achievement to establish an expectation of how much that student should grow going forward.  VAM will evaluate a teacher of struggling students based on their progress above their respective starting points.

Second, VAM is inherently competitive.  Remember, VAM compares the growth Jimmy makes to the growth of his peer group in other classes.  In effect, teachers are competing with each other to see who can wring the most growth out of similar students.  Fortunately, this competition should not affect building-level cooperation.  EVAAS and other mainstream VAMs compare student results using an advanced and complicated set of statistical controls that allow them to make comparisons across all teachers whose schools use the EVAAS system.  The success or failure of the teacher across the hall has an infinitesimally small impact on any individual teacher's success or failure.  Nonetheless, everyone should be clear that some teachers win in VAM and some lose.

If you are confused, you are not alone.  One of the charges most frequently leveled against VAM is what HFT calls the "incomprehensible and secret formula" by which it is calculated.  In our next post, we will examine how incomprehensible and how secret EVAAS and other VAM methods really are.

Elizabeth Sobka