Tuesday, March 29, 2011

The Milwaukee Parental Choice Program: The importance of comparing apples to apples


As a long-time supporter of choice in education, whether it be in the form of public charter schools, vouchers, or tuition tax credits, I feel the need to respond to the recently released test results showing that Milwaukee's voucher students are performing worse than other Milwaukee public school students. Certainly such results are not good news for those on the side of choice. We of course would prefer to see those results reversed. But as John Witte pointed out in the article, one year of state test results "isn't going to be the death knell of vouchers." A closer look at the data, and the conclusions that can and can't be drawn from it, is warranted.

First, an admission. It is my understanding that there are some terrible choice schools in the MPCP. This is unfortunate, and it is my hope that parents who chose those schools realize that their chosen school is not cutting it and pull their children out. This is, after all, the way choice is supposed to work, penalizing those schools that aren't performing, while rewarding those that are. Time will tell on this, but it bears watching.

Second, it's important to remember the nature of statistics. As one of my favorite books shows, you can often use statistical data to support whichever side you want on almost any issue. The article reports that choice students are performing worse on state standardized tests than other MPS students. There is no disputing that data. However, one may dispute the conclusions drawn from it. And the most obvious conclusion one might draw is that choice doesn't work. So allow me to challenge that.

As Witte explained in the article, "In order to study achievement growth and gain, you have to study individual students over time." A single year of test scores does not do that. What we really should be measuring is how much a choice student improved over time in his or her chosen school versus how much similar MPS students who are not part of the choice program improved. The current data doesn't measure achievement growth, and it compares voucher students to all other MPS students – or, more appropriately, all other low-income MPS students. Yet the voucher students still perform worse, even under this more appropriate comparison. This is not what choice proponents would expect. So what is going on here?

It is vitally important to note that we are looking at global data here. Rather than comparing a voucher student's performance to all other (or all other low-income) MPS students, it would seem more appropriate to compare a voucher student's achievement gains to the achievement gains of all other similar students in the school that the voucher student originally transferred from. This would certainly tell us whether or not choice works.

Why is this a critical point? It is critical because not all MPS schools are failing. Those that are not failing raise the average test scores. But where do you think most voucher students come from: The successful schools, or the failing schools? Exactly. Yet the data does not permit us to make this comparison, because it is too broad, and can't be fine-tuned enough to do this.

Consider this hypothetical example: Sally is in an underperforming MPS school, and Sally's reading scores on a standardized test are 45, while her school's average is 47. Sally enters the voucher program and moves to a private school. After a few years her score improves to 53 while the average from her old school improves to 49. Sally increased her score by 8 points in her new school, while those in her old school only improved by 2 points over the same time period. One could make the reasonable conclusion that the choice program benefitted Sally.

But now consider if the overall average test scores for all MPS students increased from 55 to 56 over that same time period. If we compare Sally's recent final test score of 53 to the new MPS average of 56, we would conclude that choice did not benefit Sally, because her score in her choice school is lower than the MPS average.
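Sally's story can be sketched in a few lines of Python to make the contrast concrete. The numbers are the hypothetical ones from the example above, not real data, and the variable names are my own:

```python
# Hypothetical test scores from the Sally example (not real data)
sally_before, sally_after = 45, 53              # Sally's scores before/after switching
old_school_before, old_school_after = 47, 49    # her old school's averages over the same period
mps_avg = 56                                    # district-wide MPS average at the end

# Apples to apples: compare achievement gains against the school she left
sally_gain = sally_after - sally_before                   # 8 points
old_school_gain = old_school_after - old_school_before    # 2 points
print(sally_gain > old_school_gain)  # True: choice benefited Sally

# Apples to oranges: compare her final level score to the district-wide average
print(sally_after >= mps_avg)        # False: "choice failed Sally"
```

Both comparisons use the exact same underlying numbers; only the baseline changes, and the two conclusions are opposite.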

From this you can clearly see the importance of making sure we are comparing apples to apples. Comparing students who are likely to come from schools with particular characteristics (i.e., they are failing) to students from the entire system, which includes both failing and non-failing schools, can lead to erroneous conclusions. Studying individual student gains over time, rather than at a single point in time, is also critical, as is looking at other factors that do not show up on standardized tests, such as graduation rates and safety.

Now, I must admit, none of what I have laid out here proves that the Milwaukee choice program is successful. What I am pointing out is that the data presented in the article is not sufficient for drawing a firm conclusion regarding the success or failure of the program either way, and it's important we do not draw conclusions or base policy on data that doesn't compare apples to apples.

In addition, when evaluating the results of choice it is important to look at the preponderance of the available evidence, rather than just at one particular study or program. For further research on that I would recommend this book, which covers numerous rigorous studies on choice and charter schools over the past decade. The overwhelming majority of these studies do indeed support the notion that choice works.

As for the Milwaukee voucher program, I believe the jury is still out, with mixed results from the various studies that have been done to date. There have been some positive results reported in terms of graduation rates and competition-induced improvement in existing public schools, as well as negative results such as what we see in the test score data today. It all bears watching as more data comes in.