
2.12.13

Move beyond SPS data to understand vouchers' impact


As Louisiana moves through the second year of its statewide Student Scholarships for Education Excellence program, the data produced so far still cannot reveal whether the program is significantly improving the lot of children or what effect, if any, it may have, even as it succeeds on a cost basis.



Some data came out this week from the Department of Education, in the form of School Performance Scores for students who accepted vouchers through the program. Children who once attended subpar schools are eligible to receive state money to attend a nonpublic or higher-ranked public school, if space is available. For each school where adequate data could be collected, DOE computed a score for that school's voucher cohort, treating it as if it were its own school, and released those results.



The scores showed a wide range of results for the cohorts treated as schools, but overall most were not terribly different from the underperforming environments the students had left. Almost half scored at the D or F level, equivalent by definition to the scores of their previous schools (students at C-ranked public schools also are eligible for the program, but only if space is available after the pool of students from the lower-ranked schools in an area is exhausted).

Unfortunately, this tells little about the program's impact, for a few reasons. First, cohorts at some schools were too small for reliable scoring (qualifying schools must have at least 40 students in testing grades or at least 10 students per grade), so these had to be excluded. Second, testing, which is a large component of SPS calculations below the high school level, begins only in the third grade, and many voucher students are below that. For these reasons, not much more than half of the students in the program can provide data for these calculations.



Third, while for elementary schools the SPS basically reflects achievement on tests, another small factor enters into scoring in middle schools and makes up a minority of the score for high schools. This introduces into the measure components not directly tied to, or not readily quantifiable as, academics, which detracts from telling whether the program is doing what it is supposed to do: improve students' academic achievement relative to that of the public schools they had attended. Finally, given that the students on whom data could be derived had spent at least several years in public schools, one year probably is not enough time to make a definitive judgment on the program's effects (more time enriching the data also will take care of its limited amount, as program officials estimate that within a few years about 90 percent of enrolled students will have data available on their performance conceptualized as a "school" within a school).



However, the largest impediment to using these data to evaluate that objective is that there is nothing to compare them against. Alone, these data tell us nothing, because of the numerous confounding factors that only comparison can parse out.



For example, a voucher cohort could have come into a nonpublic school as the worst performers from their previous school, meaning they started from the lowest baseline with very low absolute scores that nonetheless improved. Or students from schools with very poor SPS ratings could end up in a cohort at a school that performs merely poorly, where any improvement produced by the program is masked because individual students are not being tracked. Nor can we be entirely sure that extraneous factors do not influence the results, such as more motivated parents disproportionately making up the families using the vouchers, in which case motivation, not the program, might explain any boost in achievement.



Therefore, the only way for policy-makers to know whether the program provides a better education than students would receive without it is to create three groups of students and compare not their absolute achievement, but their relative growth year over year. One group would be children of families that did not try to use vouchers and were at the school (or a feeder school) the previous year. Another would be those who did use vouchers to go on to another school. The last would be children of families who applied for vouchers but either chose not to use them or tried and failed to get their children into another school because of space limitations.



For all three groups, test scores from previous years would be compared to those from the current year. The comparison between those who never pursued a voucher and those who used one successfully captures the program's impact combined with familial motivation, which is why the third group's scores also would be compared: that allows the motivation factor to be netted out, leaving a purer measure of the change brought about by the program itself.
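To make the logic of that three-group comparison concrete, here is a minimal sketch in Python. The student records, column names, group labels, and scores are all illustrative assumptions, not anything DOE actually publishes; the point is only to show how growth, rather than absolute achievement, gets compared and how the motivation component is subtracted out.

```python
import pandas as pd

# Hypothetical student-level records: one row per student, with prior-year
# and current-year test scores and which of the three groups the student
# falls into. All values here are made up for illustration.
df = pd.DataFrame({
    "student_id":  [1, 2, 3, 4, 5, 6],
    "group":       ["no_application", "voucher_used", "applied_not_used",
                    "no_application", "voucher_used", "applied_not_used"],
    "score_prior": [61.0, 58.0, 60.0, 55.0, 52.0, 57.0],
    "score_now":   [63.0, 64.0, 63.0, 56.0, 59.0, 61.0],
})

# Compare growth, not absolute achievement.
df["growth"] = df["score_now"] - df["score_prior"]
mean_growth = df.groupby("group")["growth"].mean()

# Voucher users vs. families that never applied: program effect plus
# whatever extra motivation applicant families bring.
effect_with_motivation = (mean_growth["voucher_used"]
                          - mean_growth["no_application"])

# Applicants who did not end up using a voucher vs. never-applicants:
# roughly the motivation component alone.
motivation_only = (mean_growth["applied_not_used"]
                   - mean_growth["no_application"])

# Netting the two out leaves a purer estimate of the program itself.
program_effect = effect_with_motivation - motivation_only

print(mean_growth)
print(f"Estimated program effect on growth: {program_effect:.2f} points")
```

This is the same idea behind studies that compare voucher lottery winners to unsuccessful applicants: the third group exists precisely to strip out the self-selection that comes with applying in the first place.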



Even this unadulterated measure of change needs to be taken in context. For example, suppose that after analysis of these individual (as opposed to the SPS aggregate) data there was no statistically significant difference between the performance of children who stayed in public schools and that of those using vouchers. Given that the typical voucher recipient costs the state on average over three thousand dollars less than a public school child for an education of roughly the same quality, the program still would be worthwhile, because it delivers results at least as good for less money, and thus more efficiently.
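If outcomes really were a wash, the efficiency argument reduces to simple arithmetic. A back-of-the-envelope version follows; the per-pupil and enrollment figures are placeholder assumptions, since the only number the argument relies on is the roughly three-thousand-dollar gap per student.

```python
# Illustrative efficiency check under the "equal outcomes" assumption.
# These figures are placeholders, not official DOE numbers.
public_cost_per_pupil = 8500.0    # assumed state cost for a public school student
voucher_cost_per_pupil = 5300.0   # assumed voucher payment per student
enrolled_students = 5000          # assumed program enrollment

savings_per_student = public_cost_per_pupil - voucher_cost_per_pupil
total_savings = savings_per_student * enrolled_students

print(f"Savings per student: ${savings_per_student:,.0f}")
print(f"Total savings at equal outcomes: ${total_savings:,.0f}")
```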


Yet until these kinds of data are made available and analyzed, no definitive determination can be made about whether the program has a positive academic impact (it apparently has other upsides: in a separate poll, almost all voucher-using families reported satisfaction with their schools). DOE should tend to this task as quickly as possible.
