
14.10.15

CCSSI/PARCC critics unwisely try to shoot messenger


The predictably disappointing results from Louisiana’s first round of Partnership for Assessment of Readiness for College and Careers testing have led to an equally forecastable palaver from critics of education reform in the state, built upon shooting the messenger rather than on a desire to improve children’s educational attainment.



Superintendent John White released results using a scale he hoped the Board of Elementary and Secondary Education would ratify, presumably aligned with what the other ten states and the District of Columbia that comprise PARCC will use, which it did. Louisiana is the first state to embark upon evaluating the results, so it must anticipate here; once all the others have done the same, Louisiana’s students in grades 3-8 who took the tests are expected, in the aggregate, to be among the lowest scorers. The state’s students usually perform near or at the bottom of the states on the National Assessment of Educational Progress, the federally mandated test given to samples of students in all states.



White’s announcement that only 22 to 40 percent of students hit the benchmarks set off a chorus of carping from observers opposed to reform generally and from those specifically critical of PARCC testing because of its connection to the Common Core State Standards Initiative. Perhaps most obsequiously attuned to criticism was state Rep. Brett Geymann, who declared on this basis that the whole of Common Core, introduced in full last year in Louisiana classrooms, had failed.

Note that Geymann employs as his analytical device for this conclusion what the toolkit of research methods calls a “one-shot case study.” That is, you measure something and then impute causation to an unknown treatment without any attempt at an experimental design, which would test two groups, treat one of them, then test both and look for significantly different change between the two, maximally isolating all other possible extraneous causal mechanisms. In other words, the design favored by Geymann tells us nothing about the effectiveness of Common Core.
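
To make the distinction concrete, consider a minimal sketch in Python (all numbers and group labels below are invented for illustration, not drawn from any Louisiana data): a one-shot case study observes only a single post-treatment score, which by itself cannot separate the effect of an intervention from where the group started, whereas a two-group pre/post comparison can.

```python
# Illustrative only: invented scores and hypothetical cohorts.
from statistics import mean

# Hypothetical pre-test and post-test scores.
treated_pre, treated_post = [61, 58, 64, 60], [66, 63, 70, 65]  # received the new curriculum
control_pre, control_post = [62, 59, 63, 61], [63, 60, 64, 61]  # did not

# One-shot case study: a single post-treatment measurement.
print(f"One-shot case study sees only: {mean(treated_post):.1f}")
# 66.0 by itself says nothing: is that improvement, decline, or simply the starting point?

# Two-group pre/post design: compare changes against a control group.
treated_gain = mean(treated_post) - mean(treated_pre)
control_gain = mean(control_post) - mean(control_pre)
print(f"Treated gain:  {treated_gain:+.2f}")
print(f"Control gain:  {control_gain:+.2f}")
print(f"Estimated effect of the intervention: {treated_gain - control_gain:+.2f}")
```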



Rather, what was measured in testing earlier this year is one data point in what needs to be a multi-year interrupted time series design. If Common Core works, over the next few years PARCC testing scores in the aggregate should move in a way that indicates greater and better learning. To make the design quasi-experimental, changes in scores on a test such as the NAEP (in operation since 1969 and deemed a good indicator of learning) from before last year should be compared with changes in future scores. If the change in Louisiana’s NAEP scores on its reading and mathematics sections continues in a positive direction, and better still at an increasing rate, this not only tells us the intervention of Common Core did what it was intended to do, but it also would provide construct validity for the PARCC test – that it measures what it’s supposed to.
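
As a rough illustration of that interrupted time series logic (the biennial scores below are invented placeholders, not actual NAEP or PARCC results), one would compare the trend in scores before the intervention year with the trend after it; a clearly steeper post-intervention slope is the pattern consistent with the reform, rather than the prior trend, doing the work.

```python
# Illustrative only: invented biennial scores standing in for NAEP-style data.
def slope(points):
    """Ordinary least-squares slope for (year, score) pairs."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    return sum((x - mx) * (y - my) for x, y in points) / sum((x - mx) ** 2 for x, _ in points)

INTERVENTION_YEAR = 2014  # full implementation in Louisiana classrooms, per the post
series = [(2007, 253), (2009, 255), (2011, 256), (2013, 257),  # before the intervention
          (2015, 260), (2017, 264)]                            # after the intervention

pre = [p for p in series if p[0] < INTERVENTION_YEAR]
post = [p for p in series if p[0] >= INTERVENTION_YEAR]

print(f"Pre-intervention trend:  {slope(pre):+.2f} points per year")
print(f"Post-intervention trend: {slope(post):+.2f} points per year")
# A post-intervention slope noticeably above the pre-intervention one is the
# pattern that would suggest the intervention, not the prior trend, moved scores.
```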



This might lead to the temptation, to which some PARCC/Common Core critics have surrendered themselves, of alleging that the NAEP should serve as the benchmarking device, but it would provide far inferior measurement, for three reasons. One, NAEP is not designed to test in the same way as PARCC; the former’s questions pretty much ask for an answer to a straightforward question, while the latter’s ask for an answer, and an explanation of it, to questions worded more abstractly. Thus, NAEP questions serve as imperfect proxies for Common Core’s learning goals.



Two, NAEP is given only every other year and at only two grade levels (4 and 8) relevant for use as feedback to educators (it also has a 12th-grade assessment, but those results cannot be used to pinpoint a cohort’s strengths and weaknesses and correct for the latter, since almost none of that cohort will return to secondary education). By contrast, PARCC eventually will test in essentially every grade from 3 to 11. And three, only the mathematics portions of the two are directly comparable.



So it’s entirely premature to declare Common Core a failure, or even to blame PARCC, for what likely will turn out to be scores at or near the rear compared to the other 11 participants, which include traditionally higher-, middle-, and lower-scoring states. More specifically Common Core, and more generally the educational reforms enacted in the last few years, cannot be evaluated validly until at least a couple more years pass; a whole generation would be preferable. The plug can be pulled earlier, but the risk is that Louisiana may have been on to something good without ever knowing it.



Of course, there are those who will have input into policy and/or implementation on this issue, such as Geymann, several candidates for and appointees to BESE, and still others running for the Legislature, who see Common Core as one or all of the following: a federal government intrusion, a front for reputedly greedy corporate interests, an attack on the power and privilege enjoyed by special interests, a disruptive element demanding more ability and commitment from educators than some are willing to give, or confirmation of a message that, for psychological or ideological reasons, some don’t want to hear – that Louisiana’s system of education doesn’t work well, a realization which not only buttresses the need for recent reforms but also invites even more of the same in ways they find counter to their own interests. They will try to delegitimize or to explain away the low performance from this initial round of measurement, relative to the state’s ambitious goals, by trying to fault PARCC and/or Common Core.


But with the existing data there’s no way to infer at this stage that Common Core and/or PARCC have out-and-out failed. Only by staying the course defined by demanding standards can this be determined. Any assertions to the contrary are uninformed nonsense.
