I confess: I am the pointy-headed academic Tom Dunn dings (June 4, 2018) for my presentation to the Ohio State Board of Education on computer scoring of student essays. I described how automated scoring is routinely used in everyday contexts outside of education and why, for Ohio student assessments, automated scoring is highly accurate, provides schools with actionable information about student achievement quickly, and is a huge cost saver for Ohio taxpayers.
Although I tried to make a technical topic as accessible as possible, I accept Superintendent Dunn's critique that I did not target it at a sixth-grade level. I don't think that was his primary criticism, however, and I wish he had read more than the first two sentences before suggesting an educational policy solution.
I think the superintendent's main issue is this: He says that "the state continues to misuse test data to draw incorrect conclusions and create false solutions to the problems children who are not successful face." Although he didn't elaborate, I wondered: What are the incorrect conclusions being drawn? I don't think he's saying that, when a student's essay is scored as "weak," say, the student has in fact mastered the Ohio writing standards. He also doesn't seem to be saying that extra writing instruction for students identified as weak writers is a "false solution."
What he does appear to be saying is that students who are not successful face many challenges that lead to that lack of success: problems like poverty, perhaps; or teachers who are predominantly newer, less experienced, or less effective; or home environments that are not as advantaged, literacy-rich, or supportive as those of students who are more successful.
I don't know; he didn't develop this critical point. But if that's the case, then I think the false solution is actually his: no kind of scoring, no different set of writing standards, no change in testing mode (paper and pencil vs. computer, for example), nor any other test-related factor is going to address the kinds of problems he, and all of us, are concerned about. In fact, we could reduce testing in schools and there would still be stronger and weaker writers. Even ending all testing won't magically make kids better writers. There would still be differential home and school experiences contributing to differences that the tests don't cause, but merely reveal.
In the end, I hope that the superintendent and I, or any reasonable and concerned citizen in Ohio, don't see the issues very differently. Children who are not successful often face a lot of challenges that many people are working hard to address; we can clearly do more. Where we might disagree is whether blaming test-scoring procedures, or testing itself, amounts to a serious reform idea. If our common goal is to produce kids who are more capable of, for example, the kind of critical analysis, organization, and expression required of a college- and career-ready student, then simply asserting that burning the yardstick is a sound reform solution is the real rainbow, unicorn, fairy tale.
Gregory Cizek is a professor of educational measurement and evaluation at the University of North Carolina.