Chelsea Schneider and Tony Cook, writing for the Indy Star, report that the latest round of ISTEP scoring is unreliable. In particular, it appears that there were flaws in the computer program used by CTB McGraw-Hill to score our kids’ tests. The company knew about the problem but proceeded anyway. To make matters worse, company officials and test scoring supervisors don’t agree on what happened.
Company executives would not speak with The Indianapolis Star, but in a letter Tuesday to the Indiana Department of Education, Executive Vice President Ellen Haley downplayed the problem. She said the issue “was very rare” and “did not affect student scores.”
Seven supervisors who spoke with The Star disagreed. All said they believed the problem was more widespread. Two estimated that tens of thousands of test questions were likely given incorrect scores. Others said it is difficult to put a number on the problem, but it was pervasive enough to merit rescoring the potentially impacted tests.
The company wouldn’t speak with the newspaper and has an obvious interest in minimizing its failures. I don’t know the supervisors or what their biases may be. So, at the moment, the company appears to be the less credible source.
The State’s education policymakers have put a lot of their eggs in this testing basket. Schools, teachers, and students are evaluated based on this testing metric. It was already a problem that we didn’t have a lot of confidence that we were measuring the right things. Now we don’t even know if the measuring tool itself is reliable.
The problem had to do with entering the scores. If an evaluator entered multiple scores quickly using a keypad, the system would record the same score twice, discarding one of the scores actually entered. A fix was eventually implemented, but apparently re-scoring the tests compromised by the computer error would be expensive, and the company opted not to do it.
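The article doesn’t show the actual software, but the behavior described resembles a classic debounce bug: entries arriving too quickly are collapsed, with the prior score repeated in place of the new one. Here is a purely hypothetical Python sketch of that failure mode — not CTB’s code; the function names and the half-second threshold are invented for illustration:

```python
# Hypothetical sketch of the duplicate-entry bug described above.
# Not CTB/McGraw-Hill's actual code; all names and the debounce
# window are invented for illustration.

DEBOUNCE_SECONDS = 0.5  # assumed threshold for "too fast" keypad input

def record_scores(keystrokes):
    """keystrokes: list of (timestamp, score) pairs from an evaluator's keypad.

    Buggy behavior: when two entries arrive within the debounce window,
    the system re-records the PREVIOUS score and discards the new one.
    """
    recorded = []
    last_time = None
    for timestamp, score in keystrokes:
        if last_time is not None and timestamp - last_time < DEBOUNCE_SECONDS:
            recorded.append(recorded[-1])  # bug: duplicates the prior score
        else:
            recorded.append(score)         # normal path
        last_time = timestamp
    return recorded

# Two distinct scores entered 0.2 s apart: the second is lost,
# and the first is counted twice.
print(record_scores([(0.0, 3), (0.2, 1)]))  # [3, 3] instead of [3, 1]
```

The nasty part, as the supervisors suggest, is that the corruption depends on typing speed, so there is no way to tell from the recorded scores alone which ones were duplicated.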
“We are just assuming that CTB is giving us the correct information,” Warren Township Schools Superintendent Dena Cushenberry said. “There is no checks and balances to the system.”
So, to paraphrase the old question about who watches the watchmen: who is testing the testers?
We’re wasting time and money and not adding much to the education of our kids.
HoosierOne says
Exactly — your last sentence encapsulates teacher experience across the state. Of course, since the overlords at the statehouse are all-knowing, they rely on data to 1) stigmatize our kids, 2) judge our teachers, 3) financially penalize the same education professionals, 4) destroy school reputations in their communities and 5) potentially take over those “failing schools”. By whom? The very people who designed the system that fails.
It would be a farce, if it weren’t so serious to millions of people across the state.
The $24 million spent on this year’s test might just have bought us a public-minded General Assembly, if we’d used it that way, instead of enriching the “reform” crowd and the “education” companies. And the next test company, Pearson, isn’t much better on reliability, as we see from other states.
Stuart says
So the problem was at the beginning, entering the scores, and it didn’t happen in a predictable fashion? The worst possible problem, as far as I’m concerned, because that corrupts the raw data itself. You can fix the scoring program and re-run it, but not when you don’t know whether it’s good stuff or garbage going through the system. Sort of punches a hole in the idea that machines are more reliable than people. Calls for a do-over, but that’s a pretty expensive do-over, and that’s not counting the angry kids, parents and teachers who are already miffed at the whole process. Now, instead of talking about incompetent parents, teachers and kids, let’s focus on our incompetent overlords.
Don’t be quick to judge or you will be subject to the same level of judgment.