Picky, Picky, Picky ... Really?


So, I guess you heard New Jersey missed out on Race to the Top because it didn't follow the directions properly? The Gothamist (a site I sort of like) has a good overview, and the Star-Ledger has the actual reviews. Had New Jersey provided the right budget years, it might have received an additional 4+ points on its application. It turns out the state missed the last funded slot by ... 3 points - a potential $400 million error.
So, my question is: why be so picky? Yes, it was a dumb (or ill-conceived) move by New Jersey's Department folks, but why punish the kids in New Jersey over a technicality? There is probably more to the story, but I think it is indicative of a larger point worth considering about the whole Race to the Top process.
The feds were just flat-out too picky. Normally, being picky and accountable is a good thing, so I hate to complain about it, but I do think that in this instance the DOE was too rule-bound in assigning the points that determined the awards.
The whole concept of awarding "points" for different components of a state plan struck me as childish. This is not a math test. Nor was this a research grant. There are not necessarily right or wrong answers in educational innovation. Even charters (which wound up doing us in here in Kentucky) have not been proven to be a right answer. So, when we here in Kentucky say we have a waiver system and other charter-like concepts ... there was no credit, even though for all we know our answer was just as right as any other answer on this concept.
Lines have to be drawn somewhere, and due process demands that procedures be established, so I understand the argument for the process they set up. But nothing required them to be so picky in assigning the points. The pickiness wound up punishing many children in the US.
Update: Now there is speculation that one particular judge scored some proposals low and that it may have affected some states' rankings. For instance:
Further review of KY RTTT scores today shows combination of 0 points on charter and low scoring judge impacted rank. Similar issue for CO
P.S. - Yes, some of this post is a result of sour-grapes ... I'll admit it. [Grumble, grumble] But, there is a legitimate point in there somewhere, I hope.
Reader Comments (3)
Yet unpicky in other ways: nobody cares that they decided not to internationally benchmark the Common Core ELA standards, which is explicitly called for in the RttT application.
I find the possible lack of inter-rater reliability far more disturbing than any pickiness.
Failure to control for concordance among reviewers would sabotage the Department of Education's own efforts to bring scientific rigor to education research and practice.
Great points both Tom and Scott. Love your point Scott about inter-rater reliability. That's classic.
Do what I say ... not what I do.