So, EdX is pushing out an <a href="http://www.nytimes.com/2013/04/05/science/new-test-for-computers-grading-essays-at-college-level.html?pagewanted=all&_r=1&">automated essay grading system</a>. This is interesting. The advantages of reliable and helpful automated grading are obvious: Less (tedious) work for profs, faster feedback to students, and (potentially) less pointless variance in the marking (and feedback).
I have a fair bit of interest in this. I’ve been working with Mark van Harmelen on a system for “smooth” assessment, i.e., to make it easier to grade essays. I’ve mostly stayed away from automation of various sorts because I didn’t think it was feasible and NLP is not my thing. With my PhD student Tahani, I’m working on automatically generating multiple choice questions (MCQs), which is probably as hard as grading essays is (for the instructor).
Plus, I love talking about writing, but I hate grading. Period. I intensely dislike it.
So, potentially, auto-essay grading is a good thing. Essay questions are fairly easy to write, and if they were equi-easy to grade, hurrah!
Let me express some skepticism, even sight unseen. Obviously, they may have cracked all the problems and I'll be wrong. If so, great. My life becomes easier.
But it's highly unlikely that this is going to be a very useful tool. Consider grammar checkers, or even spelling checkers. Those tackle a hugely easier task, and afaict they still all suck. It's not even clear to me that they are useful for non-native speakers. I know they aren't often used by non-native speakers, at least not the ones who take my classes, even when I suggest that they try them. Since one of the things I tend to include in my rubrics is a point for mechanics, I would love it if I could autocheck that. Thus far, I don't seem able to.
Now, with a very structured rubric, a lot of tuning, and a fallback to human marking, it might be possible to do something. But it's unclear that it would be a net gain, and it's even less clear that it would be instantaneous. What's more, instant feedback is only a good thing if it actually helps.
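To make the "structured rubric plus human fallback" idea concrete, here is a toy sketch in Python. Everything in it is invented for illustration: the keyword-matching "scorer" is a crude stand-in for whatever model a real system would use, and the confidence threshold is arbitrary. The point is only the shape of the fallback: score each rubric criterion, and if the machine isn't confident on any one of them, route the whole essay to a human.

```python
import re

def score_criterion(essay, keywords):
    """Crude proxy scorer: fraction of rubric keywords present in the essay.
    Returns (score, confidence); confidence peaks when the evidence is
    decisive either way and bottoms out at a 50/50 split."""
    words = set(re.findall(r"\w+", essay.lower()))
    hits = sum(1 for k in keywords if k.lower() in words)
    score = hits / len(keywords)
    confidence = abs(score - 0.5) * 2  # 1.0 at the extremes, 0.0 at 0.5
    return score, confidence

def grade(essay, rubric, threshold=0.6):
    """Score each rubric criterion; return None (i.e., route the essay
    to a human marker) if any criterion's confidence is too low."""
    results = {}
    for criterion, keywords in rubric.items():
        score, conf = score_criterion(essay, keywords)
        if conf < threshold:
            return None  # fallback to human marking
        results[criterion] = score
    return results

rubric = {"thesis": ["argue", "claim"], "evidence": ["study", "data"]}
print(grade("I argue the claim using study data.", rubric))
# → {'thesis': 1.0, 'evidence': 1.0}
print(grade("I argue with data.", rubric))  # ambiguous thesis → None
```

Note that even this trivial version surfaces the real question: how often does the fallback fire? If most essays end up routed to a human anyway, the "instant" part of instant feedback evaporates.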
My colleague, John Latham, is a big proponent of face-to-face marking, wherein the student hands in a bit of work and you mark it with them sitting there. There's quite a lot to be said for that. One thing it doesn't really do, however, is scale.
Calibrated peer review is a really interesting alternative, which has the added advantage of training students to "grade" (i.e., critique) writing. It also appears to scale. So that looks like the more promising alternative.
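The "calibrated" part is the interesting bit, and one common shape of it can be sketched. Caveat: this is an illustrative toy, not a description of any particular CPR system; the weighting function and all the numbers are made up. The idea: reviewers first mark a few calibration essays whose instructor scores are known, and their later peer marks are weighted by how closely they matched.

```python
def calibration_weight(reviewer_marks, instructor_marks):
    """Weight in (0, 1]: shrinks as the reviewer's mean absolute error
    against the instructor's marks on the calibration essays grows."""
    errors = [abs(r, ) if False else abs(r - i) for r, i in zip(reviewer_marks, instructor_marks)]
    mae = sum(errors) / len(errors)
    return 1.0 / (1.0 + mae)

def pooled_score(marks_and_weights):
    """Weighted mean of peer marks for one essay."""
    total_w = sum(w for _, w in marks_and_weights)
    return sum(m * w for m, w in marks_and_weights) / total_w

instructor = [8, 5, 9]                           # known calibration scores
w_a = calibration_weight([8, 5, 9], instructor)  # perfect match → weight 1.0
w_b = calibration_weight([4, 9, 3], instructor)  # way off → small weight
print(pooled_score([(7, w_a), (2, w_b)]))        # → 6.25
```

The accurate reviewer's mark of 7 dominates the inaccurate reviewer's 2, which is exactly the property that lets the scheme scale: you don't need every student to be a good marker, only enough signal to tell the good markers from the bad ones.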