October is marking season at my university: midterms, essays, tests and quizzes all crowd into the space between the end of September’s silly season (frosh week, reunion weekend, general football mayhem) and the final date to add or drop courses. The wind and rain rush in, the leaves come down… and we all end up buried, by Halloween, under piles and piles of papers.
This year I’ve been trying something new with my undergraduate class in performance studies; for the first time I’m marking essays explicitly against a pre-existing rubric, one I’ve made freely available to everyone on the assignment page of our online learning portal. I’ve used marking rubrics regularly for the last few years – they were mandatory at Queen Mary, where I taught from 2012 to 2014, and I found them clarifying and productive. But this is the first time I’ve used a rubric as a literal marking tool, rather than just as a general set of guidelines for our reference.
(What’s a rubric? Click here for a really helpful explanation.)
My typical marking pattern until now has been some variation on this:
- post a rubric for students for written assignments, so that they know broadly what I expect in terms of content, structure, research, grammar, and style;
- read papers without closely consulting the rubric, assuming it implicitly reflects what I already know to be true about good, bad, and ugly essays;
- write up comments without direct reference to the rubric, and assign a grade.
I suspect a lot of us mark this way, whether we realise it or not. And this is not, of course, to say our comments on student papers are not thorough, reflective of our rubrics, or written with care; I personally pride myself on providing clear feedback that describes a paper’s intention, where that intention is achieved, where it is not achieved, and what students may adjust in order to advance up the grade scale. I’ve also experimented several times with laddering assignments, with using peer feedback on drafts, and with various other techniques to lower the writing stakes and make the process of editing and improving written work more transparent and accessible.
(I’ve written about, and hosted some terrific guest posts on, assessment challenges in the past; click here for the archive.)
So I clearly care a lot about assessment – about getting it right and giving students the intel about their work that they need to improve. But rubrics? Not so much with the caring about rubrics, maybe. I suspect I’m a bit jaded, like many of us, because rubrics look on the surface like yet another measurement tool we’re being forced to use in order to fit our teaching labour into boxes that can be ticked by our senior administrations and the governments who control their purse strings. They are probably such a thing. But they are also something else: they are a clear, consistent way to communicate student success and mitigate student failure, on our own terms. (Let’s not forget: most of us still have the freedom to set our own rubrics. For now, anyway.)
And, as I discovered, they are also a great way for us to learn key information about our own marking tendencies and the assumptions underpinning them.
Marking with the rubric changed the pattern I describe above. My process now goes something like this:
- import the rubric bullet points into my existing marking template;
- read the papers with those bullets explicitly in mind;
- comment in equal measure on each bullet;
- assign a rough grade zone to each bullet (i.e., this aspect of your work is at “B” level, or at “A-” level, etc.);
- average the bullets to arrive at a final grade (a rough sketch of that arithmetic follows below).
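If it helps to see the arithmetic in that last step, here is a minimal sketch of how the averaging might work. The letter-to-number mapping, the equal weighting, and the function name are all my illustrative assumptions for this post, not an official conversion scale:

```python
# Illustrative sketch only: averaging per-bullet grade zones into a final grade.
# The 4.0-style letter-to-grade-point mapping below is an assumption for
# demonstration; a real scheme would use the institution's own scale.
GRADE_POINTS = {
    "A+": 4.3, "A": 4.0, "A-": 3.7,
    "B+": 3.3, "B": 3.0, "B-": 2.7,
    "C+": 2.3, "C": 2.0, "C-": 1.7,
    "D": 1.0, "F": 0.0,
}

def average_grade(zones):
    """Convert each bullet's grade zone to points, average them equally,
    and return the letter grade closest to that average."""
    points = [GRADE_POINTS[zone] for zone in zones]
    mean = sum(points) / len(points)
    # Return the letter whose grade-point value sits nearest the mean.
    return min(GRADE_POINTS, key=lambda letter: abs(GRADE_POINTS[letter] - mean))

# Example: content at B, structure at B+, research at A-, grammar/style at B.
print(average_grade(["B", "B+", "A-", "B"]))  # -> "B+" (mean of 3.25)
```

The detail worth noticing is that equal weighting is baked into the average: every bullet pulls on the final grade with the same force, which, as it turns out, is exactly what my old “by feel” grading was not doing.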
In case you’re having trouble picturing this, here’s a screen shot of my template, with some (anonymised) feedback in place:
The first thing I realised after using the rubric in this way? I’ve historically given far too much weight to some aspects of student work, and too little to others… even though my rubrics have always implied that all aspects – content, structure, research, and grammar/style – are equally valuable. So I’ve been short-changing students who, for example, have good research chops but poor writing and structuring skills, because the latter make the former harder to recognise, and without a rubric to prompt me I’ve simply not been looking hard enough for it. I’ve also, without question, been over-compensating students with elegant writing styles; less impressive research labour becomes less visible if the argumentation runs along fluidly.
Right off the bat, then, my use of the rubric as a marking guide both levelled the playing field for my students and allowed me to come face to face with one of my key marking biases.
The second thing I realised was that marking in this rubric-focused way is a real challenge! I am a decorated editor and an astute reader of colleagues’ work, but that doesn’t mean I’m a perfect grader – not by any means. Reading novice scholarly work (aka, student work) with care requires keeping a lot of balls in the air at once: where’s the structure slipping out of focus; when is research apparent, when is it there but not standing out, when is it obviously absent; how much is poor grammar actually impeding my understanding, as opposed to just pissing me off (a different level of problem).
To do the juggle well, I’ve discovered, I have to slow down. Except… I have trained myself (as we all have – neoliberal university survival skill!) to read student work very quickly, make some blanket judgements along the way, and then produce a final grade driven as much by feel as by careful analysis of the paper’s strengths and weaknesses. When I was forced to put feeling aside and look back at all of a paper’s component parts, I saw, as often as not, that the grade I “felt” was right at the end was not, in fact, the grade the rubric was telling me was fair.
So add a few minutes more per paper, then. But where to snatch them from? It’s not as if I’m rolling in free time over here…
Thankfully, the rubric came to my rescue on this one, too. My third discovery: I could write fewer comments, and more quickly, yet still provide comprehensive feedback. The rubric language I include on each page of typed assessment stands in nicely for a whole bunch of words I do not need to write anew each time, and it standardises the way I frame and phrase my comments from student to student. That’s not to say everyone gets the same feedback, but rather that my brain is now compartmentalising each piece of assessment as I read, and is more quickly able to slot those pieces into the comment “boxes” at reading’s end.
Plus, in order to keep feedback to a page yet also include the rubric language in each set of comments, I’m writing less per paper, period. I doubt this is a bad thing – students generally don’t read all the feedback they receive from us, if they read any of it. Placing my responses to their papers directly in the language of my already-stated expectations, and offering smaller, more readable chunks will, I hope, get more students reading more of their feedback, and using it too. (I have plans to survey them on this in the last week of classes – stay tuned.)
As luck would have it, just as I was thinking this post through I came across a compelling discussion by Lynn Nygaard that uses “mirroring” as a metaphor to explain assessment labour. Nygaard’s ideas got me thinking about other ways I might transform my marking’s efficiency and effectiveness in future; although her focus is on feeding back to colleagues and grad students, I think it has some real applicability to undergraduate assessment too. I’ll share some of her provocations and reflect on them (ha!) in part two of this post, next week.
Until then, happy midterms!
Kim