Kelsey’s post last week, about managing end-of-term course evaluations, struck a chord with me. I’ve fretted about course evals for as long as I can remember; when the results come in, months after a class has ended, I get that panicked feeling in my stomach, the same one I used to get before a big paper was returned, back when I was an undergrad myself.
Yes, reader: as a lifelong resident of the ivory tower, I worry about whether or not my students are going to give me an A.
I’ve been teaching full time for almost 15 years now; I learned long ago that grades are an uneven, cruelly dopamine-laden way to measure student achievement. And yet – despite reams of literature documenting the fallibility of end-of-term course evaluations, and their remarkable capacity to rehearse systemic biases based on race and gender – I can’t seem to stop myself scanning my results for the numbers and praying for rain.
After reading Kelsey’s post, I found myself reflecting on my relationship with course evals. Certainly there’s the stuff above, the unhealthy craving for the dopamine hit that comes with a positive response. But there’s also more.
Like many colleagues working toward best pedagogical practice, I’ve tried a range of different ways to gauge student experience at different points in the term. I’ve used my own anonymous mid-term evaluations, especially early on, when I wasn’t sure if anything I was doing in the classroom was working. I’ve invited students to reflect on their most and least favourite in-class activities, and even to vote for what we should or should not do on a given day. Recently, I’ve started using participation reflection papers, where, twice per term, students upload a 250-300-word piece (in any form they want – I stress this isn’t an essay) that considers how class is going in light of our course’s posted participation rubric.
My university (like yours, probably) has also gotten into the “better feedback” game: Western now has an online portal where students complete their evaluations and can access loads of information about what they are used for, plus helpful tips for effective feedback. This portal has a login tool for instructors, where we can add questions to the standard form, check response rates for open evals, and more. Students are incentivized to feed back with a gift card draw, and supported with guideline documents and videos demonstrating the process. The system is very consumer-oriented, like most things in the neoliberal university, but it’s also far more user-friendly and open than the paper-based, computer-marked, sealed-envelope systems of old.
What does all this fresh focus on good feedback mean? Is it translating into systemic change, or just lipsticking the pig? As I struggle myself with meaningful feedback that doesn’t send me into the “please give me an A!!” tailspin, I wonder.
And so, wondering, I turn to Facebook.
Over the weekend I asked colleagues on FB to let me know what they did to “hack” the course evals system at their joint; judging by the responses to that post, the answer was: not much. Certainly we insist to our students that their feedback matters; we offer time in class to fill the forms in; we add questions when possible. Some of us, like Kelsey, take the initiative to ask different, not-formally-sanctioned questions, including at mid-term. But we are busy, and we are tired, and course evaluations are JUST ONE MORE THING that we need to worry about as the term rockets to a close.
In this evaluation exhaustion, we have much in common with the students, as I soon learned.
After spamming my colleagues, I asked some former students to feed in. My question to them was as follows:
“More thinking about course evals. I’d love to hear from recent former students. Did you treat them seriously? As a chore? Were you cynical about their value? In a world of constant online reviews, etc, how do traditional evaluations rate?”
The responses I got here were generous, and very diverse. Two students told me they were committed optimists who took the exercise very seriously. Another told me his sister was a lecturer while he was at school, and therefore he understood from the inside what the stakes for professors were, which coloured his perception of evaluations. As he noted, from that both-sides perspective, he felt it was essential to be able to justify not giving a teacher top marks. (A welcome attitude, one that brings a teacherly perspective to teacher “grading”.)
Still another student confessed to using evaluations to reward good teachers and dig a bit at the bad ones, knowing that his feedback had a potential professional impact for both. (YIKES, but totally fair – that’s what we are asking students to do, right??)
Finally, one of my best-ever students shocked me by revealing that she did not give a flying frankfurter about any of it, and probably hadn’t filled out most of her evals anyway. (She really dug the gift card incentive, though.)
These diverse responses about the experience of course evaluations converged on one point, however: timing. As cranky-pants Camille* (above), after confessing to eval ennui, added:
“if administration wants to have a genuine dialogue with students about how certain classes/professors may or may not be working, why don’t evals happen halfway through a semester? This gives everyone time to adjust on the fly. No one cares in the final weeks of class because nothing can be done to help the students that were struggling all along. The idea of course evals is wonderful, although I don’t think the way the system is currently set up ‘helps’ the students in any way.”
Mid-term check-ins are increasingly common, but they aren’t yet the norm. At Western, instructors are invited to do an “optional” mid-term check-in, but even though I’m fully committed to student feedback, I’ve never taken the option.
The timing thing stands out for me here not because it’s a great idea (OF COURSE IT IS), but because it gets at deeper issues, which Camille nicely bulls-eyes in the above comment. Do we want evaluations to be part of a dialogue about teaching and learning? If so, why do they still work like a multiple-choice, one-way street? Do we want evaluations to be materially helpful? If so, what are they doing at the end of the semester? If dialogue and help are actually our aims, we need to frame evaluations, locate them, and structure their relationship to classes, to departments, and to the university community as a whole very differently.
After all this generous feedback from Camille, Jake, Jonas, Jack, and Thalia had appeared in my FB feed, a couple of colleagues weighed in. One, playwright and Weber State theatre professor Jenny Kokai, wrote about her recent experiences on a committee rethinking evaluations at her school. (NB: there are a lot of these projects afoot, which I discovered when I went snorkelling through the research before writing this post. I was particularly impressed by the documentation around the recent pilot project at the University of Waterloo, just up the highway from my house.)
Dr Kokai pointed out that research reveals mid-semester feedback tends to focus on class effectiveness, while late-semester feedback is generally tied to grade expectations. She also noted that metacognitive questions – about, say, students’ learning practices, and their parallel commitments to their own class labour – tend to offer a more holistic picture of student experience, while also benefitting students as a reflective exercise.
I’ve realized over the course of preparing this post that it’s exactly this last thing – encouraging metacognitive reflection – to which I’ve turned my attention. As a teacher, it’s where I want to put my time and energy.
Why don’t I take the mid-term “feedback” option Western gives me? I’m too busy reading and writing back to students’ mid-term participation reflections!
In these documents I invite students to think about what’s working and not working for them in their current participation practice – I’ve taken to framing participation, and studenting in general, as a practice, in the same way I call my teaching a practice. (I repeat this to students as often as possible. All we can all ever do is PRACTICE!)

These reflections are not anonymous documents, but – as with peer review, a post for another day! – I don’t think student feedback need be anonymous to be useful. In my class, you can get full participation marks only if you engage with the participation reflection exercise; other than that, these documents are not graded, and nobody is discouraged from being frank and clear about both strengths and weaknesses. Students write these reflections to themselves and to me, in the lowest-stakes way possible, and reveal where their wins and their struggles are; I then use that feedback as an opportunity to make suggestions, check in, validate their perceptions, and invite them to come sit down in office hours to figure stuff out. At the very least, I gain some tools that allow me to check in with them, in class, repeatedly until the end of the semester.
This week, our last of the term at Western, both of my classes will do a guided reflection in class, where I will ask three slightly different questions: What went really well after your last check-in? What didn’t get off the ground? And, most importantly: What have you learned about your own experience of learning that you can take with you into next term?
These reflections cannot replace fully anonymous feedback, of course, but they model the kinds of questions, and invite the kinds of mutual and dialogic class investments, that all evaluation tools need to aim for. The next step is to shift our evaluation structures systemically so that “feedback” becomes actual dialogue, and leads to a better understanding of what it takes to sustain a healthy learning environment from both ends.
*Thanks to Camille, and to everyone who responded to my queries, for their reflections and for granting me permission to cite them here.