Evaluate me!

This past week I prepared my first set of “module reports”. Here in the UK (or at least at my school, Queen Mary), at the end of each term instructors and convenors take some time to reflect on what happened during their modules (aka courses), to examine and comment on trends in the student evaluation data, and to share future plans for each module. In my department there’s a standardised template for this task, and after looking at it online I was kind of dreading filling it in (six times over). Finally, last Monday, and with permission from the colleague in charge of collating the reports, I decided to chuck the template out and write about each of my modules in an extended way in my teaching journal. I then imported that writing into a Word doc and sent it to her. Job done!

And now I have to ask myself: template tantrum aside, what took me so long?

The exercise was terrific. By far the most useful (and satisfying) part of it involved looking seriously – which is to say carefully, and for longer than 15 minutes – at the student evaluation data for each course I taught in 2012-13. The jury remains perennially out on whether or not eval data can tell us anything useful about student experience and/or professorial teaching skill (for two recent articles, one on each side of the debate, look here and here). Nevertheless, I’ve always taken student evaluations seriously, read them (I thought) with some care, and marked up my hard copies before sticking them in a dedicated folder in my filing cabinet for future reference. I realized this past week, though, that I had never before set aside a dedicated (and not short) stretch of time to look properly at both the numerical data and the written student comments on my evaluations, and to cross-reference student experiences across several concurrent courses.

Doing this (slow, measured) cross-referencing was really eye-opening. I know I’m good at certain key teaching elements (being engaging in front of the class; managing group activities effectively; connecting with students who can’t quite articulate what they need, and helping them to figure out a path forward), but I’m less good (as I suspect we all are) at working out how exactly I can improve my teaching in those areas the evaluations flag (often nebulously) as potential problems. That’s likely, in part, because in the past when I’ve read my evaluations just for me I’ve been drawn to the good stuff (of course), and I’ve worked actively to minimize the not-so-good stuff’s impact on me (of course, again). As I noted in my post back in April on failure, it’s a fairly typical human reaction to meet criticism with an immediate urge to mitigate it; without doubt, in the past when I’ve read student evaluations I’ve done so with an eye to absorbing the good, pushing past the bad, and getting on with other things (aka forgetting about it). This time around, because of the demands of the “module report” exercise, I had to spend a good chunk of time observing, accounting for, and then writing about both the good I’d achieved and the places where I’d failed to achieve what I was hoping to do.

What did I learn? For the most part, things I knew already, but only intuitively and for myself: that I’d often felt rushed in my seminar on Naturalism (a number of students asked in their evals for longer seminars, which was gratifying [!] and also took some of the sting out of the comments saying that they had felt rushed, too); that the course I ran on gender and power in early modern drama had been taken over a bit too completely by its experimental archive component, leaving some students feeling that they hadn’t really completed the course they’d signed up for; and that a number of students weren’t completely clear on my marking criteria. I also discovered trends that are likely evidence of more global difficulties in our department and even higher up the chain: for example, students struggle to understand aims and objectives in all of my courses, even though “aims” and “objectives” receive their own headings on both my (extensive) course outlines and on our department’s virtual learning environment module pages. For me, the take-away here is that students may need more help navigating our new virtual learning software, and probably also that they ignore a good chunk of my outline documents (maybe those should be shorter, and easier to navigate, too…).

The other thing I realized while reading, reflecting on, and writing about my evaluation data is that I’m not satisfied with the evaluation documents we use here at QM (and that’s not just a QM problem – I’m not satisfied with any of the evaluation forms I’ve ever filled in, or handed out for filling in, as a student or as a teacher). To minimize processing labour these are multiple-choice forms, with little space for students to reflect in writing on their experiences; the questions are generic and enormously broad, because they need to apply easily to vast numbers of different kinds of courses in order to be cross-reference-able across the entire university (and beyond). I understand, in other words, why the forms look the way they do, but the fact remains that, for courses driven by intellectual curiosity and creativity, complex research questions, and extended pieces of reflective (and research-led) writing, answering “yes”, “no”, or “maybe” to very general questions about teacher preparedness, methodology, and resources can only tell us so much about student experience. I know it’d be a lot more work to collect and evaluate written reflections, coding those reflections for key words and themes (in 2008 I led a qualitative data-driven teaching study at Western University, so I do really get what a pain in the ass this kind of work can be), but I suspect we’d learn more, and more useful things, from text-centred evaluations in programs like English and Drama, and elsewhere in the Humanities, if not also in the “hard” sciences and engineering.
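(For the curious: here’s a minimal sketch, in Python, of what the very simplest version of that keyword-and-theme coding might look like. Everything in it – the theme names, the keywords, the sample comments – is invented for illustration; real qualitative coding builds its codebook iteratively from the data, and no keyword matcher substitutes for a human reader.)

```python
# A toy sketch of keyword-based theme coding for free-text evaluation
# comments. The theme dictionary and sample comments are hypothetical,
# made up purely to illustrate the shape of the work.
from collections import Counter

# Hypothetical codebook: each theme maps to keywords that signal it.
THEMES = {
    "pacing": ["rushed", "too fast", "not enough time", "longer"],
    "assessment": ["marking", "criteria", "grading", "feedback"],
    "structure": ["aims", "objectives", "outline", "organised"],
}

def code_comment(comment: str) -> set[str]:
    """Return the set of themes whose keywords appear in a comment."""
    text = comment.lower()
    return {theme for theme, keywords in THEMES.items()
            if any(kw in text for kw in keywords)}

def tally(comments: list[str]) -> Counter:
    """Count how many comments touch on each theme."""
    counts = Counter()
    for comment in comments:
        counts.update(code_comment(comment))
    return counts

if __name__ == "__main__":
    # Invented sample comments, standing in for scanned eval responses.
    sample = [
        "The seminars felt rushed; I'd have liked longer sessions.",
        "I was never sure what the marking criteria were.",
        "Aims and objectives were unclear on the module page.",
    ]
    for theme, n in tally(sample).most_common():
        print(f"{theme}: {n} comment(s)")
```

Even a crude tally like this makes the cross-referencing visible – the same “pacing” or “assessment” flag surfacing across several concurrent courses – though the real insight, as ever, comes from actually reading what the students wrote.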

More importantly: in courses driven by creative thinking, writing, and performing, as all of mine are, evaluations that ask students to “grade” their instructors on the same terms – that is, via creative thinking and writing – as those by which we grade them might bring a welcome sense of fairness to the evaluation process. After all, if I stand up in front of the class and tell my students that their assessment of my work really matters to me, and is an important part of our shared classroom labour, I’d like to be able to hand out those darn forms knowing that I’m asking them to offer me the kind of feedback I’ve proudly offered them, and that I will, indeed, be taking it seriously.

Kim

PS: mid-term evaluations, how to create them, and how best to use them… the subject for another post, perhaps. Meanwhile, check out the terrific evaluative resources provided by the Cornell Centre for Teaching Excellence here.
