Course Evals 2.0: Things We Can Do Now to Make a Flawed Process Better

Kelsey’s post last week, about managing end-of-term course evaluations, struck a chord with me. I’ve fretted about course evals for as long as I can remember; when the results come in, months after a class has ended, I get that panicked feeling in my stomach, the same one I used to get before a big paper was returned, back when I was an undergrad myself.

Yes, reader: as a lifelong resident of the ivory tower, I worry about whether or not my students are going to give me an A. 

I’ve been teaching full time for almost 15 years now; I learned long ago that grades are an uneven, cruelly dopamine-laden way to measure student achievement. And yet – despite reams of literature that reflect the fallibility of end-of-term course evaluations, and their remarkable capacity to rehearse systemic biases based on race and gender – I can’t seem to stop myself scanning my results for the numbers and praying for rain.

After reading Kelsey’s post, I found myself reflecting on my relationship with course evals. Certainly there’s the stuff above, the unhealthy craving for the dopamine hit that comes with a positive response. But there’s also more.

Like many colleagues working toward best pedagogical practice, I’ve tried a range of different ways to gauge student experience at different points in the term. I’ve used my own, anonymous, mid-term evaluations, especially early on, when I wasn’t sure if anything I was doing in the classroom was working. I’ve invited students to reflect on their most and least favourite in-class activities, and even to vote for what we should or should not do on a given day. Recently, I’ve started using participation reflection papers, where, twice per term, students upload a 250-300 word piece (in any form they want – I stress this isn’t an essay) that considers how class is going in light of our course’s posted participation rubric.

My university (like yours, probably) has also gotten into the “better feedback” game: Western now has an online portal where students complete their evaluations and can access loads of information about what they are used for, plus helpful tips for effective feedback. This portal has a login tool for instructors, where we can add questions to the standard form, check response rates for open evals, and more. Students are incentivized to feed back with a gift card draw, guideline documents, and videos demonstrating the process. The system is very consumer-oriented, like most things in the neoliberal university, but it’s also far more user-friendly and open than the paper-based, computer-marked, sealed-enveloped systems of old.

What does all this fresh focus on good feedback mean? Is it translating into systemic change, or just lipsticking the pig? As I struggle myself with meaningful feedback that doesn’t send me into the “please give me an A!!” tailspin, I wonder.

And so, wondering, I turn to Facebook.

Over the weekend I asked colleagues on FB to let me know what they did to “hack” the course evals system at their joint; judging by the responses to that post, the answer was not that much. Certainly we insist to our students that their feedback matters; we offer time in class to fill forms in; we add questions when possible. Some of us, like Kelsey, take the initiative to ask different, not-formally-sanctioned questions, including at mid-term. But we are busy, and we are tired, and course evaluations are JUST ONE MORE THING that we need to worry about as the term rockets to a close.

In this evaluation exhaustion, we share much in common with the students, as I soon learned.

After spamming my colleagues, I asked some former students to feed in. My question to them was as follows:

More thinking about course evals. I’d love to hear from recent former students. Did you treat them seriously? As a chore? Were you cynical about their value? In a world of constant online reviews, etc, how do traditional evaluations rate?

The results I got here were fulsome, and very diverse. Two students told me they were committed optimists who took the exercise very seriously. Another told me his sister was a lecturer while he was at school, and therefore he understood from the inside what the stakes for professors were, which coloured his perception of evaluations. As he noted, from that both-sides perspective, he felt it was essential to be able to justify not giving a teacher top marks. (A welcome attitude, one that brings a teacherly perspective to teacher “grading”.)

Still another student confessed to using evaluations to reward good teachers and dig a bit at the bad ones, knowing that his feedback had a potential professional impact for both. (YIKES, but totally fair – that’s what we are asking students to do, right??)

Finally, one of my best-ever students shocked me by revealing that she did not give a flying frankfurter about any of it, and probably hadn’t filled out most of her evals anyway. (She really dug the gift card incentive, though.)

These diverse responses about the experience of course evaluations converged on one point, however: timing. Cranky-pants Camille* (above), after confessing to eval ennui, added:

“if administration wants to have a genuine dialogue with students about how certain classes/professors may or may not be working, why don’t evals happen halfway through a semester? This gives everyone time to adjust on the fly. No one cares in the final weeks of class because nothing can be done to help the students that were struggling all along. The idea of course evals is wonderful, although I don’t think the way the system is currently set up ‘helps’ the students in any way.”

Mid-term check-ins are increasingly typical, but they aren’t yet the norm. At Western, instructors are invited to do an “optional” mid-term check-in, but even though I’m fully committed to student feedback, I’ve never taken the option.

The timing thing stands out for me here not because it’s a great idea (OF COURSE IT IS), but because it gets at deeper issues, which Camille nicely bulls-eyes in the above comment. Do we want evaluations to be part of a dialogue about teaching and learning? If so, why do they still work like a multiple-choice, one-way street? Do we want evaluations to be materially helpful? If so, what are they doing at the end of the semester? We need to frame them, locate them, and structure their relationship to classes, to departments, and to the university community as a whole very differently if this is actually our aim.

After all this fulsome feedback from Camille, Jake, Jonas, Jack, and Thalia had appeared in my FB feed, a couple of colleagues weighed in. One, playwright and Weber State theatre professor Jenny Kokai, wrote about her recent experiences on a committee rethinking evaluations at her school. (NB: there are a lot of these projects afoot, which I discovered when I went snorkelling for some of the research before writing this post. I was particularly impressed by the documentation around the recent pilot project at the University of Waterloo, just up the highway from my house.)

Dr Kokai pointed out that research reveals mid-semester feedback focuses on class effectiveness, while late-semester feedback is generally tied to grade expectations. She also noted that metacognitive questions – about, say, students’ learning practices, and their parallel commitments to their own class labour – tend to offer a more holistic picture of student experience, while also benefitting students as a reflection experience.

I’ve realized over the course of preparing this post that it’s exactly this last thing – encouraging metacognitive reflection – to which I’ve turned my attention. As a teacher, it’s where I want to put my time and energy.

Why don’t I take the mid-term “feedback” option Western gives me? I’m too busy reading and writing back to students’ mid-term participation reflections!

In these documents I invite students to think about what’s working and not working for them in their current participation practice – I’ve taken to framing participation, and studenting in general, as a practice, in the same way I call my teaching a practice. (I repeat this to students as often as possible. All we can all ever do is PRACTICE!) These reflections are not anonymous documents, but – as with peer review, a post for another day! – I don’t think student feedback need be anonymous to be useful. In my class, you can get full participation marks only if you engage with the participation reflection exercise, but other than that these documents are not graded, and nobody is discouraged from being frank and clear about both strengths and weaknesses. Students write these reflections to themselves and to me, in the lowest-stakes possible way, and reveal where their wins and their struggles are; I then use that feedback as an opportunity to make suggestions, check in, validate their perceptions, and invite them to come sit down in office hours to figure stuff out. At the very least, I gain some tools that allow me to check in with them, in class, repeatedly until the end of the semester.

This week, our last of term at Western, both of my classes will do a guided reflection in class, where I will ask three slightly different questions: what went really well after your last check in? What didn’t get off the ground? And, most importantly: what have you learned about your own experience of learning that you can take with you into next term?

These reflections cannot replace fully anonymous feedback, of course, but they model the kinds of questions, and invite the kinds of mutual and dialogic class investments, that all evaluation tools need to aim for. The next step is to shift our evaluation structures systemically so that “feedback” becomes actual dialogue, and leads to a better understanding of what it takes to sustain a healthy learning environment from both ends.

*Thanks to Camille, and to everyone who responded to my queries, for their reflections and for granting me permission to cite them here.

End of Term Evaluations & Student Feedback – Part I

This is the first part of a two-part post. As an end of term treat, next week will feature a roundtable post with more evaluation hacks from instructors across the teaching spectrum!

Alongside stacks of unmarked essays and the promise of candy cane flavoured lattes, the final weeks of November mean the end of classes. And the end of classes means it’s every instructor’s favourite time of year: it’s course evaluation time.


As anyone in higher education knows, teaching evaluations have conventionally played a significant role in hiring, promotion, and tenure processes. Theoretically, they provide students the opportunity to report on their experiences with an instructor, giving institutions key information about what happens in courses across university campuses.

Practically, they are far murkier.

There is plenty of evidence (see: here, here, and here) that suggests that teaching evaluations are frequently inflected by biases, gender bias in particular. To boot, they are designed like standardized tests (often complete with institutional grey and blue colour schemes). And, frankly, the questions are usually, ahem, unhelpful in terms of actual pedagogical feedback.


I find all of this annoying.

I’m currently a postdoctoral researcher and contract instructor, so whether I like it or not, evaluations matter for my career. At the same time, I’m at a point in my teaching where I genuinely want feedback. And, I really want feedback about things that course evaluations aren’t designed to gather, like assignment creation and the success or failure of specific activities.

So, last year, I decided to solicit end of term feedback from students in addition to their course evaluations. This isn’t super radical. I, and many other teachers, do mid-term check-ins. Nevertheless, I thought I’d share the process and list of questions as a resource.

These questions were for a small, seminar-based performance studies class. The class was made up of upper-year students and met once a week for three hours.


  1. What reading did you enjoy the most/get the most out of this semester? Why?
  2. What reading from BEFORE reading break (so, selected by Kelsey) did you enjoy the least/get the least out of this semester? Why?
  3. What worked for you about the co-facilitation project?
  4. Was the co-facilitation assignment a better or worse experience for you than a traditional individual or group presentation? Why?
  5. Was there an element of the co-facilitation project that hindered your learning?
  6. Did the reading responses support your learning? Why or why not?
  7. Was there an in-class activity that you vividly remember? Which one? Why?
  8. Is there anything else you’d like to share with me?

On the final day of class, I paired my usual speech about course evaluations (they matter) with my introduction to this set of questions.

Wanting to give my students the same freedom to respond to these questions as their course evaluations, I also arranged for one of my students to collect the informal evaluations, put them in a sealed envelope, and hand them off to a colleague to keep until after grades were submitted.


When the semester was over, I collected the envelope and was both pleased and surprised with the depth of feedback I received: the co-facilitation project was generally helpful for learning but also a bit complex on the ground; there was one historiography reading too many; and students took away unexpected nuggets from the class.

Most importantly, unlike my teaching evaluations, which are generally written about me, the feedback was written to me. This meant that it was phrased so that I could read it constructively, and in combination with my evaluations, the students’ insights offered a really helpful perspective for moving forward in my teaching practice.


Grading Participation

By Signy Lynch

When I checked my inbox early this December and saw a notification for the latest Activist Classroom blog post, entitled ‘OMG CAN SOMEONE PLEASE TELL ME HOW TO GRADE PARTICIPATION???,’ it felt like a sign. As I came across this post, I was just finalizing the syllabus for the first course I would ever teach, Perspectives on Contemporary Theatre, a fourth-year seminar at York University.

I had great fun designing the syllabus, but had been hesitating over the participation section—I, like Kim, had been preoccupied with how best to grade my students on participation, and how to do so in a way that might motivate and elicit meaningful engagement from them. There were a number of factors to consider. Though a seminar, the course I was teaching was quite large, with 37 students. In order to evaluate participation fairly, I felt I needed some way to increase my engagement with them on an individual level. I also knew I wanted students to be given credit for, and an incentive to engage with, the readings, as an important focus for class discussion.


Image: Some people raising their hands. Participation?

After deliberation, and inspired by some of the resources that Kim posted (particularly the first point in this post, on grading participation through written assignments), I settled on a participation grade made up of the following three components:

  1. In-class participation and attendance (the “typical” way)
  2. Commenting at least once per week on another student’s Instagram response post
  3. Written reflections on participation, conducted in class three times throughout the semester.

To emphasize the importance of engagement, particularly in a seminar course, I made participation worth 20% of students’ final grade, though I didn’t assign a strict proportion of that 20% to each component, to allow myself (and the students) some flexibility.

One key theoretical influence on this formula for participation was the principle of universal design. Universal design provides students multiple ways to engage in the course, shows them multiple representations of material, and allows them multiple avenues through which to express their learning (here’s a great primer on universal design in higher education, for those unfamiliar). Incorporating universal design into course design is a more inclusive way to teach that respects students’ differences as learners, both in ability and interest.


Image: a variety of coloured pencils. Universal design appeals to a variety of learners. Photo by Pixabay on Pexels.com

Since the first component of my participation grade breakdown—in-class participation—is fairly traditional, I’ll spend a little more time elaborating on components two and three.

Component Two – Instagram Comments

Component two asked students to leave at least one comment per week on a classmate’s Instagram post, in connection to an existing Instagram response journal assignment. For a total of 30% of the final grade, I asked students to post weekly short (200-250 word) responses on Instagram to a passage of their choosing from one of that week’s readings.

The posts were due the Friday before each Monday class, and the comments due right before class, ensuring that students had ample time to review each other’s posts and select one to which to respond.


Screenshot of a post from the course Instagram account, describing the journal assignment

In the past (despite teaching theatre!) I’ve heard from many students that speaking up in class is a real barrier to their participation. Thus, in asking students to contribute through written comments, I offered them an alternate mode of communication (inclusive design!), while at the same time generating content that could be drawn on to round out in-class discussion. Unlike the Instagram journal posts themselves, these comments were graded for completion rather than substance, to further reduce barriers to participation.

This component turned out fairly well overall. One student wrote to me in their response that posting on Instagram, “feels less formal than posts on Moodle, and I’ve noticed myself and my peers feel more comfortable responding to each other.” Through this component some great conversations happened on Instagram; however, I do wish there had been a bit more consistency in students’ comments and a slightly higher level of involvement—for some students the exercise often felt quite cursory.

Component Three – Writing Exercises

The third part of my students’ participation mark was derived from short written reflections (taking around ten minutes each) conducted at three different times during the term. I had students respond to some questions on a piece of paper, which was then placed in an unsealed envelope. The idea was that students would review their own writing as the semester went on and base their subsequent responses on their earlier goals and thoughts.

A central goal of this component of participation was to give individual students a chance to reflect on and define what meaningful participation meant to them. In so doing, I hoped to activate students’ intrinsic motivation by asking them to find meaning in the work they were doing for the course.

Importantly, these writing exercises were framed as reflective exercises. I told students that for this component they would be evaluated primarily on their reflection on participation and not on the participation itself, encouraging honesty.


Image: a stack of envelopes as used in this exercise. Photo by Pixabay on Pexels.com

Another key part of this was that students shouldn’t be concerned about writing answers they thought would please me, but should examine their own feelings and preferences. Perspectives on Contemporary Theatre was not a required course. Despite this, I discovered from the first participation exercise I conducted that while many students were interested in the course content, some were primarily taking it because they needed a fourth-year credit that fit their schedule. I wanted to recognize and honour the fact that students were taking the course for many different reasons and may have had different priorities, or assigned value differently than I did. Thus through this component, I could give students points for effort, while also recognizing different types of effort and rewarding students for thinking on their own terms.

The questions I asked in each exercise are below:

Participation Exercise #1 – (near beginning of term)

  1. Why are you taking this course?
  2. What are your expectations from the course/what do you hope to get from it?
  3. Has this course aligned with your expectations/diverged from them so far? In what way(s)?
  4. What does meaningful participation in this course mean to you? (This response should consider your above answers. One example could be: ‘I don’t like to talk in class, but I want to really engage with the readings by taking detailed notes’)
  5. What two specific goals will you set for yourself regarding your participation in this course?

Participation Exercise #2 – (at midterm)

  1. How have you done with your goals so far? (Remember, I’m not evaluating you on whether you meet them but on your ability to reflect on them, so please answer honestly.)
  2. What factors have affected your participation?
  3. Review your goals. Are they specific and measurable? Are they still useful/in line with what you consider to be meaningful participation? If necessary, rewrite them and say what you’ve changed and why.
  4. What steps will you take going forwards to ensure you meet these goals?

  5. (Optional) Weigh in with me and let me know how the course is going for you. This is your chance to give me feedback about your experience so far – whether it’s ‘I wish we would watch more videos,’ or ‘I’m confused!’ etc.

Participation Exercise #3 – (end of term)

  1. How have you done with both general participation and your specific goals in this course?
  2. What factors have affected your participation?
  3. Are you okay with your level of participation? Why/why not?
  4. What would you change about your participation in this course if you could?
  5. If you were grading yourself on participation in this course, what grade would you give yourself and why?

In addition to the prompts for self-reflection, these exercises offered students some opportunities to provide feedback to me. Specifically, questions 2 and 3 in exercise one and the optional number 5 in exercise two serve this goal. (For the final reflection I asked students to provide feedback through the course evaluations.) Collecting this feedback allowed me to address student concerns, and adjust in-class activities to student preferences, which I hope made students feel they had some say in the course and that I valued their opinions and experiences. At the same time my asking for feedback demonstrated to students that I was trying to be reflexive about my teaching practice in the same way I was asking them to reflect on their participation.

These written reflections also gave me some useful insight into students’ attitudes and feelings about participation in the course, so that I could then try to improve it. When I heard from a number of students that the fear of being wrong was a major factor in their hesitance to contribute to in-class discussions, I was able to bring up this point in seminar, talk it through with my students, and critically examine my own behaviour to see how it might be contributing to those feelings. I think one influencing factor was the difficulty of some of the readings, so I made sure to re-articulate that the material was meant to be challenging, that I was in no way expecting them to understand it all, and that they shouldn’t feel stupid if they were struggling with it.


Image: Hand holding a pen and writing in a journal. Creating opportunities for students to engage in reflection was important to me. Photo by rawpixel.com on Pexels.com

Another student wrote, “I’m a little confused on how else to participate other than agreeing or disagreeing on the subjects at hand,” which served as a launching point for a productive group discussion on what forms participation could take. Some of students’ suggestions on this subject, both in class and in their journals, really impressed me. One student who admitted they avoided class discussions for fear of being wrong suggested that a way to get around this could be asking questions rather than trying to answer them. In an ideal world, they would feel comfortable with both actions, but here they came up with a productive middle ground.

Finally, students’ discussion of meaningful participation not only guided their self-reflection, but also aided in my evaluation of them. Students’ observations on what meaningful participation meant to them played a large role in my assessment of the first participation component, their in-class participation. For example, if a student expressed difficulty with speaking up in class and didn’t include it in their definition of participation, I paid more attention to what their stated goals were, their in-class attentiveness and group work, and weighted their Instagram comments a little more heavily in determining their grade.

Final Thoughts

Overall, I found this three-part system very useful. It helped me to connect with my students and to understand them a lot better as individuals. Through the third component in particular, I learned a lot about their individual goals and the struggles they were facing, which put me in a much better position to evaluate their participation.

This experiment confirmed to me that relying solely on my own perceptions of students to grade participation is not enough, and I will continue to experiment with this model going forwards. While this iteration of it worked out fairly well for this particular course, variations or other approaches entirely might be better suited for courses with different formats.

Thanks to Kim for inviting me to reflect through this blog post. I hope this reflection is of use to some of you, and feel free to share your thoughts or own experiences with me in the comments!


Image: a chalkboard reading ‘thank you’. Photo by rawpixel.com on Pexels.com


Signy Lynch is a SSHRC-funded PhD Candidate in Theatre and Performance Studies at York University. Her research interests include political performance, diversity in theatre, spectatorship, affect, and theatre criticism. Her dissertation investigates how direct audience address in contemporary performance in Canada can help audience members and performers to negotiate the complexities of twenty-first century life. She has published work in Canadian Theatre Review, alt.theatre, and CdnTimes and is a member of the board of directors of Cahoots Theatre.


OMG CAN SOMEONE PLEASE TELL ME HOW TO GRADE PARTICIPATION???

This is a cry for help.

It’s the end of term. I’m absolutely thrilled: welcome back, weekday drinking! And I’m really tired. Where’s my pillow at, again?

But I’m also staring at my computer screen. Because I’ve got 40 students in my terrific Toronto: Culture and Performance class, and they’ve all been superb and committed and present, and now I have to give them “participation” grades.

Ah, participation. What exactly is it “testing” for? If you’re like me you’ve probably not spent enough time thinking about that question, or considering what we are trying to measure and reward with the inevitable “10% participation” line in the syllabus – the one that carries over from year to year with hardly a thought or a tweak.

That laziness comes home to roost this time of year. Because they can’t all get 100%, now, can they?


So I’m being a touch disingenuous here. I’ve actually thought about participation a fair bit. In most of my classes it is a category pegged to real work and effort, not a nebulous thing that lets me quietly reward students I appreciate more than others, or unconsciously punish those who have pissed me off. (Yes, we all do this. No, we don’t mean to. Think about it.)

For example: in my OTHER fall term class, my second-year performance studies seminar, participation works like this.

We have a class blog. (All the class prep and para-discussion goes on the blog.) Every Monday I post a “prompt” related to the week’s reading, viewing, or topic in general. I ask the students to engage with an aspect of the work under consideration, and to do so in writing or by posting video or other media. I emphasize that this work should demonstrate a fulsome (not just passing) engagement with the topic or material – i.e., that it should take more than a minute or two to do. But I also emphasize it is not “graded”; students should feel free to experiment, write as much or as little as they wish without fear of making grammatical errors, and take a risk if they wish (there are no wrong answers!). I place a deadline on the responses – they must be completed an hour before class – and I always incorporate them into my class prep, so it’s clear they’re not just make-work things.

The rule for this fall’s seminar was: respond to 5 prompts over 13 weeks and earn 100% in participation. That’s 20% per prompt. Come to class every day, prepared and on time, and keep your grade. Miss class without accommodation? Each miss takes 5% off your running total. Miss more than three classes without accommodation, and lose all your participation grades for the class.
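For the spreadsheet-minded, that rule reduces to a few lines of arithmetic. Here’s a rough sketch in Python (the function name is mine, and the floor-at-zero handling is my assumption – the post doesn’t say whether absences can push a grade below zero):

```python
def participation_grade(prompts_done: int, unexcused_absences: int) -> float:
    """Sketch of the seminar rule: 5 prompts at 20% each, capped at 100%;
    each unexcused absence costs 5 points; more than three unexcused
    absences forfeits the participation grade entirely."""
    if unexcused_absences > 3:
        return 0.0
    score = min(prompts_done, 5) * 20.0   # respond to 5 prompts -> 100%
    score -= unexcused_absences * 5.0     # each miss takes 5% off
    return max(score, 0.0)                # assumption: no negative grades

# e.g. all five prompts, two unexcused misses:
# participation_grade(5, 2) -> 90.0
```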


My logic for this structure was as follows. Coming to class matters a lot: seminars thrive on group discussion. Being prepared matters for the quality of discussion we have, and being on time is simply respectful. But the quality of in-class discussion is profoundly enhanced by thinking carefully and richly in advance about the work we’re going to do there – that’s the spirit of the flipped classroom in action. So the prompts were my way of saying: here’s something we’re really going to talk about. And the students’ responses were a way of saying: this is where we think we want to go with this. We’re into it!

And that, really, is what I am “testing” with participation: the willingness to have a real, considered, respectful conversation about a syllabus topic – to put something real into it, and get something real out of it.

Versions of this participation rubric have worked well for me over the past few years: sometimes the pre-prepped action relates to a prompt response; sometimes it takes the form of a performance. I’ve been learning and tweaking as I go, but I’ve been trying hard to eliminate the guesswork. Participation grades function best when they are pegged to rubrics, and when they reward heartfelt effort and genuine engagement, with the subjective stuff on my end either eliminated or curbed by the hard evidence of a student’s work on behalf of the course.

Flash forward to TOCAP, the big class on the screen in front of me. I didn’t do what I describe above for this class: too big; too much work. UGH! So what did I say about participation? I checked the outline just now. It says this:

To earn 100% for participation – and you really truly can (it happens all the time) – do the following things:

  • Come to class. Every day. If you have to miss, ensure you have accommodation from your academic counsellors (see below).
  • Read the stuff we’re reading. Think carefully as you’re reading. Maybe read it twice if it’s a challenge. Take some notes! Bear in mind that the reading load for this class is not heavy; readings have been scheduled to give you lots of opportunities to make time for them, and there are built-in re-reading opportunities if you want to take them.
  • Contribute to class. This doesn’t mean talking a lot; talking a lot usually means you’re not paying attention to how much space you’re taking up. It also doesn’t mean never talking, though: lots of us are shy, but there will be many different ways in this class to share thoughts – including via silent writing, group chats, peer-to-peer conversations, and more. If you’re a shy person and you’re working hard to contribute, we will notice.
  • Take some risks! Falling on your arse doesn’t mean failing the course: it means you have to get up and try again. A risk is worth it if you learn something valuable about yourself in the process. And risks can be small: like speaking up when normally you don’t, or keeping mum when normally you talk over others. Risks can also mean trying to create a video when normally you wouldn’t, or writing your essay well in advance and bringing it to Kim or Courtney to talk about, when normally you’re a last-minute person. Taking a risk means actively taking up an invitation made by our class to push yourself a bit, rather than just showing up for the sake of it. Give it a try.

This all sounds great, and I’m sure it was reassuring. But it’s also not a rubric; it says NOTHING about how I’m going to measure these things. And that’s a problem – because right now I have to measure them.

Staring at the screen in low-level panic, I’m reminded that I need to figure out how to scale up my participation rubric experiments and fast.

There are best practices out there of course: here’s a good one from Faculty Focus this past May; here are four collated in a short article published by the Teaching Commons at Lakehead University in Thunder Bay, ON. (I’m fond of the first one in that list, but click the second link in that bullet to read both Weimer’s article and Slapcoff’s response.) But the problem of scale still arises: in large classes, grading participation is significant extra work – or can be perceived that way (certainly at this time of the term, and certainly right now by me!).

This is why Slapcoff and Weimer’s linked reflections (in the first item above, as mentioned) make great sense to me: as writing assignments about participation, they offer excellent ways for students to reflect meta-cognitively on their classroom practice in a format we A&H professors are used to grading, and grading quickly. Better still, if these are (as Weimer suggests) papers written primarily for completion and reflection (like my students’ blog prompt responses), they need not be long, and they need not be marked for grammar. Feedback can happen in a peer-to-peer structure, or at strategic points in the term when life’s not too busy. It might be most fruitful, in fact, to schedule mid-term check-in meetings with students, where they bring a participation reflection with them and talk it through in office hours. If the class is big, perhaps setting one or two sessions aside for this reflection work makes sense, too.

Options, for sure, if not solutions. What think you, dear readers? What do you do in larger-class scenarios to measure participation? What works, what’s too much work? What’s definitely not worth doing? Thoughts very welcome.

Kim

Active learning in the graduate seminar room

This past autumn I taught my first graduate seminar in almost eight years; as a result of sabbaticals, career moves, and then my labour establishing a new undergraduate theatre studies program at Western, I had had neither the time nor the opportunity to teach graduate students (Brits: that’s postgrads to you) since summer 2009. I was excited to get back into the seminar room with smart MA and PhD candidates, but I was also a bit daunted.

I find graduate teaching a mixed blessing. On one hand: smart students, well read, self-selecting into a challenging program. We can expect them to be prepared; we can expect them to be keen; we can expect them to participate. On the other, though, there’s the whiff of imposter syndrome all around us in grad seminars: every student is eyeing every other student, wondering if they know enough, if they are smart enough. Showing off can ensue; one-upmanship happens whether students intend it to or not. Fraught dynamics emerge; and there I am, the prof who ALSO fears she doesn’t actually know enough to be teaching graduate students, caught in the middle, trying to keep the discussion on track.

(Imposter syndrome never goes away; you just learn to cope better with it. Sorry.)


With years between me and my last graduate outing, I had some questions for my peers as I prepared the syllabus: how much reading is too much? Not enough? Are we still assigning One Seminar Presentation and One Final Essay, or have assessments evolved? In general the consensus was: 100 pages per week, give or take; seminar presentations always; one or two essays as you prefer.

The goal, as ever, was to make discussions in the room rich, but prep not too onerous. Grad seminars, the logic goes, should involve the prof and the class preparing the reading, and then coming to the room with questions and ideas to propel a discussion. Profs aren’t prepping lectures (or, most aren’t), and the onus is on the group to find useful things to say about each set of readings each week.

Pure, unadulterated active learning.

Except, well… maybe not. As I planned my new course (“Performance and the Global City”; please email me if you’d like a copy of the syllabus!) I spent a lot of time thinking back to my earlier graduate seminar experiences, both as a teacher and as a student. I realized that the traditional seminar model creates some barriers to access that reveal its limits as an active learning environment.

First of all, good discussions require a fair bit of curation; it’s not enough to come to class with a handful of talking points and/or questions for the room and assume everyone will be able to jump in and dig deep, just like that. (Quiet students will always struggle with the “so, what did we think?” opener, and, no, it’s not them, it’s us.)

Second, certain voices dominate class discussions because they have been trained by existing learning protocols to do so; those voices are comfortable with minimal prompting, and they aren’t always aware of how much space they are taking up. For profs keen to get a rousing discussion going around the seminar table, those voices are a godsend; we may complain to each other in the halls or over drinks about the students who dominate our discussions, but without the keeners who can fill airtime, our under-curated discussions can stall and leave us exposed.

Finally, can I just say that the traditional graduate seminar presentation is more often than not boring as heck? Does anyone actually enjoy listening to anyone else read a paper for 20 minutes at a go? What – other than how to write a clever paper and deliver a very dull conference presentation – do we imagine we are teaching our postgrads with this kind of assessment?

OK, so I know I’m being hard on tried and true models here, and if your graduate seminars run conventionally but very well then I’m really glad, and I would not want to stop you from carrying on with them. But the more I thought about the grad seminar status quo, the more I knew I didn’t want to do it again. So I hatched a new plan.

I decided to import a bunch of flipped-classroom active learning techniques from my undergraduate classes into my new graduate seminar.

This shift manifested in two key ways. First, student presentations were styled as peer teaching presentations, not research presentations. Every student was required to teach one article over the course of the term to the rest of the class, and students were required to work in pairs for this task. Further, I explicitly asked them not to create a lecture, but instead to frame the teach with an active learning exercise.

Here’s the brief for the peer teach I included in the syllabus:

PEER TEACHING EXERCISE

Once this term you will work in pairs to lead the class in an exploratory exercise based on one of our readings. The goal: to help you to try out different ways to connect students with challenging material. For that reason, I ask you not to prepare a lecture-style statement for this task; you should of course have thoughts about your reading you would like to draw out, but the point of this exercise is not to tell us what they are.

Here’s how the task will work:

  • By Wednesday at NOON of your week to teach, you will post to OWL a provocation (maybe a question, maybe not…) based on ONE of the readings for that week. Let Kim know in advance which reading you will focus on.
  • Your classmates will offer preliminary reflections on your provocation on OWL over the following 24 hours. You should read and note these reflections.

You will then prepare a learning exercise to help us explore your provocation.

There are lots of exercises to choose from; you might want to consult some research on “active learning” or the “flipped classroom” to help you out – the Teaching and Learning Centre at Weldon can help with this, or (of course!) you can have a chat with Kim to discuss some options. Your exercise need not be complicated, but it should be more than you simply asking everyone, “so, what did you think?”

When you come to class on Thursday, you will run your exercise, and then debrief it. Here, you can incorporate your classmates’ preliminary responses as much or as little as you feel will be productive.

You will have a total of 30 minutes for your teach. (NOTE: this is actually not a lot of time! Use it with care.)

Clear as mud? Don’t worry! Kim will model this task in our second week. If you’re still stuck, though, ask yourself this question: did a teacher ever do a really useful, cool thing in class that really stuck with you? What was that cool thing?

Second, not only did I model a variety of peer teaching exercises for the students in the second class of the term, in order to give them a concrete sense of how their own teaching sessions could work, but I continued to incorporate group-based and pairs-based learning exercises in my own teaching week to week in order to make those things normative in our seminar room.

We’d do think/pair/share work, we’d use “world cafe” or long table-style discussions, and one week we even debriefed our field trip to Detroit by creating team maps of the experience on flip chart paper, trying to draw connections between our on-the-ground experiences and the ideas conveyed by our readings about the city.

(Candid snaps of the students at the Museum of Contemporary Art Detroit – Sebastian, Lacey, Sharon, Emily and Robyn)

The students came along, gamely, for the ride – although they were understandably hesitant at first. I made a point of leaving my office door wide open to them as they prepared for their teaches, and after each teach I’d invite the presenters to come for a debrief, where we’d talk about what went well and what didn’t, and where they could be free to ask me all kinds of questions about active learning models.

Students consistently reported to me that they enjoyed the teaching exercise and found it unusual but productive; nevertheless, I couldn’t shake the feeling they were just humouring me. After all, grad seminars are supposed to be complex, serious learning environments… and we were mostly just having a good time. My imposter syndrome gurgled away in the pit of my stomach. Could they really be taking this seriously, getting as much out of it as they were getting out of the modernist theory and poetry seminar up the hall?

When my seminar evaluations landed in my email inbox last week, that gurgle erupted once more. Here was the moment of truth: What They Really Thought about our flipped seminar, all those small group discussions and messing about with coloured markers.

To my genuine surprise and utter delight, the evaluations universally praised the experience. I was astonished; students called our class a “refreshing and dynamic break” from the traditional model, a “comfortable and open learning environment” where everyone “could express their opinions and ideas without fear of judgement.” This one below is my favourite, because it tells me I achieved everything I had wanted to do, and also more than I’d hoped:

Through her use of active learning in her teaching practice, Kim fostered a deeply collaborative class environment. It was an environment where it felt safe to fail, which made it all the more generative – we were able to take risks, offer partial thoughts, and hash them out together.

I really appreciated that she encouraged using creative practices in our assignments, especially given the course material. Being able to engage in the practices that we were locating in our readings and field trips was a really valuable research method for me – that Kim gave us the latitude to work outside the boundaries of more traditional methods really enhanced my experience in this course.

Last Friday I had lunch with one of the students from the class, Emily Hoven. I told Emily about the evaluations and my surprise at their unwavering support for the flipped seminar model; I then asked her if she could talk to me a bit about what in particular she had found productive (or even not productive) about the model.

Her reply confirmed my own suspicions and chimed with the data on the evaluations.

She noted, first, that there’s a spirit of competition in graduate seminars that is not always helpful; everyone’s trying to say the next smart thing. That can make for brilliant, lively discussions, but can also make for intimidation and fear. In our class, she pointed out, we all worked together in a more equitable way; as a result, feelings of competitive angst lessened considerably.

Next, she pointed out that, as an undergraduate, she’d had a lot of experience with flipped classrooms, and thus our classroom felt both familiar and safe. Never mind that the model was unlike other grad seminars; it was similar enough to the active learning many students now experience at university that it provided a sense of grounding for students who might otherwise be struggling. She noted that this likely wasn’t true for every student in the class, but my guess is it’s true for more of them than we might think. As active learning becomes more common at the undergraduate level, we should consider its value as continuity at the graduate level, especially for Master’s students who are undergoing a sea change in their learning experiences and expectations as they enter grad school for the first time.

Finally, Emily’s comments, along with those on the evaluations, reminded me of what I found to be the most positive peer-teach outcome of all: it required everyone to renegotiate the vocal dynamic in our seminar space. Remember above, when I noted that certain voices tend to dominate seminars because they’ve been trained to do so by extant pedagogical models? In our classroom, new models driven by different learning dynamics meant quieter voices were invited actively into the learning space; shifting the room’s architecture (figuratively, but frequently literally, too, as we moved furniture to facilitate different kinds of group work) changed the default “permissions” of our seminar space, to productive effect.

In one of my favourite peer teaches of the term, this shift became glowingly evident as the most vocal person in the room and one of the quietest worked together; the former student actively placed herself in the peer teach’s supporting role in order to make space for her peer to take centre stage.

It was remarkable evidence of the power of genuine “active learning” in the graduate classroom to help everyone feel a little less like an imposter, and a little more like an empowered knowledge-maker.

Feeling grateful,

Kim