This is a cry for help.

It’s the end of term. I’m absolutely thrilled: welcome back, weekday drinking! And I’m really tired. Where’s my pillow at, again?

But I’m also staring at my computer screen. Because I’ve got 40 students in my terrific Toronto: Culture and Performance class, and they’ve all been superb and committed and present, and now I have to give them “participation” grades.

Ah, participation. What exactly is it “testing” for? If you’re like me you’ve probably not spent enough time thinking about that question, or considering what we are trying to measure and reward with the inevitable “10% participation” line in the syllabus – the one that carries over from year to year with hardly a thought or a tweak.

That laziness comes home to roost this time of year. Because they can’t all get 100%, now, can they?


So I’m being a touch disingenuous here. I’ve actually thought about participation a fair bit. In most of my classes it is a category pegged to real work and effort, not a nebulous thing that lets me quietly reward students I appreciate more than others, or unconsciously punish those who have pissed me off. (Yes, we all do this. No, we don’t mean to. Think about it.)

For example: in my OTHER fall term class, my second-year performance studies seminar, participation works like this.

We have a class blog. (All the class prep and para-discussion goes on the blog.) Every Monday I post a “prompt” related to the week’s reading, viewing, or topic in general. I ask the students to engage with an aspect of the work under consideration, and to do so in writing or by posting video or other media. I emphasize that this work should demonstrate a thorough (not just passing) engagement with the topic or material – i.e., it should take more than a minute or two to do. But I also emphasize it is not “graded”; students should feel free to experiment, write as much or as little as they wish without fear of making grammatical errors, and take a risk if they wish (there are no wrong answers!). I place a deadline on the responses – they must be completed an hour before class – and I always incorporate them into my class prep, so it’s clear they’re not just make-work things.

The rule for this fall’s seminar was: respond to 5 prompts over 13 weeks and earn 100% in participation. That’s 20% per prompt. Come to class every day, prepared and on time, and keep your grade. Miss class without accommodation? Each miss takes 5% off your running total. Miss more than three classes without accommodation, and lose all your participation grades for the class.
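(If you like seeing the arithmetic spelled out, here’s a tiny sketch of how that tallying works. The function and its names are mine, invented purely for illustration – the actual policy is the paragraph above.)

```python
def participation_grade(prompts_done, unexcused_absences):
    """Sketch of the seminar's participation arithmetic: five prompts
    at 20% each; 5% off the running total per unexcused absence;
    more than three such absences zeroes the whole category."""
    if unexcused_absences > 3:
        return 0
    total = min(prompts_done, 5) * 20
    return max(total - unexcused_absences * 5, 0)

print(participation_grade(5, 0))  # 100
print(participation_grade(4, 2))  # 70
```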


My logic for this structure was as follows. Coming to class matters a lot: seminars thrive on group discussion. Being prepared matters for the quality of discussion we have, and being on time is simply respectful. But the quality of in-class discussion is profoundly enhanced by thinking carefully and richly in advance about the work we’re going to do there – that’s the spirit of the flipped classroom in action. So the prompts were my way of saying: here’s something we’re really going to talk about. And the students’ responses were a way of saying: this is where we think we want to go with this. We’re into it!

And that, really, is what I am “testing” with participation: the willingness to have a real, considered, respectful conversation about a syllabus topic – to put something real into it, and get something real out of it.

Versions of this participation rubric have worked well for me over the past few years: sometimes the pre-prepped action relates to a prompt response; sometimes it takes the form of a performance. I’ve been learning and tweaking as I go, but I’ve been trying hard to eliminate the guesswork. Participation grades function best when they are pegged to rubrics, and when they reward heartfelt effort and genuine engagement, with the subjective stuff on my end either eliminated or curbed by the hard evidence of a student’s work on behalf of the course.

Flash forward to TOCAP, the big class on the screen in front of me. I didn’t do what I describe above for this class: too big; too much work. UGH! So what did I say about participation? I checked the outline just now. It says this:

To earn 100% for participation – and you really truly can (it happens all the time) – do the following things:

  • Come to class. Every day. If you have to miss, ensure you have accommodation from your academic counsellors (see below).
  • Read the stuff we’re reading. Think carefully as you’re reading. Maybe read it twice if it’s a challenge. Take some notes! Bear in mind that the reading load for this class is not heavy; readings have been scheduled to give you lots of opportunities to make time for them, and there are built-in re-reading opportunities if you want to take them.
  • Contribute to class. This doesn’t mean talking a lot; talking a lot usually means you’re not paying attention to how much space you’re taking up. It also doesn’t mean never talking, though: lots of us are shy, but there will be many different ways in this class to share thoughts – including via silent writing, group chats, peer-to-peer conversations, and more. If you’re a shy person and you’re working hard to contribute, we will notice.
  • Take some risks! Falling on your arse doesn’t mean failing the course: it means you have to get up and try again. A risk is worth it if you learn something valuable about yourself in the process. And risks can be small: like speaking up when normally you don’t, or keeping mum when normally you talk over others. Risks can also mean trying to create a video when normally you wouldn’t, or writing your essay well in advance and bringing it to Kim or Courtney to talk about, when normally you’re a last-minute person. Taking a risk means actively taking up an invitation made by our class to push yourself a bit, rather than just showing up for the sake of it. Give it a try.

This all sounds great, and I’m sure it was reassuring. But it’s also not a rubric; it says NOTHING about how I’m going to measure these things. And that’s a problem – because right now I have to measure them.

Staring at the screen in low-level panic, I’m reminded that I need to figure out how to scale up my participation rubric experiments – and fast.

There are best practices out there of course: here’s a good one from Faculty Focus this past May; here are four collated in a short article published by the Teaching Commons at Lakehead University in Thunder Bay, ON. (I’m fond of the first one there, but click the second link in that bullet to read both Weimer’s original article and Slapcoff’s response.) But the problem of scale still arises: in large classes, grading participation is significant extra work – or can be perceived that way (certainly at this time of the term, and certainly right now by me!).

This is why Slapcoff and Weimer’s linked reflections (in the first item above, as mentioned) make great sense to me: as writing assignments about participation, they offer excellent ways for students to reflect meta-cognitively on their classroom practice in a format we A&H professors are used to grading, and grading quickly. Better still, if these are (as Weimer suggests) papers written primarily for completion and reflection (like my students’ blog prompt responses), they need not be long, and they need not be marked for grammar. Feedback can happen in a peer-to-peer structure, or at strategic points in the term when life’s not too busy. It might be most fruitful, in fact, to schedule mid-term check-in meetings with students, where they bring a participation reflection with them, and talk it through in office hours. If the class is big, perhaps setting one or two sessions aside for this reflection work makes sense, too.

Options, for sure, if not solutions. What think you, dear readers? What do you do in larger-class scenarios to measure participation? What works, what’s too much work? What’s definitely not worth doing? Thoughts very welcome.



Feed back to me (part 2)

Last week I offered some thoughts on marking with the rubric as a close guide and feedback framework; today I want to share some nifty feedback advice from Lynn Nygaard, the author of Writing for Scholars: A Practical Guide to Making Sense & Being Heard (Sage, 2015). Just as I was contemplating the difference using the rubric is making for me as a grader, her ideas about one-to-one feedback crossed my desk via the ever-reliable Tomorrow’s Professor listserv, to which I’ve belonged since 2001 (thanks, Jenn Stephenson!).

I was struck in particular by two pieces of intel in Nygaard’s piece: the importance of asking questions during the feedback process, and the value of offering feedback face-to-face (as opposed to solely in written form).

The context for the chunk of Nygaard’s book that was excerpted on the TP listserv is “peer reviewing” – the process through which scholars offer one another comments and assessment during the publishing process. (When you read that something is “peer reviewed”, it means experts in the field have read the material, assessed it based on a range of criteria from quality of research to quality of argumentation, and deemed it valuable enough to be shared with other experts in the field as well as the broader public.) For Nygaard, this context includes both graduate students (i.e., feeding back to supervisees who are completing dissertation work) and peers whose work we might be asked to comment on for publication.

So undergraduate students aren’t the explicit focus here, but as I mentioned last week I think we can extrapolate easily for undergraduate constituencies – after all, good marking practices are good marking practices, full stop.

The first insight in Nygaard’s excerpt that grabbed me was:

Do not underestimate the importance of asking questions.

We hector students about this all the time, right? ASK QUESTIONS. THERE ARE NO BAD OR WRONG QUESTIONS! Questions are the route to a good paper, a strong experiment; research questions are more important than thesis statements. (Or, to nuance that a bit: good research questions yield better thesis statements.)

But how many of us have thought to ask questions in our comments for students on their written work? It’s not atypical for me to pepper students with questions after an in-class presentation, but those questions rarely make it into the typed feedback. In fact, I tend to focus on declarative statements (“your paper/presentation would have been stronger had you X”) when I write up my comments – asserting my knowledgeable opinion rather than keeping the feedback student-centred. So Nygaard is suggesting something provocative here, I think, when she encourages the asking of questions as feedback.


Now, Nygaard stresses that these need not be complex questions, or even content-driven ones. When we respond to student work, remember, we’re usually offering feedback on practice as much as (or even more than) content: how well students ask questions themselves, identify the parameters of their study, structure their articulation of the data or their reading of the text they are presenting. At their best, then, feedback questions might drive back to basics, focusing on the sorts of things students tend to skip past in an effort to get to the finished product. Nygaard offers the following sample questions to ask a (student) writer:

What is the most interesting thing you have found so far?
What are you finding most difficult to write about?
What is it you want people to remember when they are finished reading this?
What interested you in this topic to begin with?

Now, if these questions sound chatty, it’s because they are. And here’s Nygaard’s other key insight (for me): what if feedback were offered orally more often?

When we speak to colleagues and graduate students, often we do so in our offices, face to face. Undergrads, by contrast, get sheets of paper or pop-up windows on their computer screens with some typed stuff and a grade. Easy to distance, easy to dismiss.

But, as Nygaard notes, the value of feeding back in person is significant. It gives the feedback (and not just the grade itself) real stakes. And, more important, it offers an opportunity for dialogue that is integral to producing stronger future work:

…if you deny the other person a chance to explain, you rob them of an opportunity to achieve greater clarity for themselves – because there is no better way to understand something than to explain it to someone else.


Reading this reminded me, ironically, not of supervisions with my own grad-school advisers, but of encounters with a dear and influential undergraduate instructor, the feminist and queer theorist Dianne Chisholm. Dianne is an Oxbridge graduate, and every time a paper was due she had us all into her office, one by one, Oxbridge-style, to read our essays aloud to her and receive our feedback in person.

We were, of course, TERRIFIED of this entire process (and kinda terrified of Dianne, too). But we also adored her, because she offered us the opportunity to learn, grow, and get better – she proved that to us time and again, by giving us her time and her attention.

Now, I’m not saying that we should all take every undergraduate assignment in like this; it’s time consuming and really only works in seminar-sized groups. But it does have key benefits that we ought not to dismiss. For one thing, it places the onus squarely on the student to absorb and respond to feedback – to do something with it, even if only for a few minutes. To imagine the better version of the paper in front of them, maybe.

Nygaard goes on to write:

…remember that your job is to help the author, not to make yourself look good.  Your ultimate measure of success is the degree to which the author walks away knowing what to do next, not the degree to which you have made your expertise apparent.

Declarative comments on written work (like the one I offer as an example above) tend toward the “me expert, you not so much” end of the spectrum; they demonstrate that I know stuff and that you don’t yet know quite enough of it. But guess what? We’re in the scholarship business, with the hierarchy professor//student more or less entrenched; the “knowing//knowing less” binary is sort of a given. So what if we took it as that given and moved on, instead asking questions and offering meaningful advice to students that could drive their work forward and upward? This might happen on paper, or in an office-hour debrief, or – maybe best of all? – in a mix of the two.

At minimum, what if we aimed to provide more feedback to undergraduates that simply indicated that this particular assignment, even returned to them graded, is *not* the end after all? Nygaard offers this final thought:

Even if you are meeting informally with a colleague, try to end the session by asking, “So, what is your next step?”

The perfect question for us all, really.



Feed back to me (part 1)

October is marking season at my university: midterms, essays, tests and quizzes all crowd into the space between the end of September’s silly season (frosh week, reunion weekend, general football mayhem) and the final date to add or drop courses. The wind and rain rush in, the leaves come down…and we all end up buried, by Halloween, under piles and piles of papers.


This year I’ve been trying something new with my undergraduate class in performance studies; for the first time I’m marking essays explicitly against a pre-existing rubric, one I’ve made freely available to everyone on the assignment page of our online learning portal. I’ve used marking rubrics regularly for the last few years – they were mandatory at Queen Mary, where I taught from 2012 to 2014, and I found them clarifying and productive. But this is the first time I’ve used a rubric as a literal marking tool, rather than just as a general set of guidelines for our reference.

(What’s a rubric? Click here for a really helpful explanation.)

My typical marking pattern until now has been some variation on this:

  • post a rubric for students for written assignments, so that they know broadly what I expect in terms of content, structure, research, grammar, and style;
  • read papers without consulting the rubric carefully, assuming it implicitly reflects what I already know to be true about good, bad, and ugly essays;
  • write up comments without direct reference to the rubric, and assign a grade.

I suspect a lot of us mark this way, whether we realise it or not. And this is not, of course, to say our comments on student papers are not thorough, reflective of our rubrics, or written with care; I personally pride myself on providing clear feedback that describes a paper’s intention, where that intention is achieved, where it is not achieved, and what students may adjust in order to advance up the grade scale. I’ve also experimented several times with laddering assignments, with using peer feedback on drafts, and with various other techniques to lower the writing stakes and make the process of editing and improving written work more transparent and accessible.

(I’ve written about, and hosted some terrific guest posts on, assessment challenges in the past; click here for the archive.)

So I clearly care a lot about assessment – about getting it right and giving students the intel about their work that they need to improve. But rubrics? Not so much with the caring about rubrics, maybe. I suspect I’m a bit jaded, like many of us, because rubrics look on the surface like yet another measurement tool we’re being forced to use in order to fit our teaching labour into boxes that can be ticked by our senior administrations and the governments who control their purse strings. They are probably such a thing. But they are also something else: they are a clear, consistent way to communicate student success and mitigate student failure, on our own terms. (Let’s not forget: most of us still have the freedom to set our own rubrics. For now, anyway.)

And, as I discovered, they are also a great way for us to learn key information about our own marking tendencies and the assumptions underpinning them.

Marking with the rubric changed the pattern I describe above. My process now goes something like this:

  • import the rubric bullet points into my existing marking template;
  • read the papers with those bullets explicitly in mind;
  • comment in equal measure on each bullet;
  • assign a rough grade zone to each bullet (i.e., this aspect of your work is at “B” level, or at “A-” level, etc);
  • average the bullets to arrive at a final grade.
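(For the arithmetically minded, those last two steps amount to something like the sketch below. The letter-to-number mapping here is invented for illustration – it is not my actual grade scale.)

```python
# Illustrative mapping from grade zones to numbers; the real scale may differ.
GRADE_POINTS = {"A": 85, "A-": 80, "B+": 77, "B": 73, "B-": 70, "C+": 67}

def final_grade(zones):
    """Average the grade zone assigned to each rubric bullet
    (content, structure, research, grammar/style) into one mark."""
    return round(sum(GRADE_POINTS[z] for z in zones) / len(zones))

print(final_grade(["A-", "B", "B", "B+"]))  # 76
```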

In case you’re having trouble picturing this, here’s a screen shot of my template, with some (anonymised) feedback in place:

[Screen shot: my marking template, with anonymised feedback in place.]

The first thing I realised after using the rubric in this way? I’ve historically given far too much weight to some aspects of student work, and too little to others… even though my rubrics have always implied that all aspects – content, structure, research, and grammar/style – are equally valuable. So I’ve been short-changing students who, for example, have good research chops but poor writing and structuring skills, because the latter makes the former harder to recognise, and without a rubric to prompt me I’ve simply not been looking hard enough for it. I’ve also, without question, been over-compensating students with elegant writing styles; less impressive research labour becomes less visible if the argumentation runs along fluidly.

Right off the bat, then, my use of the rubric as a marking guide both levelled the playing field for my students, and allowed me to come face to face with one of my key marking biases.

The second thing I realised was that marking in this rubric-focused way is a real challenge! I am a decorated editor and an astute reader of colleagues’ work, but that doesn’t mean I’m a perfect grader – not by any means. Reading novice scholarly work (aka, student work) with care requires keeping a lot of balls in the air at once: where’s the structure slipping out of focus; when is research apparent, when is it there but not standing out, when is it obviously absent; how much is poor grammar actually impeding my understanding, as opposed to just pissing me off (a different level of problem).

To do the juggle well, I’ve discovered, I have to slow down. Except… I have trained myself (as we all have – neoliberal university survival skill!) to read student work very quickly, make some blanket judgements along the way, and then produce a final grade driven as much by feel as by careful analysis of the paper’s strengths and weaknesses. When I was forced to put feeling aside and look back at all of a paper’s component parts, I as often as not saw that the grade I “felt” was right at the end was not, in fact, the grade the rubric was telling me was fair.


So add a few minutes more per paper, then. But where to snatch them from? It’s not as if I’m rolling in free time over here…

Thankfully, the rubric came to my rescue on this one, too. My third discovery: I could write fewer comments, and more quickly, yet still provide comprehensive feedback. The rubric language I include on each page of typed assessment stands in nicely for a whole bunch of words I do not need to write anew each time, and it standardises the way I frame and phrase my comments from student to student. That’s not to say everyone gets the same feedback, but rather that my brain is now compartmentalising each piece of assessment as I read, and is more quickly able to put those into the comment “boxes” at reading’s end.

Plus, in order to keep feedback to a page yet also include the rubric language in each set of comments, I’m writing less per paper, period. I doubt this is a bad thing – students generally don’t read all the feedback they receive from us, if they read any of it. Placing my responses to their papers directly in the language of my already-stated expectations, and offering smaller, more readable chunks will, I hope, get more students reading more of their feedback, and using it too. (I have plans to survey them on this in the last week of classes – stay tuned.)

As luck would have it, just as I was thinking this post through I came across a compelling discussion by Lynn Nygaard that uses “mirroring” as a metaphor to explain assessment labour. Nygaard’s ideas got me thinking about other ways I might transform my marking’s efficiency and effectiveness in future; although her focus is on feeding back to colleagues and grad students, I think it has some real applicability to undergraduate assessment too. I’ll share some of her provocations and reflect on them (ha!) in part two of this post, next week.

Until then, happy midterms!


When students grade each other (and other peer-assessment challenges)

I’m a big fan of group work in the classroom. Partly this is because it takes a group to make a piece of theatre, and I teach theatre; partly this is because life is all about working in groups of people, and working in groups of people is astonishingly hard.

Just ask this person:


Or, ask students (including my TA this year, Madison Bettle) if they like group work, and you usually get two kinds of responses:

“it’s ok/I’m fine with it” (translation: other people do the work, so it’s pretty great!/I love doing all the work, so it’s pretty great!),

or

“I find it difficult” (translation: I do all the work, and I really resent it).

So why do I persist? It’s simple: learning to be a better collaborator is as important to living and working in the world as is learning to breathe. It’s a pity more of us don’t place an emphasis on effective group skills in our teaching, because, man, oh man, do we all need it!

Over the years, I’ve approached the challenge of group dynamics in my theatre studies classrooms in a variety of ways. I’ve asked students to put on scene studies in groups, but not for grades; the students loved this work, but often resented not getting marks for it. (Understandable, if sad and depressing.) I’ve asked students to put on scene studies in groups for grades; the students loved this work, but found it incredibly annoying when a member (or more) of the group slacked off and got the grade anyway. (Friends: call it collateral damage and then call it a day.)

This year, I took a slightly more complicated approach: I asked students to put on scene studies in groups for grades, and then I asked them to contribute to their final marks by grading each other.

This is the story of how that turned out.

Last Thursday, the students in my 20th Century Drama class had just one job: to get into their performance groups, answer a series of questions, and come to a conclusion about what grade(s) the various members of the group deserved for their efforts this year. The student-generated grades (which I would respect, regardless of difficulties) would make up 5/15 marks for the performance component of the class; the performance component of the class would make up 15/100 marks for the class as a whole. (In other words: some pressure, but not a tonne of pressure.)

As Charlotte Bell explained in this space last autumn, students need clear tools to assist with peer grading. This is the task I set to help the students manage the challenge (and it is, of course, a challenge!) of grading themselves and one another:


Part One:

On your own, please respond to the following questions, in writing. You have ten minutes.

  1. What were my greatest strengths as a group member this year? List up to THREE traits, and include details explaining each.
  2. What were my greatest weaknesses as a group member this year? List up to THREE traits, and include details explaining each.
  3. Where did my group excel this year? For example, when and how did we meet our own expectations? Summarize your feelings, and describe one or two key occasions where the group achieved what it set out to do.
  4. Where did my group fall short of its own expectations this year? Summarize, and describe one or two key occasions where you feel the group could have done better.
  5. What grade would I assign my group for our year’s efforts?
  6. What grade would I assign myself, as a group member?

Part Two:

In a pair WITHIN your group, please discuss your responses to Part One, and then respond to the following questions. Remember to be honest, respectful, supportive AND FAIR.

  1. Where did our group excel, and where did it fall short of expectations? Summarize your individual findings (take notes!), and then decide if, on balance,
    1. You excelled much more than you fell short
    2. You excelled a bit more than you fell short
    3. You sometimes excelled, but often also fell short
    4. You largely fell short.
  2. Based on your individual reflections, and also on your comments and choice above, what grade would you assign your group for this year? (Choose a number, based on the letter category that corresponded with your choice above.)
  3. Are there members of your group who went beyond the call of group work duty? If so, choose whether or not to assign them bonus marks.
  4. Are there members of your group who let the group down? If so, choose if and how to penalize them.

Part Three:

As a group, discuss your findings and share your tentative grades.

Negotiate: what final grades will you assign each group member? What comments will you include to support your grade choices?

Type your comments and grades. Note that the comments should be about a paragraph long (no more).

Send your comments and grades to Kim, via email.

When I created this template, I worked hard to take as many differing voices into account as possible, mindful that students would have (potentially) different impressions of how things had gone in their group. What I forgot, I realise now, is that having different impressions of how things have gone is very different from being able (or feeling safe enough) to express how things have gone to a group member with whose opinion you might not fully agree. My template seeks to be academic in its objectivity – but, as teachers all know, objectivity is extremely difficult to achieve when assigning anyone, let alone one another, grades for our shared efforts.

The Thursday of our peer assessment exercise arrived, and we did – I thought! – pretty well. The students were lively and cheerful in their group chats in class; most of them emailed me happily with shared or individuated group grades shortly after. I annotated my class notes (this is my habit, to preserve some kind of institutional memory for future years), and called it a win.

But then, two things happened.

First, I was approached by a group that had run into trouble: one of their members had been perennially absent for meetings and prep, but had always arrived in time to claim the glory. In our peer assessment exercise they had manifested no remorse (or even awareness!), and the rest of the group had felt uncomfortable confronting them. Result? The group had agreed on a shared grade, but now deeply regretted it.

Second, I received what I thought was a truly heartening email from another group featuring a member often absent; by all accounts it sounded like that student had stepped up in peer assessment, owned their mistakes, and agreed on a lesser grade.

I was thrilled that one failure had been matched by another group’s success. I also realized, at that point, that it would be helpful to get the students’ feedback on how the peer assessment exercise had gone, since I had two very different pieces of evidence to account for.

On our last day together, I posed the following question:

How did it go for you and your group? Reflect in writing for ONE minute; aim to indicate something of value, and also to make one suggestion for improvement.

Given the balance of evidence at hand, I expected a fair amount of positivity in the students’ responses. Instead, I got this (incredibly valuable! – but somewhat unexpected) feedback:

  • It was difficult to discuss group issues in a class setting – can we give people the option to find another space to talk?
  • It was difficult because most of our groups became close over the year: we were worried about upsetting the group dynamic;
  • Could we try anonymous grading? People don’t want to address people to their face if they feel others have not done their share;
  • Could you (Kim, the teacher) shield us from the harshest of comments but still express our concerns?
  • Could we try doing group work assessment at the half-point during the year?

Looking at this feedback now, as I write this post, I’m surprised at myself. How did I not realise the difficulties inherent in the peer grading template I’d designed? Of course I’d known it would be hard for students to confront group members who did not pull their weight; what I’d forgotten (hello!) was that I had rather a lot more experience in grading underperforming students than most students do – and thus that I really needed to provide some hard-core emotional and intellectual guidance to the students needing to do this work now.

How do you tell someone you’ve grown to like, and even to love, that they let you down in your shared work? How do you assign them a number?

One of the groups facing challenges chose to let sleeping dogs lie; the other, however, ended up revisiting their assessment and grades. I met with two representatives in my office today to talk through what had happened. One member, felt by the others not to have pulled their weight, had been assigned a lesser grade after the fact by the remaining members of the group; that member felt, correctly, that they had not been given the chance to speak or respond to accusations. The other member represented the majority feeling: that the first member was well liked and respected but had put in far less work, and thus deserved a lesser grade. [That member also explained that the others, who had spent a long time after class talking about how to account for this disparity, did not feel comfortable confronting their peer in class – whether wrongly or rightly, they felt sincerely that their peer would not be willing to fully hear and accept their critique, and they did not want to disrupt their group’s friendly dynamic by pushing the issue.]

Our meeting was fruitful but hard; I know both students worked to be respectful and not to get overly emotional about the stakes involved. (And here I have to say how much I respect the efforts of both in this regard!) I acted as a mediator for this meeting, and I learned two very important things from it.

First (duh!) that I needed to create a safer space for all of my students to share their group feedback. In our debrief of the peer assessment one student suggested we feed back anonymously; rather, I suspect, what needs to happen is that I, as instructor, need to a) create multiple moments of low-pressure feedback throughout the year, culminating in b) a meeting of the group with me in which we decide on shared or individuated grades. My role as mediator is crucial, and it cannot happen in the classroom; it needs to happen in my office, or in another semi-private space where students feel able to speak honestly and openly.

Second, that (hello again!) all group feedback is marked by social privilege, including gender privilege: this was absolutely the case in our meeting, and it brought home to me the lived significance of how these kinds of privilege impact student voices in the classroom, though few students realize it. The way we approach and respond to one another depends on how confident we have become in our own voices and perspectives, be they gendered, raced, or classed. In today’s meeting – which, I want to stress, happened between me and two very mature and thoughtful young adults – I was reminded of this research by Colin Latchem:

Although it is important to avoid gender stereotyping and acknowledge that there can be considerable variations within each gender and particular context, there is a considerable amount of research on psychological gender differences in communications. In general, men are held to construct and maintain an independent self-construal (Cross & Madson, 1997). As a consequence, men tend to be more independent and assertive, use language to establish and maintain status and dominate in relationships, and transmit information and offer advice in order to achieve tangible outcomes. By contrast, women tend to be more expressive, tentative, and polite in conversation, valuing cooperation and using dialogue in order to create and foster intimate bonds with others by talking about issues they communally face (Basow & Rubenfeld, 2003).

Today’s meeting reminded me that I cannot simply give students space to express their feelings about one another’s work; I need to make space in which those feelings can be safely and effectively expressed regardless of social privilege.

Next year, I plan to invite performance groups to feed back to each other informally a few times over the year, and I plan to take an active role in that feedback in order to help students to understand what they are saying to one another, and how they are saying it. At the end of it all we’ll have a chat, and I’ll be a part of it; I’ll try to mediate group challenges, but I’ll also make an effort to talk about how seemingly invisible power dynamics impact what is said between group members, and how.

Because group work isn’t just about students working in groups; it’s about students learning the very human skills of talking to each other across race, gender, class and other social and ethnic boundaries. They need our help to do this well – and we owe it to them, and to our larger world, to help them do it.



Guest post: learning from “mock” presentations

Welcome to September! And, for those of you reading in North America, happy Labour Day Monday. Just in time for the new school year’s first week of merry bustle, you’ll find below a post by Charlotte Bell, a soon-to-be PhD who has embarked on a remarkable first project: this year, rather than sticking to the relative safety of the ivory tower, she will be teaching in an underprivileged grammar school in Birmingham, learning first-hand about the strengths and weaknesses of the UK’s education system at a time when it is undergoing profound change. This choice reflects Charlotte’s ethos as a truly activist scholar and educator, and I’m looking forward to showcasing her reflections on the blog in the months to come. For now, have a read of her experience using mock presentations to improve student performance in a second year seminar at Queen Mary this past year. Inspiring stuff!

Learning from “mock” presentations

By Charlotte Bell

Over the past four years I have had the privilege of working with students at each stage of their undergraduate degree at Queen Mary, University of London, as a mentor, a teaching assistant (TA), and a module (course) convenor. In each case, I have watched formal assessments (also known as summative assessments) provoke a lot of anxiety for students. However, assessment isn’t – or shouldn’t be – just the formal exam or piece of coursework a student submits. The focus of this post will be on the placement and role of end of year presentations (a popular assessment method in UK universities), and specifically on how I have used ‘mock’ presentations as a formative method of assessment.

There are two key reasons for my focus. First, the final presentation often marks the end of a module and the start of a holiday period; it’s the assessment that seems to demand PowerPoint or Prezi, with as many effects and embedded videos as possible. It also seems an ideal platform to showcase students’ abilities to synthesize, evaluate and form coherent arguments, based on the semester’s work. However, I have found myself disappointed in these situations; I often get a sense that the presentations are rushed, the labour not equally distributed among the presentation group’s members, or that the opportunity to use technology seems to overwhelm the presentations (lots of flash, at the expense of substance). Presentations are also the form of assessment or sharing of practice that, as Kim has discussed in previous posts, highlights some uncomfortable gender politics, in which women appear to apologise for taking up space at the front of the room or for owning the soundscape, even if only for five minutes. So the final presentation raises the obvious question: how can we make these assessments more worthwhile, a mode of teaching as well as grading?

Second, I suspect that in years to come HE (Higher Education; university or college-level) teachers and lecturers will have to pay more attention to the ways in which presentations are introduced and students supported, especially in UK classrooms. In 2013, to everyone’s detriment, Ofqual (the Office of Qualifications and Examinations Regulation in the UK) announced that the ‘Speaking and Listening’ (presentation) component of English GCSEs (General Certificate of Secondary Education) would no longer be a summative part of the English National Curriculum. Students in future may not arrive at university with the same skill-set in this form of assessment, making the need to “teach” the presentation (rather than simply grading it) even more urgent.

The final presentation assessment, therefore, is increasingly important. Presentations are not easy, but they are a key method of assessment throughout our careers in academia and beyond: they are essential aspects of conference work, of interviews (which in academia often include a presentation on research or teaching a lesson segment), and of course of business pitches alongside the inevitable pressure of networking. Presentations reveal students’ weaknesses in ways that are remarkably instructive and that may have an impact on students’ career prospects. But students can learn from doing presentations in class only if they are also given a real chance to learn from their mistakes.

Assessment in the classroom is (or should be) an ongoing process. In Embedded Formative Assessment (2011), education guru Dylan Wiliam identifies five key strategies of assessment for learning:

  • clarifying, sharing and understanding learning intentions and criteria for success;
  • engineering effective classroom discussions, activities and tasks that elicit evidence of learning;
  • providing feedback that moves learning forward;
  • activating learners as instructional resources for one another;
  • activating learners as owners of their own learning.

Wiliam’s lens – cognitive science – is not without its problems: how can you as teacher evidence that these strategies have, in fact, enabled progress or learning during a seminar? The answer is, you probably can’t; and in many cases, you probably won’t be around long enough to see that ‘light bulb’ moment. However, his model is useful for clarifying the value and purpose of assessment. If your assessment doesn’t aid learning, what is its point?

Incorporating Formative Assessment into the module

The module in which I trialled the use of ‘mock’ presentations was a second year seminar-based option, part of the wider Applied Performance Pathway offered as part of the BA in Drama at QMUL. The module examines work by twentieth century theatre practitioners, theatre companies and collectives who have engaged in the making of theatre by, with and for ‘the people’, in order to examine how the social, economic and political context of their work shapes contemporary applied performance practice. Students on this module engage with a range of playtexts, critical theory, biography, policy documents and archival materials, and the module includes two points of summative assessment: a group presentation and an individual written essay.

I introduced the students to the concept of mock presentations one week in advance of the mock event. I told students they would form their own groups for the mock presentation, and would then stay in these groups for their final presentation assessment two weeks later. As a class, we went to The Unicorn Theatre on a group trip, and students knew that the mock presentations would be based on this show and an accompanying piece of set reading on theatre for children. The idea was that all students would work on the same topic for their mocks, both in order to create a shared back story for the mock event and in order to allow for collaborative peer-to-peer feedback as part of the mock process. This shared topic also gave me as the teacher a clear understanding of how well each group was able to read a specific piece of theatre in conjunction with scholarly work.

Assessment Methods

The prompt question for the mock presentation and the final assessed presentation was the same:

Who are ‘the people’? How and why have theatre makers set out to examine and challenge ideas of who theatre is for?

Students were asked, for their final presentations, to explore these questions through the lens of a specific theatre company, theatre building, theatre artist or issue in the cultural politics of theatre for ‘the people’. The final presentations were to be 10 minutes long and students were to be prepared to answer three questions from the floor. PowerPoint and Prezi were banned. Instead, students had to produce a handout.

In addition, students would be asked to assess each other as well as themselves; importantly, their peer feedback would be incorporated into the overall feedback I provided. In many ways, this was the most important element of this mock exercise.

The mock and final presentations were both assessed using two grids. (These were adapted from my colleagues Philippa Williams from the Department of Geography, QMUL, and Michael Slavinsky at The Brilliant Club, London UK.) One grid was used to assess each of the other groups’ presentations; the other was a self-assessment grid in which students were to identify what they and their group did well, and what they could do to improve their work next time around.

These grids are simple and straightforward, translating ‘assessment criteria’ into short points for consideration:


I find such grids useful because they make clear that there is no ‘hidden agenda’ when it comes to marking presentations. The marking criteria fit on to a single side of A4 paper. During the presentations, the students and I worked from the same grids ‘in real time’ as we watched one another’s labour; this way, we formed an assessment team rather than a hierarchy.

The self-assessment grid was adapted from the marker’s essay cover sheet. This grid is based on the popular assessment model of identifying both ‘what went well’ and ‘even better if’. It also includes the option for students to award themselves a grade within a broad boundary (for example, “I think I scored a B-B+”).

[Figure: the self-assessment grid]

The ‘mock’ presentations

The first 15 minutes of the mock session were handed over to students to work on preparing their mock presentations. The prompt for these 15 minutes was: what questions does your presentation raise? What can you do now to help you answer these questions? I was careful to relate these cues specifically to one of the assessment criteria on the marking grid: ‘ability to answer questions’. I also chose this focus because I suspected the question-and-answer element might otherwise be overlooked in students’ preparations. The 15 minutes of prep time also gave students some ‘breathing space’ and helped set a relaxed but productive atmosphere for the session – particularly for those students who had come in straight from another class. Rather than entering a room where students were waiting to start presentations, students entered a room where productive work was happening and mistakes could still be made, and corrected.

After these initial 15 minutes I gathered the group back together and went through the structure of the session. I advised students on how they might use the grids: circle or put a cross next to a description – annotate or write notes on the back. I then reiterated that I would be taking their feedback seriously, incorporating it into my overall feedback for each of the groups.

Students self-selected the order in which they gave their presentations. While a group was setting up I handed out a group marking grid to those in the audience. I decided to hand out new sheets at the beginning of each presentation slot 1) to ensure that grids didn’t get mixed up, 2) to provide a reason for the group at the front of the class to take their time setting up or getting ready, and 3) to make time to answer any questions students might have about the process. We sat and listened to the presentation and I fielded or asked 1-2 questions of each group following their presentations. We applauded the group as students returned to their seats. Members of the group also picked up a self-assessment sheet as they returned to their seats and the class had three minutes to make some notes on each presentation. If the next group was ready to set up they did so. We repeated this process as needed.

Assessment in Practice

The main aim of our self- and peer assessments as part of this process was to encourage students to think specifically about points for improvement, as well as to allow me to monitor discrepancies between what the students thought they did and what the rest of the class and I saw them accomplish. As students were taking time to set up or fill out their grids I had a quick flick through some of their responses to get a sense of the room; the grids seemed to be providing a helpful space for students to articulate their honest reactions. As it turns out, marking each other against shared criteria, and knowing you are being marked against those same criteria, doesn’t result in a lot of congratulatory back-patting. On the contrary, it seemed to encourage the students to take the process seriously, giving each other focused and useful feedback:


However, what I didn’t expect was the extent to which some students struggled to define what they did well. Most did not give themselves a grade boundary or mark, despite the opportunity on the marking grid to do so. Those who did provide grades for themselves were not generous:



Whilst these examples of self-assessment do accurately identify sticking points in individual presentations, such as problems with speaking clearly and preparing and responding to questions confidently, they also draw out a troubling and rather upsetting undertone. I do not think these are the comments of self-indulgent young people; they are indicative of insecurity and low self-esteem. Identifying points for improvement such as ‘less mumbling’, or noting the feeling of being put on the spot during a Q&A session are helpful – but, again, only if action points can in turn be created to support that person to overcome the insecurity that perpetuates mumbling or puts them in a position where ‘turning up’ is something they identify as ‘what the group did well’. Once again, then, the students’ self-assessments indicate for me how crucial a task it is to turn presentation assessments into true learning opportunities.

Having had a quick glance through some of the grids on the spot, I was able to address a few of the issues and concerns they raised immediately through oral feedback to the class as a whole. I followed this up a short time later with general written feedback uploaded to our online site. In addition, I wrote feedback for each group, incorporating peer feedback into my comments and responding to issues they raised on their individual assessment sheets. This feedback was sent out the same afternoon as our mock event took place.

As part of this preliminary feedback I included an ‘on track for’ grade boundary: for example, 2:1/1st borderline (roughly B/B+/A). Though I’m not a huge fan of ‘target grades’ and ‘grade boundaries’, they are a fact of university assessment and I don’t think pretending they don’t matter is helpful – to the students or the teacher. Being as transparent as possible makes having conversations with students that might otherwise be difficult or awkward (particularly around grade expectations) a lot easier and more straightforward.

The results: what happened during the final presentations?

Students had two weeks between their mock presentations and their assessed presentations in order to address concerns raised in their feedback, and all groups produced better presentations in their final assessments. Their handouts were visually stimulating and detailed. One group used the white board to demonstrate the development of their argument and point to tensions in policy for participants in disabled arts. Another group produced an annotated handout that was explicitly referred to in their presentation – and would make a great poster. All groups took time to note down questions during the Q&A session to give themselves time to think through their answers. All groups introduced themselves and gave their presentations a title. Self-assessment identified specific points/aspects that ‘went well’ and demonstrated some progress towards improved confidence:


[a takeaway pop-up book-style handout with room for listeners to add their own notes, questions or comments]



Overall, creating a ‘mock’ process like this one might seem like a lot of work. It wasn’t. Adapting the grids to the module and its specific presentation assessment was a quick process: the presentation grids are quick and easy to read and provide a fast, clear overview of a group’s presentation. Typing up the general feedback after the mock session took about five minutes, as I had jotted notes down in the moment and relayed them verbally to the students earlier in the day. Writing the individual group feedback took a little longer, but incorporating peer and self-assessment into the process made identifying points of focus in this feedback more efficient. I didn’t have to do any additional content prep for either the mock or final presentation sessions: the students provided the knowledge content and material to work with. Framing the first session as a ‘mock’ presentation gave the exercise gravitas and a status that linked directly to an activity ‘that would count’ later on in the semester. Overall, not only did practising the assessment better prepare the students to produce evidenced, well-timed, imaginative, effective and competent presentations in the final week of the semester; I think it also put the students in a better position from which to constructively critique their own and each other’s work.

I shall certainly be using these tools (and tools like them) in the future, and I may adapt the assessment grids for different types of projects. In addition I’m going to continue to pay attention to the ways in which I use the language of assessment criteria in class to respond to students’ work and in-class discussions. If as an assessor I’m looking for ‘concise’ writing/speaking and ‘good quality and independent research’ then I need to flag to students when they have demonstrated this kind of labour, rather than stopping my response at ‘brilliant’ or ‘fantastic’.

I would love to hear your feedback and any tips or resources you use to integrate assessments into the delivery of your courses/modules.

CHARLOTTE BELL is in the final stages of her PhD in the Drama Department, Queen Mary University of London, where she was also a teacher. Her research explores the cultural economics of site-specific art and performance in and about social housing estates. Her work has been published in Wasafiri, Contemporary Theatre Review and New Theatre Quarterly. In 2013 she won the TaPRA Postgraduate Essay Prize. She is on the Postgraduate Committee for TaPRA and from 2013-14 she was an Advanced Skills Tutor with The Brilliant Club, London, UK. This September she starts as an English teacher in a state comprehensive secondary school in Birmingham. Visit Charlotte online here: