Grading Participation

By Signy Lynch

When I checked my inbox early this December and saw a notification for the latest Activist Classroom blog post, entitled ‘OMG CAN SOMEONE PLEASE TELL ME HOW TO GRADE PARTICIPATION???,’ it felt like a sign. When I came across the post, I was just finalizing the syllabus for the first course I would ever teach, Perspectives on Contemporary Theatre, a fourth-year seminar at York University.

I had great fun designing the syllabus, but had been hesitating over the participation section. Like Kim, I was preoccupied with how best to grade my students on participation, and how to do so in a way that might motivate and elicit meaningful engagement from them. There were a number of factors to consider. Though a seminar, the course I was teaching was quite large, with 37 students; to evaluate participation fairly, I felt I needed some way to increase my engagement with students on an individual level. I also wanted to give students credit for, and an incentive to engage with, the readings, which were an important focus for class discussion.


Image: Some people raising their hands. Participation?

After some deliberation, and inspired by some of the resources that Kim posted (particularly the first point in this post, on grading participation through written assignments), I settled on a participation grade made up of the following three components:

  1. In-class participation and attendance (the “typical” way)
  2. Commenting at least once per week on another student’s Instagram response post
  3. Written reflections on participation, conducted in class three times throughout the semester.

To emphasize the importance of engagement, particularly in a seminar course, I made participation worth 20% of students’ final grade, though I didn’t assign a strict proportion of that 20% to each component, to allow myself (and the students) some flexibility.

One key theoretical influence on this formula for participation was the principle of universal design. Universal design provides students multiple ways to engage in the course, shows them multiple representations of material, and allows them multiple avenues through which to express their learning (here’s a great primer on universal design in higher education, for those unfamiliar). Incorporating universal design into course design is a more inclusive way to teach that respects students’ differences as learners, both in ability and interest.


Image: a variety of coloured pencils. Universal design appeals to a variety of learners. Photo by Pixabay on Pexels.com

Since the first component of my participation grade breakdown—in-class participation—is fairly traditional, I’ll spend a little more time elaborating on components two and three.

Component Two – Instagram Comments

Component two asked students to leave at least one comment per week on a classmate’s Instagram post, in connection with an existing Instagram response journal assignment. For that assignment, worth a total of 30% of the final grade, I asked students to post weekly short (200–250 word) responses on Instagram to a passage of their choosing from one of that week’s readings.

The posts were due the Friday before each Monday class, and the comments due right before class, ensuring that students had ample time to review each other’s posts and select one to which to respond.


Image: screenshot of a post from the course Instagram account, describing the journal assignment

In the past (despite teaching theatre!) I’ve heard from many students that speaking up in class is a real barrier to their participation. Thus in asking students to contribute through written comments I offered them an alternate mode of communication (inclusive design!), while at the same time generating content that could be drawn on to round out in-class discussion. Unlike the Instagram journal posts themselves, these comments were graded for completion rather than substance, to further reduce barriers to participation.

This component turned out fairly well overall. One student wrote to me in their response that posting on Instagram “feels less formal than posts on Moodle, and I’ve noticed myself and my peers feel more comfortable responding to each other.” Through this component some great conversations happened on Instagram; however, I do wish there had been a bit more consistency in students’ comments and a slightly higher level of involvement—for some students the exercise often felt quite cursory.

Component Three – Writing Exercises

The third part of my students’ participation mark was derived from short written reflections (taking around ten minutes each) conducted at three different times during the term. I had students respond to some questions on a piece of paper, which was then placed in an unsealed envelope. The idea was that students would review their own writing as the semester went on and base their subsequent responses on their earlier goals and thoughts.

A central goal of this component of participation was to give individual students a chance to reflect on and define what meaningful participation meant to them. In so doing, I hoped to activate students’ intrinsic motivation by asking them to find meaning in the work they were doing for the course.

Importantly, these writing exercises were framed as reflective exercises. I told students that for this component they would be evaluated primarily on their reflection on participation and not on the participation itself, encouraging honesty.


Image: a stack of envelopes as used in this exercise. Photo by Pixabay on Pexels.com

Another key part of this was that students shouldn’t be concerned about writing answers they thought would please me, but should examine their own feelings and preferences. Perspectives on Contemporary Theatre was not a required course. Despite this, I discovered from the first participation exercise I conducted that while many students were interested in the course content, some were primarily taking it because they needed a fourth-year credit that fit their schedule. I wanted to recognize and honour the fact that students were taking the course for many different reasons and may have had different priorities, or assigned value differently than I did. Thus through this component, I could give students points for effort, while also recognizing different types of effort and rewarding students for thinking on their own terms.

The questions I asked in each exercise are below:

Participation Exercise #1 – (near beginning of term)

  1. Why are you taking this course?
  2. What are your expectations from the course/what do you hope to get from it?
  3. Has this course aligned with your expectations/diverged from them so far? In what way(s)?
  4. What does meaningful participation in this course mean to you? (This response should consider your above answers. One example could be: ‘I don’t like to talk in class, but I want to really engage with the readings by taking detailed notes’)
  5. What two specific goals will you set for yourself regarding your participation in this course?

Participation Exercise #2 – (at midterm)

  1. How have you done with your goals so far? (Remember, I’m not evaluating you on whether you meet them but on your ability to reflect on them, so please answer honestly.)
  2. What factors have affected your participation?
  3. Review your goals. Are they specific and measurable? Are they still useful/in line with what you consider to be meaningful participation? If necessary, rewrite them and say what you’ve changed and why.
  4. What steps will you take going forwards to ensure you meet these goals?

Finally, as an optional part 5, you can weigh in with me and let me know how the course is going for you. This is your chance to give me feedback about your experience so far–whether it’s, ‘I wish we would watch more videos,’ or ‘I’m confused!’ etc.

Participation Exercise #3 – (end of term)

  1. How have you done with both general participation and your specific goals in this course?
  2. What factors have affected your participation?
  3. Are you okay with your level of participation? Why/why not?
  4. What would you change about your participation in this course if you could?
  5. If you were grading yourself on participation in this course, what grade would you give yourself and why?

In addition to the prompts for self-reflection, these exercises offered students some opportunities to provide feedback to me; specifically, questions 2 and 3 in exercise one and the optional number 5 in exercise two served this goal. (For the final reflection I asked students to provide feedback through the course evaluations.) Collecting this feedback allowed me to address student concerns and adjust in-class activities to student preferences, which I hope made students feel they had some say in the course and that I valued their opinions and experiences. At the same time, my asking for feedback demonstrated to students that I was trying to be reflexive about my teaching practice in the same way I was asking them to reflect on their participation.

These written reflections also gave me some useful insight into students’ attitudes and feelings about participation in the course, so that I could then try to improve it. When I heard from a number of students that the fear of being wrong was a major factor in their hesitance to contribute to in-class discussions, I was able to raise this point in seminar and talk it through with them, and also to critically examine my own behaviour to see how it might be contributing to those feelings. I think one influencing factor was the difficulty of some of the readings, so I made sure to re-articulate that the material was meant to be challenging, that I was in no way expecting them to understand it all, and that they shouldn’t feel stupid if they were struggling with it.


Image: Hand holding a pen and writing in a journal. Creating opportunities for students to engage in reflection was important to me. Photo by rawpixel.com on Pexels.com

Another student wrote, “I’m a little confused on how else to participate other than agreeing or disagreeing on the subjects at hand,” which served as a launching point for a productive group discussion on what forms participation could take. Some of the students’ suggestions on this subject, both in class and in their journals, really impressed me. One student who admitted they avoided class discussions for fear of being wrong suggested that a way to get around this could be asking questions rather than trying to answer them. In an ideal world, they would feel comfortable with both actions, but here they came up with a productive middle ground.

Finally, students’ discussion of meaningful participation not only guided their self-reflection, but also aided in my evaluation of them. Students’ observations on what meaningful participation meant to them played a large role in my assessment of the first participation component, their in-class participation. For example, if a student expressed difficulty with speaking up in class and didn’t include it in their definition of participation, I paid more attention to their stated goals, their in-class attentiveness and group work, and weighted their Instagram comments a little more heavily in determining their grade.

Final Thoughts

Overall, I found this three-part system very useful. It helped me to connect with my students and to understand them a lot better as individuals. Through the third component in particular, I learned a lot about their individual goals and the struggles they were facing, which put me in a much better position to evaluate their participation.

This experiment confirmed to me that relying solely on my own perceptions of students to grade participation is not enough, and I will continue to experiment with this model going forwards. While this iteration of it worked out fairly well for this particular course, variations or other approaches entirely might be better suited for courses with different formats.

Thanks to Kim for inviting me to reflect through this blog post. I hope this reflection is of use to some of you, and feel free to share your thoughts or own experiences with me in the comments!


Image: a chalkboard reading ‘thank you’. Photo by rawpixel.com on Pexels.com


Signy Lynch is a SSHRC-funded PhD Candidate in Theatre and Performance Studies at York University. Her research interests include political performance, diversity in theatre, spectatorship, affect, and theatre criticism. Her dissertation investigates how direct audience address in contemporary performance in Canada can help audience members and performers to negotiate the complexities of twenty-first century life. She has published work in Canadian Theatre Review, alt.theatre, and CdnTimes and is a member of the board of directors of Cahoots Theatre.

 

Feed back to me (part 2)

Last week I offered some thoughts on marking with the rubric as a close guide and feedback framework; today I want to share some nifty feedback advice from Lynn Nygaard, the author of Writing for Scholars: A Practical Guide to Making Sense & Being Heard (Sage, 2015). Just as I was contemplating the difference using the rubric is making for me as a grader, her ideas about one-to-one feedback crossed my desk via the ever-reliable Tomorrow’s Professor listserv, to which I’ve belonged since 2001 (thanks, Jenn Stephenson!).

I was struck in particular by two pieces of intel in Nygaard’s piece: the importance of asking questions during the feedback process, and the value of offering feedback face-to-face (as opposed to solely in written form).

The context for the chunk of Nygaard’s book that was excerpted on the TP listserv is “peer reviewing” – the process through which scholars offer one another comments and assessment during the publishing process. (When you read that something is “peer reviewed”, it means experts in the field have read the material, assessed it based on a range of criteria from quality of research to quality of argumentation, and deemed it valuable enough to be shared with other experts in the field as well as the broader public.) For Nygaard, this context includes both graduate students (i.e., feeding back to supervisees who are completing dissertation work) and peers whose work we might be asked to comment on for publication.

So undergraduate students aren’t the explicit focus here, but as I mentioned last week I think we can extrapolate easily for undergraduate constituencies – after all, good marking practices are good marking practices, full stop.

The first insight in Nygaard’s excerpt that grabbed me was:

Do not underestimate the importance of asking questions.

We hector students about this all the time, right? ASK QUESTIONS. THERE ARE NO BAD OR WRONG QUESTIONS! Questions are the route to a good paper, a strong experiment; research questions are more important than thesis statements. (Or, to nuance that a bit: good research questions yield better thesis statements.)

But how many of us have thought to ask questions in our comments for students on their written work? It’s not atypical for me to pepper students with questions after an in-class presentation, but those questions rarely make it into the typed feedback. In fact, I tend to focus on declarative statements (“your paper/presentation would have been stronger had you X”) when I write up my comments – asserting my knowledgeable opinion rather than keeping the feedback student-centred. So Nygaard is suggesting something provocative here, I think, when she encourages the asking of questions as feedback.


Now, Nygaard stresses that these need not be complex questions, or even content-driven ones. When we respond to student work, remember, we’re usually offering feedback on practice as much as (or even more than) content: how well students ask questions themselves, identify the parameters of their study, structure their articulation of the data or their reading of the text they are presenting. At their best, then, feedback questions might drive back to basics, focusing on the sorts of things students tend to skip past in an effort to get to the finished product. Nygaard offers the following samples for questions to ask a (student) writer:

What is the most interesting thing you have found so far?
What are you finding most difficult to write about?
What is it you want people to remember when they are finished reading this?
What interested you in this topic to begin with?

Now, if these questions sound chatty, it’s because they are. And here’s Nygaard’s other key insight (for me): what if feedback were offered orally more often?

When we speak to colleagues and graduate students, often we do so in our offices, face to face. Undergrads, by contrast, get sheets of paper or pop-up windows on their computer screens with some typed stuff and a grade. Easy to distance, easy to dismiss.

But, as Nygaard notes, the value of feeding back in person is significant. It gives the feedback (and not just the grade itself) real stakes. And, more important, it offers an opportunity for dialogue that is integral to producing stronger future work:

…if you deny the other person a chance to explain, you rob them of an opportunity to achieve greater clarity for themselves – because there is no better way to understand something than to explain it to someone else.


Reading this reminded me, ironically, not of supervisions with my own grad-school advisers, but of encounters with a dear and influential undergraduate instructor, the feminist and queer theorist Dianne Chisholm. Dianne is an Oxbridge graduate, and every time a paper was due she had us all into her office, one by one, Oxbridge-style, to read our essays aloud to her and receive our feedback in person.

We were, of course, TERRIFIED of this entire process (and kinda terrified of Dianne, too). But we also adored her, because she offered us the opportunity to learn, grow, and get better – she proved that to us time and again, by giving us her time and her attention.

Now, I’m not saying that we should all take every undergraduate assignment in like this; it’s time consuming and really only works in seminar-sized groups. But it does have key benefits that we ought not to dismiss. For one thing, it places the onus squarely on the student to absorb and respond to feedback – to do something with it, even if only for a few minutes. To imagine the better version of the paper in front of them, maybe.

Nygaard goes on to write:

…remember that your job is to help the author, not to make yourself look good. Your ultimate measure of success is the degree to which the author walks away knowing what to do next, not the degree to which you have made your expertise apparent.

Declarative comments on written work (like the one I offer as an example above) tend toward the “me expert, you not so much” end of the spectrum; they demonstrate that I know stuff and that you don’t yet know quite enough of it. But guess what? We’re in the scholarship business, with the hierarchy professor//student more or less entrenched; the “knowing//knowing less” binary is sort of a given. So what if we took it as that given and moved on, instead asking questions and offering meaningful advice to students that could drive their work forward and upward? This might happen on paper, or in an office-hour debrief, or – maybe best of all? – in a mix of the two.

At minimum, what if we aimed to provide more feedback to undergraduates that simply indicated that this particular assignment, even returned, graded, to them, is *not* the end after all? Nygaard offers the final, following thought:

Even if you are meeting informally with a colleague, try to end the session by asking, “So, what is your next step?”

The perfect question for us all, really.

Kim

 

Feed back to me (part 1)

October is marking season at my university: midterms, essays, tests and quizzes all crowd into the space between the end of September’s silly season (frosh week, reunion weekend, general football mayhem) and the final date to add or drop courses. The wind and rain rush in, the leaves come down…and we all end up buried, by Halloween, under piles and piles of papers.


This year I’ve been trying something new with my undergraduate class in performance studies; for the first time I’m marking essays explicitly against a pre-existing rubric, one I’ve made freely available to everyone on the assignment page of our online learning portal. I’ve used marking rubrics regularly for the last few years – they were mandatory at Queen Mary, where I taught from 2012-2014, and I found them clarifying and productive. But this is the first time I’ve used a rubric as a literal marking tool, rather than just as a general set of guidelines for our reference.

(What’s a rubric? Click here for a really helpful explanation.)

My typical marking pattern until now has been some variation on this:

  • post a rubric for students for written assignments, so that they know broadly what I expect in terms of content, structure, research, grammar, and style;
  • read papers without carefully consulting the rubric, assuming it implicitly reflects what I already know to be true about good, bad, and ugly essays;
  • write up comments without direct reference to the rubric, and assign a grade.

I suspect a lot of us mark this way, whether we realise it or not. And this is not, of course, to say our comments on student papers are not fulsome, reflective of our rubrics, or written with care; I personally pride myself on providing clear feedback that describes a paper’s intention, where that intention is achieved, where it is not achieved, and what students may adjust in order to advance up the grade scale. I’ve also experimented several times with laddering assignments, with using peer feedback on drafts, and with various other techniques to lower the writing stakes and make the process of editing and improving written work more transparent and accessible.

(I’ve written about, and hosted some terrific guest posts on, assessment challenges in the past; click here for the archive.)

So I clearly care a lot about assessment – about getting it right and giving students the intel about their work that they need to improve. But rubrics? Not so much with the caring about rubrics, maybe. I suspect I’m a bit jaded, like many of us, because rubrics look on the surface like yet another measurement tool we’re being forced to use in order to fit our teaching labour into boxes that can be ticked by our senior administrations and the governments who control their purse strings. They are probably such a thing. But they are also something else: they are a clear, consistent way to communicate student success and mitigate student failure, on our own terms. (Let’s not forget: most of us still have the freedom to set our own rubrics. For now, anyway.)

And, as I discovered, they are also a great way for us to learn key information about our own marking tendencies and the assumptions underpinning them.

Marking with the rubric changed the pattern I describe above. My process now goes something like this:

  • import the rubric bullet points into my existing marking template;
  • read the papers with those bullets explicitly in mind;
  • comment in equal measure on each bullet;
  • assign a rough grade zone to each bullet (i.e., this aspect of your work is at “B” level, or at “A-” level, etc.);
  • average the bullets to arrive at a final grade.

In case you’re having trouble picturing this, here’s a screen shot of my template, with some (anonymised) feedback in place:

Image: screenshot of the marking template, with anonymised feedback in place
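For the numerically inclined, the averaging step amounts to something like the sketch below. This is only an illustration: the letter-to-points mapping and the names `GRADE_POINTS` and `average_rubric` are my own invented examples, not part of any official scale, and you’d substitute whatever grade bands your institution uses.

```python
# Sketch of the rubric-averaging step: each rubric bullet gets a
# rough grade zone, and the zones are averaged into a final grade.
# The points mapping below is a hypothetical example scale.

GRADE_POINTS = {
    "A+": 90, "A": 85, "A-": 80,
    "B+": 77, "B": 74, "B-": 70,
    "C+": 67, "C": 64, "C-": 60,
    "D": 55, "F": 40,
}

def average_rubric(zones):
    """Average per-bullet grade zones (letter strings) into a number."""
    points = [GRADE_POINTS[z] for z in zones]
    return sum(points) / len(points)

# Four rubric bullets: content, structure, research, grammar/style
final_grade = average_rubric(["B", "A-", "B+", "A"])  # 79.0
```

The point of the sketch is simply that every bullet carries equal weight, which is exactly what exposes the biases I describe next.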

The first thing I realised after using the rubric in this way? I’ve historically given far too much weight to some aspects of student work, and too little to others… even though my rubrics have always implied that all aspects – content, structure, research, and grammar/style – are equally valuable. So I’ve been short-changing students who, for example, have good research chops but poor writing and structuring skills, because the latter makes the former harder to recognise, and without a rubric to prompt me I’ve simply not been looking hard enough for it. I’ve also, without question, been over-compensating students with elegant writing styles; less impressive research labour becomes less visible if the argumentation runs along fluidly.

Right off the bat, then, my use of the rubric as a marking guide both levelled the playing field for my students, and allowed me to come face to face with one of my key marking biases.

The second thing I realised was that marking in this rubric-focused way is a real challenge! I am a decorated editor and an astute reader of colleagues’ work, but that doesn’t mean I’m a perfect grader – not by any means. Reading novice scholarly work (aka, student work) with care requires keeping a lot of balls in the air at once: where’s the structure slipping out of focus; when is research apparent, when is it there but not standing out, when is it obviously absent; how much is poor grammar actually impeding my understanding, as opposed to just pissing me off (a different level of problem).

To do the juggle well, I’ve discovered, I have to slow down. Except… I have trained myself (as we all have – neoliberal university survival skill!) to read student work very quickly, make some blanket judgements along the way, and then produce a final grade driven as much by feel as by careful analysis of the paper’s strengths and weaknesses. When I was forced to put feeling aside and look back at all of a paper’s component parts, I as often as not saw that the grade I “felt” was right at the end was not, in fact, the grade the rubric was telling me was fair.


So add a few minutes more per paper, then. But where to snatch them from? It’s not as if I’m rolling in free time over here…

Thankfully, the rubric came to my rescue on this one, too. My third discovery: I could write fewer comments, and more quickly, yet still provide comprehensive feedback. The rubric language I include on each page of typed assessment stands in nicely for a whole bunch of words I do not need to write anew each time, and it standardises the way I frame and phrase my comments from student to student. That’s not to say everyone gets the same feedback, but rather that my brain is now compartmentalising each piece of assessment as I read, and is more quickly able to put those into the comment “boxes” at reading’s end.

Plus, in order to keep feedback to a page yet also include the rubric language in each set of comments, I’m writing less per paper, period. I doubt this is a bad thing – students generally don’t read all the feedback they receive from us, if they read any of it. Placing my responses to their papers directly in the language of my already-stated expectations, and offering smaller, more readable chunks will, I hope, get more students reading more of their feedback, and using it too. (I have plans to survey them on this in the last week of classes – stay tuned.)

As luck would have it, just as I was thinking this post through I came across a compelling discussion by Lynn Nygaard that uses “mirroring” as a metaphor to explain assessment labour. Nygaard’s ideas got me thinking about other ways I might transform my marking’s efficiency and effectiveness in future; although her focus is on feeding back to colleagues and grad students, I think it has some real applicability to undergraduate assessment too. I’ll share some of her provocations and reflect on them (ha!) in part two of this post, next week.

Until then, happy midterms!

Kim