Welcome to September! And, for those of you reading in North America, happy Labour Day Monday. Just in time for the new school year’s first week of merry bustle, you’ll find below a post by Charlotte Bell, a soon-to-be PhD who has embarked on a remarkable first project: this year, rather than sticking to the relative safety of the ivory tower, she will be teaching in an underprivileged secondary school in Birmingham, learning first hand about the strengths and weaknesses of the UK’s education system at a time when it is undergoing profound change. This choice reflects Charlotte’s ethos as a truly activist scholar and educator, and I’m looking forward to showcasing her reflections on the blog in the months to come. For now, have a read of her experience using mock presentations to improve student performance in a second year seminar at Queen Mary this past year. Inspiring stuff!
Learning from “mock” presentations
By Charlotte Bell
Over the past four years I have had the privilege of working with students at each stage of their undergraduate degree at Queen Mary, University of London, as a mentor, a teaching assistant (TA), and a module (course) convenor. In each case, I have watched formal assessments (also known as summative assessments) provoke a lot of anxiety for students. However, assessment isn’t – or shouldn’t be – just the formal exam or piece of coursework a student submits. The focus of this post will be on the placement and role of end of year presentations (a popular assessment method in UK universities), and specifically on how I have used ‘mock’ presentations as a formative method of assessment.
There are two key reasons for my focus. First, the final presentation often marks the end of a module and the start of a holiday period; it’s the assessment that seems to demand PowerPoint or Prezi, with as many effects and embedded videos as possible. It also seems an ideal platform to showcase students’ abilities to synthesize, evaluate and form coherent arguments based on the semester’s work. However, I have found myself disappointed in these situations; I often get a sense that the presentations are rushed, that the labour is not equally distributed among the presentation group’s members, or that the opportunity to use technology overwhelms the presentations (lots of flash, at the expense of substance). Presentations are also the form of assessment or sharing of practice that, as Kim has discussed in previous posts, highlights some uncomfortable gender politics, in which women appear to apologise for taking up space at the front of the room or for owning the soundscape, even if only for five minutes. So the final presentation raises the obvious question: how can we make these assessments more worthwhile, a mode of teaching as well as grading?
Second, I suspect that in years to come HE (Higher Education; university or college-level) teachers and lecturers will have to pay more attention to the ways in which presentations are introduced and students supported, especially in UK classrooms. In 2013, to everyone’s detriment, Ofqual (the Office of Qualifications and Examinations Regulation in the UK) announced that the ‘Speaking and Listening’ (presentation) component of English GCSEs (General Certificate of Secondary Education) would no longer be a summative part of the English National Curriculum. Students in future may not arrive at university with the same skill-set in this form of assessment, making the need to “teach” the presentation (rather than simply grading it) even more urgent.
The final presentation assessment, therefore, is increasingly important. Presentations are not easy, but they are a key method of assessment throughout our careers in academia and beyond: they are essential aspects of conference work, of interviews (which in academia often include a presentation on research or teaching a lesson segment), and of course of business pitches alongside the inevitable pressure of networking. Presentations reveal students’ weaknesses in ways that are remarkably instructive and that may have an impact on students’ career prospects. But students can learn from doing presentations in class only if they are also given a real chance to learn from their mistakes.
Assessment in the classroom is (or should be) an ongoing process. In Embedded Formative Assessment (2011), education guru Dylan Wiliam proposes five key strategies that assessment for learning enables:
Wiliam’s lens – cognitive science – is not without its problems: how can you, as a teacher, evidence that these strategies have in fact enabled progress or learning during a seminar? The answer is, you probably can’t; and in many cases, you probably won’t be around long enough to see that ‘light bulb’ moment. However, his model is useful for clarifying the value and purpose of assessment. If your assessment doesn’t aid learning, what is its point?
Incorporating Formative Assessment into the module
The module in which I trialled the use of ‘mock’ presentations was a second year seminar-based option, part of the wider Applied Performance Pathway offered as part of the BA in Drama at QMUL. The module examines work by twentieth century theatre practitioners, theatre companies and collectives who have engaged in the making of theatre by, with and for ‘the people’, in order to examine how the social, economic and political context of their work shapes contemporary applied performance practice. Students on this module engage with a range of playtexts, critical theory, biography, policy documents and archival materials, and the module includes two points of summative assessment: a group presentation and an individual written essay.
I introduced the students to the concept of mock presentations one week in advance of the mock event. I told students they would form their own groups for the mock presentation, and would then stay in these groups for their final presentation assessment two weeks later. As a class, we went to The Unicorn Theatre on a group trip, and students knew that the mock presentations would be based on this show and an accompanying piece of set reading on theatre for children. The idea was that all students would work on the same topic for their mocks, both in order to create a shared back story for the mock event and in order to allow for collaborative peer-to-peer feedback as part of the mock process. This shared topic also gave me as the teacher a clear understanding of how well each group was able to read a specific piece of theatre in conjunction with scholarly work.
The prompt question for the mock presentation and the final assessed presentation was the same:
Who are ‘the people’? How and why have theatre makers set out to examine and challenge ideas of who theatre is for?
Students were asked, for their final presentations, to explore these questions through the lens of a specific theatre company, theatre building, theatre artist or issue in the cultural politics of theatre for ‘the people’. The final presentations were to be 10 minutes long and students were to be prepared to answer three questions from the floor. PowerPoint and Prezi were banned. Instead, students had to produce a handout.
In addition, students would be asked to assess each other as well as themselves; importantly, their peer feedback would be incorporated into the overall feedback I provided. In many ways, this was the most important element of this mock exercise.
The mock and final presentations were both assessed using two grids. (These were adapted from my colleagues Philippa Williams from the Department of Geography, QMUL, and Michael Slavinsky at The Brilliant Club, London UK.) One grid was used to assess each of the other groups’ presentations; the other was a self-assessment grid in which students were to identify what they and their group did well, and what they could do to improve their work next time around.
Both grids are simple and straightforward, translating ‘assessment criteria’ into short points for consideration:
I find such grids useful because they make clear that there is no ‘hidden agenda’ when it comes to marking presentations. The marking criteria fit on to a single side of A4 paper. During the presentations, the students and I worked from the same grids ‘in real time’ as we watched one another’s labour; this way, we formed an assessment team rather than a hierarchy.
The self-assessment grid was adapted from the marker’s essay cover sheet. This grid is based on the popular assessment model of identifying both ‘what went well’ and ‘even better if’. It also includes the option for students to award themselves a grade within a broad boundary (for example, “I think I scored a B-B+”).
The ‘mock’ presentations
The first 15 minutes of the mock session were handed over to students to work on preparing their mock presentations. The prompt for these 15 minutes was: what questions does your presentation raise? What can you do now to help you answer these questions? I was careful to relate these cues specifically to one of the assessment criteria on the marking grid: ‘ability to answer questions’. I also chose this focus because I suspected that preparing for questions and answers might otherwise be overlooked. The 15 minutes of prep time also gave students some ‘breathing space’ and helped set a relaxed but productive atmosphere for the session – particularly for those students who had come in straight from another class. Rather than entering a room where students were waiting to start presentations, students entered a room where productive work was happening and mistakes could still be made, and corrected.
After these initial 15 minutes I gathered the group back together and went through the structure of the session. I advised students on how they might use the grids: circle or put a cross next to a description – annotate or write notes on the back. I then reiterated that I would be taking their feedback seriously, incorporating it into my overall feedback for each of the groups.
Students self-selected the order in which they gave their presentations. While a group was setting up I handed out a group marking grid to those in the audience. I decided to hand out new sheets at the beginning of each presentation slot 1) to ensure that grids didn’t get mixed up, 2) to provide a reason for the group at the front of the class to take their time setting up or getting ready, and 3) to make time to answer any questions students might have about the process. We sat and listened to the presentation and I fielded or asked 1-2 questions of each group following their presentations. We applauded the group as students returned to their seats. Members of the group also picked up a self-assessment sheet as they returned to their seats and the class had three minutes to make some notes on each presentation. If the next group was ready to set up they did so. We repeated this process as needed.
Assessment in Practice
The main aim of our self- and peer assessments as part of this process was to encourage students to think specifically about points for improvement, as well as allowing me to monitor discrepancies between what the students thought they did and what the rest of the class and I saw them accomplish. As students were taking time to set up or fill out their grids I had a quick flick through some of their responses to get a sense of the room; the grids seemed to be providing a helpful space for students to articulate their honest reactions. As it turns out, marking each other against shared criteria, and knowing you are being marked against those same criteria, doesn’t result in a lot of congratulatory back-patting. On the contrary, marking and being marked against the same criteria seemed to encourage the students to take the process seriously, giving each other focused and useful feedback:
However, what I didn’t expect was the extent to which some students struggled to define what they did well. Most did not give themselves a grade boundary or mark, despite the opportunity on the marking grid to do so. Those who did provide grades for themselves were not generous:
Whilst these examples of self-assessment do accurately identify sticking points in individual presentations, such as problems with speaking clearly and preparing and responding to questions confidently, they also draw out a troubling and rather upsetting undertone. I do not think these are the comments of self-indulgent young people; they are indicative of insecurity and low self-esteem. Identifying points for improvement such as ‘less mumbling’, or noting the feeling of being put on the spot during a Q&A session are helpful – but, again, only if action points can in turn be created to support that person to overcome the insecurity that perpetuates mumbling or puts them in a position where ‘turning up’ is something they identify as ‘what the group did well’. Once again, then, the students’ self-assessments indicate for me how crucial a task it is to turn presentation assessments into true learning opportunities.
Having had a quick glance through some of the grids on the spot, I was able to address a few of the issues and concerns they raised immediately through oral feedback to the class as a whole. I followed this up a short time later with general written feedback uploaded to our online site. In addition, I wrote feedback for each group, incorporating peer feedback into my comments and responding to issues students raised on their individual assessment sheets. This feedback was sent out the same afternoon as our mock event took place.
As part of this preliminary feedback I included an ‘on track for’ grade boundary: for example, 2:1/1st borderline (roughly B/B+/A). Though I’m not a huge fan of ‘target grades’ and ‘grade boundaries’, they are a fact of university assessment and I don’t think pretending they don’t matter is helpful – to the students or the teacher. Being as transparent as possible makes having conversations with students that might otherwise be difficult or awkward (particularly around grade expectations) a lot easier and more straightforward.
The results: what happened during the final presentations?
Students had two weeks between their mock presentations and their assessed presentations in order to address concerns raised in their feedback, and all groups produced better presentations in their final assessments. Their handouts were visually stimulating and detailed. One group used the white board to demonstrate the development of their argument and point to tensions in policy for participants in disabled arts. Another group produced an annotated handout that was explicitly referred to in their presentation – and would make a great poster. All groups took time to note down questions during the Q&A session to give themselves time to think through their answers. All groups introduced themselves and gave their presentations a title. Self-assessment identified specific points/aspects that ‘went well’ and demonstrated some progress towards improved confidence:
[a takeaway pop-up book-style handout with room for listeners to add their own notes, questions or comments]
Overall, creating a ‘mock’ process like this one might seem like a lot of work. It wasn’t. Adapting the grids to the module and its specific presentation assessment was a quick process: the presentation grids are quick and easy to read and provide a fast and clear overall view of a group’s presentation. Typing up the general feedback after the mock session took about five minutes, as I had jotted notes down in the moment and relayed them verbally to the students earlier in the day. Writing the individual group feedback took a little longer, but incorporating peer and self-assessment into the process made identifying points of focus in this feedback more efficient. I didn’t have to do any additional content prep for either the mock or final presentation sessions: the students provided the knowledge content and material to work with. Framing the first session as a ‘mock’ presentation gave the exercise gravitas and a status that linked directly to an activity ‘that would count’ later on in the semester. Overall, not only did practising the assessment better prepare the students to produce evidenced, well-timed, imaginative, effective and competent presentations in the final week of the semester; I think it also put the students in a better position from which to constructively critique their own and each other’s work.
I shall certainly be using these tools (and tools like them) in the future, and I may adapt the assessment grids for different types of projects. In addition I’m going to continue to pay attention to the ways in which I use the language of assessment criteria in class to respond to students’ work and in-class discussions. If as an assessor I’m looking for ‘concise’ writing/speaking and ‘good quality and independent research’ then I need to flag to students when they have demonstrated this kind of labour, rather than stopping my response at ‘brilliant’ or ‘fantastic’.
I would love to hear your feedback and any tips or resources you use to integrate assessments into the delivery of your courses/modules.
CHARLOTTE BELL is in the final stages of her PhD in the Drama Department, Queen Mary University of London, where she has also taught. Her research explores the cultural economics of site-specific art and performance in and about social housing estates. Her work has been published in Wasafiri, Contemporary Theatre Review and New Theatre Quarterly. In 2013 she won the TaPRA Postgraduate Essay Prize. She is on the Postgraduate Committee for TaPRA, and from 2013-14 she was an Advanced Skills Tutor with The Brilliant Club, London, UK. This September she starts as an English teacher in a state comprehensive secondary school in Birmingham. Visit Charlotte online here: https://qmul.academia.edu/CharlotteBell.