This year I was staying in a North Devon village on GCSE results day. I had no internet access. By descending a ramp to the shoreline and huddling against a dripping wall draped with clumps of seaweed, I managed to pick up a 3G signal and download that fateful list of names and accompanying letters.
As always, I was met with a little bit of pleasant surprise and a little bit of disappointment. There were some expected patterns and some unexpected ones too. From the word go, I began instinctively to turn those abstract As, Bs and Cs into comprehensible human narratives. He did well because… She must have struggled in the exam because…
I have been mulling over these stories we tell ourselves: how, retrospectively, through the seemingly solid lens of hindsight, we attempt to unearth the causal links that bring coherence to exam grades.
I had three GCSE classes this year – two Y11 groups and a Y10 English literature group. By and large, things went well for them. My Y11 top-set English Language results, however, did leave me puzzled. (About a third of our Y11 – around 120 students – are in one of four top sets. Grades should range from B to A*.) I was not quite expecting the range I encountered:
5 x A*
6 x A
10 x B
5 x C
1 x D
The AQA English Language GCSE is now made up of 40% controlled assessment and 60% final written exam. With the demise of speaking and listening, which once made up 20%, more rides on the exam itself than ever before. As we know, English results at the C/D borderline have also dropped this year. Naturally, the national picture needs to be taken into consideration, but I do feel that it is unwise to pass the buck completely.
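To make the weighting concrete, here is a minimal sketch of how two component marks combine under a 40/60 split. The 0–100 scale and the sample marks are assumptions for illustration only; AQA's actual uniform-mark scheme works differently.

```python
# Illustrative weighted-mark calculation (NOT AQA's actual uniform-mark scheme).
# Weights from the text: 40% controlled assessment, 60% final written exam.

def combined_mark(controlled_assessment, exam, ca_weight=0.4, exam_weight=0.6):
    """Combine two component marks (assumed 0-100 scale) into one overall mark."""
    return ca_weight * controlled_assessment + exam_weight * exam

# A student strong in coursework but weaker on the day:
print(combined_mark(80, 60))  # 0.4*80 + 0.6*60 = 68.0
```

With the exam now worth 60% rather than sharing weight with speaking and listening, a weak exam performance drags the overall mark down harder than it used to.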
So, I started conjuring up some narratives for my own class…
• The number of A*s suggests that I have a particular knack for teaching high-flying students.
• I should have taken more responsibility for the boy who got a D. Perhaps the fact that he was also being privately tutored meant that I took my eye off the ball in lessons.
• The number of Cs in the class suggests that I focused too much on the high-fliers. Did I attend to these students’ needs as well as I might have?
But these tales did not quite cut the mustard. I tried these instead:
• Too few students achieved an A or above. I failed too many who had the potential to do so much better.
• There was nothing more I could have done for the boy who got a D. I must learn to accept that sometimes under-achievement just happens.
• I was just unfortunate with the number of Cs. In small samples, like a class of 27, freak results are more common.
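That last point about small samples can be illustrated with a quick simulation. The grade probabilities below are made-up assumptions, not my real mark data; the point is simply how much the number of Cs in a class of 27 can swing by chance alone.

```python
# Illustrative simulation (made-up probabilities, not real mark data):
# how much the count of C grades in a 27-student class varies purely by chance.
import random

random.seed(0)
GRADES = ["A*", "A", "B", "C", "D"]
PROBS = [0.15, 0.25, 0.35, 0.20, 0.05]  # assumed underlying chance per student

def count_cs(class_size=27):
    """Draw one simulated class and count how many students land on a C."""
    grades = random.choices(GRADES, weights=PROBS, k=class_size)
    return grades.count("C")

counts = [count_cs() for _ in range(1000)]
print(min(counts), max(counts))  # the spread of C-counts across 1000 classes
```

Even with identical underlying odds for every student, different simulated classes produce strikingly different numbers of Cs, which is exactly why a single year's tally is a shaky basis for a verdict on the teaching.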
But was it all about me? And so I got to thinking about the wider social and individual causes:
• My A* students were those who always exhibited perseverance and hard work. All were from middle-class backgrounds.
• The boy who got a D was undergoing a number of complex personal issues. No teacher could have done anything about it.
• Most of my C students were those who lacked confidence in written exams. In the main, this was beyond my control.
When I got home from Devon, I examined the question breakdown data from the exam board:
• In the exam, eight students achieved A*. The poor quality of the controlled assessments (40% of the grade) I oversaw in class prevented three students from achieving A*.
• Looking at his scores, my D student – who was actually quite capable – wrote next to nothing for each question.
• There was no single question that scuppered my C students’ performances. They got Cs and not Bs in the exam for a range of different reasons with no consistent pattern.
For any set of exam results, answers come in countless intertwining narratives. Pinning down concrete reasons is hard. As individual teachers we must not shy away from our personal responsibility, yet we must also remember that we are in thrall to the national picture, our school contexts, the decade of prior learning our students bring with them, the social environments our students are raised in, their individual characteristics and, of course, just plain old good and bad luck. If, with similar grouping and contexts, our results are significantly better or worse than those of our colleagues – or other very significant trends are apparent – then we can make cautious inferences about the quality of our teaching. If this is not the case, then we will need to accept that we cannot always be sure.
Results, therefore, give us a flavour of the success or failure of our teaching practice but not the full picture. Close analysis of letters and numbers can steer us away from the truth as well as lead us closer to it. The fact that my students averaged only 7.56 on question 4 of their English exam, for instance, is useful only if I know why this is the case and how to make it better next time round. Should more time be set aside for practice? Is the cause a lack of knowledge about language rather than a lack of knowledge about how to tackle the question effectively?
I worry that the brave new world of Performance Related Pay will lead to the oversimplification of these narratives as school leaders on a budget look to withhold pay increases and classroom teachers are forced to justify themselves through increasingly specious arguments. I believe that exam results, whatever they are, should spur the profession towards betterment, not hold it back.
Every year, after all the navel-gazing, my exam results boil down to two simple decisions:
• Get a little bit better at making sure that no-one in the class is ever left behind.
• Get a little bit better at teaching every topic I cover.
I’m sure I’ll be saying the same thing next year!