By Ruchi Sharma and Cian O’Donovan
For the second year running, students’ A-levels have been graded by their own teachers. Covid-19 has caused disruption throughout the academic year, leading to cancelled exams and forcing changes to how A-levels have been assessed. After last year’s algorithm controversy, the regulator Ofqual announced new plans, yet concerns about extremely high grade inflation persist. But covid-19-related grade inflation is only part of the story. A close look at the role data plays in our school exam system reveals longer-term issues of fairness and inequality in education.
Grade inflation is an inexact description of what happens when students attain higher grades from one year to the next. This may be because students or teachers are getting better, because exams are getting easier, or because the covid-19 shock to the education system produced unforeseeable and uncontrollable consequences. In reality there is no way to accurately pinpoint reasons for what is a deeply ambiguous situation. Nevertheless, shifting attitudes to grade inflation are a social and political issue that has influenced education policy and A-level outcomes in recent decades.
Grade inflation is a problem in three senses. First, because changes to A-level grading can introduce new inequalities for students. For instance, pupils with graduate parents gained an unfair advantage from the way A-levels were graded last year. The point here is that changes to how complex systems like A-level assessment work can create unforeseen consequences that impact some groups more than others.
Second, grade inflation is a problem for universities and employers who rely on A-levels to discriminate between and rank students. The pandemic has seen increased demand for higher education (HE) over internships and jobs. Like last year, 2021 will see a record number of students getting their first-choice course. To meet the increased HE demand, some universities are concerned that they will have to bear the cost of grade inflation by accepting more domestic students, missing out on higher fees from overseas students.
For employers and some universities there is the added risk this year that lax grading procedures will mask learning losses caused by covid-19 disruption, leaving some students without the skills necessary for higher education or work.
Third, grade inflation is a political problem for the Department for Education, which has to balance calls for maintaining standards in education over time with pressures to ensure every student is given an equal chance to succeed.
What is unclear is who will carry the burden of these risks: universities, which may have to commit extra learning resources to students; future employers; or students themselves.
Grade inflation is usually controlled through standardising approaches during assessment and grading, but these have been interrupted by covid-19. This summer, A-levels have been awarded through Teacher Assessed Grades. This means that teachers are responsible for grading their own students based on submissions students made during the disrupted year of learning. To ensure that a chemistry A* awarded to a student in Huddersfield is equivalent to an A* handed out in Birmingham, each exam centre – the schools, academies and colleges where exams usually take place – has submitted its own grading plan to exam boards for checking.
Due to disruption this year, however, teachers have been told to assess only material they have actually taught. Because teaching differs from school to school, the grades of students from different schools will reflect different parts of the syllabus. Consequently, the learning achievements of students awarded similar grades will not be commensurate either with those of students from other schools, or with those of students attaining similar grades in recent years. Because not all students are affected equally, there is an issue of fairness at stake.
One way Ofqual ensures the exam system is fair is through processes of standardisation – essentially shifting the boundaries at which grades are awarded to adjust for changes in the syllabus, exam conditions or cohort characteristics from year to year.
When setting grade boundaries, Ofqual considers fairness in two senses. First, intergenerationally: if the general ability of a cohort is judged to be similar to historic norms, then Ofqual says their assessed outcomes should be roughly the same. Second, geographically: assessment should be commensurable across all the exam boards and centres under the regulator’s remit.
To determine grade boundaries whilst maintaining exam standards in normal years, Ofqual uses statistical predictions for a cohort based on previous cohorts’ grades, along with the expertise of senior examiners. In addition, when A-levels undergo structural changes it employs a principle of comparable outcomes in order to ensure cohorts who are in the lead year for these changes are not disadvantaged. This maintains standards, such that a student who achieved an ‘A’ grade in one year would, if they were to sit the exam in another year, achieve the same grade.
This also points to long-term and systemic issues in school exam systems. The failure of the A-level algorithm in 2020 highlights the need for a more transparent, accountable and inclusive process in the deployment of algorithms. This is right. But it also points to the need for a more transparent and trustworthy grading system for school exams and university admissions.
Our forthcoming Rapid Ethics Review of data use in A-level assessment shows that standardisation and grade inflation are representative of deeper, more systemic issues. In any given year, the adjustment of grades in line with standardising models will disadvantage individuals who outperform the national cohort average. For instance, the process of setting grade boundaries involves converting marks into grades. For this, the marks of any cohort are compared to previous years’ cohorts, along with the cohort’s prior attainment at earlier levels of education. This gives an estimate of the expected performance of the cohort under evaluation. The grade boundaries are then assigned such that the distribution of grades for the cohort is similar to that of previous years. This practice of grading risks unfairly diminishing the grades of individuals who outperform the national cohort.
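To make the mechanism concrete, the boundary-setting logic described above can be sketched as a simple quantile-matching procedure. This is an illustrative simplification, not Ofqual’s actual model: the function names, the example marks and the historical grade shares below are all invented for demonstration. The idea is that boundaries are placed so the share of the current cohort receiving each grade matches the historical distribution, regardless of how well the cohort actually performed.

```python
from typing import Dict, List


def comparable_outcomes_boundaries(
    marks: List[int],
    grades: List[str],
    historical_shares: List[float],
) -> Dict[str, int]:
    """Place grade boundaries so the current cohort's grade distribution
    mirrors historical shares (a toy version of 'comparable outcomes').

    `grades` runs best-first (e.g. A, B, C...); `historical_shares` gives
    the proportion of students historically awarded each grade.
    Returns the lowest mark that earns each grade.
    """
    marks_sorted = sorted(marks, reverse=True)  # best marks first
    n = len(marks_sorted)
    boundaries: Dict[str, int] = {}
    cumulative = 0.0
    for grade, share in zip(grades, historical_shares):
        cumulative += share
        # Index of the last student inside this grade's cumulative share.
        idx = min(int(round(cumulative * n)) - 1, n - 1)
        boundaries[grade] = marks_sorted[idx]
    return boundaries


def grade_for(mark: int, boundaries: Dict[str, int]) -> str:
    """Award the best grade whose boundary the mark meets."""
    for grade, cutoff in boundaries.items():  # insertion order: best first
        if mark >= cutoff:
            return grade
    return "U"  # unclassified


# Hypothetical cohort of 100 students with marks 1..100, graded against
# invented historical shares: 10% A, 20% B, 30% C, 40% D.
cohort = list(range(1, 101))
b = comparable_outcomes_boundaries(cohort, ["A", "B", "C", "D"], [0.1, 0.2, 0.3, 0.4])
print(b)  # boundaries pinned to the distribution, not to absolute attainment
```

Note what the sketch makes visible: because only 10% of the cohort can receive an A, a student scoring 90 misses the top grade here even if, in absolute terms, their work matched last year’s A-grade scripts – the unfairness to outperforming individuals discussed above.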
Statistical modelling also means that some students get higher grades than they deserve. This is unfair, especially when those students attend already-advantaged schools. There are also likely to be longer-term consequences when less competent students receive inflated grades. These issues raise further concerns about deep-seated unfairness in the exam system. Following two years of disruption and ad hoc adjustment by Ofqual and the Department for Education, now is the time for considered reflection and discussion about what comes next. Exam assessment is one area where returning to normal next year should not be an option.
Ruchi Sharma is a student at UCL’s Department of Science and Technology Studies working on the Accelerator’s Data use workstream as part of the STS Summer Intern Programme. Twitter: @ruchisharma1108