In his multifaceted critique of higher education in The Australian yesterday, Adam Creighton makes one infrequently made claim: that ‘grade inflation is rife’.
In Australia we often worry about soft marking at the pass-fail point, which Adam also mentions. But grade inflation controversies overseas are about too many students receiving high marks. In England, an increase in the proportion of students receiving first-class degrees from 16% in 2010-11 to 29% by 2017-18 has attracted the regulator’s attention. In the United States, critics complain that ‘A’ is the most popular grade.
Australian universities are not required to report student marks, so we cannot conclusively confirm or reject the grade inflation hypothesis. But the figures we do have do not look overly skewed to the top.
In a paper published earlier this year on the relationship between ATAR and socioeconomic status, the NSW Universities Admissions Centre (UAC) reported first-year university grade point averages (GPAs), calculated on a 7-point scale.
The UAC’s main point is that ATAR rather than SES best predicts GPA. But it is striking that even the most able first-year students, coming into universities with ATARs of 90 or above, averaged less than a credit grade.
The Student Experience Survey, which is used to measure student satisfaction with teaching, also asks its respondents to self-report their marks. The survey is a mix of first-year and later-year students.
Unfortunately, marks are not part of the routine experience survey data release, other than for students considering leaving (unsurprisingly, the lower the self-reported mark, the higher the chance the student is considering going). But when I was at the Grattan Institute we had the underlying data for a while and produced the chart below.
As these are self-reports, I would not be surprised if there was some upward bias in the numbers. Just 3.3 per cent of domestic students and 1.6 per cent of international students report average marks above 90 per cent. Another 22.4 per cent of domestic students and 12.2 per cent of international students report average marks in the 80s.
A quarter or so of domestic students reporting average marks above 80 per cent may look high, but overall the distribution looks fairly normal, with most students in the middle. It also suggests that any soft marking for international students is at the pass-fail point, not in handing out too many high distinctions to keep paying customers happy.
Why would Australia not obviously be following US or English marking trends? Popular causal theories in those countries, such as more market competition in higher education creating incentives to please students, apply equally or more to Australia.
It’s a topic that needs more research, but I think marking on a curve might be a factor. This means that marks are expected to fit a pre-determined statistical pattern.
Marking on a curve could help explain why some academics believe that there are too many passes: the model assumes that only a certain percentage will fail, and does not sufficiently take into account a longer tail of weak students as ATARs have declined and international student numbers have increased.
But marking on a curve also limits high distinctions, because the model assumes a small proportion of high achievers.
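The mechanics can be sketched in a few lines of Python. This is a hypothetical illustration, not any university's actual grading policy: the `curve_grades` function and the grade shares below are my own assumptions, chosen to show that a fixed curve produces the same grade distribution for a weak cohort and a strong one, capping both fails and high distinctions regardless of ability.

```python
from collections import Counter

def curve_grades(raw_scores, shares=None):
    """Assign grades by rank so that fixed shares of the cohort
    receive each grade, whatever the raw scores were."""
    if shares is None:
        # Assumed curve (illustrative only): 5% fail, 45% pass,
        # 30% credit, 15% distinction, 5% high distinction.
        shares = [("Fail", 0.05), ("Pass", 0.45), ("Credit", 0.30),
                  ("Distinction", 0.15), ("High Distinction", 0.05)]
    n = len(raw_scores)
    # Rank students from lowest to highest raw score.
    ranked = sorted(range(n), key=lambda i: raw_scores[i])
    grades = [None] * n
    start, cum = 0, 0.0
    for grade, share in shares:
        cum += share
        end = round(cum * n)  # cumulative cut-off in the ranking
        for i in ranked[start:end]:
            grades[i] = grade
        start = end
    return grades

# A weak cohort and a much stronger cohort (every score 30 marks
# higher) end up with identical grade distributions under the curve.
weak = [30, 35, 38, 40, 41, 42, 44, 45, 47, 48,
        49, 50, 51, 52, 53, 55, 56, 58, 60, 62]
strong = [s + 30 for s in weak]
print(Counter(curve_grades(weak)) == Counter(curve_grades(strong)))  # True
```

Because grades depend only on rank within the cohort, the model hands out exactly one high distinction in a class of twenty here, no matter how good the twenty students are.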
University marks and grades may also be a case in which more transparency is a bad thing. If students know that high marks are harder to get at university X than Y, that may create pressure on university X to mark more generously. Otherwise university X graduates could wrongly look less academically impressive to potential employers than university Y graduates.
In England and the United States, well-publicised information on patterns of grades may have helped trigger grade inflation by revealing the soft-marking universities. Without this data in Australia, patterns of marks and grades look much closer to what we might expect, given the range of abilities and differences in effort across the student population.