In my series of posts on why universities became financially reliant on international students I have, to date, focused on domestic factors. Research funding policy changes are the most important. Universities needed new discretionary revenue to finance government-supported research projects, and to pay the salaries of staff with teaching and research roles.
But universities did not need a nearly 500 per cent real increase in international student fee revenue since 2000 to fill these budgetary gaps.
Suppose annual Commonwealth research spending had been 50 per cent higher across the last few decades, all of it paid through block grants rather than generating additional costs via competitive grants. Up until the year 2000, as the chart below shows, a 50 per cent increase in public funding would have covered all research spending. But in 2018, Commonwealth funding 50 per cent higher than it actually was would still have left over 40 per cent of research spending unfunded (although there is about $1.9 billion in non-Commonwealth research income).
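The counterfactual arithmetic here can be sketched in a few lines. The dollar figures below are hypothetical placeholders chosen only to illustrate the calculation, not the actual ABS or Department numbers behind the chart:

```python
# Sketch of the counterfactual in the text: how much research spending would
# remain unfunded if Commonwealth funding were 50% higher? The figures used
# below are assumed for illustration, not real 2018 data.

def unfunded_share(research_spending, commonwealth_funding, uplift=0.5):
    """Share of total research spending left unfunded if Commonwealth
    research funding were `uplift` (e.g. 50%) higher, all as block grants."""
    boosted = commonwealth_funding * (1 + uplift)
    gap = max(research_spending - boosted, 0.0)
    return gap / research_spending

# Hypothetical 2018-style numbers, in $ billions (assumptions, not data):
spending = 12.0       # total university research expenditure
commonwealth = 4.0    # Commonwealth research funding

share = unfunded_share(spending, commonwealth)
print(f"Unfunded share with a 50% uplift: {share:.0%}")
```

Even a large percentage uplift to a comparatively small base leaves a big gap, which is the post's point: the scale of research spending growth outran any plausible public funding scenario.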
Profits on international students have been used to help finance a massive increase in university research expenditure this century.* Growth on this scale was something universities chose to do, not a change forced on them by government policy.
To explain the pre-2020 scale of international student revenue, global factors need to be considered. Academics and universities have a strong intrinsic interest in research, and so part of the story of increasing research expenditure is opportunity. On UNESCO figures, the number of internationally mobile tertiary students (not just higher education) increased from 2.2 million at the start of the century to 5.3 million in 2017.
In Australia’s two most important markets, India and China, the growth was even greater: more than fourfold and fivefold respectively, as the chart below shows. Australia had been an early mover in the commercial international education market, and university leaders realised that capturing even a moderate percentage of this growth was worth billions of dollars.
But another global development turned this temptation into a near imperative: the rise of university rankings. Universities have long been places of unusually high status anxiety, but the establishment of the Academic Ranking of World Universities in 2003 (often called the Shanghai Jiao Tong rankings), and the Times Higher Education Rankings in 2004, put brutally (if spuriously) precise numbers on where each university stood. Other rankings followed.
The methodological critiques soon piled up, but they made no difference. By 2005, the University of Sydney had announced ranking aspirations. By 2007 the University of Melbourne’s annual report included information on performance against its target rankings.
Target rankings are now common. The University of Sydney wants to be first in Australia in the best-known rankings. The University of Melbourne wants to be consistently in the top 40 of the ARWU and the top 25 of the THE rankings. UNSW has developed a composite index of different rankings, and aims to be in the top 50. The University of Queensland wants to be ‘well inside’ the top 75.
Rankings subdivided into regions, fields of research and university ages created more potential winners and losers. By the 2010s, Australian universities with no prospect of reaching the highest ranks were showing an interest.
The trouble is that many universities around the world hold similar ambitions. In her book on the global influence of university rankings, Ellen Hazelkorn found that most university leaders she surveyed were unhappy with their rankings. Even if they did not personally like rankings, they could not easily ignore them. As Hazelkorn’s book argues, the decisions of students, donors, governments and prospective academic staff can be influenced by rankings. In her survey, seven out of ten university leaders had taken action to improve their university’s ranking.
This competition for an inherently limited number of top ranks means that just improving research quality and quantity is not enough. Universities must improve by more than their competitors. Rapid growth is necessary to get ahead. This is one reason why the Group of Eight universities, which have the most ambitious research targets, ended up highly exposed to the international student market.
As the chart below shows, the pre-2020 international student boom was largely a Group of Eight and private sector affair (although many non-university higher education provider enrolments are in pathway colleges leading to a range of public universities).
For years, the Group of Eight universities were on a virtuous cycle. International student surveys show Chinese students are particularly motivated by rankings; their willingness to pay high fees helped universities increase their research and boost their rankings, which in turn attracted more Chinese students. The strategy succeeded in its own terms. Two Australian universities made the ARWU’s top 100 when it began in 2003. By 2019 seven were in the top 100.
The risk now is that the virtuous cycle turns vicious; that fewer Chinese students means less research, which means lower rankings, which means fewer Chinese students. But we will have to see how this turns out, as universities in competitor countries are also taking a big COVID-19 hit. Rankings are based on relative, not absolute, research performance.
International student fees were the drug that fuelled a rankings addiction as well as the funding that filled resource gaps. The withdrawal symptoms are very painful. But now might be the (forced) time to stand back and re-think the dynamics that led to our current situation.
Rankings have not had a wholly malign influence. They helped convert profits from the (then) booming Chinese economy into research that will potentially benefit a wide range of people. But arguably rankings also distort research priorities in favour of fields that contribute to the metrics used, which are generally biased towards the sciences over the social sciences and humanities. Australian topics are disadvantaged, since research on Australia is cited less than topics of global significance or concerning countries with larger populations.
Research excellence can be measured against a standard, as our domestic ERA exercise tries to do, rather than placing exaggerated significance on the often small relative differences that drive the rankings. As the chart below shows, even in the top 100 there are a few exceptional universities with high absolute scores, and then a long tail of institutions, including all Australian top 100 universities, with minor differences in their scores.
While rankings continue to get publicity, it will be hard for universities to ignore them completely. But universities with falling ranks might lose their enthusiasm for giving rankings prominence in their marketing, which only encourages students to attach unwarranted importance to rankings.
And if rankings become less significant, universities will not feel the need to indefinitely increase their international student numbers. Yes, it is good to have international students – not just their money, but also their contribution to university and Australian life, and the value of long-term personal connections between Australia and countries in our region.
But even before COVID-19 arrived, university international student practices were attracting plenty of concern and criticism on both financial risk and academic (English language standards, soft marking, cheating, influence of the Chinese Communist Party) grounds.
Nobody wanted university priorities to be re-oriented in the rapid and destructive way that is now happening. But in the medium to long term, less of an emphasis on global rankings, and some moderation in international student numbers, may not be all bad.
* Despite some concerns about the detail of ABS research expenditure estimates, I am confident that the massive upward trend in university research spending shown in the first chart above is broadly right.
4 thoughts on “Why did universities become reliant on international students? Part 5: The rise of research rankings”
Thanx for this.
What is the source for the university categories? I couldn’t find them in the Department’s ucube.
The underlying numbers are from uCube, the categories are my own – I appreciate that the dates aren’t completely right for Menzies, but I was trying to go for broad similarities.
Group of Eight
University of Queensland
University of Sydney
University of Melbourne
University of Adelaide
University of Western Australia
University of South Australia
University of Newcastle
University of Wollongong
La Trobe University
Charles Sturt University
Southern Cross University
University of New England
Central Queensland University
James Cook University
University of Southern Queensland
University of the Sunshine Coast
Charles Darwin University
University of Tasmania
Metro Dawkins universities
Western Sydney University
Edith Cowan University
University of Canberra
Australian Catholic University
I could not find ANU in the Group of 8, tho for the purposes of analysing research resources it may be better to exclude it, especially since the Government stopped publishing separately its non competitive block institutes research grant to the ANU.
I would include Macquarie amongst the Menzies universities.
For some purposes of analysis I include Bond, Divinity, and Notre Dame Australia amongst your category of Metro Dawkins, tho I call them ‘New Gen’ universities.
Otherwise, your categories are the same as mine with different labels:
Moodie, Gavin (2012)  Types of Australian Universities.
A bit embarrassing to forget to mention my own uni. ANU and Macquarie are in the categories suggested. Merit in putting Notre Dame in with Metro Dawkins for this analysis. The other private HEPs are a very diverse bunch. For a bigger project than a blog I would try to give them categories such as pathway, other for-profit, and not for profit.