The complicated university teaching-research relationship

In The Age this morning, Don Aitken argues that university teaching has come off second best. ‘Today research, and only research, is really important,’ he says.

I certainly think that university teaching needs improving. But the story is not one of the decline of teaching and the rise of research, with one improving at the clear expense of the other.

Up until the Dawkins reforms of the late 1980s and early 1990s, more than half of higher education students attended colleges of advanced education or institutes of technology. Their mission was teaching rather than research, although some of their academics were doing research. The universities were teaching-research institutions, but with weaker research pressures than today. Most research funding was delivered as a block grant that was (unlike today) not linked to indicators of research performance.

If the teaching-focused colleges of advanced education and institutes of technology were good at teaching, we would expect their positive legacy to show when the first national student survey (the course experience questionnaire) was conducted in the mid-1990s. In reality, the CEQ showed generally dismal results. Across the country, the average positive response to six teaching-related questions was around one-third.

As the government started emphasising research performance in its funding policies, the apparent incentive was to focus on it over teaching. But this is not showing in the trend data (the figure below). The time series was upset in 2010 in ways that exaggerate satisfaction compared to the past, but the steady upward trend in satisfaction cannot be disputed. (Some theories as to why are here.)

[Figure: Good Teaching Scale (GTS) satisfaction over time]

A consistently calculated time series on research productivity only goes back to 1997. It shows steadily increasing productivity up to 2005, after which it stabilises at an average of 2.1-2.2 publications per full-time researcher per year (counting teaching-research staff as 0.4 full-time equivalent in research, in line with common time-use expectations).
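The FTE weighting behind this productivity measure can be sketched in a few lines. This is a minimal illustration of the calculation, not actual sector data; the staff and publication counts below are hypothetical.

```python
# A minimal sketch of the publications-per-FTE-researcher measure.
# Teaching-and-research (T&R) staff count as 0.4 FTE in research,
# in line with common time-use expectations. All numbers are hypothetical.

def publications_per_fte(publications, research_only_staff, tr_staff,
                         tr_research_weight=0.4):
    """Publications per full-time-equivalent researcher."""
    research_fte = research_only_staff + tr_research_weight * tr_staff
    return publications / research_fte

# Hypothetical example: 10,000 publications, 2,000 research-only staff,
# 7,000 T&R staff -> 10,000 / (2,000 + 0.4 * 7,000) = 10,000 / 4,800
print(round(publications_per_fte(10_000, 2_000, 7_000), 2))  # 2.08
```

With these illustrative inputs the result lands near the 2.1-2.2 range mentioned above, but the point is the weighting method, not the specific figures.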

[Figure: Publications per academic]

Rather than research rising at the expense of teaching, on these indicators both rose together until the middle of last decade. In research, the focus has since shifted to quality. It is still too early to put numbers on this, but alongside ongoing increases in satisfaction with teaching, universities are culling weaker researchers and concentrating their investment in areas of relative research strength.

Not only is it difficult to find evidence over time of research coming at the expense of teaching; our recent Grattan research project also failed to find much evidence that low-research departments are better at teaching than high-research departments, as measured by recent student surveys.

My view is that at the dawn of the Dawkins era universities were under-performing institutions, across both teaching and research. Research was further down the path of professionalisation and favoured in academic culture. But both teaching and research needed to improve a lot, and that is what we have seen.

Just removing research and making some universities ‘teaching only’ would not on its own make things better. Improved teaching needs concerted effort, whether or not it occurs in an institution that also produces research.

  1. “universities were under-performing institutions”

    Probably a lot of Australian institutions were under-performing then by today’s standards. But given the falling relative wages in higher education, it’s stark how much productivity could be improved by introducing a few external (not even really market) signals. Governments’ exploitation of the apparent willingness of bright people to work hard for little money in exchange for relative autonomy and pursuit of interests has produced terrific gains for Australian students. Cutting subsidies and freeing up prices now would allow more of those gains to be captured by taxpayers.

  2. I have also been looking at the publications per academic data. The way you weight the FTE researchers (0.4 for T&R) is accurate for a sectoral-level analysis, but it is also worth noting that efficiency (publications per unit of human resources dedicated to research) has increased even more dramatically.

    A majority of research-only staff are employed at Level A and below, whereas the majority of T&R staff are at Level C and above. The dramatic expansion of low-ranked research-only positions has coincided with steady increases in publications per FTE researcher (as you show). Weighting T&R staff at 0.4 therefore understates how much more publication output universities are getting for their HR costs.

    A minor technical recommendation would be to use a lagged measure of research staff. It is generally accepted that the research process covers roughly a three-year period: publications in 2010 are a product of research conducted in 2008-2010. I therefore use a three-year average as the denominator (e.g. publications in 2010 divided by average research staff over 2008-2010).

  3. Peter – Yes, a lagged measure for staff would be more accurate, though I suspect it would not dramatically change the pattern of the aggregate data.

    There are also issues with the institution-level data that I would like to discuss offline.
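The lagged denominator suggested in the comments above can be sketched as follows. This is an illustration of the averaging method only; the yearly publication and staffing figures are hypothetical.

```python
# Sketch of the lagged productivity measure: publications in year t divided
# by average research-staff FTE over years t-2 to t. Figures are hypothetical.

def lagged_productivity(publications_by_year, staff_fte_by_year, year):
    """Publications in `year` per average FTE over the three years ending in `year`."""
    window = [year - 2, year - 1, year]
    avg_fte = sum(staff_fte_by_year[y] for y in window) / 3
    return publications_by_year[year] / avg_fte

pubs = {2008: 9_000, 2009: 9_500, 2010: 10_000}
fte = {2008: 4_400, 2009: 4_600, 2010: 4_800}

# Average FTE over 2008-2010 = (4,400 + 4,600 + 4,800) / 3 = 4,600
print(round(lagged_productivity(pubs, fte, 2010), 2))  # 2.17
```

Averaging the denominator over the production window smooths out year-to-year staffing changes, which is why, as noted above, it would likely not dramatically change the aggregate pattern.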
