Monthly Archives: July 2013

The complicated university teaching-research relationship

In The Age this morning, Don Aitkin argues that university teaching has come off second best. ‘Today research, and only research, is really important,’ he says.

I certainly think that university teaching needs improving. But the story is not one of the decline of teaching and the rise of research, with one improving at the clear expense of the other.

Until the Dawkins reforms of the late 1980s and early 1990s, more than half of higher education students attended colleges of advanced education or institutes of technology. Their mission was teaching rather than research, although some of their academics were doing research. The universities were teaching-research institutions, but with weaker research pressures than today. Most research funding was delivered as a block grant that was (unlike today) not linked to indicators of research performance.

If the teaching-focused colleges of advanced education and institutes of technology were good at teaching, we would expect their positive legacy to show when the first national student survey (the course experience questionnaire) was conducted in the mid-1990s. In reality, the CEQ showed generally dismal results. Across the country, the average positive response to six teaching-related questions was around one-third.

As the government started emphasising research performance in its funding policies, the apparent incentive was to focus on it over teaching. But this does not show in the trend data (the figure below). The time series was upset in 2010 in ways that exaggerate satisfaction compared to the past, but the steady upward trend in satisfaction cannot be disputed. (Some theories as to why are here.)


A consistently calculated time series on research productivity only goes back to 1997. It shows steadily increasing productivity until 2005, after which it stabilises at an average of 2.1-2.2 publications per full-time researcher per year (counting teaching-research staff as 0.4 full-time equivalent in research, in line with common time use expectations).

Publications per academic
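To make the weighting concrete, here is a minimal sketch of how a publications-per-FTE figure of this kind can be calculated. The staff and publication numbers below are invented for illustration, not actual data, and the function name is my own.

```python
# Hypothetical illustration of an FTE-weighted research productivity
# measure. All input numbers here are invented, not actual data.

def publications_per_fte(publications, research_only_staff,
                         teaching_research_staff, tr_weight=0.4):
    """Publications per full-time-equivalent researcher.

    Teaching-research staff count as `tr_weight` (0.4 by default) of a
    full-time researcher, reflecting common time-use expectations.
    """
    fte = research_only_staff + tr_weight * teaching_research_staff
    return publications / fte

# Example: 1,000 publications from 200 research-only staff and
# 700 teaching-research staff.
rate = publications_per_fte(1000, 200, 700)
print(round(rate, 2))  # 1000 / (200 + 0.4 * 700) = 1000 / 480 ≈ 2.08
```

On these assumed numbers the sector would sit just below the 2.1-2.2 range cited above; the point is only that the 0.4 weighting substantially shrinks the FTE denominator for teaching-research staff.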

Rather than research rising at the expense of teaching, on these indicators they both rose together until the middle of last decade. In research, the focus has shifted to research quality – it’s still too early to put numbers on it, but simultaneously with ongoing increases in satisfaction with teaching, universities are culling weaker researchers and focusing their investment in areas of relative research strength.

Not only is it difficult to find evidence over time of research rising at the expense of teaching; our recent Grattan research project also failed to find much evidence that low-research departments are better at teaching than high-research departments, as measured by recent student surveys.

My view is that at the dawn of the Dawkins era universities were under-performing institutions, across both teaching and research. Research was further down the path of professionalisation and favoured in academic culture. But both teaching and research needed to improve a lot, and that is what we have seen.

Just removing research and making some universities ‘teaching only’ would not on its own make things better. Improved teaching needs concerted effort, whether or not it occurs in an institution that also produces research.

Why is student satisfaction with teaching increasing?

My new Grattan report argues that teaching in Australian universities could be improved. But despite remaining shortcomings, I think significant progress has been made since the 1990s.

We often hear that with higher student-staff ratios Australian academics have less time to spend on students. But in the long-running course experience questionnaire survey it is the time-use questions that have shown the greatest improvement over time.

The figure below shows that the proportion of completing students agreeing that staff ‘put a lot of time into commenting on my work’ and ‘normally gave me helpful feedback on how I was going’ has roughly doubled since 1997.*

Feedback questions, CEQ

I think a major explanation is likely to be technology. The increase last decade matches with the spread of home internet connections. Academic staff became much more accessible via email and learning management systems than they had ever been before, and were also able to efficiently give the same or similar feedback to multiple students.

There were also good improvements (20 percentage points plus) in agreement with propositions such as ‘lecturers were good at explaining things’, ‘teachers motivated me to do my best work’ and ‘staff worked hard to make their subjects interesting’. These are not so obviously technology driven, suggesting that other forces for good teaching were at work.

These might include the spread of subject-level student surveys and their link to promotion, and greater (though still typically very short) training in university teaching.

Whatever the exact causes, these results highlight how increasing funding is not necessarily the key to improved education. Through most of these years, real per student funding for Commonwealth-supported students was declining. How universities organise themselves is the most important factor.

* All five points on the response scale were labelled for the first time in 2010, with points labelled strongly disagree, disagree, neither agree nor disagree, agree and strongly agree. In previous years, only the anchor points of strongly disagree and strongly agree were labelled. This seems to have increased positive responses.

The persistence of health and education students

I recently received some new data on completion and attrition rates by ATAR, a surprisingly under-examined topic in Australian higher education. My Mapping Australian higher education publication summarises research suggesting a weak relationship between ATAR and average marks. However, data on 2005 commencing students shows a quite strong relationship between ATAR and completion – the higher the ATAR, the higher the chance of completion. The whole cohort data is in this article.

We also have the data by field of education. Most disciplines have the same general pattern. But two, health and education, have higher persistence at lower ATARs, as can be seen below.

Health and education: ATAR and completion

The same two broad fields of study also have graduates with high rates of retention in jobs related to their field of study, as seen in the chart below.

Degree job relevance

I’m inclined to think that the main reason is that people who choose these degrees have a relatively high degree of commitment to the end occupation from day one. A colleague notes that this may in part be because students in these fields don’t necessarily have many attractive alternatives. For people with lowish ATARs who don’t want to do voc ed, teaching and nursing have been paths to relatively secure and reasonably paid careers.