Over the past couple of decades, countries have designed policies to improve their research universities and make them more “world-class”, largely on the assumption that this will pay some kind of economic dividend. Many of these policies involved what became known as “excellence initiatives” – programs that concentrated spending on a restricted number of institutions, on the theory that the extra resources would propel those universities into the global elite. This raises the question: do they work?
(Canada, which has largely ignored most of the world-class university debates, has not knowingly framed policies in this way; nevertheless, policies such as the Networks of Centres of Excellence, Canada Research Chairs, Canada Excellence Research Chairs, the Canada First Research Excellence Fund, etc., all get categorized in the international literature as “excellence initiatives”.)
The two global experts on this subject are Jamil Salmi and Isak Froumin (see here, page 249 onwards). Both have noted that there is very little data with which to answer this question, and that remarkably few countries have conducted rigorous assessments of the outcomes of their programs. Indeed, in the one instance in which a country did so (Germany), the 10-year outcomes report, known as the Imboden Report, was bizarre: the program was declared a success and continued funding was requested on the basis of literally no empirical evidence whatsoever. In fact, evidence from the Academic Ranking of World Universities suggests that the research performance of German universities declined somewhat over the period in question, and a later German report questioned whether the program had created the kind of institutional mission differentiation it was designed to create. Similarly, Japan has had several rounds of excellence programs, each one designed slightly differently from the last (it’s about Centres of Excellence! No, it’s about graduate studies! No, it’s about internationalization!), which have handed out hundreds of millions of dollars to institutions with no apparent effect on research output.
Even where countries with excellence initiatives have seen improved outcomes, there is the problem of proving causation. There certainly are countries which launched excellence initiatives and have seen their institutions become more research-intensive relative to global standards, including Australia, Taiwan and Singapore. However, in each case there are reasons to think that things other than the excellence initiative were at play. Yes, Australia had the ARC Centres of Excellence, but they were a fraction of 1% of total sector expenditures and are very unlikely to explain the huge jump in research production in Australia (likelier explanations: changed management practices which have placed an enormous premium on research production). Singapore had a Research Centres of Excellence Project (which in practice was only open to two universities – the National University of Singapore and Nanyang Technological University), but the $750 million spent on this over five years needs to be put in the context of a public research budget that has expanded nine-fold since the mid-90s – the RCEP never amounted to more than 6% of total research spending. And while Taiwan’s world-class universities project did see a significant boost in funding to the top 20% or so of institutions, it turns out that the increase in STEM research output at those institutions was no larger than the increase at institutions that weren’t selected for the program.
It seems to me that one of the very few countries where an excellence initiative has had a significant effect is China, with its 985 program (which I described a bit here) and, more recently, its Double World-Class Project. Yes, the rise of China’s top universities obviously needs to be seen in the context of a much larger national rise. But the 985 program provided a lot of extra money over a very long period of time (two decades now), and it came with protection against over-expansion (not available to other institutions), increases in graduate-student intensity, and changes to specific rules around graduation requirements (no PhD awarded unless the candidate had two publications in ISI-indexed journals).
So, basically, a lot of money invested consistently over a long period of time, combined with some changes in incentives to both institutions and individuals, seems to make a big difference. And if you want some proof of that, consider Iran, which did all of those things without ever claiming its initiative was “excellence”-related. The country jumped from 56th to 22nd globally on Scopus citation counts between 1996 and 2014, and annual publications in Medline rose sharply between 2000 and 2014, with high levels of international collaboration. Now, to be fair, this improvement was widely spread across the country rather than focussed on a few specific “world-class” institutions (though the University of Tehran did manage to break into the world top-400), and some of the specific fields in which Iran excelled over this period (e.g. nuclear physics, nuclear chemistry, nuclear medicine) probably owed their advances to factors other than university funding. But you get the idea.
In short, what seems likely is that most excellence initiatives are too small, too diffuse or too time-limited to have much lasting effect. That’s not to say they don’t increase research output – they almost certainly do. But changing institutional character/behaviour takes decades. And while most excellence programs provide $5-10 million per year to participating universities, the resource gap between the world’s top institutions and the merely good ones is in the hundreds of millions, if not billions, of dollars. They help, but not enough to reach their avowed goals.
Alex… re your comment “the ARC Centres of Excellence…are very unlikely to explain the huge jump in research production in Australia (likelier explanations: changed management practices which have placed an enormous premium on research production)”:
The other driver of research growth in Australia has been growth in international enrollments. This is a risky strategy, since commitments to research professors come with long-term obligations. As noted in the NOUS report on Sustainable Growth in International Higher Education, “the need for universities to continue to support research investment through student revenue will mean that international strategy and growth will become priorities for all universities”. The NOUS report also notes that while some Aussie unis have treated the growth of international enrollments as an opportunity to “turn on the taps”, others have more prudently used the revenue growth to “dig deeper wells”.
All these schemes seem to encourage some very bad trends in academia. I’m not sure, in any case, that one can cultivate curiosity from the top down, though one can certainly encourage those of a meretricious bent to switch to better-funded fields. And do we actually want a system in which a small number of institutions are “world class” and the rest are implicitly stigmatized?
You note the lack of quantifiable data, but it isn’t clear that more data would help. Would it not fall under Goodhart’s law and encourage least publishable units? Wouldn’t it encourage the slavish pursuit of critical fashions?