Excellence vs Progress

Earlier this week, I was in Moscow at a session talking about (among other things) national excellence programs, making the point that there aren’t really that many examples of successful ones.  One of the university rectors in the audience then asked me the following question (I apologize for paraphrasing a bit here because I don’t remember the exact wording): “Look, the real problem in science is that we are spinning our wheels, not making any great discoveries.  Instead, all we are doing is producing more and more papers of diminishing average value.  By focusing on bibliometric outputs, aren’t excellence programs simply accentuating the problem?”

In some ways it is hard to argue with this point of view.  It is true that the pace of “great discoveries” seems to be slowing down.  It is also true that more and more papers are being produced: overall at the top 700 or so global universities, the average number of papers produced in the period 2012-2016 was over 20% higher than in the previous quinquennium.  I can’t really speak to whether the quality has gone down, though that certainly is widely believed to be the case.

But is this slowdown the fault of excellence-oriented policies?  I suspect this is stretching things for a few reasons. 

First, it’s a particularly university-obsessed view of the world that thinks that the “great advances” happen in university/university-hospital laboratories rather than more application-oriented settings like, say, Menlo Park or Bell Laboratories.  Indeed, the whole notion that “advances” occur as the result of discrete projects resulting in Eureka moments seems dated.  More often than not in the age of advanced science, breakthroughs come through a huge accretion of tiny advances (which may all happen in universities/research laboratories) which get synthesized into usable technological leaps (which mostly happens outside universities).  To the extent this is true, the most you can say about most current excellence programs is that they tend to focus on publication output more than on scientific communication that leads to accretion – except that most scientists will tell you that publication is precisely how scientific communication occurs.  That is to say, the alternatives to more publications as a way to advance science aren’t obvious.

Second, it’s worth going back a bit in history to think about when the big jump in publications actually happened.  “Publish or perish” as a term goes back to around World War II, and certainly at big American research universities current levels of research output are nothing new.  In Canada, big research universities began upping their publication requirements for tenure in the 1990s, mostly in an attempt to emulate the Americans.  This, it should be noted, happened before there was any significant government interest in paying for research.  In fact, in Canada causality here is arguably reversed – it was our research universities, needing research dollars to make them competitive with their American counterparts, who lobbied government for more money, which then came packaged with excellence program features (e.g. the Canada First Research Excellence Fund).   And what was it that pushed institutions to set higher tenure requirements?  Quite simply, it was disciplinary norms, which in North America have always been set in big American research universities.  That is to say, if focus on publication outcomes is the problem, then we’re in a Scream situation: the problem is coming from inside the building.

(To be fair, the story isn’t quite the same in Europe and Asia, where excellence programs were more directly responsible for the change in norms.  I’ll spare you my version of the history, but suffice to say that generally speaking, national governments really liked the idea of bringing some of the economic dynamism of American research universities to their own countries and for a number of reasons both good and bad, came to the conclusion – in part due to really bad advice from their respective academic communities – that the way to measure progress in this direction was through bibliometrics.  The point to keep in mind though is that where in Canada the pressure to imitate American norms came from below, elsewhere it largely came from above). 

All that said, it’s probably fair to conclude that even if national excellence programs aren’t causing an overall slowdown in discovery, it’s not immediately obvious that they are doing much to alleviate the problem either.  And the reason for that is the managerial imperatives behind modern government-funded science.  Briefly, the theory of accountability in government suggests that when the public gives money to someone, the recipients should be able to show tangible results at the end of it.  Nothing wrong with that in theory; the problem of course comes when you bring that logic down to the level of individual research projects: you get paradoxical results.  The entire logic behind public funding of research is that its role is to fund the risky, basic research with uncertain applications that business won’t do itself.  But that’s incompatible with government funding policies, which tend only to fund research projects with short times to completion or which are seen as having high probabilities of success.  Big breakthroughs in science take money, time, patience, and to some extent luck, none of which line up easily with the shorter-term imperatives of most excellence programs.

The question is what to do about it.  These short-termist imperatives aren’t just products of excellence programs; to some degree they pervade all modern research funding programs.  And the alternatives aren’t obvious either.  Some have mooted the idea of a “universal research income,” where every professor gets a certain amount of base funding, but it’s not clear how dispersing the same amount of money more widely actually solves any problem other than that of researchers having to fill in applications.  Fewer, bigger awards for longer periods for researchers with top track records?  The experience with Foundation Grants at CIHR should make people think twice about that solution.

In short, then, there may be a problem here, but it goes a lot deeper than just excellence programs.  There’s a generalized crisis of science in mass higher education systems, and no one has any good solutions.
