HESA

Higher Education Strategy Associates

The Science Policy Review

So, any day now, the report of the Government of Canada’s Science Policy review should be appearing.  What is that, you ask?  Good question.

“Science policy” is one of those tricky terms.  Sometimes it means science as a way of making policy (as when someone claims they want all policy to be “evidence-based”); sometimes it means policy for or about science, and the rules and regulations under which it is funded.  This particular science policy review, chaired by former U of T President David Naylor, is a bit narrower; as the mandate letter and key questions show, it is fundamentally about funding.  In fact, three sets of questions about funding: funding of fundamental research, funding of facilities/equipment, and funding of “platform technologies” (irritating innovation-policy jargon that makes a lot more sense in IT than in the rest of science, but whatever).

For the first two sets of questions, there’s a heavy tilt towards the fitness for purpose of the existing funding agencies.  The review’s emphasis is not so much “are we spending enough money” (that’s a political decision) but rather “does the way we spend money make sense”.  For example, one might well ask: does a country of 35 million people and fewer than 100 universities actually need three granting councils, plus CFI, the Foundation for Sustainable Development, Brain Canada, Genome Canada, the Canada First Research Excellence Fund… you get the idea.

There was a frisson of excitement last year when the UK decided to fold all its granting councils into One Big Council – might our Science Review recommend something similar?  Personally, I’m not entirely sold on the idea that fewer councils means less paperwork and more coherence (the reasons usually given in favour of rationalization), because policies and agendas can survive institutional mergers.  And as a colleague of mine who used to be quite high up in a central agency once said to me: the reason all these agencies proliferated in the first place was that politicians got frustrated with the traditional granting councils and wanted something more responsive.  Paring them back doesn’t necessarily solve the problem – it just resets the clock until the next time politicians get itchy.

That itchiness could come sooner than you think.  Even as the government has been asking Naylor and his expert panel to come up with a more rational scheme of science management, it emerged a couple of weeks ago that one of the ideas the Liberals had decided to test in their regular pre-budget focus-group work was spending mega-millions (billions?) on a scientific “Moonshot”: that is, a huge, focused effort on a single goal or technology such as – and I quote – driverless cars, unmanned aircraft, or “a network of balloons travelling on the edge of space designed to help people connect to the internet in remote areas or in a crisis situation”.  Seriously.  If any of you thought supporting big science projects over broad-based basic science was a Tory thing, I’m afraid you were sorely mistaken.

Anyways, back to the review.  There’s probably room for the review to provide greater coherence on “big science” and science infrastructure – Nassif Ghoussoub of UBC has provided some useful suggestions here.  There may be some room for reduction in the number of granting agencies (though – bureaucratic turf protection ahoy!) and definitely room to get the councils – especially CIHR – to back off on the idea that every piece of funded research needs to have an end-use in mind (I’d be truly shocked if Naylor didn’t beat the crap out of that particular drum in his final report).

But the problem is that the real challenges in Canadian science are much more intractable.  Universities have hired a lot of new staff over the last fifteen years, both to improve their research output and to teach all those new undergraduates we’ve been letting in.  More researchers means more competition for grants.  Meanwhile, government funding has declined somewhat since 2008 – even after that nice little unexpected boost the feds gave the councils last budget.  At the same time, the granting councils – CIHR most of all – have been increasing the average size of awards.  Which is great if you can get a grant; the problem is that with stagnant budgets, the absolute number of grants is falling.  So what do rational individual researchers do when faced with more competition for fewer awards?  They submit more applications to increase their chances of getting one.  Except that this drives acceptance rates down still further – on current trends, we’ll be below 10% before too long.
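The dynamic in that paragraph is simple arithmetic, and a toy calculation shows how quickly it bites.  All the figures below are invented for illustration – they are not real granting-council data:

```python
# Toy illustration of the grant-competition arithmetic.
# All numbers are hypothetical, chosen only to show the dynamic.

def success_rate(budget, avg_award, applications):
    """Grants a fixed budget can fund, divided by the number of applications."""
    grants = budget // avg_award
    return grants / applications

# Before: a $100M budget, $100k average award, 4,000 applications.
before = success_rate(100_000_000, 100_000, 4_000)   # 1,000 grants funded

# After: same budget, but average awards grow to $125k, and with more
# researchers chasing fewer grants, applications double to 8,000.
after = success_rate(100_000_000, 125_000, 8_000)    # only 800 grants funded

print(f"before: {before:.0%}, after: {after:.0%}")
```

With a flat budget, a 25% increase in award size plus a doubling of applications drops the hypothetical success rate from 25% to 10% – no actual cut in funding required.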

Again, this isn’t just a Canadian phenomenon – we’re seeing similar results in a number of countries.  The only solution (bar more funding, which isn’t really in the Review’s remit) is to give out a larger number of smaller awards.  But this runs directly contrary to the prevailing political wind, which seems to favour fewer, bigger awards: Canada Excellence Research Chairs (there are rumours of a new round!), CFREF, Moonshots, whatever.  You can make a case for each of those programs, but the question is one of opportunity cost.  CFREF might be brilliant at focusing institutional resources on a particular problem and anchoring new tech/business clusters: but is it a better use of money than spreading funds widely to researchers through the usual peer-review mechanism?  (For the record, I think CFREF makes infinitely more sense than CERCs or Moonshots, but I’m a bit more agnostic on CFREF vs. granting councils.)

Or, to be more brutal: should we have moonshots and CFREF and a 10% acceptance rate on council competitions, or no moonshots or CFREF and a 20% acceptance rate?  We’ve avoided public discussion on these kinds of trade-offs for too long.  Hopefully, Naylor’s review will bring us the more pointed debate this topic needs.


One Response to The Science Policy Review

  1. Mark Roman says:

    As a university CIO I can safely say that “platform technologies” don’t make any sense in IT either. It’s just jargon used to avoid addressing core information systems complexity.
