
Higher Education Strategy Associates

Tag Archives: Naylor Report

June 15

School’s Out!

This is my last blog of the academic year.  I may post once or twice during the summer, if something big happens or if someone important says something titanically stupid and I need to vent, but otherwise I’ll keep your inboxes unsullied for a couple of months.  For those of you who have trouble drinking your 7AM (EST) coffee without reading this blog, my apologies, but it’s time for the batteries to recharge.

When I return on August 28th, it will be in a new format, with advertising.  I know, I know, but six years of doing this thing every day for free is probably enough.  If you have something – a conference or a service or a product – that you want to promote to a large and attentive higher education audience, do get in touch at info@higheredstrategy.com.

This was a below-par year for higher education, globally.  Brexit and Trump (and Xi, and Erdogan) called into question some of the basic ideas underpinning internationalization.  No major government (to my knowledge, anyway) did anything particularly new or exciting in terms of investing in research and core funding, though there are some US states which seem to be upgrading their investments a bit.  We seem to be at the end of a long cycle of global higher education expansion and – Africa excepted, maybe – the focus is instead moving to efficiency and value for money.

In Canada, government support to institutions hummed along just below inflation while staff pay settlements kept growing at above inflation.  Cue more international students to fill the fiscal gap.  We’re so far into this cycle it seems impossible to ever stop, but at some point governments will call a halt and this ride will come to an end.  The fact that Agent May has been busted (three years ahead of schedule) in London, and that UK universities might once again be free to swing for the fences where international students are concerned, should be a source of concern for everyone.  We’re about to have competition again, folks.  You ready?

Are things going to get better?  Well, on research the answer is probably yes.  It seems like the Government of Canada, in response to the Naylor Report, is going to spend more money on fundamental research, which is good.  The questions for next year are really: i) do the feds have the intellectual capacity to hold two thoughts in their heads at once and invest in both fundamental and applied research at the same time, and ii) how crazy is the super-cluster competition going to be?  But on the fundamental question of core funding for institutions, I think the answer is no.  Governments across the country and the political spectrum are wedded to the idea of starving institutions and giving more money to students.  Newfoundland is only the most egregious example.

But I’m optimistic.  I see more and more universities and colleges being increasingly strategic about their budgeting and operations.  I see money coming free internally as that long-delayed wave of retirements slowly starts to happen.  I see faculty associations (mostly – there are exceptions) moderating financial demands in light of prolonged financial difficulties.  And as always, I am constantly amazed at the dedication, brilliance and inventiveness of the tens of thousands of people who work in our post-secondary institutions.

And with those happy thoughts, I bid you all a good vacation.  And if you have any comments about the blog and how it’s been over the last year – what I should change, write more/less about – please do get in touch with me at alex at higheredstrategy dot com.  I am always eager for feedback.

Now, go play in the sun.

June 06

Making “Applied” Research Great Again

One of the rallying cries of part of the scientific community over the past few years has been that under the Harper government there was too much focus on “applied” research and not enough on “pure”/“basic”/“fundamental” research.  This call reached fever pitch following the publication of the Naylor Report (which, to its credit, did not get into a basic/applied debate and focussed instead on whether or not the research was “investigator-driven”, which is a different and better distinction).  The problem is that the line between “pure/basic/fundamental” research and applied research isn’t nearly as clear-cut as people believe, and the rush away from applied research risks throwing out some rather important babies along with the bathwater.

As long-time readers will know, I’m not a huge fan of a binary divide between basic and applied research.  The idea of “Basic Science” is a convenient distinction created by natural scientists in the aftermath of WWII as a way to convince governments to keep giving them money the way they did during the war, but without soldiers looking over their shoulders.  In some fields (medicine, engineering), nearly all research is “applied” in the sense that there are always considerations of the end-use for the research.

This is probably a good time for a refresher on Pasteur’s Quadrant.  This concept was developed by Donald Stokes, a political scientist at Princeton, just before his death in 1997.  He too thought the basic/applied dichotomy was pretty dumb, so like all good social scientists he came up with a 2×2 instead.  One consideration in classifying science is whether or not it involves a quest for fundamental understanding; the other is whether or not the researcher has any consideration for end-use.  And so what you get is the following:

[Table 1: Pasteur’s Quadrant]

                                          No consideration of end-use     Consideration of end-use
Quest for fundamental understanding       Pure basic research (Bohr)      Use-inspired basic research (Pasteur)
No quest for fundamental understanding    –                               Pure applied research (Edison)

(I’d argue that to some extent you could replace “Bohr” with “Physics” and “Pasteur” with “Medicine” because it’s the nature of the fields of research and not individual researchers’ proclivities, per se, but let’s not quibble).

Now what was mostly annoying about the Harper years – and to some extent the Martin and late Chretien years – was not so much that the federal government was moving money from the “fundamental understanding” row to the “no fundamental understanding” row (although the way some people go on, you’d be forgiven for thinking that), but rather that it was trying to make research fit into more than one quadrant at once.  Sure, they’d say, we’d love to fund all your (top-left quadrant) Drosophila research, but can you make sure to include something about its eventual (bottom-right quadrant) commercial applications?  This attempt to make research more “applied” is and was nonsense, and Naylor was right to (mostly) call for an end to it.

But that is not the same thing as saying we shouldn’t fund anything in the bottom-right corner – that is, “applied research”.

And this is where the taxonomy of “applied research” gets tricky.  Some people – including apparently the entire Innovation Ministry, if the last budget is any indication – think that the way to bolster that quadrant is to leave everything to the private sector, preferably in sexy areas like ICT, Clean Tech and whatnot.  And there’s a case to be made for that: business is close to the customer, let them do the pure applied research.

But there’s also a case to be made that in a country where the commercial sector has few big champions and a lot of SMEs, the private sector is always likely to have some structural difficulties doing the pure applied research on its own.  It’s not simply a question of subsidies: it’s a question of scale and talent.  And that’s where applied research as conducted in Canada’s colleges and polytechnics comes in.  They help keep smaller Canadian companies – the kinds that aren’t going to get included in any “supercluster” initiative – competitive.  You’d think this kind of research should be of interest to a self-proclaimed innovation government.  Yet whether by design or indifference we’ve heard nary a word about this kind of research in the last 20 months (apart perhaps from a renewal of the Community and College Social Innovation Fund).

There’s no reason for this.  There is – if rumours of a cabinet submission to respond to the Naylor report are true – no shortage of money for “fundamental”, or “investigator-driven” research.  Why not pure applied research too?  Other than the fact that “applied research” – a completely different type of “applied research”, mind you – has become a dirty word?

This is a policy failure unfolding in slow motion.  There’s still time to stop it, if we can all distinguish between different types of “applied research”.

May 08

Naylor Report, Part II

Morning all.  Sorry about the service interruption.  Nice to be back.

So, I promised you some more thoughts about the Fundamental Science Review.  Now that I’ve had a lot of time to think about it, I’m actually surprised by what it says, what it doesn’t say, and how many questions remain open.

What’s best about the report?  The history and most of the analysis are pretty good.  I think a few specific recommendations (if adopted) might actually be a pretty big deal – in particular the one saying that the granting councils should stop any programs forcing researchers to come up with matching funding, mainly because it’s a waste of everyone’s time.

What’s so-so about it?  The money stuff, for a start.  As I noted in my last blog post, I don’t really think you can justify a claim to more money based on the “proportion of higher ed research investment coming from the federal government”.  I’m more sympathetic to the argument that there need to be more funds, especially for early-career researchers, but as noted back here, it’s hard to argue simultaneously that institutions should have unfettered rights to hire researchers but that the federal government should pick up responsibility for their career progression.

The report doesn’t even bother, really, to make the case that more money on basic research means more innovation and economic growth.  Rather, it simply states it, as if it were a fact (it’s not).  This is the research community trying to annex the term “innovation” rather than co-exist with it.  Maybe that works in today’s political environment; I’m not sure it improves overall policy-making.  In some ways, I think it would have been preferable to just say: we need so many millions because that’s what it takes to do the kind of first-class science we’re capable of.  It might not have been politic, but it would have had the advantage of clarity.

…and the Governance stuff?  The report backs two big changes in governance.  One is a Four Agency Co-ordinating Board for the three councils plus the Canada Foundation for Innovation (which we might as well now call the fourth council, provided it gets an annual budget as recommended here), to ensure greater cross-council coherence in policy and programs.  The second is the creation of a National Advisory Committee on Research and Innovation (NACRI) to replace the current Science, Technology and Innovation Council and do a great deal else besides.

The Co-ordinating committee idea makes sense: there are some areas where there would be clear benefits to greater policy coherence.  But setting up a forum to reconcile interests is not the same thing as actually bridging differences.  There are reasons – not very good ones, perhaps, but reasons nonetheless – why councils don’t spontaneously co-ordinate their actions; setting up a committee is a step towards getting them to do so, but success in this endeavour requires sustained good will which will not necessarily be forthcoming.

NACRI is a different story.  Two points here.  The first is that it is pretty clear that NACRI is designed to try to insulate the councils and the investigator-driven research they fund from politicians’ bright ideas about how to run scientific research.  Inshallah, but if politicians want to meddle – and the last two decades seem to show they want to do it a lot – then they’re going to meddle, NACRI or no.  Second, the NACRI as designed here is somewhat heavier on the “R” than on the “I”.  My impression is that, as with some of the funding arguments, this is an attempt to hijack the Innovation agenda in Research’s favour.  I think a lot of people are OK with this because they’d prefer the emphasis to be on science and research rather than innovation, but I’m not sure we’re doing long-term policy-making in the area any favours by not being explicit about this rationale.

What’s missing?  The report somewhat surprisingly punted what I expected to be a major issue: namely, the government’s increasing tendency over time to fund science outside the framework of the councils, through programs such as the Canada Excellence Research Chairs (CERC) and the Canada First Research Excellence Fund (CFREF).  While the text of the report makes clear the authors have some reservations about these programs, the recommendations are limited to a “you should review that, sometime soon”.  This is too bad, because phasing out these kinds of programs would be an obvious way to pay for increased investigator-driven funding (though as Nassif Ghoussoub points out here, it’s not necessarily a quick solution because funds are already committed for several years in advance).  The report therefore seems to suggest that though it deplores past trends away from investigator-driven funding, it doesn’t want to see these recent initiatives defunded, which might be seen in government as “having your cake and eating it too”.

What will the long-term impact of the report be?  Hard to say: much depends on how much of this the government actually takes up, and it will be some months before we know that.  But I think the way the report was commissioned may have some unintended adverse consequences.  Specifically, I think the fact that this review was set up in such a way as to exclude consideration of applied research – while perfectly understandable – is going to contribute to the latter being something of a political orphan for the foreseeable future.  Similarly, while the fact that the report was done in isolation from the broader development of Innovation policy might seem like a blessing given the general ham-fistedness surrounding the Innovation file, in the end I wonder if the result won’t be an effective division of policy, with research being something the feds pay universities to do and innovation something they pay firms to do.  That’s basically the right division, of course, but what goes missing are vital questions about how to make the two mutually reinforcing.

Bottom line: it’s a good report.  But even if the government fully embraces the recommendations, there are still years of messy but important work ahead.

April 18

Naylor Report, Take 1

People are asking why I haven’t talked about the Naylor Report (aka the Review of Fundamental Science) yet.  The answer, briefly, is that i) I’m swamped, ii) there’s a lot to talk about in there, and iii) I want to have some time to think it over.  But I did have some thoughts about chapter 3, where I think there is either an inadvertent error or the authors are trying to pull a fast one (and if it’s the latter, I apologize for narking on them).  So I thought I would start there.

The main message of chapter 3 is that the Government of Canada is not spending enough on inquiry-driven research in universities (this was not, incidentally, a question the Government of Canada asked of the review panel, but the panel answered it anyway).  One of the ways the panel argues this point is that while Canada has among the world’s highest levels of Research and Development spending in the higher education sector – known as HERD if you’re in the R&D policy nerdocracy – most of the money for this comes from higher education institutions themselves and not the federal government.  This, they say, is internationally anomalous and a reason why the federal government should spend more money.

Here’s the graph they use to make this point:

[Figure: chart from the Naylor Report comparing the sources of higher education R&D (HERD) funding across countries]

Hmm.  Hmmmmm.

So, there are really two problems here.  The first is that HERD can be calculated differently in different countries for completely rational reasons.  Let me give you the example of Canada vs. the US.  In Canada, the higher education contribution to HERD is composed of two things: i) aggregate faculty salaries times the proportion of time profs spend on research (Statscan occasionally does surveys on this – I’ll come back to it in a moment), plus ii) some imputation for unrecovered research overhead.  In the US, it’s just the latter.  Why?  Because of the way the US collects data on HERD: the only faculty costs it captures are the chunks taken out of federal research grants.  Remember, in the US, profs are only paid nine months per year and, at least in the R&D accounts, that’s *all* teaching.  Only the pieces of research grants they take out as summer salary get recorded as R&D expenditure (and hence as a government-sponsored cost rather than a higher education-sponsored one).

But there’s a bigger issue here.  If one wants to argue that what matters is the ratio of the federal portion of HERD to the higher-education portion of HERD, then it’s worth remembering what’s going on in the denominator.  Aggregate salaries are the first component.  The second component is research intensity, as measured through surveys, and this appears to be going up over time.  In 2000, Statscan did a survey which seemed to show the average prof spending somewhere between 30-35% of their time on research.  A more recent survey shows that this has risen to 42%.  I am not sure whether this latest coefficient has been factored into the most recent HERD data, but when it is, it will show a major jump in higher education “spending” (or “investment”, if you prefer) on research, despite nothing really having changed at all (possibly it has been, and that is what explains the bump seen in expenditures in 2012-13).
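To see how mechanical this effect is, here is a minimal back-of-the-envelope sketch in Python.  The salary base and overhead figures are invented for illustration; only the 33% and 42% research-time coefficients come from the surveys mentioned above.

# Hypothetical illustration: the higher-ed-sponsored portion of HERD is
# (aggregate faculty salaries x research-time share) + an overhead imputation.
# All dollar figures below are invented.

AGGREGATE_SALARIES = 10_000_000_000   # assumed national faculty salary bill ($10B)
OVERHEAD_IMPUTATION = 1_000_000_000   # assumed unrecovered research overhead ($1B)

def higher_ed_herd(research_share: float) -> float:
    """Higher-ed-sponsored HERD under a given research-time coefficient."""
    return AGGREGATE_SALARIES * research_share + OVERHEAD_IMPUTATION

old = higher_ed_herd(0.33)  # circa-2000 survey: ~33% of time on research
new = higher_ed_herd(0.42)  # more recent survey: 42%

print(f"Old coefficient: ${old / 1e9:.1f}B")                    # $4.3B
print(f"New coefficient: ${new / 1e9:.1f}B")                    # $5.2B
print(f"Apparent jump in 'spending': {(new - old) / old:.0%}")  # ~21%

Same professors, same salaries, same labs – just a different survey coefficient, and higher education’s measured “investment” jumps by about a fifth.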

What the panel ends up arguing is for federal funding to run more closely in tune with higher education’s own “spending”.  But in practice what this means is: every time profs get a raise, federal funding would have to rise to keep pace.  Every time profs decide – for whatever reasons – to spend more time on research, federal funds should rise to keep pace.  And no doubt that would be awesome for all concerned, but come on.  Treasury Board would have conniptions if someone tried to sell that as a funding mechanism.

None of which is to say federal funding on inquiry-driven research shouldn’t rise.  Just to say that using data on university-funded HERD might not be a super-solid base from which to argue that point.

January 25

The Science Policy Review

So, any day now, the report of the Government of Canada’s Science Policy review should be appearing.  What is that, you ask?  Good question.

“Science policy” is one of those tricky terms.  Sometimes it can mean science as a way of making policy (like when someone claims they want all policy to be “evidence-based”); sometimes it’s about policy for or about Science, and the rules and regulations under which it is funded.  This particular Science policy review, chaired by former U of T President David Naylor, is a bit narrower; as the mandate letter and key questions show, this review is fundamentally about funding.  In fact, three sets of questions about funding: funding of fundamental research, funding of facilities/equipment, and funding of “platform technologies” (which is irritating innovation policy jargon that makes a lot more sense in IT than in the rest of science, but whatever).

For the first two sets of questions, there’s a heavy tilt towards fitness of purpose of the existing funding agencies.  The review’s emphasis is not so much “are we spending enough money” (that’s a political decision) but rather “does the way we spend money make sense”.  For example, one might well ask: does a country of 35 million people and fewer than 100 universities actually need three granting councils, plus CFI, the Foundation for Sustainable Development, Brain Canada, Genome Canada, the Canada First Research Excellence Fund… you get the idea.

There was a frisson of excitement last year when the UK decided to fold all their granting councils into One Big Council – might our Science Review recommend something similar?  Personally, I’m not entirely sold on the idea that fewer councils means less paperwork and more coherence (the reasons usually given in favour of rationalization), because policies and agendas can survive institutional mergers.  And as a colleague of mine who used to be quite high up in a central agency once said to me: the reason all these agencies proliferated in the first place was that politicians got frustrated with the traditional granting councils and wanted something more responsive.  Paring them back doesn’t necessarily solve the problem – it just re-sets the clock until the next time politicians get itchy.

This itchiness could happen sooner than you think.  Even as the government has been asking Naylor and his expert panel to come up with a more rational scheme of science management, it emerged a couple of weeks ago that one of the ideas the Liberals had decided to test in their regular pre-budget focus group work was spending mega-millions (billions?) on a scientific “Moonshot”: that is, a huge focused effort on one goal or technology such as – and I quote – driverless cars, unmanned aircraft, or “a network of balloons travelling on the edge of space designed to help people connect to the internet in remote areas or in a crisis situation”.  Seriously.  If any of you thought supporting big science projects over broad-based basic science was a Tory thing, I’m afraid you were sorely mistaken.

Anyways, back to the review.  There’s probably room for the review to provide greater coherence on “big science” and science infrastructure – Nassif Ghoussoub of UBC has provided some useful suggestions here.  There may be some room for reduction in the number of granting agencies (though – bureaucratic turf protection ahoy!) and definitely room to get the councils – especially CIHR – to back off on the idea that every piece of funded research needs to have an end-use in mind (I’d be truly shocked if Naylor didn’t beat the crap out of that particular drum in his final report).

But the problem is that the real challenges in Canadian Science are much more intractable.  Universities hired a lot of new staff in the last fifteen years, both in order to improve their research output and to deal with all those new undergraduates we’ve been letting in.  This leads to more competition.  Meanwhile, government funding has declined somewhat since 2008 – even after that nice little unexpected boost the feds gave the councils last budget.  At the same time, granting councils – most of all CIHR – have been increasing the average size of awards.  Which is great if you can get a grant; the problem is that with stagnant budgets the absolute number of grants is falling.  So what do rational individual researchers do with more competition for fewer awards?  They submit more applications to increase their chances of getting an award.  Except that this drives down acceptance rates still further – on current trends, we’ll be below 10% before too long.
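To make that dynamic concrete, here is a toy model of the spiral – flat budget, award sizes creeping up, researchers responding to worse odds by applying more often.  Every number in it is invented; it is a sketch of the mechanism, not a model of any actual council.

# Toy model: a flat budget with growing average award size means fewer
# grants each year, while rational researchers respond by submitting more
# applications - so the success rate falls from both ends. Numbers invented.

budget = 100_000_000        # assumed annual envelope, flat ($100M)
avg_award = 100_000         # assumed average award size
applications = 5_000        # assumed initial application volume

for year in range(1, 6):
    awards = budget // avg_award            # bigger awards, fewer grants
    success_rate = awards / applications
    print(f"Year {year}: {awards:,} awards / {applications:,} applications "
          f"= {success_rate:.0%} success rate")
    avg_award = int(avg_award * 1.05)       # councils raise average award ~5%/yr
    applications = int(applications * 1.10) # researchers apply ~10% more often

Under those made-up assumptions, the success rate drops from 20% to about 11% in five years – the kind of slide described above – without anyone involved behaving irrationally.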

Again, this isn’t just a Canadian phenomenon – we’re seeing similar results in a number of countries.  The only solution (bar more funding, which isn’t really in the Review’s remit) is to give out a larger number of smaller awards.  But this runs directly contrary to the prevailing political wind, which seems to be about making fewer, bigger awards: Canada Excellence Research Chairs (there are rumours of a new round!), CFREF, Moonshots, whatever.  You can make a case for all those programs, but the question is one of opportunity costs.  CFREF might be brilliant at focusing institutional resources on a particular problem and acting as an anchor for new tech/business clusters: but is it a better use of money than seeding money widely to researchers through the usual peer-review mechanism?  (For the record, I think CFREF makes infinitely more sense than CERCs or Moonshots, but I am a bit more agnostic on CFREF vs. granting councils.)

Or, to be more brutal: should we have moonshots and CFREF and a 10% acceptance rate on council competitions, or no moonshots or CFREF and a 20% acceptance rate?  We’ve avoided public discussion on these kinds of trade-offs for too long.  Hopefully, Naylor’s review will bring us the more pointed debate this topic needs.