Higher Education Strategy Associates

Category Archives: funding

March 24

Banning the Term “Underfunding”

Somehow I missed this when the OECD’s Education at a Glance 2014 came out, but apparently Canada’s post-secondary system is now officially the best funded in the entire world.

I know, I know.  It’s a hard idea to accept when Presidents of every student union, faculty association, university, and college have been blaming “underfunding” for virtually every ill in post-secondary education since before Air Farce jokes started taking the bus to get to the punchline.  But the fact is, we’re tops.  Numero uno.  Take a look:

Figure 1: Percentage of GDP Spent on Higher Education Institutions, Select OECD Countries, 2011

For what I believe is the first time ever, Canada is outstripping both the US (2.7%) and Korea (2.6%).  At 2.8% of GDP, spending on higher education is nearly twice what it is in the European Union.

Ah, you say, that’s probably because so much of our funding comes from private sources.  After all, don’t we always hear that tuition is at, or approaching, 50% of total funding in universities?  Well, no.  That stat only applies to operating expenditures (not total expenditures), and is only valid in Nova Scotia and Ontario.  Here’s what happens if we look only at public spending in all those countries:

Figure 2: Percentage of GDP Spent on Higher Education Institutions from Public Sources, Select OECD Countries, 2011

While it’s true that Canada does have a high proportion of funds coming from private sources, public sector support to higher education still amounts to 1.6% of GDP, which is substantially above the OECD average.  In fact, our public expenditure on higher education is the same as in Norway and Sweden; among all OECD countries, only Finland and Denmark (not included in graph) are higher.

And this doesn’t even consider the fact that Statscan and CMEC don’t include expenditures like Canada Education Savings Grants and tax credits, which together are worth another 0.2% of GDP, because the OECD doesn’t really have a reporting category for oddball expenditures like that.  The omission doesn’t change our total expenditure, but it does affect the public/private balance.  Instead of being 1.6% of GDP public and 1.2% of GDP private, it’s probably more like 1.8% or 1.9% public, which again would put us at the absolute top of the world ranking.

So it’s worth asking: when people say we are “underfunded”, what do they mean?  Underfunded compared to whom?  Underfunded for what?  If we have more money than anyone else, and we still feel there isn’t enough to go around, maybe we should be looking a lot more closely at *how* we spend the money rather than at *how much* we spend.

Meantime, I think there should be a public shaming campaign against use of the term “underfunding” in Canada.  It’s embarrassing, once you know the facts.

March 04

Stop Saying Higher Education is a Public Good

Few things drive me crazier than when people claim higher education is a public good, and then claim that, on that basis, it either: a) deserves more public funding, or b) needs to be funded exclusively on a public basis.  This argument represents a fundamental misunderstanding of what the term “public good” actually means.

When most people hear the phrase “public good”, they’re probably thinking something like, “it’s good, it’s publicly funded; therefore, it’s a public good”.  But that rationale is tautological.  In fact, claims for public funding on the basis of a good being “public” rest on a much narrower definition.  Here, I’d urge everyone to read Frances Woolley’s excellent summary of this issue entitled, “Why public goods are a pedagogical bad”.  To qualify as a public good, the good has to be both non-rival (that is, one person using it does not diminish others’ ability to use it) and non-excludable (that is, once provided, it is difficult to prevent people from using it – e.g. lighthouses).  The number of goods to which that might actually apply is very, very small, and higher education certainly isn’t one of them.  Classroom space is very definitely rival, and it is trivially easy to exclude people from education – no money, no degree.  Higher education is thus a private good.  One with many public benefits, for sure, but private nonetheless.

Why does it matter if people call it a public good?  Because in all the basic economics textbooks, public goods are the goods that all (or nearly all) economists think should be publicly funded.  When people say something is a public good, they’re actually launching an (erroneous) appeal to economic authority as a basis for public funding.

Now, just because something isn’t a public good doesn’t mean there’s no case for a subsidy: it just means there’s no automatic case for it.  Health care, welfare, and employment insurance are not public goods, but there’s still a very good case to be made for all of them in terms of a public insurance function – that is, it’s cheaper to collectively insure against ill health, job loss, and poverty than it is to make people do it themselves.

Sometimes there’s a case for government subvention due to obvious market failure – most student loans come under this category (markets have a hard time funding human capital), as does public funding of research (some types of research won’t be undertaken by the private sector because of the size of the externalities).

So it’s fine to say there is a public purpose to higher education.  And it’s fine to say higher education has many public benefits.  But saying higher education is a public good, and therefore deserves full public financing, is simply wrong.  If we’re going to have sensible conversations about higher education financing, the least we can do is get the terminology right.

February 20

Performance-Based Funding (Part 4)

I’ve been talking about performance-based funding all week; today, I’ll try to summarize what I think the research and experience actually says.

Let’s return for a second to a point I made Tuesday.  When determining whether PBF “works”, what matters is being able to show that incentivizing particular outcomes actually changes institutional behaviour and leads to improvements in those outcomes.  However, no study to date has actually bothered to link quantifiable changes in funding with any policy outcomes.  Hillman and Tandberg – who found little-to-no positive effects – came closest to doing this, but they looked only at the incidence of PBF, not at its size; as such, their results can easily be read to suggest that the problem with PBF is that it needs to be bigger in order to work properly.  And indeed, that’s very likely: in over half of US states with PBFs, the proportion of operating income set aside for PBF purposes is 2.5% or less; in practice, the size of the re-distribution of funds from PBFs (that is, the difference between how that money is distributed now versus how it was distributed before PBFs were introduced) is probably a couple of orders of magnitude smaller still.

I would argue that there’s a pretty simple reason why most PBFs in North America don’t actually change the distribution of funds: big and politically powerful universities tend to oppose changes that might “damage” them.  Therefore, to the extent that any funding formula results in something too far from the status quo (which tends to reward big universities for their size), they will oppose it.  The more money that suddenly becomes at risk, the more the big universities scream.  Therefore, the political logic of PBFs is that to have a chance of implementation they have to be relatively small, and not disturb the status quo too much.

Ah, you say: but what about Europe?  Surely the large size of PBF incentives must have caused outrage when they were introduced, right?  That’s a good question, and I don’t really have an answer.  It’s possible that, despite their size, PBF schemes did not actually change much more in terms of distribution than did their American counterparts.  I can come up with a few country-specific hypotheses about why that might be: the Danish taximeter system was introduced at a time when universities were still considered part of governments (and academics part of the civil service), the Polish system was introduced at a time of increasing government funding, etc.  But those are just guesses.  In any case, such lit as I can find on the subject certainly doesn’t mention much in terms of opposition.

So, I think we’re kind of back to square one.  I think the Hillman/Tandberg evidence tells us that simply having a PBF doesn’t mean much, and I think the European evidence suggests that at a sizeable enough scale, PBFs can incentivize greater institutional efficiency.  But beyond that, I don’t think we’ve got much solid to go on.

For what it’s worth, I’d add one more thing, based on work I did last year looking at the effect of private income on universities in nine countries: only incentivize things that don’t already carry prestige incentives.  Canadian universities are already biased towards activities like research; incentivizing them further through performance funding is like giving lighter fluid to a pyromaniac.

No, what you want to incentivize is the deeply unsexy stuff that’s hard to do.  Pay for Aboriginal completions in STEM subjects.  Pay for female engineering graduates.  Pay big money to the institution that shows the greatest improvement in the National Survey of Student Engagement (NSSE) every two years.  Offer a $20 million prize to the institution that comes up with the best plan for measuring – and then improving – learning, payable in installments to make sure they actually follow through (OK, that’s competitive funding rather than performance-based funding, but you get the idea).

Neither the pro- nor anti-camp can point to very much genuinely empirical evidence about efficacy; in the end it all comes down to whether one thinks institutions will respond to incentives.  I think it’s pretty likely that they do; the trick is selecting the right targets, and structuring the incentives in an intelligent way.  And that’s probably as much art as science.

February 19

Performance-Based Funding (Part 3)

As I noted yesterday, the American debate on PBF has more or less ignored evidence from beyond its shores; and yet, in Europe, there are several places that have very high levels of performance-based funding.  Denmark has had what it calls a “taximeter” system, which pays institutions on the basis of student progression and completion, for over 20 years now, and it currently makes up about 30% of all university income.  Most German Länder have some element of incentive-based funding on either student completion or time-to-completion; in some cases, they are also paid on the basis of the number of international students they attract (international students pay no tuition in Germany).  In the Netherlands, graduation-based funding makes up over 60% of institution operating grants (or, near as I can tell, about 30% of total institutional income).  The Czech Republic now gives out 20% of funding to institutions on a quite bewildering array of indicators, including internationalization, research, and student employment outcomes.

Given this, you’d think there might be a copious literature about whether the introduction of these measures actually “worked” in terms of changing outcomes on the indicators in question.  But you’d be wrong.  There’s actually almost nothing.  That’s not to say these programs haven’t been evaluated.  The Danish taximeter system appears to have been evaluated four times (I haven’t actually read these – Danish is fairly difficult), but the issue of dropouts doesn’t actually seem to have been at the core of any of them (for the record, Danish universities have relatively low levels of dropouts compared to other European countries, but it’s not clear if this was always the case or if it was the result of the taximeter policy).  Rather, what gets evaluated is the quite different question: “are universities operating more efficiently?”

This is key to understanding performance indicators in Europe. In many European countries, public funding makes up as close to 100% of institutional income as makes no odds.  PBF has therefore often been a way of trying to introduce a quasi-market among institutions so as to induce competition and efficiency (and on this score, it usually gets fairly high marks).  In North America, where pressures for efficiency are exerted through a competitive market for students, the need for this is – in theory at least – somewhat less.  This largely explains the difference in the size of performance-based funding allocations; in Europe, these funds are often the only quasi-competitive mechanism in the system, and so (it is felt) they need to be on the scale of what tuition is in North America in order to achieve similar competitive effects.

Intriguingly, performance-based funding in Europe is at least as common with respect to research as it is to student-based indicators (a good country-by-country summary from the OECD is here).  Quite often, a portion of institutional operating funding will be based on the value of competitive research won, a situation made possible by the fact that many countries in Europe separate their institutional grants into funding for teaching and funding for research in a way that would give North American universities the screaming heebie-jeebies.  Basically: imagine if the provinces awarded a portion of their university grants on the same basis that Ottawa hands out the indirect research grants, only with less of the questionable favouritism towards smaller universities.  Again, this is less about “improving overall results” than it is about keeping institutions in a competitive mindset.

So, how to interpret the evidence of the past three days?  Tune in tomorrow.

February 18

Performance-Based Funding (Part 2)

So, as we noted yesterday, there are two schools of thought in the US about performance-based funding (where, it should be noted, about 30 states have some kind of PBF criteria built into their overall funding system, or are planning to do so).  Basically, one side says they work, and the other says they don’t.

Let’s start with the “don’t” camp, led by Nicholas Hillman and David Tandberg, whose key paper can be found here.  To determine whether PBFs affect institutional outcomes, they look mostly at a single output – degree completion.  This makes a certain amount of sense, since it’s the one most states try to incentivize, and they use a nice little quasi-experimental research design comparing changes in completion rates in states with PBF against those without.  Their findings, briefly, are: 1) no systematic benefits to PBF – in some places, results were better than in non-PBF systems, in other places they were worse; and 2) where PBF is correlated with positive results, said results can take several years to kick in.

Given the methodology, there’s no real arguing with the findings here.  Where Hillman & Tandberg can be knocked, however, is on their assumption that all PBF schemes are the same, and thus constitute the same “treatment”.  But as we noted yesterday, the existence of PBF is only one dimension of the issue.  The extent of PBF funding, and the extent to which it drives overall funding, must matter as well.  On this, Hillman and Tandberg are silent.

The HCM paper does in fact give this issue some space.  Turns out that in the 26 states examined, 18 have PBF systems that account for less than 5% of overall public funding.  Throw in tuition and other revenues, and the share of total institutional revenue accounted for by PBF drops by 50% or more, which suggests there are a lot of PBF states where it would simply be unrealistic to expect much in the way of effects.  Of the remainder, three are under 10%, and then there are five huge outliers: Mississippi at just under 55%, Ohio at just under 70%, Tennessee at 85%, Nevada at 96%, and North Dakota at 100% (note: Nevada essentially has one public university and North Dakota has two; clearly, whatever PBF arrangements are there likely aren’t changing the distribution of funds very much).  The authors then point to a number of advances made in some of these states on a variety of metrics, such as “learning gains” (unclear what that means), greater persistence for at-risk students, shorter times-to-completion, and so forth.

But while the HCM report has a good summary of sensible design principles for performance-based funding, there is little that is scientific about it when it comes to linking policy to outcomes.  There’s nothing like Hillman and Tandberg’s experimental design at work here; instead, what you have is a grab-bag of anecdotes about positive things that have occurred in places with PBF.  So as far as advancing the debate about what works in performance-based funding goes, it’s not up to much.

So what should we believe here?  The Hillman/Tandberg result is solid enough – but if most American PBF systems don’t change funding patterns much, then it shouldn’t be a surprise to anyone that institutional outcomes don’t change much either.  What we need is a much narrower focus on systems where a lot of institutional money is in fact at risk, to see if increasing incentives actually does matter.

Such places do exist – but oddly enough neither of these reports actually looks at them.  That’s because they’re not in the United States, they’re in Europe.  More on that tomorrow.

February 17

Performance-Based Funding (Part 1)

I was reading the Ontario Confederation of University Faculty Associations (OCUFA)’s position statement on a new funding formula for the province.  Two things caught my eye.  One, they want money to make sure Ontario universities can do world-class research and teaching; and two, they are strictly opposed to any kind of performance-based funding (PBF) formula.  Put differently: OCUFA wants great teaching and research to be funded, but is adamantly opposed to rewarding anyone for actually doing it.

Except that’s slightly uncharitable.  OCUFA’s larger point seems to be that performance-based funding formulae (also known as output-based funding) “don’t actually achieve their goals”, pointing to work done by University of Wisconsin professor Nicholas Hillman and Florida State’s David Tandberg on the topic.  From a government-spending efficacy point of view, this objection is fair enough, but it’s a bit peculiar from an institutional or faculty standpoint; the Hillman/Tandberg evidence doesn’t indicate that institutions were actually harmed in any way by the introduction of said arrangements, so what’s the problem?

Anyways, last week HCM Strategists in Washington put out a paper taking a contrary view to Hillman/Tandberg, so we now have some live controversy to talk about.  Tomorrow, I’ll examine the Hillman/Tandberg and HCM evidence to evaluate the claims of each, but today I want to go through what output-based funding mechanisms can actually look like, and in the process show how difficult it is for meta-analyses – such as Hillman’s and HCM’s – to calculate potential impact.

At one level, PBF is simple: you pay for what comes out of universities rather than what goes in.  So: don’t pay for bums in seats, pay for graduates; don’t pay based on research grants earned, pay based on articles published in top journals, etc.  But the way these get paid out can vary widely, so their impacts are not all the same.

Take graduation numbers, the simplest and most common indicator used in PBFs.  A government could literally pay a certain amount per graduate – or maybe per “weighted graduate”, to take account of different costs by field of study.  It could pay each institution based on its share of total graduates or weighted graduates.  It could give each institution a target number of graduates (based on size and current degree of selectivity, perhaps) and pay out 100% of a value if it hits the target, and 0% if it does not.  Or, it could set a target and then pay a pro-rated amount based on how well the institution did vis-à-vis the target.  And so on, and so forth.
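To make that menu concrete, here is a minimal Python sketch of the four payout rules just described.  Every number and institution name in it is invented for illustration – it models no real scheme.

```python
# Four stylized ways to pay out a graduation-based PBF envelope.
# All figures and institution names are invented for illustration.

def per_graduate(grads, rate):
    """Pay a flat amount per (weighted) graduate."""
    return {u: n * rate for u, n in grads.items()}

def share_of_total(grads, envelope):
    """Split a fixed envelope by each institution's share of graduates."""
    total = sum(grads.values())
    return {u: envelope * n / total for u, n in grads.items()}

def all_or_nothing(grads, targets, award):
    """Pay 100% of an award if the target is hit, 0% otherwise."""
    return {u: award if grads[u] >= targets[u] else 0.0 for u in grads}

def pro_rated(grads, targets, award):
    """Pay in proportion to progress against the target, capped at 100%."""
    return {u: award * min(grads[u] / targets[u], 1.0) for u in grads}

grads = {"U1": 3000, "U2": 1000}
targets = {"U1": 3200, "U2": 900}
print(per_graduate(grads, rate=5_000))       # U1: $15M, U2: $5M
print(share_of_total(grads, envelope=20e6))  # U1: $15M, U2: $5M
print(all_or_nothing(grads, targets, 10e6))  # U1: $0,   U2: $10M
print(pro_rated(grads, targets, 10e6))       # U1: ~$9.4M, U2: $10M
```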

Each of these methods of paying out PBF money plainly has different distributional consequences.  However, if you’re trying to work out whether output-based funding actually affects institutional outcomes, then the distributional consequence is only of secondary importance.  What matters more is how different the distributional outcomes are from whatever distribution existed in the previous funding formula.

So, say the province of Saskatchewan moves from its current mix of historical grant and formula grant to a fully PBF system, where 100% of the funding is based on the number of (field-weighted) graduates produced.  Currently, the University of Saskatchewan gets around three times as much in total operating grants as the University of Regina.  If USask also produced three times as many (field-weighted) graduates as URegina, even the shift to a 100% PBF model wouldn’t change anything in terms of distribution, and hence would have limited consequences in terms of policy and (presumably) outputs.

In effect, the real question is: how much funding, which was formerly “locked-in”, becomes “at-risk” during the shift to PBF?  If the answer is zero, then it’s not much of a surprise that institutional behaviour doesn’t change either.
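Here’s a back-of-envelope illustration of that question, using invented round numbers patterned on the USask/URegina example above (the real figures will differ):

```python
# How much formerly "locked-in" money actually moves when a system
# flips to 100% graduate-based funding?  Invented round numbers only.

old_grants = {"USask": 300e6, "URegina": 100e6}    # status quo, split 3:1
weighted_grads = {"USask": 6000, "URegina": 2000}  # also split 3:1

pool = sum(old_grants.values())
total = sum(weighted_grads.values())
new_grants = {u: pool * g / total for u, g in weighted_grads.items()}

# Funds at risk = half the sum of absolute changes across institutions.
at_risk = sum(abs(new_grants[u] - old_grants[u]) for u in old_grants) / 2
print(new_grants)  # identical to old_grants
print(at_risk)     # 0.0 - a "100% PBF" system that redistributes nothing
```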

Tomorrow: a look at the duelling American research papers on PBF.

February 12

Free Election Manifesto Advice

OK, federal political parties.  I have some election manifesto advice for you.  And given that you’ve all basically accepted Tory budget projections and promised not to raise taxes, it’s perfect.  Completely budget neutral.  Here it is:

Do Less.

Seriously.  After 15 years of increasingly slapdash, haphazard policy-making in research and student aid, a Do Less agenda is exactly what we need.

Go back to 1997: we had three granting councils in Canada.  Then we got the Canada Foundation for Innovation.  Then the Canada Foundation for Sustainable Development Technology.  Then Brain Canada, Genome Canada, Grand Challenges Canada, the Canadian Foundation for Healthcare Improvement, the Canada First Research Excellence Fund – and that’s without mentioning the proliferation of single-issue funds created at SSHRC and NSERC.  On commercialization, we’ve got a College and Community Innovation Program, a College-University Idea to Innovation Program, a dozen or so Centres of Excellence for Commercialization and Research (CECRs) – plus, of course, the wholesale revamp of the National Research Council to turn it into a Canadian version of the Fraunhofer Institute.

It’s not that any of these initiatives are bad.  The problem is that by spreading out money thinly to lots of new agencies and programs, we’re losing something in terms of coherence.  Funding deadlines multiply, pools of available cash get smaller (even if overall budgets are more or less what they used to be), and – thanks to the government requirement that a large portion of new funding arrangements be leveraged somehow – the number of funders whose hands need to be held (sorry, “whose accountability requirements need to be met”) is rising very fast.  It all leaves less time to, you know, do the actual science – which is what all this funding is supposed to be about, isn’t it?

Or take student assistance.  We know how much everyone (Liberals especially) loves new boutique student aid programs.  But that’s exactly the wrong way to go.  Everything we know about the $10-billion-a-year student aid business tells us that it’s far too complicated, and that no one understands it.  That’s why people in Ontario scream about affordability and accessibility when in fact the province is nearly as generous as Quebec when it comes to first-year, low-income university students.  For people to better appreciate what a bargain Canadian higher education is, we need to de-clutter the system and make it more transparent, not add more gewgaws.

So here’s the agenda: take a breather on new science and innovation programs; find out what we can do to make the system simpler for researchers; merge and eliminate programs as necessary (is Genome Canada really still worth keeping, or can we basically fold it back into CIHR?) – while ensuring that total funds available do not diminish (a bump would be nice, too, but the simplification is more important).

As for student aid?  Do a deal with the provinces to simplify need assessment and make it easier for students to know their likely aid eligibility much further in advance.  Do a deal with provinces and institutions to convert tax credits into grants to institutions for a large one-time tuition reduction.  Do not, under any circumstances, do anything to make the system more complex.

I know it goes against the grain, guys.  I know you need “announceables” for the campaign.  But in the long run, it’s more important to do things well.  And to do that, we really need to start doing less.

January 20

Classroom Economics (Part 2)

Yesterday, I introduced the equation X = aϒ/(b+c) as a way of setting overall teaching loads.  Let’s now use it to see how funding parameters drive those loads.

Assume the following starting parameters:

ϒ (average compensation per professor) = $150,000
a (overhead multiplier) = 2.27
b (government grant per credit hour) = $600
c (tuition revenue per credit hour) = $850

Where a credit hour = 1 student in 1 class for 1 semester.

Here’s the most obvious way it works.  Let’s say the government decides to increase funding by 10%, from $600 to $660 (which would be huge – a far larger move than is conceivable, except say in Newfoundland at the height of the oil boom).  Assuming no other changes – that is, average compensation and overhead remain constant – the 10% increase would mean:

X = 2.27 × $150,000/($600 + $850) = 235

X = 2.27 × $150,000/($660 + $850) = 225

In other words, a ten percent increase in funding and a freeze on expenditures would reduce teaching loads by about 4%.  Assuming a professor is teaching 2/2, that’s a decrease of 2.5 students per class.  Why so small?  Because in this scenario (which is pretty close to the current situation in Ontario and Nova Scotia), government funding is only about 40% of operating income.  The size of the funding increase necessary to generate a significant effect on teaching loads and class sizes is enormous.

And of course that’s assuming no changes in other costs.  What happens if we assume a more realistic scenario, one in which average salaries rise 3%, and overhead rises at the same rate?

X = 2.27 × $154,500/($660 + $850) = 232

In other words, as far as class size is concerned, normal (for Canada anyway) salary increases will eat up about 70% of a 10% increase in government funding.  Or, to put it another way, one would normally expect a 10% increase in government funding to reduce class sizes by a shade over 1%.

Sobering, huh?

OK, let’s now take it from the other direction – how big an income boost would it take to reduce class sizes by 10%?  Well, assuming that salary and other costs are rising by 3%, the denominator (b+c) would need to rise by about 14.5%.  That would require an increase in government funding of 35%, or an increase in revenues from students of 25% (which could either be achieved through tuition increases, or a really big shift from domestic to international enrolments), or some mix of the two; for instance, a 10% increase in government funds and a 17% increase in student funds.

That’s more than sobering.  That’s into “I really need a drink” territory.  And what makes it worse is that even if you could pull off that kind of revenue increase, ongoing 3% increases in salary and overhead would eat up the entire increase in just three years.
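For anyone who wants to check these numbers – or, as suggested below, rerun them with their own institution’s parameters – here is a minimal sketch of the arithmetic (all parameter values as above, and just as illustrative):

```python
# X = a*Y/(b+c): credit hours each professor must teach to cover costs.
# Parameters as in the text: a = 2.27, Y = $150,000 average compensation,
# b = $600 grant and c = $850 tuition per credit hour.

def teaching_load(a=2.27, comp=150_000, grant=600, tuition=850):
    return a * comp / (grant + tuition)

base = teaching_load()                              # ~235 credit hours
funding_bump = teaching_load(grant=660)             # ~225: grant up 10%
realistic = teaching_load(comp=154_500, grant=660)  # ~232: salaries up 3% too
print(round(base), round(funding_bump), round(realistic))

# Revenue needed to cut teaching loads 10% while costs rise 3%:
target = base * 0.9               # ~211 credit hours
needed = 2.27 * 154_500 / target  # required denominator (b + c)
print(f"b+c must rise {(needed / 1_450 - 1) * 100:.1f}% from $1,450")  # ~14.4%
```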

Now, don’t take these exact numbers as gospel.  This example works in a couple of low-cost programs (Arts, Business, etc.) in Ontario and Nova Scotia (which, to be fair, represent half the country’s student body), but most programs in most provinces are working off a higher denominator than this, and for them it would be less grim than I’m making out here.  Go ahead and play with the formula with data from your own institution and see what happens – it’s revealing.

Nevertheless, the basic problem is the same everywhere.  As long as costs are increasing, you either have to get used to some pretty heroic revenue assumptions (likely involving significant tuition increases) or you have to get used to the idea of ever-higher teaching loads.

So what are the options on cost-cutting?  Tune in tomorrow.

January 19

Classroom Economics (Part 1)

One of the things that continually astonishes me about universities is how few people who work within them actually understand how they are funded, and what the budget drivers really are.  So this week I’m going to walk y’all through a simplified model of how the system really works.

Let’s start by stating what should be – but too often isn’t – the obvious: universities are paid to teach.  They are paid specific amounts to do specific pieces of research through granting councils and other kinds of research funding arrangements, but the core operating budget – made up of government grants and tuition fees – relates nearly entirely to teaching.  This is not in any way to suggest that teaching is all professors should do.  It is, however, to say that their funding depends on teaching.  Want a bigger budget?  Teach more students.

This link is more obvious in some provinces than others.  In places like Ontario and Quebec, which have funding formulae, the link is clear: each student is worth a particular amount of money based on their field and level of study.  In others, like Alberta and British Columbia, where government funding comes as a block, it’s not quite as clear, but the principle is basically the same.

So the issue within the institution is how to get the necessary amount of teaching done.  One way to work out how much teaching is needed is this little formula:

X = aϒ/(b+c)

Where “X” is the total number of credit hours a professor must teach each year (a credit hour here meaning one student sitting in one course for one term – a class with 40 students is 40 credit hours), “ϒ” is average compensation per professor, “a” is a multiplier capturing the overhead required to support each professor, “b” is the government grant per student credit hour, and “c” is the tuition revenue per credit hour.

Now, let’s plug in a few numbers here.  Average professorial compensation, including benefits, is approaching $150,000 in Canada.  Faculty salaries and benefits are about 44% of total operating budgets, meaning that for every dollar spent on faculty compensation, another $1.27 is spent on other things.  For argument’s sake, let’s say the average income from government is about $6,000 per student (or $600 per credit hour) and average tuition income, including that for international students, is about $8,500 per student (or $850 per credit hour).  These last two figures will vary by field and level of study, and by province, but those numbers are about right for undergraduate Arts in Ontario.

So, what does our equation look like?

X = 2.27 × $150,000/($600 + $850) = 235.

In this simplified world where all students are undergraduate Arts students, at current faculty salary rates and university cost structure, professors on average have to teach 235 credit hours in order to cover their salaries.  If you’re teaching 3/2, that means 5 classes of 47 students each; if you’re teaching 2/2 that means 4 classes of 59 students apiece.
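For anyone who’d rather see that arithmetic as code, here is the same calculation as a minimal Python sketch, using the assumptions above:

```python
# X = a*Y/(b+c), with the numbers from the paragraphs above.
a = 1 / 0.44               # comp is ~44% of operating spending, so a ~ 2.27
comp = 150_000             # average compensation, including benefits
grant, tuition = 600, 850  # income per credit hour (undergrad Arts, Ontario)

x = a * comp / (grant + tuition)
print(round(x))            # 235 credit hours per year

for classes, load in ((5, "3/2"), (4, "2/2")):
    print(f"{load} load: {classes} classes of {x / classes:.0f} students each")
```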

Now, I know what you’re going to say: there aren’t actually that many profs teaching that many students.  And that’s true, mainly because I’m low-balling the per-student income figure.  Add in graduate students, and the per-student income rises because of more government subsidy.  Choose another discipline (Engineering, say), and income rises for the same reason.  But at universities like, say, Wilfrid Laurier, Saint Mary’s, or Lethbridge, which are big on Arts, Science, and Business, and low on professional programs, this is pretty much the equation they are looking at.

More tomorrow.

January 06

Adult Discussions About Research Policy

Over the winter break, the Toronto Star published an editorial on research funding that deserves to be taken out to the woodshed and clobbered.

The editorial comes in two parts.  The first is a reflection on whether the Harper government is a “caveman” or just “incompetent” when it comes to science.  I suppose it’s progress that the Star gives two options, but frankly the Harper record on science isn’t hard to decode:

  1. The Conservatives like “Big Science” and have funded it reasonably well.
  2. They’re not crazy about the pure inquiry-driven stuff the granting councils have traditionally done, and have kept its growth under inflation as a result (which isn’t great, but is better than what has happened to some other areas of government funding).
  3. They really hate government regulatory science, especially when it comes to the environment, and have approached it the way the Visigoths approached Rome (axes out, with an intention to cause damage).
  4. By and large, they’d prefer that scientists and business work more closely together; after all, what’s state investment in research and development for, if not to increase economic growth?

But that’s not the part of the article that needs a smack upside the head. Rather, it’s these statements:

Again and again, the Conservatives have diverted resources from basic research – science for no immediate purpose other than knowledge-gathering – to private-public partnerships aimed at immediate commercial gain.

And

…by abandoning basic research – science that no business would pay for – the government is scorching the very earth from which innovation grows.

OK, first of all: the idea that there is a sharp dividing line between “basic” and “applied” research is pure hornswoggle. They aren’t polar opposites; lots of research (including pretty much everything in medicine and engineering) is arguably both. Outside of astronomy/cosmology, very little modern science is for no purpose other than knowledge gathering. There is almost always some thought of use or purpose. Go read Pasteur’s Quadrant.

Second, while the government is certainly making much of its new money conditional on business participation, it hasn’t “abandoned” basic research.  The billions going into the granting councils are still there.

Third, the idea that innovation and economic growth are driven solely or even mainly by domestic basic research expenditures is simply a fantasy.  A number of economists have shown a connection between economic growth and national levels of research and development; no one (so far as I know) has ever proven it about basic research alone.

There’s a good reason for that: while basic research is the wellspring of innovation (and it’s important that someone does basic research), in open economies it’s not in the least clear that every country has to engage in it to the same degree. The Asian tigers, for instance, emphasized “development” for decades before they started putting money into what we would consider serious basic research facilities. And nearly all the technology Canadian industry relies on is American, and would be so even if we tripled our research budgets.

We know almost nothing about the “optimal” mix of R&D, but it stands to reason that the mix is going to be different in different industries based on how close to the technological frontier each industry is in a given country. The idea that there is a single optimal mix across all times and places is simply untenable.

Cartoonishly simple arguments like the Star’s, which imply that any shift away from “basic” research is inherently wrong, aren’t just a waste of time; the “basic = good, applied = bad” line of argument actively infantilizes the Canadian policy debate. It’s long past time this policy discussion grew up.
