Higher Education Strategy Associates

February 20

Performance-Based Funding (Part 4)

I’ve been talking about performance-based funding all week; today, I’ll try to summarize what I think the research and experience actually say.

Let’s return for a second to a point I made Tuesday.  When determining whether PBF “works”, what matters is being able to show that incentivizing particular outcomes actually changes institutional behaviour and leads to improvements in those outcomes.  However, no study to date has actually bothered to link quantifiable changes in funding to policy outcomes.  Hillman and Tandberg – who found little-to-no positive effects – came closest to doing this, but they looked only at the incidence of PBF, not its size; as such, their results can easily be read to suggest that the problem with PBF is that it needs to be bigger in order to work properly.  And indeed, that’s very likely: in over half of US states with PBFs, the proportion of operating income held for PBF purposes is 2.5%; in practice, the size of the re-distribution of funds from PBFs (that is, the difference between how that 2.5% is distributed now versus how it was distributed before PBFs were introduced) is probably a couple of orders of magnitude smaller still.

I would argue that there’s a pretty simple reason why most PBFs in North America don’t actually change the distribution of funds: big and politically powerful universities tend to oppose changes that might “damage” them.  Therefore, to the extent that any funding formula results in something too far from the status quo (which tends to reward big universities for their size), they will oppose it.  The more money that suddenly becomes at risk, the more the big universities scream.  Therefore, the political logic of PBFs is that to have a chance of implementation they have to be relatively small, and not disturb the status quo too much.

Ah, you say: but what about Europe?  Surely the large size of PBF incentives must have caused outrage when they were introduced, right?  That’s a good question, and I don’t really have an answer.  It’s possible that, despite their size, PBF schemes did not actually change much more in terms of distribution than did their American counterparts.  I can come up with a few country-specific hypotheses about why that might be: the Danish taximeter system was introduced at a time when universities were still considered part of governments (and academics part of the civil service), the Polish system was introduced at a time of increasing government funding, etc.  But those are just guesses.  In any case, such lit as I can find on the subject certainly doesn’t mention much in terms of opposition.

So, I think we’re kind of back to square one.  I think the Hillman/Tandberg evidence tells us that simply having a PBF doesn’t mean much, and I think the European evidence suggests that at a sizeable enough scale, PBFs can incentivize greater institutional efficiency.  But beyond that, I don’t think we’ve got much solid to go on.

For what it’s worth, I’d add one more thing based on work I did last year looking at the effect of private income on universities in nine countries: and that is, only incentivize things that don’t already carry prestige incentives.  Canadian universities are already biased towards activities like research; incentivizing them further through performance funding is like giving lighter fluid to a pyromaniac.

No, what you want to incentivize is the deeply unsexy stuff that’s hard to do.  Pay for Aboriginal completions in STEM subjects.  Pay for female engineering graduates.  Pay big money to the institution that shows the greatest improvement in the National Survey of Student Engagement (NSSE) every two years.  Offer a $20 million prize to the institution that comes up with the best plan for measuring – and then improving – learning, payable in installments to make sure they actually follow through (OK, that’s competitive funding rather than performance-based funding, but you get the idea).

Neither the pro- nor anti-camp can point to very much genuinely empirical evidence about efficacy; in the end it all comes down to whether one thinks institutions will respond to incentives.  I think it’s pretty likely that they do; the trick is selecting the right targets, and structuring the incentives in an intelligent way.  And that’s probably as much art as science.

February 19

Performance-Based Funding (Part 3)

As I noted yesterday, the American debate on PBF has more or less ignored evidence from beyond its shores; and yet, in Europe, there are several places that have very high levels of performance-based funding.  Denmark has had what it calls a “taximeter” system, which pays institutions on the basis of student progression and completion, for over 20 years now, and it currently makes up about 30% of all university income.  Most German Länder have some element of incentive-based funding on either student completion or time-to-completion; in some cases, they are also paid on the basis of the number of international students they attract (international students pay no tuition in Germany).  In the Netherlands, graduation-based funding makes up over 60% of institution operating grants (or, near as I can tell, about 30% of total institutional income).  The Czech Republic now gives out 20% of funding to institutions on a quite bewildering array of indicators, including internationalization, research, and student employment outcomes.

Given this, you’d think there might be a huge and copious literature about whether the introduction of these measures actually “worked” in terms of changing outcomes of the indicators in question.  But you’d be wrong.  There’s actually almost nothing.  That’s not to say these programs haven’t been evaluated.  The Danish taximeter system appears to have been evaluated four times (haven’t actually read these – Danish is fairly difficult), but the issue of dropouts doesn’t actually seem to have been at the core of any of them (for the record, Danish universities have relatively low levels of dropouts compared to other European countries, but it’s not clear if this was always the case or if it was the result of the taximeter policy).  Rather, what gets evaluated is the quite different question of: “are universities operating more efficiently?”

This is key to understanding performance indicators in Europe. In many European countries, public funding makes up as close to 100% of institutional income as makes no odds.  PBF has therefore often been a way of trying to introduce a quasi-market among institutions so as to induce competition and efficiency (and on this score, it usually gets fairly high marks).  In North America, where pressures for efficiency are exerted through a competitive market for students, the need for this is – in theory at least – somewhat less.  This largely explains the difference in the size of performance-based funding allocations; in Europe, these funds are often the only quasi-competitive mechanism in the system, and so (it is felt) they need to be on the scale of what tuition is in North America in order to achieve similar competitive effects.

Intriguingly, performance-based funding in Europe is at least as common with respect to research as it is to student-based indicators (a good country-by-country summary from the OECD is here).  Quite often, a portion of institutional operating funding will be based on the value of competitive research won, a situation made possible by the fact that many countries in Europe separate their institutional grants into funding for teaching and funding for research in a way that would give North American universities the screaming heebie-jeebies.  Basically: imagine if the provinces awarded a portion of their university grants on the same basis that Ottawa hands out the indirect research grants, only with less of the questionable favouritism towards smaller universities.  Again, this is less about “improving overall results” than it is about keeping institutions in a competitive mindset.

So, how to interpret the evidence of the past three days?  Tune in tomorrow.

February 18

Performance-Based Funding (Part 2)

So, as we noted yesterday, there are two schools of thought in the US about performance-based funding (where, it should be noted, about 30 states have some kind of PBF criteria built into their overall funding system, or are planning to do so).  Basically, one side says they work, and the other says they don’t.

Let’s start with the “don’t” camp, led by Nicholas Hillman and David Tandberg, whose key paper can be found here.  To determine whether PBFs affect institutional outcomes, they look mostly at a single output – degree completion.  This makes a certain amount of sense, since it’s the one most states try to incentivize, and they use a nice little quasi-experimental research design comparing changes in completion rates between states with PBF and those without.  Their findings, briefly, are: 1) there are no systematic benefits to PBF – in some places, results were better than in non-PBF systems, in other places they were worse; and, 2) where PBF is correlated with positive results, said results can take several years to kick in.

Given the methodology, there’s no real arguing with the findings here.  Where Hillman and Tandberg can be knocked, however, is that their methodology treats all PBF schemes as identical, and hence as the same “treatment”.  But as we noted yesterday, the existence of PBF is only one dimension of the issue.  The extent of PBF funding, and the extent to which it drives overall funding, must matter as well.  On this, Hillman and Tandberg are silent.

The HCM paper does in fact give this issue some space.  It turns out that in 18 of the 26 states examined, PBF accounts for less than 5% of overall public funding.  Throw in tuition and other revenues, and the share of total institutional revenue accounted for by PBF drops by 50% or more, which suggests there are a lot of PBF states where it would simply be unrealistic to expect much in the way of effects.  Of the remainder, three are under 10%, and then there are five huge outliers: Mississippi at just under 55%, Ohio at just under 70%, Tennessee at 85%, Nevada at 96%, and North Dakota at 100% (note: Nevada essentially has one public university and North Dakota has two, so whatever PBF arrangements exist there likely aren’t changing the distribution of funds very much).  The authors then point to a number of advances made in some of these states on a variety of metrics, such as “learning gains” (it’s unclear what that means), greater persistence for at-risk students, shorter times-to-completion, and so forth.

But while the HCM report has a good summary of sensible design principles for performance-based funding, there is little that is scientific about it when it comes to linking policy to outcomes.  There’s nothing like Hillman and Tandberg’s experimental design at work here; instead, what you have is a collection of anecdotes about positive things that have occurred in places with PBF.  So as far as advancing the debate about what works in performance-based funding goes, it’s not up to much.

So what should we believe here?  The Hillman/Tandberg result is solid enough – but if most American PBF systems don’t change funding patterns much, then it shouldn’t be a surprise to anyone that institutional outcomes don’t change much either.  What we need is a much narrower focus on systems where a lot of institutional money is in fact at risk, to see if increasing incentives actually does matter.

Such places do exist – but oddly enough neither of these reports actually looks at them.  That’s because they’re not in the United States, they’re in Europe.  More on that tomorrow.

February 17

Performance-Based Funding (Part 1)

I was reading the Ontario Confederation of University Faculty Associations (OCUFA)’s position statement on a new funding formula for the province.  Two things caught my eye.  One, they want money to make sure Ontario universities can do world-class research and teaching; and two, they demand strict opposition to any kind of performance-based funding formula (PBF).  Put differently: OCUFA wants great teaching and research to be funded, but is adamantly opposed to rewarding anyone for actually doing it.

Except that’s slightly uncharitable.  OCUFA’s larger point seems to be that performance-based funding formulae (also known as output-based funding) “don’t actually achieve their goals”, and it points to work done by University of Wisconsin professor Nicholas Hillman and Florida State’s David Tandberg on the topic.  From a government-spending efficacy point of view, this objection is fair enough, but it’s a bit peculiar from an institutional or faculty standpoint; the Hillman/Tandberg evidence doesn’t indicate that institutions were actually harmed in any way by the introduction of said arrangements, so what’s the problem?

Anyways, last week HCM associates in Washington put out a paper taking a contrary view to Hillman/Tandberg, so we now have some live controversy to talk about.  Tomorrow, I’ll examine the Hillman/Tandberg and HCM evidence to evaluate the claims of each, but today I want to go through what output-based funding mechanisms can actually look like, and in the process show how difficult it is for meta-analyses – such as Hillman’s and HCM’s – to calculate potential impact.

At one level, PBF is simple: you pay for what comes out of universities rather than what goes in.  So: don’t pay for bums in seats, pay for graduates; don’t pay based on research grants earned, pay based on articles published in top journals, etc.  But the way these get paid out can vary widely, so their impacts are not all the same.

Take the number of graduates, which happens to be the simplest and most common indicator used in PBFs.  A government could literally pay a certain amount per graduate – or maybe per “weighted graduate”, to take account of different costs by field of study.  It could pay each institution based on its share of total graduates or weighted graduates.  It could give each institution a target number of graduates (based on size and current degree of selectivity, perhaps) and pay out 100% of a value if it hits the target, and 0% if it does not.  Or, it could set a target and then pay a pro-rated amount based on how well the institution did vis-à-vis the target.  And so on, and so forth.
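To make the differences concrete, here is a minimal sketch of the four payout rules just described.  Everything in it – the institutions, the graduate counts, the targets, the $10,000 rate, and the $100-million envelope – is an invented assumption for illustration, not a description of any real system:

```python
# Hypothetical illustration of four ways a graduate-count PBF could pay out.
# All institutions, graduate counts, targets, and dollar figures are invented.

RATE = 10_000        # hypothetical dollars per weighted graduate (open-ended cost)
POT = 100_000_000    # hypothetical fixed envelope for the share- and target-based rules

institutions = {
    # name: (weighted graduates produced, target set by government)
    "University A": (9_000, 8_000),
    "University B": (4_500, 5_000),
    "University C": (1_500, 2_000),
}

total_grads = sum(g for g, _ in institutions.values())

def per_graduate(grads, target):
    # Flat rate per weighted graduate; total government cost floats with output
    return RATE * grads

def share_of_total(grads, target):
    # Fixed pot divided according to each institution's share of all graduates
    return POT * grads / total_grads

def all_or_nothing(grads, target):
    # Full allotment if the institution hits its target, nothing otherwise
    return POT / len(institutions) if grads >= target else 0.0

def pro_rated(grads, target):
    # Allotment scales with performance against the target, capped at 100%
    return POT / len(institutions) * min(grads / target, 1.0)

for rule in (per_graduate, share_of_total, all_or_nothing, pro_rated):
    payout = {name: rule(g, t) for name, (g, t) in institutions.items()}
    print(rule.__name__, {name: f"${v / 1e6:.1f}M" for name, v in payout.items()})
```

Even with identical graduate counts, the four rules produce quite different allocations, which is part of why “having a PBF” tells you so little on its own.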

Each of these methods of paying out PBF money plainly has different distributional consequences.  However, if you’re trying to work out whether output-based funding actually affects institutional outcomes, then the distributional consequence is only of secondary importance.  What matters more is how different the distributional outcomes are from whatever distribution existed in the previous funding formula.

So, say the province of Saskatchewan moves from its current mix of historical grant and formula grant to a fully PBF system, where 100% of the funding is based on the number of (field-weighted) graduates produced.  Currently, the University of Saskatchewan gets around three times as much in total operating grants as the University of Regina.  If USask also produced three times as many (field-weighted) graduates as URegina, even the shift to a 100% PBF model wouldn’t change anything in terms of distribution, and hence would have limited consequences in terms of policy and (presumably) outputs.

In effect, the real question is: how much funding, which was formerly “locked-in”, becomes “at-risk” during the shift to PBF?  If the answer is zero, then it’s not much of a surprise that institutional behaviour doesn’t change either.
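Here’s a minimal sketch of that “at-risk” calculation, using the Saskatchewan example above.  The grant amounts and graduate counts are invented, chosen only to preserve the roughly three-to-one USask/URegina ratio mentioned earlier:

```python
# Hypothetical two-institution system, loosely modelled on the Saskatchewan
# example above; the grant amounts and graduate counts are invented.

current_grant = {"USask": 300_000_000, "URegina": 100_000_000}   # old formula
weighted_grads = {"USask": 9_000, "URegina": 3_000}              # same 3:1 ratio

pot = sum(current_grant.values())
total_grads = sum(weighted_grads.values())

# New formula: 100% of the pot distributed by share of weighted graduates
pbf_grant = {u: pot * g / total_grads for u, g in weighted_grads.items()}

# "At-risk" money is whatever actually moves between institutions
redistributed = sum(abs(pbf_grant[u] - current_grant[u]) for u in current_grant) / 2

for u in current_grant:
    print(f"{u}: ${current_grant[u] / 1e6:.0f}M -> ${pbf_grant[u] / 1e6:.0f}M")
print(f"Funds redistributed: ${redistributed / 1e6:.0f}M ({redistributed / pot:.0%} of the envelope)")
```

With these numbers the redistribution is zero, which is exactly the point: a formally radical move to 100% performance-based funding puts no money at risk if the new formula simply reproduces the old shares.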

Tomorrow: a look at the duelling American research papers on PBF.

February 12

Free Election Manifesto Advice

OK, federal political parties.  I have some election manifesto advice for you.  And given that you’ve all basically accepted Tory budget projections and promised not to raise taxes, it’s perfect.  Completely budget neutral.  Here it is:

Do Less.

Seriously.  After 15 years of increasingly slapdash, haphazard policy-making in research and student aid, a Do Less agenda is exactly what we need.

Go back to 1997: we had three granting councils in Canada.  Then we got the Canada Foundation for Innovation.  Then the Canada Foundation for Sustainable Development Technology.  Then Brain Canada, Genome Canada, Grand Challenges Canada, the Canadian Foundation for Healthcare Improvement, the Canada First Research Excellence Fund – and that’s without mentioning the proliferation of single-issue funds created at SSHRC and NSERC.  On commercialization, we’ve got a College and Community Innovation Program, a College-University Idea to Innovation Program, a dozen or so Centres of Excellence for Commercialization and Research (CECRs) – plus, of course, the wholesale revamp of the National Research Council to turn it into a Canadian version of the Fraunhofer Institute.

It’s not that any of these initiatives are bad.  The problem is that by spreading money thinly across lots of new agencies and programs, we’re losing something in terms of coherence.  Funding deadlines multiply, pools of available cash get smaller (even if overall budgets are more or less what they used to be), and – thanks to the government requirement that a large portion of new funding arrangements be leveraged somehow – the number of funders whose hands need to be held (sorry, “whose accountability requirements need to be met”) is rising very fast.  It all leaves less time to, you know, do the actual science – which is what all this funding is supposed to be about, isn’t it?

Or take student assistance.  We know how much everyone (Liberals especially) loves new boutique student aid programs.  But that’s exactly the wrong way to go.  Everything we know about the $10 billion/year student aid business tells us that it’s far too complicated, and that no one understands it.  That’s why people in Ontario scream about affordability and accessibility when in fact the province is nearly as generous as Quebec when it comes to first-year, low-income university students.  For people to better appreciate what a bargain Canadian higher education is, we need to de-clutter the system and make it more transparent, not add more gewgaws.

So here’s the agenda: take a breather on new science and innovation programs; find out what we can do to make the system simpler for researchers; merge and eliminate programs as necessary (is Genome Canada really still worth keeping, or can we basically fold it back into CIHR?) – while ensuring that total funds available do not diminish (a bump would be nice, too, but the simplification is more important).

As for student aid?  Do a deal with the provinces to simplify need assessment and make it easier for students to know their likely aid eligibility much further in advance.  Do a deal with provinces and institutions to convert tax credits into grants to institutions for a large one-time tuition reduction.  Do not, under any circumstances, do anything to make the system more complex.

I know it goes against the grain, guys.  I know you need “announceables” for the campaign.  But in the long run, it’s more important to do things well.  And to do that, we really need to start doing less.

January 20

Classroom Economics (Part 2)

Yesterday, I introduced the equation X = aϒ/(b+c) as a way of working out overall teaching loads. Let’s now use it to understand how funding parameters drive those loads.

Assume the following starting parameters:

a (total spending per dollar of faculty compensation) = 2.27

ϒ (average compensation per professor) = $150,000

b (government grant per credit hour) = $600

c (tuition revenue per credit hour) = $850
Where a credit hour = 1 student in 1 class for 1 semester.

Here’s the most obvious way it works.  Let’s say the government decides to increase funding by 10%, from $600 to $660 per credit hour (which would be huge – a far larger move than is conceivable, except, say, in Newfoundland at the height of the oil boom).  Assuming no other changes – that is, average compensation and overhead remain constant – the 10% increase would mean:

X = 2.27($150,000)/($600 + $850) = 235

X = 2.27($150,000)/($660 + $850) = 225

In other words, a ten percent increase in funding combined with a freeze on costs would reduce teaching loads by about 4%.  Assuming a professor is teaching 2/2, that’s a decrease of 2.5 students per class.  Why so small?  Because in this scenario (which is pretty close to the current situation in Ontario and Nova Scotia), government funding is only about 40% of operating income.  The size of the funding increase necessary to generate a significant effect on teaching loads and class sizes is enormous.

And of course that’s assuming no changes in other costs.  What happens if we assume a more realistic scenario, one in which average salaries rise 3%, and overhead rises at the same rate?

X = 2.27($154,500)/($660 + $850) = 232

In other words, as far as class size is concerned, normal (for Canada anyway) salary increases will eat up about 70% of a 10% increase in government funding.  Or, to put it another way, one would normally expect a 10% increase in government funding to reduce class sizes by a shade over 1%.
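As a sanity check, here’s a small sketch that reproduces the three scenarios above, using the formula and the starting parameters from the top of the post:

```python
# Teaching load X = a*y/(b + c), with the starting parameters from above.

def credit_hours(a=2.27, y=150_000, b=600, c=850):
    """Credit hours each professor must teach to cover compensation plus overhead."""
    return a * y / (b + c)

base = credit_hours()                         # ~235: the starting position
frozen_costs = credit_hours(b=660)            # ~225: 10% grant increase, costs frozen
with_raises = credit_hours(y=154_500, b=660)  # ~232: 10% grant increase, 3% cost growth

print(round(base), round(frozen_costs), round(with_raises))
```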

Sobering, huh?

OK, let’s now take it from the other direction – how big an income boost would it take to reduce class sizes by 10%?  Well, assuming that salary and other costs are rising by 3%, the denominator of the equation (b + c) would need to rise by about 14.5%.  That would require an increase in government funding of 35%, or an increase in revenues from students of 25% (which could either be achieved through tuition increases, or a really big shift from domestic to international enrolments), or some mix of the two; for instance, a 10% increase in government funds and a 17% increase in student funds.

That’s more than sobering.  That’s into “I really need a drink” territory.  And what makes it worse is that even if you could pull off that kind of revenue increase, ongoing 3% increases in salary and overhead would eat up the entire increase in just three years.
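The same check can be run backwards, solving for the revenue increase needed to cut class sizes by 10% while costs grow 3% (same parameters as above):

```python
# Working backwards: how much must (b + c) rise to cut X by 10%
# when the numerator (compensation plus overhead) grows by 3%?

a, y, b, c = 2.27, 150_000, 600, 850

target_x = a * y / (b + c) * 0.90            # class sizes 10% smaller
needed_revenue = a * (y * 1.03) / target_x   # (b + c) required to hit the target

print(f"(b + c) must rise by {needed_revenue / (b + c) - 1:.1%}")            # ~14.4%
print(f"via government grants alone: {(needed_revenue - (b + c)) / b:.0%}")  # ~35%
print(f"via student revenue alone: {(needed_revenue - (b + c)) / c:.0%}")    # ~25%
```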

Now, don’t take these exact numbers as gospel.  This example works for a couple of low-cost programs (Arts, Business, etc.) in Ontario and Nova Scotia (which, to be fair, represent half the country’s student body), but most programs in most provinces are working off a higher denominator than this, and for them things would be less grim than I’m making out here.  Go ahead and play with the formula using data from your own institution and see what happens – it’s revealing.

Nevertheless, the basic problem is the same everywhere.  As long as costs are increasing, you either have to get used to some pretty heroic revenue assumptions (likely involving significant tuition increases) or you have to get used to the idea of ever-higher teaching loads.

So what are the options on cost-cutting?  Tune in tomorrow.

January 19

Classroom Economics (Part 1)

One of the things that continually astonishes me about universities is how few people who work within them actually understand how they are funded, and what the budget drivers really are.  So this week I’m going to walk y’all through a simplified model of how the system really works.

Let’s start by stating what should be – but too often isn’t – the obvious: universities are paid to teach.  They are paid specific amounts to do specific pieces of research through granting councils and other kinds of research funding arrangements, but the core operating budget – made up of government grants and tuition fees – relates nearly entirely to teaching.  This is not in any way to suggest that teaching is all professors should do.  It is, however, to say that their funding depends on teaching.  Want a bigger budget?  Teach more students.

This link is more obvious in some provinces than others.  In places like Ontario and Quebec, which have funding formulae, the link is clear: each student is worth a particular amount of money based on their field and level of study.  In others, like Alberta and British Columbia, where government funding comes as a block, it’s not quite as clear, but the principle is basically the same.

So the issue within the institution is how to get the necessary amount of teaching done.  One way to work out how much teaching is needed is this little formula:

X = aϒ/(b+c)

Where “X” is the total number of credit hours a professor must teach each year (a credit hour here meaning one student sitting in one course for one term – a class with 40 students is 40 credit hours), “ϒ” is average compensation per professor, “a” is total institutional spending per dollar of professorial compensation (i.e. salary plus the overhead required to support it), “b” is the government grant per student credit hour, and “c” is the tuition revenue per credit hour.

Now, let’s plug in a few numbers here.  Average professorial compensation, including benefits, is approaching $150,000 in Canada.  Faculty salaries and benefits are about 44% of total operating budgets, meaning that for every dollar spent on faculty compensation, another $1.27 is spent on other things.  For argument’s sake, let’s say the average income from government is about $6,000 per student (or $600 per credit hour) and average tuition income, including that for international students, is about $8,500 per student (or $850 per credit hour).  These last two figures will vary by field and level of study, and by province, but those numbers are about right for undergraduate Arts in Ontario.

So, what does our equation look like?

X = 2.27*150,000/($600+$850) = 235.

In this simplified world where all students are undergraduate Arts students, at current faculty salary rates and university cost structure, professors on average have to teach 235 credit hours in order to cover their salaries.  If you’re teaching 3/2, that means 5 classes of 47 students each; if you’re teaching 2/2 that means 4 classes of 59 students apiece.
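If you want to play along at home, here’s the same arithmetic as a small sketch; the parameters are the ones from the paragraph above, and the class-size figures fall straight out of the teaching load:

```python
# X = a*y/(b + c) with the undergraduate-Arts-in-Ontario numbers from above,
# translated into students per class under a 3/2 and a 2/2 teaching load.

a = 2.27       # total spending per dollar of faculty compensation
y = 150_000    # average compensation per professor
b = 600        # government grant per credit hour
c = 850        # tuition revenue per credit hour

x = a * y / (b + c)   # credit hours per professor needed to cover costs (~235)

for courses_per_year in (5, 4):   # 3/2 and 2/2 loads respectively
    print(f"{courses_per_year} courses/year -> about {x / courses_per_year:.0f} students per class")
```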

Now, I know what you’re going to say: there aren’t actually that many profs teaching that many students.  And that’s true, mainly because I’m low-balling the per-student income figure.  Add in graduate students, and the per-student income rises because of greater government subsidy.  Choose another discipline (Engineering, say), and income rises for the same reason.  But at universities like, say, Wilfrid Laurier, Saint Mary’s, or Lethbridge, which are big on Arts, Science, and Business, and low on professional programs, this is pretty much the equation they are looking at.

More tomorrow.

January 06

Adult Discussions About Research Policy

Over the winter break, the Toronto Star published an editorial on research funding that deserves to be taken out to the woodshed and clobbered.

The editorial comes in two parts. The first is a reflection on whether or not the Harper government is a “caveman” or just “incompetent” when it comes to science. I suppose it’s progress that the Star gives two options, but frankly the Harper record on science isn’t hard to decode:

  1. The Conservatives like “Big Science” and have funded it reasonably well.
  2. They’re not crazy about the pure inquiry-driven stuff the granting councils have traditionally done, and have kept its growth under inflation as a result (which isn’t great, but is better than what has happened to some other areas of government funding).
  3. They really hate government regulatory science, especially when it comes to the environment, and have approached it the way the Visigoths approached Rome (axes out, with an intention to cause damage).
  4. By and large, they’d prefer it if scientists and business worked more closely together; after all, what’s state investment in research and development for, if not to increase economic growth?

But that’s not the part of the article that needs a smack upside the head. Rather, it’s these statements:

Again and again, the Conservatives have diverted resources from basic research – science for no immediate purpose other than knowledge-gathering – to private-public partnerships aimed at immediate commercial gain.

And

…by abandoning basic research – science that no business would pay for – the government is scorching the very earth from which innovation grows.

OK, first of all: the idea that there is a sharp dividing line between “basic” and “applied” research is pure hornswoggle. They aren’t polar opposites; lots of research (including pretty much everything in medicine and engineering) is arguably both. Outside of astronomy/cosmology, very little modern science is for no purpose other than knowledge gathering. There is almost always some thought of use or purpose. Go read Pasteur’s Quadrant.

Second, while the government is certainly making much of its new money conditional on business participation, the government hasn’t “abandoned” basic research. The billions going into the granting councils are still there.

Third, the idea that innovation and economic growth are driven solely or even mainly by domestic basic research expenditures is simply a fantasy. A number of economists have shown a connection between economic growth and national levels of research and development; no one (so far as I know) has ever proven it about basic research alone.

There’s a good reason for that: while basic research is the wellspring of innovation (and it’s important that someone does basic research), in open economies it’s not in the least clear that every country has to engage in it to the same degree. The Asian tigers, for instance, emphasized “development” for decades before they started putting money into what we would consider serious basic research facilities. And nearly all the technology Canadian industry relies on is American, and would be so even if we tripled our research budgets.

We know almost nothing about the “optimal” mix of R&D, but it stands to reason that the mix is going to be different in different industries based on how close to the technological frontier each industry is in a given country. The idea that there is a single optimal mix across all times and places is simply untenable.

Cartoonishly simple arguments like the Star’s, which imply that any shift away from “basic” research is inherently wrong, aren’t just a waste of time; the “basic = good, applied = bad” line of argument actively infantilizes the Canadian policy debate. It’s long past time this policy discussion grew up.

September 04

Who’s Relatively Underfunded?

As I said yesterday, there’s a quick way to check claims of relative underfunding in block-grant provinces: take each institution’s enrolment numbers by field of study from Statscan’s Post-Secondary Student Information System (PSIS), plug those numbers into the Ontario and Quebec funding formulas, and then compare each institution’s hypothetical share of total provincial weighted student units (WSUs) under those formulas with what we know they actually receive via CAUBO’s annual Financial Information of Universities and Colleges (FIUC) survey.
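Mechanically, the check looks something like the sketch below.  The field weights, enrolment figures, and funding shares here are invented placeholders; the real exercise uses PSIS enrolments, the actual Ontario and Quebec formula weights, and CAUBO/FIUC funding data:

```python
# Sketch of the relative-underfunding check: compute each institution's share
# of weighted student units (WSUs) under a formula, then compare it with the
# share of provincial funding it actually receives. All numbers are invented.

field_weight = {"arts": 1.0, "science": 1.5, "engineering": 2.0, "medicine": 5.0}

enrolment = {  # FTEs by field of study, per institution
    "Big U":   {"arts": 10_000, "science": 6_000, "engineering": 3_000, "medicine": 1_500},
    "Small U": {"arts": 6_000,  "science": 2_000, "engineering": 0,     "medicine": 0},
}

actual_funding_share = {"Big U": 0.79, "Small U": 0.21}

wsu = {u: sum(field_weight[f] * n for f, n in fields.items()) for u, fields in enrolment.items()}
total_wsu = sum(wsu.values())

for u, units in wsu.items():
    formula_share = units / total_wsu
    gap = actual_funding_share[u] - formula_share
    print(f"{u}: formula share {formula_share:.0%}, actual share {actual_funding_share[u]:.0%}, gap {gap:+.1%}")
```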

Simple, right? Well, no, not really, but I have some really talented staff who do this stuff for me (Hi Jackie!), so let’s go look at the data.

Let’s start with Manitoba, where pretty much every second day you can hear the University of Winnipeg making a case about relative underfunding (say what you will about Lloyd Axworthy: the man knows how to keep his message in the newspapers).  But is the claim true?

Figure 1: FTEs, Weighted FTEs, and Actual Funding, Manitoba Universities

Here’s what Figure 1 says: the University of Manitoba has 69% of the province’s students, but receives 79% of all provincial funding (this is from 2011-12); the University of Winnipeg, on the other hand, has 24% of the students, but only 13% of the total funding.  A clear-cut case of underfunding, right?

Well, not entirely.  The fact is that the University of Manitoba has a lot more students in high-cost disciplines than does Winnipeg.  If U of M were in Ontario, it would get 75% of provincial funding; if it were in Quebec (where the formula is slightly more tilted towards medical disciplines), it would get 77%.  So U of M receives slightly more funding than it would in other provinces, as does – in a relatively more significant way – Brandon University.  And Winnipeg does receive less than it would if it were in another province: $18 million less than if Manitoba used Quebec’s formula, and $25 million less than if Ontario’s were used.  That’s a big gap, but still smaller than it would appear from looking at FTEs alone.

Now, on to New Brunswick.  One has to be a little careful about making inter-institutional comparisons with CAUBO data in New Brunswick because of the peculiar arrangement between UNB and St. Thomas (STU).  Because the two share the former’s campus, the provincial government sends UNB a little bit extra (and STU a little bit less) in order to cover extra costs.  So, with that in mind, let’s look at the data:

Figure 2: FTEs, Weighted FTEs, and Actual Funding, New Brunswick Universities

New Brunswick looks a bit different from Manitoba, where the biggest university is overfunded.  In New Brunswick, it’s UNB that actually seems to be doing badly, receiving 50% of all money when, in Ontario, it would receive 54%, and in Quebec it would receive 59% (and remember, that 50% is actually inflated a bit because of the money to support STU students).  The institution that really seems to be overfunded in New Brunswick is Moncton, which is receiving $13 million more than it would if New Brunswick used either the Quebec or Ontario formulae.

So, yes Virginia, relative underfunding does exist in Manitoba and New Brunswick.  This probably wouldn’t be the case if either province ever bothered to put its institutional funding on an empirical footing, via a funding formula.  But that would create winners (likely Winnipeg & UNB) and losers (likely Brandon, Moncton and, to a lesser extent, Manitoba).  And what politician likes to hear that?

September 03

“Relative” Underfunding

Institutions always claim to be underfunded.  Seriously, I’ve been at universities in maybe 25 countries – including Saudi Arabia and the Emirates – and I have yet to find an institution that thought it was overfunded.  The reason for this is simple: there’s always just a little bit more quality around the bend, if only you could buy it (the university down the street has a space-shuttle simulator? We need an actual space shuttle to stay competitive!).  So it’s easy to tune out this kind of talk.

The slightly more sophisticated argument is one of relative underfunding.  That is to say: institution A is getting less than it “should”, based on what a selection of other comparable institutions get.  The trick, of course, is to get the comparator right – too often, it’s transparently a plea by institutions in poor provinces to get funded in the same way as some of their peers in wealthier provinces.

One way that governments can avoid this kind of argument is to institute funding formulas (indeed, in many cases, this is precisely the reason they were introduced).  Once a funding formula is created, and institutions are paid according to some kind of algorithm, it becomes tough to argue relative underfunding (that is, unless the formula is specifically re-jigged in such a way as to screw over one particular partner – as Quebec did with its famous “ajustement McGill”).  You can argue that the funding doesn’t weight activities the right way – small institutions tend to argue that fixed costs aren’t properly accounted for, large ones that research activities are never compensated adequately – but you can’t argue being underfunded because the criteria by which money is being distributed are objective.

In Canada, it’s really only Quebec and Ontario that have anything close to pure formula funding, based on input indicators.  Nova Scotia does have a formula, but weirdly only takes a reading of the indicators every decade or so; Saskatchewan has some weird block grant/formula hybrid, which is ludicrously complex for a province with only two institutions.  PEI and Newfoundland don’t really need formulae given that they are single-institution provinces.

That leaves Alberta, British Columbia, Manitoba, and New Brunswick.  In these provinces, money is delivered by block grant rather than on the basis of an algorithm, so there is plenty of scope for institutions to claim they are “underfunded” relative to others in their province.  This means that institutions have a perennial rhetorical stick with which to beat government.

Or do they?  In fact, there is a way to check claims of relative underfunding, even in block-grant provinces.  All one needs to do is look at the distribution of money across institutions within the province and see if it matches the distribution one would see in a province that does have a funding formula (i.e. Quebec and/or Ontario).  If they don’t match, there’s probably a case for underfunding; if they do, there probably isn’t.

Tomorrow, we’ll try this out on Manitoba and New Brunswick.
