HESA

Higher Education Strategy Associates

Category Archives: funding

April 15

Enforced Savings for Education?

It’s generally acknowledged that students from low-income backgrounds have trouble paying for education; that’s why, across North America, they tend to get packages of loans and grants far in excess of the value of tuition.  And that’s a good thing.  But when it comes to people who are truly middle-class – that is, students near or just over the median family income – there’s a fair bit of debate, sometimes acrimonious, about how to assist them.

One view – one which I tend to subscribe to – is that most middle class parents are quite capable of providing some assistance to their kids.  It’s not an emergency, out-of-the-blue expenditure; they’ve had 18 years to save for it.  If there’s a liquidity problem, student loans are available which allow students to defer the costs until they have graduated and have a job.

The alternative view – proposed by the usual suspects here in Canada, and in the United States by researchers like Sara Goldrick-Rab – is essentially that university costs have outstripped the ability of even middle-class families to pay (a claim easier to make in the US than in Canada), and that, in the interests of a strong middle class, these families need greater subsidy.

From a lifetime income perspective, if higher education is too expensive for families individually, it’s also too expensive for families collectively – unless the plan is to grab tax revenue from those families who don’t have kids, or whose kids don’t go to higher education.  That would probably be quite regressive.  But I don’t think that’s primarily what proponents of aid to middle-class parents are saying.  I think they’re actually making an intertemporal liquidity argument: middle-class annual incomes can’t cope with the sudden rise in annual household spending that supporting a student in college or university for 3-5 years entails.  These families have the money; they just don’t have it now.

Now, the most obvious way to deal with that is loans, but the objection is some variation of “debt is bad, debt = inequality”.  I don’t particularly buy that (see here and here), but let’s assume the argument has merit.  Is government subsidy the only way to deal with this?  Of course not.  We have exactly the same issue with retirement income support, and we deal with it via enforced savings programs like the Canada Pension Plan.

As higher education edges towards becoming universal, the pension model of funding becomes at least worth examining.  Why not create individual accounts for every child born, and require parents to contribute a couple of hundred dollars a year through payroll deductions?  If income is below a certain threshold, government could make the contribution on parents’ behalf (as indeed it does for low-income parents through the currently non-mandatory Canada Learning Bond program).  That way, every family would know it had a lump-sum amount available once a child reaches age 18, without having to tap government coffers to support the middle class, who, on the whole, are able to pay for university/college if the payment process is stretched out over enough time.
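For a rough sense of the sums involved, here is a minimal sketch of how such an account might accumulate.  The $200 annual contribution echoes the figure above; the 4% real return is an invented assumption, not something from this post.

```python
# Minimal sketch: enforced education savings from birth to age 18.
# Assumes $200/year contributions and a hypothetical 4% real annual return.

balance = 0.0
for year in range(18):
    balance = (balance + 200) * 1.04  # contribute, then grow for one year

print(f"${balance:,.0f}")  # ≈ $5,334 available when the child turns 18
```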

I’m not sure I personally buy this argument: people tend to like the idea of saving, but are less keen on making it mandatory.  But I do think it’s a better idea than using tax revenue to support the middle class.  That money should be reserved for helping the less advantaged.

April 02

More Inter-Provincial Finance Comparisons

Yesterday we compared provinces on PSE spending as a percentage of GDP – that is, as a percentage of their ability to pay.  More or less, what we found was that most provinces were pretty similar, at 2.5% of GDP, with Saskatchewan a bit lower, Alberta a lot lower, and Nova Scotia and PEI much higher.  But provinces have different economic capabilities and different student participation rates.  So how do all these different expenditure patterns play out where it counts, in dollars per student?

Before I get into the actual numbers, some quick explanatory notes: all income and enrolment figures are for 2011-12, and expressed in 2012 dollars.  The income figures represent all income, not just operating income, meaning that any big capital projects in that year will skew things a bit.  The “government” figures include both federal and provincial spending.  To keep things relatively consistent, I express student numbers in Statscan FTEs (3.5 PT = 1 FT) rather than headcounts; BC and Alberta, which have way more part-time students than anyone else, would look a bit worse if we did this using headcounts only.  Finally, although I am expressing everything in terms of institutional income, since income and expenditure are pretty much identical, you can assume that everything I say here about per-student income is basically true for per-student expenditure as well.
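For clarity, here is the per-student calculation in code form – a minimal sketch using the 3.5 PT = 1 FT conversion noted above.  The enrolment and income figures are invented placeholders, not data from the figures below.

```python
# Sketch of the per-FTE income calculation described above.

def fte(full_time: float, part_time: float) -> float:
    """Statscan-style full-time equivalents: 3.5 part-timers = 1 full-timer."""
    return full_time + part_time / 3.5

def income_per_fte(total_income: float, full_time: float, part_time: float) -> float:
    """All institutional income (operating, capital, research) per FTE."""
    return total_income / fte(full_time, part_time)

# Hypothetical province: $3.2B in institutional income, 150,000 FT and 70,000 PT.
print(f"${income_per_fte(3.2e9, 150_000, 70_000):,.0f}")  # $18,824 per FTE
```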

Got that?  OK, off we go.

Figure 1 shows what income per head looks like in the college sector.  Nationally, colleges receive $16,585 per FTE student per year, just under two-thirds of which comes from government.  Most provinces have averages above this, but Ontario and Quebec (which between them have almost 75% of the country’s college students) spend less.  The surprise here is Alberta: despite receiving slightly less than the national average as a percentage of GDP, on a per-student basis its colleges and institutes receive a fairly outlandish $32,000 per FTE, of which about $19,000 comes from government.

Figure 1: College Income per FTE Student by Source and Province, 2011-12


Now, let’s turn to universities.  On average, Canadian universities have income of $31,000 per FTE, which is ahead of pretty much any university system in the world, with the exception of US privates.  Most provinces are pretty close to that level: only Manitoba is substantially below $30,000 per student, and only Saskatchewan and Newfoundland are substantially above it.  In most provinces, universities get about 60% of their total income from government; the exceptions are Ontario and Nova Scotia (where it is about 45%), Newfoundland (73%), and Quebec (66%).

Figure 2: University Income per FTE Student by Source and Province, 2011-12


If this doesn’t look like what you’re used to seeing (e.g. Quebec universities don’t seem underfunded compared to Ontario universities), it’s partly because we’re not strictly looking at operating funding here: research funding from the federal government is also included, and that changes the picture a little.

Another truth here: the highest levels of funding are in Newfoundland and Saskatchewan.  While Memorial, Saskatchewan and Regina are all decent universities, none of them tend to make anyone’s top ten list of Canadian institutions.  Reasonable people might therefore question the strength of the link between per-student funding and quality.

Combine figures 1 and 2 and you get Figure 3, which shows average income across all post-secondary institutions per FTE.

Figure 3: Institutional Income per FTE Student by Source and Province, All Post-Secondary Institutions, 2011-12


Now, Quebec goes to the bottom of the pile, but that’s mostly because of weak funding in the college sector (which is fair enough, because much of it is effectively the final year of high school).  The biggest winners are – surprise, surprise – the three oil provinces of Newfoundland, Saskatchewan, and Alberta, all of which see institutional income of $35,000 per FTE student or thereabouts, roughly 35% higher than the national average of $25,800 per student.

It’s a very different picture than the one we saw yesterday when looking at expenditures as a percentage of GDP.  Essentially, rich provinces don’t need to spend as much of their income on higher education to have good post-secondary education.  As noted yesterday, Nova Scotia universities receive 2.5 times as much, in % of GDP terms, as do Alberta universities; yet, due to differences in provincial GDP and enrolments, Alberta institutions actually receive more dollars per FTE.

So which is the better measure, dollars per student or % of GDP?  It kind of depends on one’s perspective.  Institutions, naturally, care about the dollars: if someone else has more, they want parity so they can compete.  But governments and citizens probably care more about % of GDP, which is a measure of society’s ability to pay for things.  Every point of GDP devoted to PSE is a point that can’t be used to pay for something else, be it roads, hospitals, or personal consumption.

In other words, you can make a decent case for pretty much any province to be among the country’s best or worst, depending on whether you use a GDP framework or a per-FTE framework.  This is intensely annoying to people who crave certainty and exactitude, but that’s the way it is.

April 01

Some Inter-Provincial Finance Comparisons

Last week, I blogged about how OECD figures showed Canada had the highest level of PSE spending in the world, at 2.8% of GDP.  Many of you wrote to me asking: i) whether the picture was the same on other measures, like per-capita spending or spending per student; and, ii) whether I could break things down by province, instead of nationally.  I am ever your servant, so I tried working on this.

I quickly came up against a problem: I could in no way replicate the OECD numbers.  Using numbers from FIUC (for universities) and FINCOL (for colleges), the biggest expenditure number I could come up with for the 2011-12 year was $41.75 billion in institutional income.  Dividing this by the 2011 GDP figure of $1.72 trillion used in Education at a Glance (itself inexplicably about 3% smaller than the $1.77 trillion figure Statscan reports for 2011) gives me 2.43%, rather than the 2.8% Statscan reported to OECD.  There is presumably an explanation for this (my best guess is that it has something to do with student assistance), and I have emailed some folks over there to see what’s going on.  But in the meantime, we can still have some fun with inter-provincial comparisons.
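The failed replication is easy to check yourself; here is the arithmetic, using only the figures quoted above:

```python
# Reproducing the back-of-envelope calculation in the paragraph above.
institutional_income = 41.75e9  # 2011-12 institutional income (FIUC + FINCOL)
gdp_oecd = 1.72e12              # GDP figure used in Education at a Glance
gdp_statscan = 1.77e12          # Statscan's own 2011 GDP figure

print(f"{institutional_income / gdp_oecd:.2%}")      # 2.43%, not the reported 2.8%
print(f"{institutional_income / gdp_statscan:.2%}")  # 2.36%, further off still
```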

Let’s start with what provinces spend on universities:

Figure 1: University Income by Province and Source as a Percentage of GDP


In most provinces, total university expenditure is right around two percent of GDP.  Only in two provinces (Saskatchewan, Alberta) is it significantly below this, and only in two (Nova Scotia, Prince Edward Island) is it significantly above.  In terms of public expenditure, the average across the country is about one percent of GDP.  Nova Scotia, where total university income comes to 3.2% of GDP, is likely by some distance the highest-spending jurisdiction in the entire world.

Now, some of you are no doubt wondering: how the heck can Nova Scotia universities spend two and a half times what Alberta universities spend (in GDP terms) when the latter are so bright and shiny and the former are increasingly looking a little battered?  Well, I’ll get more into this tomorrow, but the quick answer is: Alberta’s GDP is eight times the size of Nova Scotia’s, but it has only about three times as many students.
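A quick sketch, using only the rough ratios quoted above, shows why the two measures flip:

```python
# Why Nova Scotia can outspend Alberta in %-of-GDP terms while Alberta
# still comes out ahead per student. Ratios only; no real dollar figures.

ns_gdp_share_multiple = 2.5  # NS university spending vs. Alberta's, in % of GDP
ab_gdp_multiple = 8.0        # Alberta's GDP vs. Nova Scotia's
ab_student_multiple = 3.0    # Alberta's enrolment vs. Nova Scotia's

# Per-student dollars = (% of GDP) x GDP / students, so the Alberta:NS ratio is:
ab_vs_ns = (1 / ns_gdp_share_multiple) * ab_gdp_multiple / ab_student_multiple
print(f"{ab_vs_ns:.2f}")  # ≈ 1.07: Alberta slightly ahead in dollars per student
```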

Of course, universities aren’t the whole story.  Let’s look at colleges:

Figure 2: College Income by Province and Source as a Percentage of GDP


This is a wee bit more interesting.  Most provinces are bunched closely around the 0.5% of GDP mark, except for Quebec and Prince Edward Island.  If we were using international standards here, where college is usually interpreted as being ISCED level 5 (or level 5B before the 2011 revision), Quebec’s figures would be much lower because CEGEP programs leading to university are considered level 4 (that is, post-secondary, but not actually tertiary), and hence would be excluded.

But PEI is the real stunner here: apparently Holland College accounts for nearly 1.2% of GDP.  This sounds ludicrous to me and I have no explanation for it, but having looked up Holland College’s financials it seems to check out.

Here’s the combined picture:

Figure 3: Total PSE Income by Province and Source as a Percentage of GDP


So, what we see here is that most provinces again cluster around spending 2.5% of GDP, which would put them roughly on par with the world’s second-biggest spender, Korea (but slightly behind the United States).  Saskatchewan, at 2% of GDP, would still rank very highly, while Alberta, at 1.73%, would be only a bit above the OECD average.

The crazy stuff is at the other end: PEI and Nova Scotia, where higher education spending exceeds 3.75% of GDP.  And yeah, their GDPs are lower than most of the rest of the country’s (GDP per capita in those two provinces, at $39,800 and $41,500, respectively, is less than half what it is in Alberta), but there are lots of OECD countries with per-capita GDPs of roughly that level (e.g. Spain) that spend about a third as much on higher education.

Tomorrow, we’ll look a bit more at per-student spending.

March 30

Investing in Students

One thing I’ve seen a lot of recently, particularly from the left, is exhortations to “invest in education”, “invest in people”, and “invest in students”.   However, as economist Stephen Gordon noted on Twitter this weekend, the actual meaning of the verb “to invest” is “to acquire a productive asset”.  So, in a literal sense, it would appear that a lot of people on the left are interested in a government-led return to slavery.

Of course, this isn’t what the left means when it says “invest”.  In fact, calls for “investment” are a kind of rhetorical sleight of hand, combining one perfectly sensible idea with a much more dubious one.  The sensible bit is “public spending on higher education has significant positive returns”; the less sensible bit is “if we spend more, we will continue to get similarly high returns”.

The problem here – one the investment crowd isn’t always keen to acknowledge – is that when real investors make investments, they actually measure returns.  And when they do, they measure returns relative to the original amount invested.  If returns do not increase in line with investments, then this is what we call a bad investment.

To understand what I mean, let’s think about the Klein cuts in Alberta in the early 90s, or the Harris cuts in Ontario in the mid-90s, or the Bouchard cuts in Quebec in the mid-90s.  In all three cases, universities saw double-digit percentage decreases in operating grants.  Did student intake or graduation rates fall?  Was the quality of these graduates materially worse than those of any other era?  No?  Then what we have here is a case of a rise in returns to investment; governments spent less and got the same return.

The argument that a rise in spending will return a better investment is actually a tough one to make.  Will we get more graduates?  Will we get more thoughtful or productive graduates?  Will we get more research?  These are all things you have to measure.  By and large in Canada, our investments of the 2000s bought us more graduates and more research.  On other aspects – who knows?

(At this point in any of my talks, someone always asks something to the effect of: “but what about the other aspects of higher education, like citizenship, or critical thought?”  To which my answer is: if that’s what you think we’re buying with public expenditure: fine.  The issue is: on what basis do you think graduates will have more of those qualities if spending goes up 5%, or 10%, or whatever?)

I suspect some of the “investment” crowd wouldn’t mind actually measuring its investments; but, I also suspect there’s a larger portion of this group that could not care less about return on investments.  For these people, the word “invest” is simply a crude disguise for the word “spend”, and by “spend” they mostly mean transferring spending from the private sector to the public sector, hence raising private returns and lowering public ones (and this is from the left, for God’s sake).

None of this is to argue against public spending on education, of course.  Nor is it to say there aren’t reasons why higher education spending should be increased.  But be careful of the language of investment: it doesn’t always lead where you think it will.

March 24

Banning the Term “Underfunding”

Somehow I missed this when the OECD’s Education at a Glance 2014 came out, but apparently Canada’s post-secondary system is now officially the best funded in the entire world.

I know, I know.  It’s a hard idea to accept when Presidents of every student union, faculty association, university, and college have been blaming “underfunding” for virtually every ill in post-secondary education since before Air Farce jokes started taking the bus to get to the punchline.  But the fact is, we’re tops.  Numero uno.  Take a look:

Figure 1: Percentage of GDP Spent on Higher Education Institutions, Select OECD Countries, 2011


For what I believe is the first time ever, Canada is outstripping both the US (2.7%) and Korea (2.6%).  At 2.8% of GDP, spending on higher education is nearly twice what it is in the European Union.

Ah, you say, that’s probably because so much of our funding comes from private sources.  After all, don’t we always hear that tuition is at, or approaching, 50% of total funding in universities?  Well, no.  That stat only applies to operating expenditures (not total expenditures), and is only valid in Nova Scotia and Ontario.  Here’s what happens if we look only at public spending in all those countries:

Figure 2: Percentage of GDP Spent on Higher Education Institutions from Public Sources, Select OECD Countries, 2011


While it’s true that Canada does have a high proportion of funds coming from private sources, public sector support to higher education still amounts to 1.6% of GDP, which is substantially above the OECD average.  In fact, our public expenditure on higher education is the same as in Norway and Sweden; among all OECD countries, only Finland and Denmark (not included in graph) are higher.

And this doesn’t even consider the fact that Statscan and CMEC don’t include expenditures like Canada Education Savings Grants and tax credits, which together are worth another 0.2% of GDP, because OECD doesn’t really have a reporting category for oddball expenditures like that.  The omission doesn’t change our total expenditure, but it does affect the public/private balance.  Instead of being 1.6% of GDP public, and 1.2% of GDP private, it’s probably more like 1.8% or 1.9% public, which again would put us at the absolute top of the world ranking.
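In code, the reattribution is one line of arithmetic (the 0.2% figure is from the paragraph above; treating all of it as public is my simplifying assumption):

```python
# Shifting CESGs and tax credits (paid through households, so counted as
# "private") back onto the public side of the ledger. The total stays at 2.8%.
public, private = 1.6, 1.2  # % of GDP, as reported to OECD
shift = 0.2                 # CESGs + tax credits

print(f"public ≈ {public + shift:.1f}%, private ≈ {private - shift:.1f}%")  # 1.8% / 1.0%
```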

So it’s worth asking: when people say we are “underfunded”, what do they mean?  Underfunded compared to who?  Underfunded for what?  If we have more money than anyone else, and we still feel there isn’t enough to go around, maybe we should be looking a lot more closely at *how* we spend the money rather than at *how much* we spend.

Meantime, I think there should be a public shaming campaign against use of the term “underfunding” in Canada.  It’s embarrassing, once you know the facts.

March 04

Stop Saying Higher Education is a Public Good

Few things drive me crazier than when people claim higher education is a public good, and then claim that, on that basis, it either: a) deserves more public funding, or b) needs to be funded exclusively on a public basis.  This argument rests on a fundamental misunderstanding of what the term “public good” actually means.

When most people hear the phrase “public good”, they’re probably thinking something like: “it’s good, it’s publicly funded; therefore, it’s a public good”.  But that rationale is tautological.  In fact, claims for public funding on the basis of a good being “public” rest on a much narrower definition.  Here, I’d urge everyone to read Frances Woolley’s excellent summary of this issue entitled, “Why public goods are a pedagogical bad”.  To qualify as a public good, a good has to be both non-rival (one person using it does not diminish others’ ability to use it) and non-excludable (once provided, it is difficult to prevent people from using it – e.g. lighthouses).  The number of goods to which that might actually apply is very, very small, and higher education certainly isn’t one of them.  Classroom space is very definitely rival, and it is trivially easy to exclude people from education – no money, no degree.  Higher education is thus a private good.  One with many public benefits, for sure, but private nonetheless.

Why does it matter if people call it a public good?  Because in all your basic economics textbooks, public goods are the goods that all (or nearly all) agree should be publicly funded.  When people say something is a public good, they’re actually launching an (erroneous) appeal to economic authority as a basis for public funding.

Now, just because something isn’t a public good doesn’t mean there’s no case for a subsidy: it just means there’s no automatic case for it.  Health care, welfare, and employment insurance are not public goods, but there’s still a very good case to be made for all of them in terms of a public insurance function – that is, it’s cheaper to collectively insure against ill health, job loss, and poverty than it is to make people do it themselves.

Sometimes there’s a case for government subvention due to obvious market failure – most student loans come under this category (markets have a hard time funding human capital), as does public funding of research (some types of research won’t be undertaken by the private sector because of the size of the externalities).

So it’s fine to say there is a public purpose to higher education.  And it’s fine to say higher education has many public benefits.  But saying higher education is a public good, and therefore deserves full public financing, is simply wrong.  If we’re going to have sensible conversations about higher education financing, the least we can do is get the terminology right.

February 20

Performance-Based Funding (Part 4)

I’ve been talking about performance-based funding all week; today, I’ll try to summarize what I think the research and experience actually says.

Let’s return for a second to a point I made Tuesday.  When determining whether PBF “works”, what matters is to be able to show that incentivizing particular outcomes actually changes institutional behaviour, and leads to improvements in outcomes. However, no study to date has actually bothered to link quantifiable changes in funding with any policy outcomes.  Hillman and Tandberg – who found little-to-no positive effects – came closest to doing this, but they looked only at the incidence of PBF, and not the size of PBF; as such, their results can easily be read to suggest that the problem with PBF is that it needs to be bigger in order to work properly.  And indeed, that’s very likely: in over half of US states with PBFs, the proportion of operating income held for PBF purposes is 2.5%; in practice, the size of the re-distribution of funds from PBFs (that is, the difference between how that 2.5% is distributed now versus how it was distributed before PBFs were introduced) is probably a couple of orders of magnitude smaller still.
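To see how a headline PBF share can overstate what actually moves, here is a toy calculation: three invented institutions, a 2.5% PBF pool, and performance shares that barely differ from historical funding shares:

```python
# How much money does a PBF actually redistribute? Invented numbers throughout.

operating = {"A": 500e6, "B": 300e6, "C": 200e6}  # historical operating grants
total = sum(operating.values())                   # a $1B system
pool = 0.025 * total                              # 2.5% held back for PBF

old_alloc = {u: pool * v / total for u, v in operating.items()}  # status quo shares
perf_shares = {"A": 0.51, "B": 0.29, "C": 0.20}                  # nearly identical
new_alloc = {u: pool * s for u, s in perf_shares.items()}

moved = sum(abs(new_alloc[u] - old_alloc[u]) for u in operating) / 2
print(f"pool: ${pool/1e6:.1f}M; actually redistributed: ${moved/1e6:.2f}M")
print(f"redistribution as a share of all operating income: {moved/total:.3%}")  # 0.025%
```

The nominal pool is 2.5% of operating income, but the money that actually changes hands is two orders of magnitude smaller – exactly the gap described above.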

I would argue that there’s a pretty simple reason why most PBFs in North America don’t actually change the distribution of funds: big and politically powerful universities tend to oppose changes that might “damage” them.  Therefore, to the extent that any funding formula results in something too far from the status quo (which tends to reward big universities for their size), they will oppose it.  The more money that suddenly becomes at risk, the more the big universities scream.  Therefore, the political logic of PBFs is that to have a chance of implementation they have to be relatively small, and not disturb the status quo too much.

Ah, you say: but what about Europe?  Surely the large size of PBF incentives must have caused outrage when they were introduced, right?  That’s a good question, and I don’t really have an answer.  It’s possible that, despite their size, European PBF schemes did not actually change much more in terms of distribution than did their American counterparts.  I can come up with a few country-specific hypotheses about why that might be: the Danish taximeter system was introduced at a time when universities were still considered part of government (and academics part of the civil service), the Polish system was introduced at a time of increasing government funding, etc.  But those are just guesses.  In any case, such literature as I can find on the subject certainly doesn’t mention much in terms of opposition.

So, I think we’re kind of back to square one.  I think the Hillman/Tandberg evidence tells us that simply having a PBF doesn’t mean much, and I think the European evidence suggests that, at a sizeable enough scale, PBFs can incentivize greater institutional efficiency.  But beyond that, I don’t think we’ve got much solid to go on.

For what it’s worth, I’d add one more thing based on work I did last year looking at the effect of private income on universities in nine countries: and that is, only incentivize things that don’t already carry prestige incentives.  Canadian universities are already biased towards activities like research; incentivizing them further through performance funding is like giving lighter fluid to a pyromaniac.

No, what you want to incentivize is the deeply unsexy stuff that’s hard to do.  Pay for Aboriginal completions in STEM subjects.  Pay for female engineering graduates.  Pay big money to the institution that shows the greatest improvement in the National Survey of Student Engagement (NSSE) every two years.  Offer a $20 million prize to the institution that comes up with the best plan for measuring – and then improving – learning, payable in installments to make sure they actually follow through (OK, that’s competitive funding rather than performance-based funding, but you get the idea).

Neither the pro- nor anti-camp can point to very much genuinely empirical evidence about efficacy; in the end it all comes down to whether one thinks institutions will respond to incentives.  I think it’s pretty likely that they do; the trick is selecting the right targets, and structuring the incentives in an intelligent way.  And that’s probably as much art as science.

February 19

Performance-Based Funding (Part 3)

As I noted yesterday, the American debate on PBF has more or less ignored evidence from beyond its shores; and yet, in Europe, there are several places that have very high levels of performance-based funding.  Denmark has had what it calls a “taximeter” system, which pays institutions on the basis of student progression and completion, for over 20 years now, and it currently makes up about 30% of all university income.  Most German Länder have some element of incentive-based funding on either student completion or time-to-completion; in some cases, they are also paid on the basis of the number of international students they attract (international students pay no tuition in Germany).  In the Netherlands, graduation-based funding makes up over 60% of institution operating grants (or, near as I can tell, about 30% of total institutional income).  The Czech Republic now gives out 20% of funding to institutions on a quite bewildering array of indicators, including internationalization, research, and student employment outcomes.

Given this, you’d think there might be a huge and copious literature about whether the introduction of these measures actually “worked” in terms of changing outcomes of the indicators in question.  But you’d be wrong.  There’s actually almost nothing.  That’s not to say these programs haven’t been evaluated.  The Danish taximeter system appears to have been evaluated four times (haven’t actually read these – Danish is fairly difficult), but the issue of dropouts doesn’t actually seem to have been at the core of any of them (for the record, Danish universities have relatively low levels of dropouts compared to other European countries, but it’s not clear if this was always the case or if it was the result of the taximeter policy).  Rather, what gets evaluated is the quite different question of: “are universities operating more efficiently?”

This is key to understanding performance indicators in Europe. In many European countries, public funding makes up as close to 100% of institutional income as makes no odds.  PBF has therefore often been a way of trying to introduce a quasi-market among institutions so as to induce competition and efficiency (and on this score, it usually gets fairly high marks).  In North America, where pressures for efficiency are exerted through a competitive market for students, the need for this is – in theory at least – somewhat less.  This largely explains the difference in the size of performance-based funding allocations; in Europe, these funds are often the only quasi-competitive mechanism in the system, and so (it is felt) they need to be on the scale of what tuition is in North America in order to achieve similar competitive effects.

Intriguingly, performance-based funding in Europe is at least as common with respect to research as it is to student-based indicators (a good country-by-country summary from the OECD is here).  Quite often, a portion of institutional operating funding will be based on the value of competitive research won, a situation made possible by the fact that many countries in Europe separate their institutional grants into funding for teaching and funding for research in a way that would give North American universities the screaming heebie-jeebies.  Basically: imagine if the provinces awarded a portion of their university grants on the same basis that Ottawa hands out the indirect research grants, only with less of the questionable favouritism towards smaller universities.  Again, this is less about “improving overall results” than it is about keeping institutions in a competitive mindset.

So, how to interpret the evidence of the past three days?  Tune in tomorrow.

February 18

Performance-Based Funding (Part 2)

So, as we noted yesterday, there are two schools of thought in the US about performance-based funding (where, it should be noted, about 30 states have some kind of PBF criteria built into their overall funding system, or are planning to do so).  Basically, one side says they work, and the other says they don’t.

Let’s start with the “don’t” camp, led by Nicholas Hillman and David Tandberg, whose key paper can be found here.  To determine whether PBFs affect institutional outcomes, they look mostly at a single output – degree completion.  This makes a certain amount of sense since it’s the one most states try to incentivize, and they use a nice little quasi-experimental research design showing changes in completion rates in states with PBF and those without.  Their findings, briefly, are: 1) no systematic benefits to PBF – in some places, results were better than in non-PBF systems, in other places they were worse; and, 2) where PBF is correlated with positive results, said results can take several years to kick in.

Given the methodology, there’s no real arguing with the findings here.  Where Hillman and Tandberg can be knocked, however, is that their methodology treats all PBF schemes as identical – as the same “treatment”.  But as we noted yesterday, the existence of PBF is only one dimension of the issue.  The extent of PBF funding, and the extent to which it drives overall funding, must matter as well.  On this, Hillman and Tandberg are silent.

The HCM paper does in fact give this issue some space.  It turns out that of the 26 states examined, 18 have PBF systems that account for less than 5% of overall public funding.  Throw in tuition and other revenues, and the share of total institutional revenue accounted for by PBF drops by 50% or more, which suggests there are a lot of PBF states where it would simply be unrealistic to expect much in the way of effects.  Of the remainder, three are under 10%, and then there are five huge outliers: Mississippi at just under 55%, Ohio at just under 70%, Tennessee at 85%, Nevada at 96%, and North Dakota at 100% (note: Nevada essentially has one public university and North Dakota has two, so clearly whatever PBF arrangements exist there likely aren’t changing the distribution of funds very much).  The authors then point to a number of advances made in some of these states on a variety of metrics, such as “learning gains” (unclear what that means), greater persistence for at-risk students, shorter times-to-completion, and so forth.

But while the HCM report has a good summary of sensible design principles for performance-based funding, there is little that is scientific about it when it comes to linking policy to outcomes. There’s nothing like Hillman and Tandberg’s experimental design at work here; instead, what you have is an unscientific group of anecdotes about positive things that have occurred in places with PBF.  So as far as advancing the debate about what works in performance-based funding, it’s not up to much.

So what should we believe here?  The Hillman/Tandberg result is solid enough – but if most American PBF systems don’t change funding patterns much, then it shouldn’t be a surprise to anyone that institutional outcomes don’t change much either.  What we need is a much narrower focus on systems where a lot of institutional money is in fact at risk, to see if increasing incentives actually does matter.

Such places do exist – but oddly enough neither of these reports actually looks at them.  That’s because they’re not in the United States, they’re in Europe.  More on that tomorrow.

February 17

Performance-Based Funding (Part 1)

I was reading the Ontario Confederation of University Faculty Associations (OCUFA)’s position statement on a new funding formula for the province.  Two things caught my eye.  One, they want money to make sure Ontario universities can do world-class research and teaching; and two, they are strictly opposed to any kind of performance-based funding formula (PBF).  Put differently: OCUFA wants great teaching and research to be funded, but is adamantly opposed to rewarding anyone for actually doing it.

Except that’s slightly uncharitable.  OCUFA’s larger point seems to be that performance-based funding formulae (also known as output-based funding) “don’t actually achieve their goals”, pointing to work done by University of Wisconsin professor Nicholas Hillman and Florida State’s David Tandberg on the topic.  From a government-spending efficacy point of view, this objection is fair enough, but it’s a bit peculiar from an institutional or faculty standpoint; the Hillman/Tandberg evidence doesn’t indicate that institutions were actually harmed in any way by the introduction of said arrangements, so what’s the problem?

Anyways, last week HCM Associates in Washington put out a paper taking a contrary view to Hillman/Tandberg, so we now have some live controversy to talk about.  Tomorrow, I’ll examine the Hillman/Tandberg and HCM evidence to evaluate the claims of each, but today I want to go through what output-based funding mechanisms can actually look like, and in the process show how difficult it is for meta-analyses – such as Hillman’s and HCM’s – to calculate potential impact.

At one level, PBF is simple: you pay for what comes out of universities rather than what goes in.  So: don’t pay for bums in seats, pay for graduates; don’t pay based on research grants earned, pay based on articles published in top journals, etc.  But the way these get paid out can vary widely, so their impacts are not all the same.

Take graduation numbers, the simplest and most common indicator used in PBFs.  A government could literally pay a certain amount per graduate – or per “weighted graduate”, to take account of different costs by field of study.  It could pay each institution based on its share of total graduates or weighted graduates.  It could give each institution a target number of graduates (based on size and current degree of selectivity, perhaps) and pay out 100% of a value if it hits the target, and 0% if it does not.  Or, it could set a target and then pay a pro-rated amount based on how well the institution did vis-à-vis the target.  And so on, and so forth.
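To make the distributional differences concrete, here is a sketch of four of those payout rules.  The institutions, targets, $100M pool, and $8,000 rate are all invented for illustration:

```python
# Four ways of paying out PBF money on the same indicator (weighted graduates).

grads = {"U1": 9_000, "U2": 3_000, "U3": 1_000}    # weighted graduates produced
targets = {"U1": 9_500, "U2": 2_800, "U3": 1_200}  # negotiated targets
pool = 100e6                                       # fixed PBF envelope
rate = 8_000                                       # dollars per weighted graduate

# 1. Flat rate per weighted graduate (no fixed envelope).
per_grad = {u: g * rate for u, g in grads.items()}

# 2. Share of a fixed pool, by share of total graduates.
total = sum(grads.values())
by_share = {u: pool * g / total for u, g in grads.items()}

# 3. All-or-nothing: an equal slice of the pool, paid only if the target is hit.
all_or_nothing = {u: pool / 3 if g >= targets[u] else 0.0 for u, g in grads.items()}

# 4. Pro-rated against target, capped at 100% of the slice.
pro_rated = {u: (pool / 3) * min(g / targets[u], 1.0) for u, g in grads.items()}

for rule, alloc in [("per-grad", per_grad), ("share", by_share),
                    ("threshold", all_or_nothing), ("pro-rated", pro_rated)]:
    print(rule, {u: f"${v/1e6:.1f}M" for u, v in alloc.items()})
```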

Each of these methods of paying out PBF money plainly has different distributional consequences.  However, if you’re trying to work out whether output-based funding actually affects institutional outcomes, then the distributional consequence is only of secondary importance.  What matters more is how different the distributional outcomes are from whatever distribution existed in the previous funding formula.

So, say the province of Saskatchewan moves from its current mix of historical grant and formula grant to a fully PBF system, where 100% of funding is based on the number of (field-weighted) graduates produced.  Currently, the University of Saskatchewan gets around three times as much in total operating grants as the University of Regina.  If USask also produced three times as many (field-weighted) graduates as URegina, even a shift to a 100% PBF model wouldn’t change anything in terms of distribution, and hence would have limited consequences in terms of policy and (presumably) outputs.

In effect, the real question is: how much funding, which was formerly “locked-in”, becomes “at-risk” during the shift to PBF?  If the answer is zero, then it’s not much of a surprise that institutional behaviour doesn’t change either.
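A toy version of the Saskatchewan example makes the point: if graduate shares mirror the old grant shares, a nominally “100% PBF” formula puts nothing at risk.  The dollar and graduate figures below are illustrative, not real; only the rough 3:1 ratio comes from the text.

```python
# "At-risk" funding under a shift to 100% graduate-based PBF.

old_grants = {"USask": 300e6, "URegina": 100e6}  # ~3:1, as described above
graduates  = {"USask": 7_500, "URegina": 2_500}  # also 3:1 (field-weighted)

pool = sum(old_grants.values())
total_grads = sum(graduates.values())
new_grants = {u: pool * g / total_grads for u, g in graduates.items()}

# Money that actually changes hands under the new formula:
at_risk = sum(abs(new_grants[u] - old_grants[u]) for u in old_grants) / 2
print(f"${at_risk:,.0f}")  # $0 -- 100% "performance-based", zero redistribution
```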

Tomorrow: a look at the duelling American research papers on PBF.
