Performance-Based Funding (Part 4)

I’ve been talking about performance-based funding (PBF) all week; today, I’ll try to summarize what I think the research and experience actually tell us.

Let’s return for a second to a point I made Tuesday. When determining whether PBF “works”, what matters is being able to show that incentivizing particular outcomes actually changes institutional behaviour and leads to improvements in those outcomes. However, no study to date has actually linked quantifiable changes in funding to any policy outcomes. Hillman and Tandberg – who found little-to-no positive effects – came closest to doing this, but they looked only at the incidence of PBF, not its size; as such, their results can easily be read to suggest that the problem with PBF is that it needs to be bigger in order to work properly. And that’s very plausible: in over half of US states with PBFs, the proportion of operating income held for PBF purposes is 2.5%; in practice, the size of the redistribution of funds from PBFs (that is, the difference between how that 2.5% is distributed now versus how it was distributed before PBFs were introduced) is probably a couple of orders of magnitude smaller still.
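To make that scale concrete, here is a rough back-of-envelope sketch (in Python, with entirely hypothetical figures – not drawn from any actual state budget) of what “a couple of orders of magnitude smaller” looks like:

```python
# Back-of-envelope illustration with hypothetical numbers (not real data).
operating_grant = 1_000_000_000   # a notional $1B state operating grant
pbf_share = 0.025                 # 2.5% nominally run through the PBF formula

pbf_pool = operating_grant * pbf_share

# Assume the PBF formula actually moves only ~1% of that pool away from
# where the old enrolment-driven formula would have sent it anyway.
money_actually_moved = pbf_pool * 0.01

print(f"PBF pool:             ${pbf_pool:,.0f}")
print(f"Money actually moved: ${money_actually_moved:,.0f}")
print(f"Share of total grant: {money_actually_moved / operating_grant:.3%}")
```

On those assumptions, the money that actually changes hands works out to about 0.025% of the operating grant – small enough that it is hard to imagine it changing anyone’s behaviour.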

I would argue that there’s a pretty simple reason why most PBFs in North America don’t actually change the distribution of funds: big, politically powerful universities tend to oppose changes that might “damage” them. So to the extent that any funding formula produces a distribution too far from the status quo (which tends to reward big universities for their size), they will oppose it. The more money that is suddenly put at risk, the louder the big universities scream. The political logic of PBFs, then, is that to have any chance of implementation they have to be relatively small, and not disturb the status quo too much.

Ah, you say: but what about Europe? Surely the large size of PBF incentives must have caused outrage when they were introduced, right? That’s a good question, and I don’t really have an answer. It’s possible that, despite their size, European PBF schemes did not actually change the distribution of funds much more than their American counterparts did. I can come up with a few country-specific hypotheses about why that might be: the Danish taximeter system was introduced at a time when universities were still considered part of government (and academics part of the civil service), the Polish system was introduced at a time of increasing government funding, and so on. But those are just guesses. In any case, such literature as I can find on the subject certainly doesn’t mention much in the way of opposition.

So I think we’re more or less back to square one. The Hillman/Tandberg evidence tells us that simply having a PBF doesn’t mean much, and the European evidence suggests that, at a sizeable enough scale, PBFs can incentivize greater institutional efficiency. But beyond that, I don’t think we have much solid evidence to go on.

For what it’s worth, I’d add one more lesson based on work I did last year looking at the effect of private income on universities in nine countries: only incentivize things that don’t already carry prestige incentives. Canadian universities are already biased towards activities like research; incentivizing them further through performance funding is like giving lighter fluid to a pyromaniac.

No, what you want to incentivize is the deeply unsexy stuff that’s hard to do. Pay for Aboriginal completions in STEM subjects. Pay for female engineering graduates. Pay big money to the institution that shows the greatest improvement on the National Survey of Student Engagement (NSSE) every two years. Offer a $20 million prize to the institution that comes up with the best plan for measuring – and then improving – learning, payable in installments to make sure it actually follows through (OK, that’s competitive funding rather than performance-based funding, but you get the idea).

Neither the pro- nor the anti-PBF camp can point to much genuinely empirical evidence about efficacy; in the end, it all comes down to whether one thinks institutions will respond to incentives. I think it’s pretty likely that they do; the trick is picking the right targets and structuring the incentives intelligently. And that’s probably as much art as science.
