The last time we talked about funding formulas, we discussed the difference between determinative and allocative formulas. When we talk about Ontario, which is currently undergoing a funding formula review, we’re definitely talking about the latter. The formula isn’t going to drive total spending (that remains the legislature’s prerogative); what it is going to do is decide how the total amount will be split up.
The question is: how best to do this?
At this point, it’s worth going into some history about funding formulas. Back in the day – say, the 1960s – universities would come cap-in-hand to government asking for money for various sundry purposes (usually, there were a couple of new “wow” proposals in there to justify a big increase), and government, in turn, would cut cheques to individual institutions for any old amount. Eventually, governments got tired of that shtick and decided to come up with a way to allocate funds automatically – but fairly – to avoid going through that rigmarole every year.
Over time, however, global thinking about funding formulas changed – due mainly to work done at the OECD. It’s now no longer just about divvying up money, it’s about using money to create a set of incentives to steer the system. Now, admittedly, when the OECD talks about using money to steer a system, it does so because it thinks it’s better for governments to set goals for institutions, and then get out of the way. In other words, governments “should steer, not row”.
(An interesting question in Ontario, of course, is how formula spending power can be made to steer the system, when the government of the day has a predilection not just to row, but to flail around like a five-year-old on a boogie board. Should be interesting.)
Anyhow, the idea is that you can get universities to do stuff by rewarding them via the funding formula. The question then, from a practical point of view, is: how big a carrot do you need to get an institution to do something it may not want to do (e.g. pay more attention to teaching, get research institutions to reach out more to poorer kids, etc.)? The answer here is: “nobody knows”. And this is a bit of a problem, especially if you’re trying to incentivize something. Thanks to the work of Nicholas Hillman and David Tandberg, we can be pretty sure that small nudges – say, ones worth 2-3% of the budget – aren’t going to work. If you’re going to try something like this, you need to go big. As in, “at least 10% of an institutional budget” big.
Now, here’s the thing: in Ontario, the government only accounts for about 40% of university funding, with the rest coming from tuition or commercial activities. So something that puts 10% of the institutional budget at risk actually has to put 25% of government funding at risk. And logically speaking, this means you probably can only pick one, or at most two goals for your funding formula to target. So what should the government pick: completion rates? Research commercialization?
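The arithmetic behind that 25% figure is simple enough to make explicit: if government supplies only a fraction of the total budget, the share of the *grant* that must be at risk is the target share of the *budget* divided by the government’s share. A quick sketch, using the post’s figures (40% government share, 10% of budget at risk) – the function name and numbers are illustrative, not from any official source:

```python
# Illustrative sketch: how much of the government grant must be
# performance-based to put a given share of the TOTAL institutional
# budget at risk. Figures are the post's rough estimates for Ontario.

def at_risk_share_of_grant(target_share_of_budget: float,
                           gov_share_of_budget: float) -> float:
    """Fraction of the government grant that must be at risk so that
    `target_share_of_budget` of the total budget is at risk."""
    return target_share_of_budget / gov_share_of_budget

# Ontario example: 10% of the budget at risk, government pays ~40%
print(at_risk_share_of_grant(0.10, 0.40))  # → 0.25, i.e. 25% of the grant
```

The same calculation shows why the problem bites harder as the government share shrinks: at a 30% share, hitting the 10%-of-budget threshold would mean putting a third of the grant at risk.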
It’s hard, in fact, to see how you can steer competently in a way that makes sense for all institutions, in a jurisdiction where so little institutional funding comes from government. There is the possibility of creating individual goals for each institution based on individual missions, but now you’re getting a long way from the idea of a “formula”, something where everyone pumps the same numbers into the system, and a global result for all institutions pops out.
Basically, system steering gets a lot tougher for governments if they’ve already allowed institutions to become mostly student-funded. This is something Ontario is about to discover in a big way.
I don’t think your summary of the Hillman and Tandberg paper is accurate, Alex. They did not show that the nudge didn’t work; they only showed that no performance improvement showed up from any action the nudge might have instigated (within the timeframe of their study, which as you know has also been criticized as too short).
Other research suggests that nudges do work in drawing attention and instigating action, e.g., Dougherty, K. J., & Reddy, V. (2013). Performance Funding for Higher Education. ASHE Higher Education Report, 39(2), 1-134. The trick, of course, is to drive actions that will move the needle on the parameter being measured, and to get the timeframe for measurement right.
I also wonder about your comment on the % of government funding that would be required to get institutional attention. My gut feel from being in executive budget meetings is that the 10% number is about right to get attention; I think combining a few possible directions to earn that performance funding doesn’t require that each be at the 10% threshold (but I am not aware of any studies that would either back up that ‘informed guess’ or contradict it).
Interesting item about the 40% source of funding…how much does that “shift” by size of institution? And do you have other breakdowns? I was wondering if your tax policies (such as on alumni funding) could get additional results if, for example, donors got a slightly better tax break on charitable donations from alumni that target the funds to the same goals too.
Paul