Judging from various commentary I’ve seen/heard over the last few days, I suspect we’re all about to hear quite a lot of nonsense about Ontario’s proposed new Performance-based Funding (PBF) system. Some of this is a natural consequence of the Conservatives announcing a general policy without announcing any details, which allows people’s imaginations to run rampant (and where Conservatives and higher education are concerned, academics’ imaginations can get pretty wild). So, let’s just go through a few basics about PBF before we draw too many conclusions, shall we?
The most important thing to understand about PBF is that it can take a whole bunch of different forms, and its impact can consequently vary enormously.
Very broadly speaking, PBF takes one of three forms. The first is as a separate envelope on top of base funding. A fairly large proportion of North American PBF schemes have worked this way, as have the very small schemes tried to date in Alberta and Ontario. These systems function effectively as small bonuses on top of enrolment-based funding (e.g. “here’s $5,000 for every graduate”), and to be honest they achieve almost nothing: the envelopes are usually so small that the amount of money actually at risk is almost never large enough to sway institutional thinking.
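To see why, it helps to do the back-of-the-envelope arithmetic. Here is a minimal sketch; every figure in it is invented for illustration, not an actual Ontario number. The key point is that what matters is not the headline size of the bonus envelope but the plausible year-to-year swing in the performance measure:

```python
# Illustrative only: all figures are invented assumptions, not real data.
# The money genuinely "at risk" is the bonus times the realistic swing in
# performance, which is a tiny fraction of the institution's total funding.

base_grant = 400_000_000       # hypothetical enrolment-based operating grant ($)
bonus_per_graduate = 5_000     # the "here's $5,000 for every graduate" bonus
graduates = 6_000              # hypothetical graduating class
plausible_swing = 0.03         # assume graduate counts rarely move more than ~3%/yr

bonus_envelope = bonus_per_graduate * graduates
money_at_risk = bonus_envelope * plausible_swing
total_funding = base_grant + bonus_envelope

print(f"Headline bonus envelope:        ${bonus_envelope:,.0f}")   # $30,000,000
print(f"Realistic year-over-year swing: ${money_at_risk:,.0f}")    # $900,000
print(f"Share of total funding at risk: {money_at_risk / total_funding:.2%}")  # ~0.21%
```

On those (invented) numbers, the dollars that actually hinge on performance come to roughly a fifth of one percent of the institution’s budget, which is nowhere near enough to change behaviour.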
The second type of PBF is one where performance is incorporated as an integral part of the funding algorithm. So, for instance, a number of European countries choose to fund their institutions based on the number of graduates they have (weighted for subject costs), rather than on weighted student numbers the way Ontario does (a couple of US states do this too, most notably Tennessee, but they are outliers). Where teaching and research funding envelopes are separate (as they often are in Europe but never are in North America), the research envelopes are quite often based on publication/citation counts, on the results of competitive research grants, or what have you. Unfortunately, there is very little research on the effectiveness of these kinds of systems, mainly because they do not lend themselves to difference-in-differences approaches to analysis.
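For the mechanically minded, here is a minimal sketch of the two allocation logics side by side. The institutions, enrolment and graduate counts, and cost weights are all invented; the point is only that the same pro-rata machinery can run on weighted students (the Ontario approach) or weighted graduates (the European one):

```python
# All data below is invented for illustration. Each subject carries a cost
# weight; an envelope is split pro rata over each institution's weighted counts.

COST_WEIGHTS = {"humanities": 1.0, "science": 1.5, "medicine": 4.0}  # assumed weights

institutions = {
    "Univ A": {"enrolments": {"humanities": 4000, "science": 2000},
               "graduates":  {"humanities": 800,  "science": 350}},
    "Univ B": {"enrolments": {"humanities": 1000, "science": 1500, "medicine": 500},
               "graduates":  {"humanities": 220,  "science": 330,  "medicine": 120}},
}

def weighted_count(counts: dict) -> float:
    """Sum of headcounts weighted by subject cost."""
    return sum(n * COST_WEIGHTS[subject] for subject, n in counts.items())

def allocate(envelope: float, key: str) -> dict:
    """Split an envelope pro rata by weighted enrolments or weighted graduates."""
    totals = {name: weighted_count(inst[key]) for name, inst in institutions.items()}
    grand_total = sum(totals.values())
    return {name: round(envelope * t / grand_total) for name, t in totals.items()}

print(allocate(100_000_000, "enrolments"))  # Ontario-style: weighted student numbers
print(allocate(100_000_000, "graduates"))   # European-style: weighted graduate numbers
```

Note that in a formula like this, one institution’s gain is mechanically another’s loss, which matters for the “pits institutions against one another” argument discussed below.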
(Note that there is a bit of a grey zone between these two types of schemes: a two-envelope system where the performance envelope is quite large is almost indistinguishable from a system where performance is integrated into the base. Romania is an example of this, with about 30% of funding distributed on the basis of fifteen indicators. In some ways, these can be seen as two ends of a continuum rather than as two separate phenomena.)
A third type of PBF is what might be called “performance-based contracts”, which have been used in Austria among other places. In these, institutional funding or a portion thereof is made conditional on attaining a set of results. In Ontario, we’ve had contracts (the Strategic Mandate Agreements) for over a decade now; just imagine those with actual dollars attached to the individual goals and you have a rough idea of what a performance-based contract looks like.
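For concreteness, here is a minimal sketch of how such a contract might work mechanically. The goals, targets, and dollar figures are all invented; the structural point is that each goal carries its own dollar amount, paid only if the agreed target is met, so the institution is competing against its own targets rather than against other institutions:

```python
# A hypothetical performance-based contract (all goals, targets, and dollar
# figures invented). Each goal pays out independently if its target is met.

contract = [  # (goal, target, dollars attached)
    ("graduation_rate",       0.75, 3_000_000),
    ("graduate_employment",   0.88, 2_000_000),
    ("experiential_learning", 0.40, 1_000_000),
]

results = {"graduation_rate": 0.77,        # target met
           "graduate_employment": 0.85,    # target missed
           "experiential_learning": 0.43}  # target met

payout = sum(dollars for goal, target, dollars in contract
             if results[goal] >= target)
print(f"Contract payout: ${payout:,}")  # misses one target -> $4,000,000
```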
Even a cursory glance at these three types of PBF should tell you that the effects of “performance-based funding” are likely to vary considerably from one type to another. For instance, the Ontario Confederation of University Faculty Associations’ claim that PBF “pits institutions against one another” might be true in the second variant, depending on how the system is structured, but really isn’t true in either the first or third, where an institution is effectively only competing against itself. (Note: it could also be added that enrolment-based funding “pits institutions against one another” to a substantial degree too, but we’ll leave that aside for the moment.)
Similarly, you may hear a lot of stories over the coming days about how performance-based funding schemes either “don’t work” or “can impede access to post-secondary education”. These are both possible readings of the evidence from the United States, but applying either uncritically to what might be quite a different system in Ontario is a bit fraught. First of all, the American research tends to lump together dozens of different state-level policies to get “sample size”; the problem is that these policies often differ considerably in what they incentivize and in the size of the incentives, so the nature of the “treatment” being measured is pretty ambiguous. And while certain early (and simple) pay-per-graduate schemes do seem to have made institutions slightly more selective in their admissions, thus harming access, a turn in recent years towards schemes that pay extra specifically for graduates from minority populations seems to mitigate this problem.
But more importantly, nearly all American PBF schemes are a) of the first type and b) focussed exclusively on graduation. From the descriptions we got on budget day, it would seem that neither of these things is true in Ontario: we are talking about a multi-indicator system, not one narrowly focussed on completion, and though it’s not entirely clear from the budget papers/briefings, it sure seems like we’re probably talking about a system of performance-based contracts. In which case, the relevance of any American data to this Ontario exercise should be pretty limited.
Or you may hear things such as this statement from NDP Leader Andrea Horwath – that funding based on performance outcomes means the government wants everyone to stop teaching humanities, or something. The thing is, nothing we have learned about the plan to date suggests that this is the case. First of all, graduate outcomes are very clearly not the only things being measured; and second, we’ve been told each institution will be measured against its own past record, not against other institutions which may offer a different palette of programs. Algoma isn’t going to be asked to become like Waterloo, basically. Until we’ve seen more details on the program, I’d say any declarations on this front are hugely premature.
I don’t mean to sound Pollyanna-ish about the Ontario government’s scheme: there may turn out to be all sorts of problematic aspects to the initiative when it is eventually given form, some of which I covered back on Friday. For one thing, I am pretty sure ten indicators are too many, particularly if institutions are given some ability to weight a couple of them down to zero or near-zero. The definition and operationalization of each indicator will be tricky, and I guarantee you that at least one of them will be as dumb as a bag of hammers. And even the non-dumb ones could be problematic: it’s all very well to reward institutions for improving their graduates’ employability, but what happens to institutions in a recession, when everyone’s employment numbers tank simultaneously? Will that be viewed as “poor performance” and funds docked accordingly? Then there is the ambiguity about what happens to the dollars an institution “loses” – do they stay in the system or go back to Treasury? – which is deeply problematic, as it suggests the motive of the scheme is less continual institutional improvement than generating savings. In other words, thrashing out this policy is going to be a tough and complicated set of discussions. Let’s not make it more complicated by dealing in poorly-thought-through or weakly-evidenced critiques that aren’t based on the actual proposals at hand.