Yesterday, we looked at some of the math behind Ontario’s new funding system, and how a system that is allegedly “60% performance-based funding” will result in at most about $15 million, or 0.4% of funding, being re-allocated in some way. But there are still more oddities to explore.
One of my favourite foibles of this funding system is how it seems to have been constructed one element at a time, with no overall strategic intent. For instance: the main goal of the enrolment-based portion of the new funding system is to stop institutions from growing. It doesn’t actually forbid institutions from enrolling more students, but beyond a certain point, the government no longer provides funding. The logic here is that the previous system drove institutions to always try to enrol an ever-larger share of system enrolments, thus driving a kind of arms race. To the extent one believes this was a problem, the new system is an effective means of solving it.
But. But. But. Over on the performance-based side of the funding system, there are indicators which encourage higher enrolments. The first encourages higher enrolments in “areas of institutional focus” (see below). The other is the “community impact” indicator – the one I routinely describe (with reason, I think) as being dumber than a bag of hammers. This indicator is simply the institution’s enrolment divided by the size of the community. Since the denominator is not something within the institution’s control (short of unleashing deadly bioagents into the community), the only way it can improve its “performance” on this indicator is to enrol more students. So basically, Ontario now has a system where one side of the funding formula discourages more enrolments and the other side encourages them. If the government wanted to go out of its way to signal that its left hand did not know what its right hand was doing, this is exactly the kind of measure to include.
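For the mechanically minded, here is a toy version of that indicator in Python; the function name and the enrolment/population figures are mine, invented for illustration, not anything out of the PBF documentation.

```python
# Minimal sketch of the "community impact" indicator as described above:
# institutional enrolment divided by the population of the host community.
# All figures below are purely illustrative.

def community_impact(enrolment: int, community_population: int) -> float:
    """Enrolment expressed as a share of the surrounding community's population."""
    return enrolment / community_population

# The denominator is outside the institution's control, so the only lever
# that moves the score is the numerator, i.e. enrolling more students.
print(community_impact(20_000, 400_000))  # 0.05
print(community_impact(22_000, 400_000))  # 0.055 ("better performance" via growth)
```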
Maybe one of the more innovative features of the Ontario system is the idea that institutions can to some degree create their own indicators. In theory, this allows institutions to focus on something which matters to them and to their own regional/historic mission; it also allows them to game indicators. Since all these individual indicators were published last week, we now have some evidence as to how different institutions have chosen to play this game.
Let’s start with the indicator on “enrolments in areas of institutional focus”. You can sort of see the attraction of this kind of indicator: it has the potential to help institutions specialize a bit in their program offerings, and if you believe in greater institutional differentiation, that’s a good thing. Some institutions took this indicator in that spirit and proposed fairly narrow areas of specialization: OCAD chose “design and digital” as its area of focus; Waterloo picked Engineering, Math and Computer Science; etc. Under this indicator, these institutions will be rewarded if these areas grow their share of total enrolment.
One or two institutions, I think, took the “specialization” thing too literally. Nipissing, a former teacher’s college, chose “Education” as its area of focus, which makes perfect sense historically but is liable to get the institution into financial trouble the next time there is a bust in education enrolments (numbers in this sector are highly cyclical). On the flip side, some institutions seemed to be taking great amusement in devising the least specialized mandates possible. Wilfrid Laurier’s area of focus is “Arts and Sciences”. Carleton’s is “interdisciplinary programs”. Queen’s chose as its focus (this one is my favourite) “Engineering, Computer Science, Business, Arts and Sciences, including Health Sciences”, which is literally the entire university minus the law faculty. Hard to lose when you play it like that.
The other institutionally chosen indicator is “Economic Impact”. A lot of institutions went with micro-level definitions like “number of international students” (Algoma), “economic impact of students coming from outside <insert region>” (Nipissing, Trent, Windsor, Ottawa) or, in one case (Guelph), just “student spending in Guelph region”. Lakehead didn’t really try at all on this indicator and just went for induced institutional economic impact (which – as we saw back here – is just institutional expenditures times a number in the vicinity of 1.5). A few – Brock, Hearst, Ontario Tech – tried to tie it to the number of co-op or internship placements (which seems ill-advised, because those are very vulnerable to economic downturns); Waterloo tried a slightly different tack and went with total wages earned by co-op students.
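(As an aside, the Lakehead-style induced-impact number mentioned above really is just one multiplication; the spending figure in the snippet below is invented purely to show the arithmetic.)

```python
# Induced economic impact as typically calculated: institutional spending
# times a multiplier somewhere in the vicinity of 1.5. Figures are invented.
expenditures = 300_000_000                  # hypothetical annual institutional spend
induced_impact = expenditures * 1.5
print(f"${induced_impact:,.0f}")            # $450,000,000
```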
Several institutions went with “startups” as a measure of economic impact. I imagine the Minister’s office was very excited by this, because Ministers tend to equate startups with economic dynamism. But a number of institutions (Toronto, Queen’s, York) basically hoodwinked the government on this measure by defining “startups” as “startups supported by the university through incubator/accelerator facilities”. This is brilliant, since a) institutions can simply add more spots if they want and b) the startups in these incubators can usually be from the community at large and need not have any relation to campus activities or scientific/research efforts. High-quality indicator gaming, much applause. Only a couple of institutions chose indicators which are both related to economic dynamism and difficult to game, namely Western’s choice of “active revenue-generating license agreements” and McMaster’s “invention disclosures”.
A final word, perhaps, about data and COVID. A majority of the indicators currently in use are going to get screwed by the pandemic. Remember from yesterday that every target indicator is based on the previous three years’ data. We’re about to have one or two years of extraordinarily strange data that will stay in the system and distort “target rates” for four or five years to come. Graduate employment rates? They’re going to come crashing down for the class of 2020, which means, ceteris paribus, institutions will be punished by the formula in 2022 (because of low employment), but then rewarded for the years 2023 through 2025 (because the benchmark average will be artificially low). Same with any institutional-level indicators having to do with co-ops, interns, placements, etc. Research dollars have, to some extent, been mashed around this year due to the pandemic. Etc. etc.
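If it helps to see the mechanics, here is a toy calculation with invented employment rates (and simplified timing, ignoring the lag with which graduate outcomes actually enter the formula): one anomalous cohort drags the three-year benchmark down for the following three cycles.

```python
# Toy illustration of a target built from the previous three years of data.
# Employment rates are invented; 2020 is the pandemic-hit cohort.

rates = {2017: 0.90, 2018: 0.91, 2019: 0.90,
         2020: 0.75, 2021: 0.88, 2022: 0.89, 2023: 0.90}

years = sorted(rates)
for i, year in enumerate(years):
    if i < 3:
        continue  # not enough history yet to set a target
    target = sum(rates[y] for y in years[i - 3:i]) / 3   # mean of prior three years
    verdict = "beats target" if rates[year] >= target else "misses target"
    print(f"{year}: actual {rates[year]:.2f} vs target {target:.3f} -> {verdict}")

# 2020 misses badly; 2021-2023 all "beat" benchmarks dragged down by 2020.
```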
The Minister of Colleges and Universities, Ross Romano, claimed yesterday on CBC radio that this is not a problem because there is a mechanism (see section 7 of the PBF Technical Manual) to re-distribute any dollars lost through underperformance to institutions which overperform. But there are two problems with this answer. First of all, any redistribution occurs within the context of a single indicator – to the extent some lose, some can also win. But if everyone underperforms – which is pretty clearly going to be the case with the employment indicator – then there is no redistribution. Second, it doesn’t account for how those anomalous one-year impacts are going to stay in the system and affect funding calculations for years to come.
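Here, roughly, is why that answer falls apart in a sector-wide shock. This is my own simplification of the mechanism as described, not the actual section 7 rules: dollars lost on an indicator only get redistributed to institutions that over-performed on that same indicator, so if nobody over-performs, nothing moves.

```python
# Simplified redistribution within a single indicator: the pool of dollars
# lost by under-performers is shared among over-performers on that indicator.
# A sketch of the logic described above, not the official formula.

def redistribute(lost: dict[str, float], overperformers: list[str]) -> dict[str, float]:
    pool = sum(lost.values())
    if not overperformers:              # everyone missed their target
        return {}                       # nothing gets handed back out
    share = pool / len(overperformers)
    return {inst: share for inst in overperformers}

# Normal year: some win, some lose, dollars move around.
print(redistribute({"U1": 2.0, "U2": 1.5}, overperformers=["U3", "U4"]))
# Pandemic year on the employment indicator: everyone misses, nothing moves.
print(redistribute({"U1": 2.0, "U2": 1.5, "U3": 3.0, "U4": 1.0}, overperformers=[]))
```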
I will repeat what I said a little over a year ago: these problems are unnecessary and could have been avoided had the government not taken the goofy (albeit perhaps politically astute) position that PBF should be about institutions competing “with themselves” rather than “against each other”, which led it to select a “contract system” rather than an “envelope system” of PBF. Under an envelope system, institutions that performed relatively better (even if everyone was declining) could still be rewarded. In fact, I would go further: what is so striking about the Ontario model is that what it really incentivizes is stability, or avoiding retrogression, whereas an envelope model (like Tennessee’s, for instance) actually rewards continual improvement over a number of years.
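To make the contrast concrete, here is a toy comparison of my own construction (neither Ontario’s nor Tennessee’s actual formula): in a “contract” model each institution is scored only against its own benchmark, while in an “envelope” model a fixed pot is split according to relative performance, so the best performer in a bad year still comes out ahead.

```python
# Toy contrast between a "contract" model (each institution vs its own
# benchmark) and an "envelope" model (a fixed pot split by relative
# performance). Names and numbers are invented for illustration.

def contract_payout(actual: dict[str, float], benchmark: dict[str, float],
                    at_stake: float) -> dict[str, float]:
    # Meet or beat your own benchmark: keep the money. Miss it: lose it.
    return {u: (at_stake if actual[u] >= benchmark[u] else 0.0) for u in actual}

def envelope_payout(actual: dict[str, float], pot: float) -> dict[str, float]:
    # Split a fixed pot in proportion to each institution's score.
    total = sum(actual.values())
    return {u: pot * score / total for u, score in actual.items()}

# Sector-wide downturn: everyone falls short of benchmark, but not equally.
actual = {"A": 0.80, "B": 0.85, "C": 0.70}
benchmark = {u: 0.90 for u in actual}

print(contract_payout(actual, benchmark, at_stake=1.0))  # everyone gets zero
print(envelope_payout(actual, pot=3.0))                  # B, the best performer, gets the most
```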
This whole thing could have been handled so much better. It saddens me what a missed opportunity it all is. And now it seems Alberta is about to replicate this entire problem. Right now, our best hope for sensible performance-based funding seems to lie in Manitoba. Maybe we can still salvage something from all this yet.