What People Are Complaining About When They Complain About Performance-Based Funding

If you are a faithful reader of this blog, you’ll know I am not a big fan of the Performance-Based Funding (PBF) systems being developed by the governments of Alberta and Ontario (though the latter is a bit less hopeless than the former).  But unlike many who oppose these initiatives, I don’t think PBF is a bad idea in principle: I’ve written quite extensively about why such systems are a good idea, at least when designed correctly.  Today I want to talk about the anti-PBF diehards and why they are wrong.

The main protagonists of the no pasarán anti-PBF crowd are the big academic faculty unions: in particular, the Ontario Confederation of University Faculty Associations, the Confederation of Alberta Faculty Associations, and the Canadian Association of University Teachers.  Just google those names and “performance-based funding” and you’ll see it right away.  None has made any attempt over the last two years to focus on the specifics of poor program design (of which, let’s face it, examples are legion); rather, the tactic has been to dismiss the idea as “de-stabilizing”, “ineffective”, or having “a long history of failing”, all of them claims they say are backed by research.

But what the actual research shows is something more limited.  The favourite go-to here is a research summary compiled by the University of Wisconsin’s Nicholas Hillman and published by the Century Foundation, entitled Why Performance-Based College Funding Doesn’t Work.

Let’s get down to brass tacks: what is the evidence that PBFs do not achieve their goals?  Let’s look at the famous Nicholas Hillman article that is cited to show that PBF doesn’t work.  Hillman reviews 12 articles to come to his conclusion, which, let me stress, was only with reference to student completions.  I went back and looked at 11 of these (the other being behind a paywall I couldn’t get through).  Of these, one didn’t look at the effect of PBFs on completion at all.   Of the ten that remained, six used an identification strategy that lumped every type of performance-based funding into a single category, meaning they offer no hope of identifying whether PBF can work well, only whether a heterogeneous grab-bag of initiatives labelled PBF has, on average, had much effect (the answer is no, but it’s not clear that this means very much).  Moreover, all of these articles looked at data from a period when performance-based incentives were very small: typically only 1-3% of total budgets in theory, and less in practice (recall from back here how Ontario’s 60% Performance-Based Funding system in fact only puts about 0.4% of total budgets at risk).  And again, they were looking exclusively at the relationship between PBF and student completion rates.

That leaves four genuine efforts to look at state-level initiatives in a clear way.  Of these, two looked at new interventions where less than 2% of the state budget was devoted to performance-based funding (and, as we saw back here, the percentage of funds truly at risk can be a tiny fraction of the budget “theoretically” at risk).  The other two looked at examples where the total funds for performance were between 2% and 8% (Tennessee pre-2010, and Pennsylvania).  Neither found any significant effect of PBF measures: one hypothesized that the amounts involved were too small; the other that principal-agent problems get in the way of institutions responding effectively to PBF incentives.

This is not what reasonable people would describe as “overwhelming evidence” for the proposition that performance-based funding “doesn’t work” regardless of what is being incentivized or the size of the incentive.  Extrapolating from these conditions to say that PBF systems never work (e.g. “A wide body of research shows that performance funding is incapable of credibly reflecting the breadth and depth of a student’s education, the long-term benefits of basic research projects, or the contributions of a faculty or staff member”, OUCC), regardless of the amount of money or the types of performance being measured, is simply reading way beyond the evidence.  A more reasonable conclusion is that PBFs that direct relatively small amounts of money at student completion rates don’t work. 

Another frequent claim is that PBFs have negative effects on equity (see page 3): in particular, that performance-based completion measures harm racial equity because institutions try to cherry-pick higher-achieving (meaning disproportionately white) high school graduates in order to boost completion rates.  A lot of this comes from a single study in Indiana; for the most part, the evidence is mixed.

On the question of whether performance-based funding creates perverse incentives, there is certainly a litany of studies suggesting this is the case.  Still, the evidence is more ambiguous than it first appears, as the work of scholars like Amy Li, Denisa Gandara, and Robert Kelchen suggests.  In fact, an entire recent book mostly concludes that more recent (i.e. post-2020) PBF policies have not, for the most part, had negative effects, because the indicators are being crafted better (Outcomes-based Funding and Race: Can Equity be Bought).   

But even if that weren’t true, citing the existence of perverse incentives isn’t a reason to dismiss PBFs; it’s a reason to pay more attention to detail.  It is entirely possible to design metrics that don’t have these perverse consequences, and a good recent paper from the Center for Post-Secondary Success shows us how to do it.  In some US states, there are extra performance bonuses for graduating Black students or Pell-eligible (i.e., low-income) students.  In France, the 5% of the funding system that is based on graduation rates is calibrated in such a way that the academic strength of the incoming class (measured through baccalauréat results) is taken into account, meaning there is no advantage to becoming more selective.  In short, failing to come up with metrics that can be used to promote equity shows a failure of imagination. 
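To see how a selectivity-neutral metric can work in principle, here is a minimal sketch of a “value-added” graduation-rate bonus.  This is purely illustrative, not the actual French formula: the benchmark function, the linear relationship, and all the numbers are assumptions invented for the example.  The point is structural: because the payout depends on performance relative to a benchmark predicted from incoming-class strength, admitting stronger students raises the benchmark by the same amount it raises expected graduation, so selectivity alone buys nothing.

```python
# Illustrative sketch of a selectivity-neutral graduation-rate incentive.
# NOT the actual French formula: the benchmark model and all numbers here
# are hypothetical, chosen only to show the structure of the idea.

def expected_grad_rate(incoming_strength: float) -> float:
    """Hypothetical benchmark: predicted graduation rate given mean incoming
    academic strength (0-100 scale, e.g. baccalauréat results).
    Assumed linear relationship, for illustration only."""
    return 0.40 + 0.005 * incoming_strength

def performance_bonus(actual_grad_rate: float,
                      incoming_strength: float,
                      pool: float) -> float:
    """Pay out of `pool` in proportion to graduation performance ABOVE the
    benchmark; merely meeting the benchmark earns nothing."""
    value_added = actual_grad_rate - expected_grad_rate(incoming_strength)
    return max(0.0, value_added) * pool

# A selective institution (strong intake, mean strength 80) graduating 85%
# and an open-access one (mean strength 60) graduating 75% both beat their
# benchmarks by 5 points, so they earn (essentially) the same bonus:
sel = performance_bonus(0.85, 80, pool=1_000_000)
opn = performance_bonus(0.75, 60, pool=1_000_000)
```

The design choice doing the work is the `max(0.0, value_added)` term on the gap between actual and predicted rates: cherry-picking stronger students moves both numbers together, so the only way to earn more is to graduate students at a higher rate than their incoming profile predicts.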

And here’s the thing: many people who oppose performance-based funding measures aren’t really opposed to ideas like incentivizing completion rates for Indigenous or Black students.  They understand, theoretically, that incentives can be structured in such a way as to promote all sorts of outcomes.  What they really object to is incentivizing the kinds of things that Conservative governments seem to want to incentivize, like graduate employment rates and private research contracts (personally, I am happy in theory to incentivize both of those things, although institutions are already good at the former, and I don’t think incentives would achieve very much in practice).  It is not about the instrument; it is about who wields it.

I find this enormously disappointing, for two reasons.  First, it means the attacks on PBF aren’t actually being made in good faith.  If you want to attack Conservatives, attack Conservatives: there’s no reason to materially misrepresent what a policy instrument does or is capable of doing.  And second, it means non-Conservative governments are less likely to take up a promising tool for making institutions better in a non-invasive way.  Because believe me, the alternative is not governments leaving universities to their own devices when it comes to achieving big public goals.  The alternative is a much more invasive government, trying to micromanage institutional affairs.

2 responses to “What People Are Complaining About When They Complain About Performance-Based Funding”

  1. “And here’s the thing: many people who oppose performance-based funding measures aren’t really opposed to ideas like incentivizing completion rates for Indigenous or Black students. They understand, theoretically, that incentives can be structured in such a way as to promote all sorts of outcomes.”

    I’m certainly not opposed to improving completion rates for Indigenous or Black students, but my problem is with incentivizing it. If you’re only improving completion rates for the sake of the incentive, the temptation would be to lower standards until targets are met.

    The problem isn’t with one goal or another, it’s with governing ourselves by metricized goals in the first place. All of them are subject to Goodhart’s law.

  2. Good afternoon, I really enjoyed your article and I share your disappointment. I fully agree “that incentives can be structured in such a way as to promote all sorts of outcomes. What they really object to are incentivizing the kinds of things that Conservative governments seem to want to incentivize.”
