Performance-Based Funding (Part 2)

So, as we noted yesterday, there are two schools of thought in the US about performance-based funding (where, it should be noted, about 30 states either have some kind of PBF criteria built into their overall funding systems or are planning to introduce them).  Basically, one side says it works, and the other says it doesn't.

Let's start with the "don't" camp, led by Nicholas Hillman and David Tandberg, whose key paper can be found here.  To determine whether PBF affects institutional outcomes, they look mostly at a single output: degree completion.  This makes a certain amount of sense, since it is the outcome most states try to incentivize, and they use a nice little quasi-experimental research design comparing changes in completion rates in states with PBF against those without it.  Their findings, briefly, are: 1) there are no systematic benefits to PBF (in some places results were better than in non-PBF systems, in others they were worse); and 2) where PBF is correlated with positive results, those results can take several years to kick in.
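
For readers who want a sense of what that kind of quasi-experimental comparison looks like in practice, here is a minimal sketch of a two-way fixed-effects difference-in-differences setup, assuming a hypothetical state-year panel of completion counts.  The file name, column names, and specification are illustrative assumptions, not Hillman and Tandberg's actual model.

```python
# A minimal sketch (not Hillman & Tandberg's actual model) of a two-way
# fixed-effects difference-in-differences comparison: degree completions
# in PBF states before and after adoption, relative to non-PBF states.
# The file name and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

# One row per state-year: degree completions plus an indicator that is 1
# in years when a state has an active PBF policy and 0 otherwise.
df = pd.read_csv("state_completions_panel.csv")

# State and year fixed effects absorb level differences between states and
# nationwide trends; the coefficient on pbf_active is the estimated effect.
model = smf.ols("completions ~ pbf_active + C(state) + C(year)", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["state"]}
)
print(model.summary())
```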

Given the methodology, there's no real arguing with the findings here.  Where Hillman & Tandberg can be knocked, however, is that their design treats all PBF schemes as identical, and therefore as the same "treatment".  But as we noted yesterday, the existence of PBF is only one dimension of the issue; the amount of money flowing through PBF, and the extent to which it drives overall funding, must matter as well.  On this, Hillman and Tandberg are silent.

The HCM paper does in fact give this issue some space.  It turns out that in 18 of the 26 states examined, PBF accounts for less than 5% of overall public funding.  Throw in tuition and other revenues, and the share of total institutional revenue accounted for by PBF falls by half or more, which suggests there are a lot of PBF states where it would simply be unrealistic to expect much in the way of effects.  Of the remainder, three are under 10%, and then there are five huge outliers: Mississippi at just under 55%, Ohio at just under 70%, Tennessee at 85%, Nevada at 96%, and North Dakota at 100% (note: Nevada essentially has one public university and North Dakota has two, so whatever PBF arrangements exist there likely aren't changing the distribution of funds very much).  The authors then point to a number of advances made in some of these states on a variety of metrics, such as "learning gains" (it is unclear what that means), greater persistence among at-risk students, shorter times-to-completion, and so forth.
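
To see why a nominally small PBF pot gets diluted even further once tuition and other revenue are counted, here is a rough back-of-the-envelope calculation; the 50/50 split between public funding and other revenue is an assumed figure for illustration, not a number from the HCM paper.

```python
# Back-of-the-envelope dilution of PBF once all revenue sources are counted.
# The 50% public-funding share is an illustrative assumption, not HCM data.
pbf_share_of_public_funding = 0.05    # "less than 5%" in 18 of the 26 states
public_share_of_total_revenue = 0.50  # assumed: appropriations ~ half of revenue

pbf_share_of_total_revenue = pbf_share_of_public_funding * public_share_of_total_revenue
print(f"PBF as a share of total institutional revenue: {pbf_share_of_total_revenue:.1%}")
# Prints 2.5% -- i.e. the share falls by half under this assumed revenue mix.
```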

But while the HCM report has a good summary of sensible design principles for performance-based funding, there is little that is scientific about it when it comes to linking policy to outcomes.  There is nothing like Hillman and Tandberg's quasi-experimental design at work here; instead, what you have is an unsystematic collection of anecdotes about positive things that have occurred in places with PBF.  So as far as advancing the debate about what works in performance-based funding goes, it's not up to much.

So what should we believe here?  The Hillman/Tandberg result is solid enough, but if most American PBF systems don't change funding patterns much, then it shouldn't be a surprise to anyone that institutional outcomes don't change much either.  What we need is a much narrower focus on systems where a lot of institutional money is in fact at risk, to see whether stronger incentives actually matter.

Such places do exist, but oddly enough neither of these reports actually looks at them.  That's because they're not in the United States; they're in Europe.  More on that tomorrow.
