Higher Education Strategy Associates (HESA)

February 20

Performance-Based Funding (Part 4)

I’ve been talking about performance-based funding all week; today, I’ll try to summarize what I think the research and experience actually say.

Let’s return for a second to a point I made Tuesday.  When determining whether PBF “works”, what matters is being able to show that incentivizing particular outcomes actually changes institutional behaviour and leads to improved outcomes.  However, no study to date has actually bothered to link quantifiable changes in funding to any policy outcomes.  Hillman and Tandberg – who found little-to-no positive effects – came closest to doing this, but they looked only at the incidence of PBF, not at its size; as such, their results can easily be read as suggesting that the problem with PBF is that it needs to be bigger in order to work properly.  And that is very likely the case: in over half of US states with PBFs, the proportion of operating income held for PBF purposes is 2.5%; in practice, the size of the redistribution of funds from PBFs (that is, the difference between how that 2.5% is distributed now versus how it was distributed before PBFs were introduced) is probably a couple of orders of magnitude smaller still.
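To make that “orders of magnitude” point concrete, here’s a quick back-of-the-envelope sketch in Python.  All of the figures are invented purely for the sake of the arithmetic; the point is the gap between the pool of money held for PBF purposes and the money that actually changes hands.

```python
# Back-of-the-envelope: the pool "held" for PBF vs. the money that actually moves.
# All figures are invented purely to illustrate the arithmetic.

operating_income = 1_000_000_000      # a state's total university operating funding, $
pbf_pool = 0.025 * operating_income   # 2.5% of operating income held for PBF purposes

# Suppose the PBF formula shifts only about 1% of that pool away from the
# pre-PBF pattern of distribution.
redistribution_fraction = 0.01
money_that_actually_moves = redistribution_fraction * pbf_pool

print(f"Held for PBF:   ${pbf_pool:,.0f}")                    # $25,000,000
print(f"Actually moved: ${money_that_actually_moves:,.0f}")   # $250,000
print(f"As a share of operating income: {money_that_actually_moves / operating_income:.3%}")  # 0.025%
```

On these invented numbers, the money that actually changes hands is two orders of magnitude smaller than the headline 2.5%.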

I would argue that there’s a pretty simple reason why most PBFs in North America don’t actually change the distribution of funds: big and politically powerful universities tend to oppose changes that might “damage” them.  To the extent that any funding formula produces results too far from the status quo (which tends to reward big universities for their size), they will oppose it.  The more money that is suddenly put at risk, the louder the big universities scream.  The political logic of PBFs, therefore, is that to have a chance of implementation they have to be relatively small, and must not disturb the status quo too much.

Ah, you say: but what about Europe?  Surely the large size of PBF incentives must have caused outrage when they were introduced, right?  That’s a good question, and I don’t really have an answer.  It’s possible that, despite their size, European PBF schemes did not actually change the distribution of funds much more than their American counterparts did.  I can come up with a few country-specific hypotheses about why that might be: the Danish taximeter system was introduced at a time when universities were still considered part of government (and academics part of the civil service), the Polish system was introduced at a time of increasing government funding, etc.  But those are just guesses.  In any case, such literature as I can find on the subject certainly doesn’t mention much in the way of opposition.

So, I think we’re kind of back to square one.  I think the Hillman/Tandberg evidence tells us that simply having a PBF doesn’t mean much, and I think the European evidence suggests that, at a sizeable enough scale, PBFs can incentivize greater institutional efficiency.  But beyond that, I don’t think we’ve got much solid to go on.

For what it’s worth, I’d add one more thing, based on work I did last year looking at the effect of private income on universities in nine countries: only incentivize things that don’t already carry prestige incentives.  Canadian universities are already biased towards activities like research; incentivizing them further through performance funding is like giving lighter fluid to a pyromaniac.

No, what you want to incentivize is the deeply unsexy stuff that’s hard to do.  Pay for Aboriginal completions in STEM subjects.  Pay for female engineering graduates.  Pay big money to the institution that shows the greatest improvement on the National Survey of Student Engagement (NSSE) every two years.  Offer a $20 million prize to the institution that comes up with the best plan for measuring – and then improving – learning, payable in installments to make sure it actually follows through (OK, that’s competitive funding rather than performance-based funding, but you get the idea).

Neither the pro- nor anti-camp can point to very much genuinely empirical evidence about efficacy; in the end it all comes down to whether one thinks institutions will respond to incentives.  I think it’s pretty likely that they do; the trick is selecting the right targets, and structuring the incentives in an intelligent way.  And that’s probably as much art as science.

February 19

Performance-Based Funding (Part 3)

As I noted yesterday, the American debate on PBF has more or less ignored evidence from beyond its shores; and yet there are several places in Europe with very high levels of performance-based funding.  Denmark has had what it calls a “taximeter” system, which pays institutions on the basis of student progression and completion, for over 20 years now; it currently makes up about 30% of all university income.  Most German Länder have some element of incentive-based funding tied to student completion or time-to-completion; in some cases, institutions are also paid on the basis of the number of international students they attract (international students pay no tuition in Germany).  In the Netherlands, graduation-based funding makes up over 60% of institutional operating grants (or, near as I can tell, about 30% of total institutional income).  The Czech Republic now allocates 20% of funding to institutions on a quite bewildering array of indicators, including internationalization, research, and student employment outcomes.

Given this, you’d think there would be a copious literature on whether the introduction of these measures actually “worked” in terms of changing outcomes on the indicators in question.  But you’d be wrong.  There’s actually almost nothing.  That’s not to say these programs haven’t been evaluated.  The Danish taximeter system appears to have been evaluated four times (I haven’t actually read these – Danish is fairly difficult), but the issue of dropouts doesn’t seem to have been at the core of any of them (for the record, Danish universities have relatively low dropout rates compared to other European countries, but it’s not clear whether this was always the case, or whether it was the result of the taximeter policy).  Rather, what gets evaluated is the quite different question of: “are universities operating more efficiently?”

This is key to understanding performance indicators in Europe.  In many European countries, public funding makes up as close to 100% of institutional income as makes no difference.  PBF has therefore often been a way of trying to introduce a quasi-market among institutions, so as to induce competition and efficiency (and on this score, it usually gets fairly high marks).  In North America, where pressures for efficiency are exerted through a competitive market for students, the need for this is – in theory at least – somewhat less.  This largely explains the difference in the size of performance-based funding allocations: in Europe, these funds are often the only quasi-competitive mechanism in the system, and so (it is felt) they need to be on the scale of North American tuition in order to achieve similar competitive effects.

Intriguingly, performance-based funding in Europe is at least as common with respect to research as it is to student-based indicators (a good country-by-country summary from the OECD is here).  Quite often, a portion of institutional operating funding will be based on the value of competitive research won, a situation made possible by the fact that many countries in Europe separate their institutional grants into funding for teaching and funding for research in a way that would give North American universities the screaming heebie-jeebies.  Basically: imagine if the provinces awarded a portion of their university grants on the same basis that Ottawa hands out the indirect research grants, only with less of the questionable favouritism towards smaller universities.  Again, this is less about “improving overall results” than it is about keeping institutions in a competitive mindset.

So, how to interpret the evidence of the past three days?  Tune in tomorrow.

February 18

Performance-Based Funding (Part 2)

So, as we noted yesterday, there are two schools of thought in the US about performance-based funding (where, it should be noted, about 30 states have some kind of PBF criteria built into their overall funding systems, or are planning to introduce them).  Basically, one side says they work, and the other says they don’t.

Let’s start with the “don’t” camp, led by Nicholas Hillman and David Tandberg, whose key paper can be found here.  To determine whether PBFs affect institutional outcomes, they look mostly at a single output – degree completion.  This makes a certain amount of sense, since it’s the outcome most states try to incentivize, and they use a nice little quasi-experimental research design comparing changes in completion rates in states with PBF and in those without.  Their findings, briefly, are: 1) there are no systematic benefits to PBF – in some places, results were better than in non-PBF systems, in other places they were worse; and 2) where PBF is correlated with positive results, said results can take several years to kick in.

Given the methodology, there’s no real arguing with the findings.  Where Hillman and Tandberg can be knocked, however, is that their methodology treats all PBF schemes as identical, and thus as the same “treatment”.  But as we noted yesterday, the existence of PBF is only one dimension of the issue.  The extent of PBF funding, and the extent to which it drives overall funding, must matter as well.  On this, Hillman and Tandberg are silent.

The HCM paper does in fact give this issue some space.  It turns out that in 18 of the 26 states examined, PBF accounts for less than 5% of overall public funding.  Throw in tuition and other revenues, and the share of total institutional revenue accounted for by PBF drops by 50% or more, which suggests there are a lot of PBF states where it would simply be unrealistic to expect much in the way of effects.  Of the remainder, three are under 10%, and then there are five huge outliers: Mississippi at just under 55%, Ohio at just under 70%, Tennessee at 85%, Nevada at 96%, and North Dakota at 100% (note: Nevada essentially has one public university and North Dakota has two, so whatever PBF arrangements exist there likely aren’t changing the distribution of funds very much).  The authors then point to a number of advances made in some of these states on a variety of metrics, such as “learning gains” (it’s unclear what that means), greater persistence for at-risk students, shorter times-to-completion, and so forth.
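As a quick illustration of why those shares shrink once tuition and other revenue enter the picture, here is a minimal sketch with hypothetical figures:

```python
# Hypothetical illustration: PBF's weight shrinks as the revenue base widens.
pbf_share_of_public_funding = 0.05     # PBF is, say, 5% of state appropriations
public_share_of_total_revenue = 0.50   # appropriations are, say, half of total institutional revenue

pbf_share_of_total_revenue = pbf_share_of_public_funding * public_share_of_total_revenue
print(f"PBF as a share of public funding: {pbf_share_of_public_funding:.1%}")   # 5.0%
print(f"PBF as a share of total revenue:  {pbf_share_of_total_revenue:.1%}")    # 2.5%
```

If public funding is half of total revenue, PBF’s effective weight is cut in half; where tuition and other income loom larger, it shrinks further still.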

But while the HCM report has a good summary of sensible design principles for performance-based funding, there is little that is scientific about it when it comes to linking policy to outcomes.  There’s nothing like Hillman and Tandberg’s quasi-experimental design at work here; instead, what you have is a collection of anecdotes about positive things that have occurred in places with PBF.  So as far as advancing the debate about what works in performance-based funding goes, it’s not up to much.

So what should we believe here?  The Hillman/Tandberg result is solid enough – but if most American PBF systems don’t change funding patterns much, then it shouldn’t be a surprise to anyone that institutional outcomes don’t change much either.  What we need is a much narrower focus on systems where a lot of institutional money is in fact at risk, to see if increasing incentives actually does matter.

Such places do exist – but oddly enough, neither of these reports actually looks at them.  That’s because they’re not in the United States; they’re in Europe.  More on that tomorrow.

February 17

Performance-Based Funding (Part 1)

I was reading the Ontario Confederation of University Faculty Associations (OCUFA)’s position statement on a new funding formula for the province.  Two things caught my eye.  One, they want money to make sure Ontario universities can do world-class research and teaching; and two, they demand strict opposition to any kind of performance-based funding formula (PBF).  Put differently: OCUFA wants great teaching and research to be funded, but is adamantly opposed to rewarding anyone for actually doing it.

Except that’s slightly uncharitable.  OCUFA’s larger point seems to be that performance-based funding formulae (also known as output-based funding) “don’t actually achieve their goals”, and it points to work done by University of Wisconsin professor Nicholas Hillman and Florida State’s David Tandberg on the topic.  From a government-spending efficacy point of view, this objection is fair enough, but it’s a bit peculiar from an institutional or faculty standpoint; the Hillman/Tandberg evidence doesn’t indicate that institutions were actually harmed in any way by the introduction of such arrangements, so what’s the problem?

Anyways, last week HCM Strategists in Washington put out a paper taking a contrary view to Hillman/Tandberg, so we now have some live controversy to talk about.  Tomorrow, I’ll examine the Hillman/Tandberg and HCM evidence to evaluate the claims of each, but today I want to go through what output-based funding mechanisms can actually look like, and in the process show how difficult it is for meta-analyses – such as Hillman’s and HCM’s – to calculate potential impact.

At one level, PBF is simple: you pay for what comes out of universities rather than what goes in.  So: don’t pay for bums in seats, pay for graduates; don’t pay based on research grants earned, pay based on articles published in top journals, etc.  But the way these funds get paid out can vary widely, so their impacts are not all the same.

Take graduation numbers – the simplest and most common indicator used in PBFs.  A government could literally pay a certain amount per graduate, or perhaps per “weighted graduate” to take account of different costs by field of study.  It could pay each institution based on its share of total graduates or weighted graduates.  It could give each institution a target number of graduates (based on size and current degree of selectivity, perhaps) and pay out 100% of an allocation if it hits the target, and 0% if it does not.  Or it could set a target and then pay a pro-rated amount based on how well the institution did vis-à-vis the target.  And so on, and so forth.
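Just to make the differences between those payout rules concrete, here is a minimal sketch of the four variants described above, with entirely hypothetical figures:

```python
# Hypothetical sketch of four graduate-based payout rules.

def pay_per_graduate(weighted_grads: float, rate_per_grad: float) -> float:
    """A fixed amount per (field-weighted) graduate."""
    return weighted_grads * rate_per_grad

def pay_by_share(weighted_grads: float, system_total_grads: float, envelope: float) -> float:
    """Each institution receives its share of a fixed system-wide envelope."""
    return envelope * weighted_grads / system_total_grads

def pay_all_or_nothing(weighted_grads: float, target: float, allocation: float) -> float:
    """100% of the allocation if the target is hit, 0% otherwise."""
    return allocation if weighted_grads >= target else 0.0

def pay_pro_rated(weighted_grads: float, target: float, allocation: float) -> float:
    """A pro-rated amount based on performance against the target, capped at 100%."""
    return allocation * min(weighted_grads / target, 1.0)

# One institution: 4,500 weighted graduates against a target of 5,000, in a system
# producing 20,000, with a $10,000 per-graduate rate and a $50M envelope/allocation.
grads, target, system_total = 4_500, 5_000, 20_000
print(f"Per-graduate rate:   ${pay_per_graduate(grads, 10_000):,.0f}")                 # $45,000,000
print(f"Share of envelope:   ${pay_by_share(grads, system_total, 50_000_000):,.0f}")   # $11,250,000
print(f"All-or-nothing:      ${pay_all_or_nothing(grads, target, 50_000_000):,.0f}")   # $0
print(f"Pro-rated to target: ${pay_pro_rated(grads, target, 50_000_000):,.0f}")        # $45,000,000
```

Same performance, four very different cheques, which is exactly why “having a PBF” tells you very little on its own.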

Each of these methods of paying out PBF money plainly has different distributional consequences.  However, if you’re trying to work out whether output-based funding actually affects institutional outcomes, the distributional consequence itself is only of secondary importance.  What matters more is how different the resulting distribution is from whatever distribution existed under the previous funding formula.

So, say the province of Saskatchewan moves from its current mix of historical grant and formula grant to a fully PBF system, where 100% of the funding is based on the number of (field-weighted) graduates produced.  Currently, the University of Saskatchewan gets around three times as much in total operating grants as the University of Regina.  If USask also produced three times as many (field-weighted) graduates as URegina, even the shift to a 100% PBF model wouldn’t change anything in terms of distribution, and hence would have limited consequences in terms of policy and (presumably) outputs.
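Here’s a minimal sketch of that thought experiment, with invented figures for the grants and graduate counts, showing how little money actually moves when graduate output mirrors the existing grant shares:

```python
# Hypothetical illustration of the USask/URegina thought experiment: moving to 100%
# graduate-based funding shifts no money if graduate output mirrors the old grant shares.

old_grants = {"USask": 300_000_000, "URegina": 100_000_000}   # invented: roughly a 3:1 split
weighted_grads = {"USask": 6_000, "URegina": 2_000}           # invented: also a 3:1 ratio

total_funding = sum(old_grants.values())
total_grads = sum(weighted_grads.values())

# New allocation: each institution's share of total funding equals its share of graduates.
new_grants = {u: total_funding * weighted_grads[u] / total_grads for u in old_grants}

# "At-risk" money: how much funding actually changes hands under the new formula.
at_risk = 0.5 * sum(abs(new_grants[u] - old_grants[u]) for u in old_grants)

for u in old_grants:
    print(f"{u}: old ${old_grants[u]:,.0f} -> new ${new_grants[u]:,.0f}")
print(f"Funding that actually changes hands: ${at_risk:,.0f}")   # $0
```

Change either ratio, however, and money starts to move; that gap between the old and new distributions, not the nominal share of funding labelled “performance-based”, is the real treatment.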

In effect, the real question is: how much funding that was formerly “locked in” becomes “at risk” in the shift to PBF?  If the answer is zero, then it’s not much of a surprise that institutional behaviour doesn’t change either.

Tomorrow: a look at the duelling American research papers on PBF.

February 13

Meetings vs. Management

It’s always difficult to make accurate observations about differences in national higher education cultures.  But one thing I can tell you is that the perception that Canadian universities are suffering under some kind of unprecedented managerialist regime is simply not true.  If anything, Canadian academics are among the least managed employees in the entire world.

When academics complain of over-management, they aren’t using that term in a way that workers in other fields would recognize.  They are not, for instance, required to be in any one place other than for the six to nine hours per week they are teaching: it is simply understood that they are working, and being efficient, at a place of their choosing.  The content of their work largely escapes scrutiny: no one checks in on their classes to see what is being taught (though Queen’s University may be wishing it had a bit more hands-on management, after revelations of anti-vaxxing material in a health class last week).  Research topics are largely left to individual researchers’ interests.  In other words, subject to contractual obligations around teaching, they mostly do what they want.  In most respects, they resemble a loosely connected set of independent contractors rather than actual employees.

Rather, what academics in Canada are actually complaining about when they talk about managerialism is three things:

1) The existence (and growth) at universities of a class of managers who are almost as well paid as senior academics.  The fact that these people rarely impact the working lives of academics is irrelevant; their mere presence is evidence of “managerialism”.

2) The existence of apparently pointless bureaucracy around purchasing, reimbursement, and travel.  This annoyance is easy to understand, but it’s not clear to me that the problem is any worse at universities than at other organizations of similar size.

3) Meetings.  Lots and lots of meetings.  Yet the thing about meetings in universities is that they are rarely decision-making affairs.  More often than not, in fact, they are decision-retarding events (or possibly even decision-preventing events), whose purpose is more about consultation than administration.

In a real managerial university, courses would be ruthlessly overseen, if for no other reason than to ensure that classes met minimum enrolment counts.  In a real managerial university, individual professors’ research programs would be reviewed continuously, in order to ensure that they were attracting maximum funding.  In a real managerial university, the managers would know where employees were from 9 to 5 every day.  But almost none of that exists in Canada.  To really see that stuff, you need to go to the UK or – to a lesser extent – Australia.

Professors are, of course, right to worry about managerialism, because UK universities sound pretty horrid.  But a dose of actual managerialism (as opposed to just having more meetings) probably wouldn’t hurt Canadian universities – particularly when it comes to ensuring curriculum coherence and enforcing class-size minima.

February 12

Free Election Manifesto Advice

OK, federal political parties.  I have some election manifesto advice for you.  And given that you’ve all basically accepted Tory budget projections and promised not to raise taxes, it’s perfect.  Completely budget neutral.  Here it is:

Do Less.

Seriously.  After 15 years of increasingly slapdash, haphazard policy-making in research and student aid, a Do Less agenda is exactly what we need.

Go back to 1997: we had three granting councils in Canada.  Then we got the Canada Foundation for Innovation.  Then the Canada Foundation for Sustainable Development Technology.  Then Brain Canada, Genome Canada, Grand Challenges Canada, the Canadian Foundation for Healthcare Improvement, the Canada First Research Excellence Fund – and that’s without mentioning the proliferation of single-issue funds created at SSHRC and NSERC.  On commercialization, we’ve got a College and Community Innovation Program, a College-University Idea to Innovation Program, a dozen or so Centres of Excellence for Commercialization and Research (CECRs) – plus, of course, the wholesale revamp of the National Research Council to turn it into a Canadian version of the Fraunhofer Institute.

It’s not that any of these initiatives is bad.  The problem is that by spreading money thinly across lots of new agencies and programs, we’re losing something in terms of coherence.  Funding deadlines multiply, pools of available cash get smaller (even if overall budgets are more or less what they used to be), and – thanks to the government requirement that a large portion of new funding arrangements be leveraged somehow – the number of funders whose hands need to be held (sorry, “whose accountability requirements need to be met”) is rising very fast.  It all leaves less time to, you know, do the actual science – which is what all this funding is supposed to be about, isn’t it?

Or take student assistance.  We know how much everyone (Liberals especially) loves new boutique student aid programs.  But that’s exactly the wrong way to go.  Everything we know about the $10-billion-a-year student aid business tells us that it’s far too complicated, and that no one understands it.  That’s why people in Ontario scream about affordability and accessibility when, in fact, the province is nearly as generous as Quebec when it comes to first-year, low-income university students.  For people to better appreciate what a bargain Canadian higher education is, we need to de-clutter the system and make it more transparent, not add more gewgaws.

So here’s the agenda: take a breather on new science and innovation programs; find out what we can do to make the system simpler for researchers; and merge or eliminate programs as necessary (is Genome Canada really still worth keeping, or can we basically fold it back into CIHR?) – all while ensuring that total funds available do not diminish (a bump would be nice, too, but the simplification is more important).

As for student aid?  Do a deal with the provinces to simplify need assessment and make it easier for students to know their likely aid eligibility much further in advance.  Do a deal with provinces and institutions to convert tax credits into grants to institutions for a large one-time tuition reduction.  Do not, under any circumstances, do anything to make the system more complex.

I know it goes against the grain, guys.  I know you need “announceables” for the campaign.  But in the long run, it’s more important to do things well.  And to do that, we really need to start doing less.

February 11

Who Owns Courses?

After the preposterous CAUT report on the University of Manitoba’s Economics Department was released, President David Barnard offered a wonderfully robust and thought-provoking refutation of CAUT’s accusations.

One of the most interesting observations Barnard makes relates to a specific incident from the report: namely, the request by a departmental council to review an existing Health Economics course after having approved a new Economic Determinants of Health course taught by the same professor.  CAUT viewed this as a violation of the professor’s academic freedom (basically, that he or she can teach whatever he or she likes).

In an age when we are all intensely aware of intellectual property rights issues, we have, over time, come to focus on the professor’s role as a creator of content.  And this is absolutely right.  The way in which Economics Macro 300 or Organizational Behaviour 250 gets taught is a reflection of a professor’s lifetime of scholarship, and many hundreds of hours of hard work in creating a pedagogy and syllabus that conveys the necessary information to students.  The idea that this “belongs” to anyone other than the professor is ridiculous – which is why there have been such fierce battles over the terms of universities’ involvement with private for-profit companies, like Coursera, with respect to online education.

Barnard responds to this line of thinking by reminding us of a very important truth: Macro 300 and OB 250 exist independently of the professors who currently teach them.  When they are approved by Senate, they become the property of the university as a whole (with the department in which the course is situated taking special responsibility).  After the incumbent of a particular course retires or leaves, someone else will be asked to take over.  The course, in this sense, is eternal and communal.  It does not “belong” to the professor.

There’s an obvious tension here between the way a course gets taught (owned by the prof) and the course objectives and outcomes (owned by the university).  Usually – at least in Canada and the United States – we solve the problem by always leaning in favour of the professor.  Which is certainly the easier option.  However, this attitude, which gives professors total sovereignty at the level of the individual course, inevitably leads to programs becoming disjointed – especially in Arts and Sciences.  Students end up missing key pieces of knowledge, or having to learn and re-learn the same material two or three times.

Universities own courses in the sense that a course is a building block towards a degree (which the university very definitely owns – its entire existence is predicated on being a monopoly provider of degrees).  As a result, course objectives, how a course fits into the overall program goals, course assessment guidelines, and course delivery mechanisms (online, blended, or in-person) are all legitimately in the hands of the university and its academic decision-making bodies.  The actual syllabus – that is, what material gets taught in pursuit of the objectives – and the pedagogical methods used are what belong to the professor.

The problem here is that, in Arts and Science at least (less so elsewhere), our smorgasbord thinking about curriculum makes us prone to assuming that courses stand alone, and do not contribute to a larger programmatic structure.  Hence the widespread fallacy that professors “own” courses, when the reality is that courses are a shared enterprise.

February 10

The Unbearable Mediocrity of Canadian Public Policy

A few months ago, I wrote a very harsh review of a paper written by the former head of the Canadian Council on Learning, Paul Cappon.  I was mostly cheesed off by Cappon’s mindless (and occasionally mendacious) cheerleading on behalf of an expanded role for the federal government in education.  But in one respect, Cappon had a point: though I disagree with him about which level of government should be doing it, we need someone in Canada setting goals for our systems of higher education.  Because as it stands, effectively no one is.

Take Ontario (please).  Here we have a government that will sanctimoniously tell you how much it cares about access.  My God, they love access.  They love access so much that they will hand out money to just about anyone in its name, no matter how preposterous the scheme.  But ask yourself: if the government cares so much about access, why does it not have a measurable access goal against which to evaluate progress?  In the absence of such a goal, one gets the sense that the Ontario government measures progress by how much money it spends, not by what it actually achieves.

Ontario’s Ministry of Training, Colleges and Universities publishes an annual “Results-based Plan”, which contains a list of “goals” that are laughable in their vagueness (the most specific goal being: “raise Ontario’s post-secondary attainment rate to 70%”, without either defining what is meant by post-secondary attainment rate, or attaching a date to the goal).  But none of the provinces to Ontario’s east have any targets for access, either; the closest we get is Quebec, where the ministry does have quite a list of annual targets, but they tend toward the picayune – there are no targets on access either in terms of participation rates as a whole, or for under-represented groups.

Heading west, things hardly get better.  Ministries of Advanced Education in Manitoba and Saskatchewan have annual reports that track trends on key educational goals, but that offer no associated targets to meet.  British Columbia does have a target for “increasing participation and successful completion of all students”, but bizarrely, the indicator chosen for this is not participation rates, but rather unemployment rates for graduates (and the “target” – if we can call it that – is to have unemployment rates less than or equal to high school graduates, a bar so low it may actually be underground).  Alberta alone actually publicly sets itself goals on participation rates.

It’s more or less the same for other policy areas.  Retention?  Quebec has a couple of commitments with respect to doctoral students, but that’s about it.  Research?  Again, only Alberta.  Post-graduate employment rates?  Only those seriously unambitious ones from British Columbia.

Does it have to be this way?  I point your attention to this very useful document from the European Commission on the “Modernisation of Higher Education in Europe” (which in this case covers issues of access, retention, flexibility of studies, and transitions to the labour market).  It shows quite clearly how many governments are adopting specific, measurable targets in each of these areas.  Ireland has set a target of 20% of new university entrants being mature students.  Finland wants to increase male participation to equal that of women by 2025.  A few years ago, France set a target of having 31.5% of its undergraduates come from disadvantaged socioeconomic groups by 2015.  Similarly, Slovenia set a target of reducing non-completion rates from 35% to 12% by 2020.

Goal-setting is important.  It encourages a focus on outcomes and not activities and, as a result, makes governments more open to experimentation.  But it’s also hard: it exposes failure and mediocrity. Canadian policymakers, for reasons that I think are pretty deeply etched in our national character, prefer a model of “do your best” or “let’s spend money and see what happens”.  It’s a model where there can never be failure because no one is asked to stretch, and no one is held accountable for results.

Policy-making in higher education doesn’t have to be this way.  We could do better; we choose not to do so.  What does that say about us?

February 09

Funding Universities’ Research Role

A couple of weeks ago, I wrote a series of pieces looking at the economics of teaching loads; specifically, I was focussed on the relationship between per-student funding and the teaching loads required to make universities self-sustaining.  I had a number of people write to me saying, in effect, “what about research?”

Good question.

The quick answer is that in provinces with explicit enrolment-driven funding formulae (e.g. Ontario, Quebec, Nova Scotia), governments are not in fact paying universities to do any research, and neither are students.  They are paying simply for teaching.  There is nothing in these funding formulae, or in the tuition agreements with students, that says: “this portion of the money is for research”.

Now, that doesn’t mean governments don’t want faculty to conduct research.  It could mean that government simply wants any research to occur after a certain number of credits have been completed.  But I’m not sure this is, in fact, the correct interpretation.  In Ontario, for instance, universities sign multi-year agreements with the government.  Not a word can be found in these agreements about research – they are entirely about enrolments and teaching.  Admittedly, that’s just Ontario, but I don’t think it’s substantially different elsewhere.  British Columbia’s institutional mandate letters, for instance, do not mention research, and while Alberta’s do, they really only require that the institution’s priorities align at least somewhat with those of the Alberta Research and Innovation Plan – a commitment so loose that any half-way competent government relations person could make it appear to be true without ever asking any actual academics to alter their programs of research.

So I might go further: it’s not that provincial governments want research to occur after a certain number of credits have been offered; rather, I would suggest that provincial governments do not actually care what institutions do with their operating grants, provided they teach a certain number of credits.  Certainly, to my knowledge, not a single provincial government in Canada has ever endorsed the 40-40-20 teaching/research/service formula by which professors divide their time.  That’s an internal convention of universities, not a public policy objective.

There’s a case to be made that the research component of provincial funding needs to be made more transparent – a case made, for instance, by George Fallis in a recent book.  But universities will resist this: if research subsidies are made transparent, there will inevitably be a push to make institutions accountable for the research they produce.  That way lie assessment systems such as the UK’s Research Excellence Framework (formerly the Research Assessment Exercise), or Excellence in Research for Australia.  Both of these have driven differentiation among universities, in that institutions have tended to narrow their research foci in response to external evaluations.  This, of course, is something universities hate: no one wants to have to tell the chemistry department (or wherever) that its research output is sufficiently weak that, from now on, it’s a teaching-only unit.

To put this another way: sometimes, ambiguity benefits universities.  Where research is concerned, it’s probably not in universities’ interest to make things too transparent.  Whether this opacity is actually in students’ and taxpayers’ interests is a different question.

February 06

Accusations About Operating Surpluses

One interesting development in labour-management relations over the past few years has been the increasing tendency of academic unions to claim that administration is spending “too much” on capital, and is raiding the operating budget (i.e. salaries) to pay for it.  It’s possible that there is some truth to this in some places, but on the whole there seems to be a misunderstanding about the difference between how the terms “operating” and “capital” are defined in budgets, and how they are used in formal financial accounting systems.

Let’s take, for example, the situation at the University of Manitoba, where President Barnard recently laid out a case for a 4% cut to operating budgets, and asked for suggestions on implementation.  The University of Manitoba Faculty Association (UMFA) responded that no cuts were necessary because, according to the financial statements, the operating fund was consistently generating $40 million or more in net revenue, which was then being transferred to capital.  If only the institution would stop diverting these surpluses to new buildings, the argument goes, there would be no need for a 4% cut.

It is certainly true that if you look at the U of M financial statements, you will regularly see inter-fund transfers in the eight-figure range ($40-50 million, often), which do not appear in the institution’s annual budgets.  But the reason for this is that the term “operating expenses” means something different in budgets than it does in financial statements.  In a budget, “operating expenses” means “what we need in order to keep things ticking over”, and includes salaries, benefits, heating, new computers, library acquisitions, and day-to-day repairs.

However, in a set of financial statements, the definition changes somewhat.  There, “operating expenses” means, roughly, “whatever we spent money on this year that doesn’t get capitalized and depreciated”.  So a whole bunch of things that sit in the operating budget – and show up that way in CAUBO reports – appear on the capital side of the financial statements: debt payments, renovations, library acquisitions, furniture, and ICT equipment, in particular.  At the University of Manitoba, once you add up the operating budget categories that cover those items, you can explain well over half of the “surplus” that has been “transferred” from one fund to another.  It’s not a question of “profits” and “diversions”; it’s simply a question of accounting conventions.
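To see how the same dollars can look like an operating “surplus” in the budget and a transfer to capital in the financial statements, here is a minimal sketch; all of the line items and amounts are invented:

```python
# Hypothetical illustration: budget "operating" items that financial statements treat as
# capital, because they create depreciable assets or pay down debt rather than being expensed.

budget_operating = {
    "salaries_and_benefits": 400_000_000,
    "utilities": 20_000_000,
    "debt_payments": 12_000_000,
    "renovations": 10_000_000,
    "library_acquisitions": 8_000_000,
    "furniture_and_ict": 6_000_000,
}

# Items reclassified to the capital side in the financial statements.
reclassified_as_capital = {"debt_payments", "renovations", "library_acquisitions", "furniture_and_ict"}

apparent_transfer = sum(amount for item, amount in budget_operating.items()
                        if item in reclassified_as_capital)
print(f"Apparent operating 'surplus' transferred to capital: ${apparent_transfer:,.0f}")  # $36,000,000
```

On these invented numbers, a $36-million “transfer” appears without a dollar of discretionary spending having been moved anywhere; it is purely a matter of where the accounting rules put the same expenditures.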

If this were something that happened at a single university, I wouldn’t be writing about it: I’ve got no particular beef with UMFA.  But it’s not.  Allegations about management taking huge operating “surpluses” and making them disappear into capital budgets have been a feature of labour-management confrontations at a number of schools in the past few years.  Just from memory, it was a significant point of contention at Windsor, UNB, and St. FX – and it may well have been an issue elsewhere that I didn’t notice.

But my brief meander through this topic raises an important question: if I can figure this stuff out (and I’m definitely not an accountant), why can’t faculty associations?  Are all of the faculty unions that dispute this point genuinely making the same honest error?  If so, wouldn’t it be good to apologize for the error and the false insinuations, and move on to more productive matters?

Or: is there a deliberate strategy at work of delegitimizing management by accusing them of fiddling the numbers, while knowing that these accusations rest, at least in part, on a confusion of accounting terms?

I really hope that latter possibility isn’t true.  But the next time you hear a story about management plundering the operating budget for capital funds, just remember to ask some questions about definitions before you swallow the story whole.
