HESA

Higher Education Strategy Associates

Author Archives: Alex Usher

February 27

Clearer Thinking About Student Unions

Student associations have difficulty being effective, what with leadership turning over every year or so, and corporate memories that rarely extend beyond 36 months.  But every once in a while, either because of some astute hires, or a lucky coincidence of good leaders being elected at the same time, a student group gets on a hot streak.  StudentsNS, which represents the majority of associations in Nova Scotia, is in that zone right now.

The latest evidence: their recent review of governance at student unions.  Quite simply, it’s incredibly refreshing to have representative associations think aloud – thoughtfully, I might add – about their own deficiencies in terms of effectiveness and democratic procedures.  For that alone, StudentsNS deserves high praise and widespread emulation.

One of the key issues the paper deals with is elections.  Student associations have enough problems with legitimacy, stemming from low participation in student elections; but they often complicate this problem by making an absolute farce of how they conduct these elections.  At many associations, election rules are from the pre-internet era, and are fixated on trying to create level playing fields by means that, by any modern standard, violate freedom of speech (not to mention common sense).  Chief Electoral Officers are given enormous powers to set the terms of the game – and with that power comes the ability to potentially game the election if they so choose, something they are frequently accused of doing.  The StudentsNS paper gives some very good suggestions in that respect.

It also gives some very good general advice about the relationship between student unions and universities.  Rightly, it says this relationship needs to be collaborative rather than adversarial: both have an interest in seeing students complete their studies with the tools (academic and otherwise) they need to succeed in their subsequent careers, and both have a role to play in helping students deal with social and academic barriers to integrating into an institution.  They can do a lot more together to affect and improve campus culture than they can separately.  That’s not to say students shouldn’t hold institutions to account, particularly when it comes to keeping universities focussed on their teaching mission.  But the basic tenor of the relationship needs to be one of partnership.

Where the report goes slightly awry is in its recommendations on governance.  The paper conceptualizes student unions as dispensers of member services, and student union councils as bodies that needlessly busy themselves with organizational minutiae instead of focusing more narrowly on governance.  Of the latter there is little doubt.  But the paper’s solution is effectively to get rid of most of the campus-wide elected positions (for instance, Presidents and Vice-Presidents) and simply have students elect a governing board, which can then elect a president who in turn manages a largely professionalized staff.

This strikes me as an unnecessarily bloodless definition of a student association.  Granted, there is real ambiguity about their true role: they aren’t “unions”, though they do provide political representation, and they aren’t “governments”, though they do manage services for members.  This paper tries to do away with this tension by redefining political representation as simply another service to members, one more thing to hand over to unelected staff whose work is overseen by a President and governed by a council.

I don’t buy this, and I kind of doubt students will either.  Representation is a matter of politics, not just “governance”.  Students want and need a forum to express how they feel about major issues with respect to how universities are governed, and how provinces pay for universities and colleges.   The main way they do that is by voting for specific representatives who run on specific platforms.  Under this plan, representation would be handled by someone who is hired (perhaps annually, perhaps longer) by a President to execute the (possibly quite muddled) compromise views of a governing council elected on widely differing platforms.  This is both more complex and (probably) less effective than what exists now, and I suspect would lead to a decline in student engagement with their student unions rather than an increase.

But that’s quibbling on my part.  The report is basically a good one, and student associations across the country should ponder its recommendations.  The more important question for the country as a whole is: how can we develop more student associations as thoughtful as StudentsNS?

February 26

Data on Textbook Costs

This data is a little old (2012), but it’s interesting, so my colleague Jacqueline Lambert and I thought we’d share it with you.  Back then, when HESA was running a student panel, we asked roughly 1,350 university students across Canada how much they spent on textbooks, coursepacks, and supplies for their fall semester.  Here’s what we found:

Figure 1: Distribution of Expenditures on Textbooks (Fall Semester 2012)

Nearly 85% of students reported spending on textbooks.  What Figure 1 shows is a situation where the median amount spent is just below $300, and the mean is near $330.  In addition to spending on textbooks, another 40% or so bought a coursepack (median expenditure: $50), and another 25% reported buying other supplies of some description (median expenditure: also $50).  Throw that all together and you’re looking at average spending of around $385 for a single semester.

That’s a fair whack of cash.  But what’s interesting here is not what they paid, but how they chose to save money.  After all, students have a number of potential strategies to avoid purchasing textbooks: they can sign them out of the library, they can buy them used, they can share with friends, and in some cases they can find pirated electronic copies on the internet.  To observe how students were actually behaving, we asked them not just how much money they spent, but also: i) whether they actually bought all the required books and materials; and if not, ii) how much they would have spent if they had bought all the books.

Overall, two-thirds of students said that they bought all of their required textbooks.  But the proportion who said they did fell fairly dramatically as the overall cost of buying textbooks increased.

Figure 2: Percent of Students Saying they Bought all Required Textbooks, by Overall Required Textbook Costs

So most students pay the full amount – but as Figure 3 shows, those who don’t pay the full amount can actually underspend by quite a bit.  Somewhat surprisingly, while the proportion of those paying the full amount goes down as costs increase, the same is not true of the portion of the full bill paid by those who do not pay it in full.  In fact, the relationship between total required costs and the proportion of total costs paid by those paying less than 100% is completely non-linear.

Figure 3: Proportion of Total Textbook Costs Paid by Those Students who buy Less Than 100% of Recommended Books

So there you have it.  We’re not sure any of this means much, but more data is better than less data.

 

February 25

Rankings in the Middle East

If you follow rankings at all, you’ll have noticed that there is a fair bit of activity going on in the Middle East these days.  US News & World Report and Quacquarelli Symonds (QS) both published “Best Arab Universities” rankings last year; this week, the Times Higher Education (THE) produced a MENA (Middle East and North Africa) ranking at a glitzy conference in Doha.

The reason for this sudden flurry of Middle East-oriented rankings is pretty clear: Gulf universities have a lot of money they’d like to use on advertising to bolster their global status, and this is one way to do it.  Both THE and QS tried to tap this market by making up “developing world” or “BRICs” rankings, but frankly most Arab universities didn’t do too well on those metrics, so there was a niche market for something more focused.

The problem is that rankings make considerably less sense in MENA than they do elsewhere. In order to come up with useful indicators, you need accurate and comparable data, and there simply isn’t very much of this in the region.  Let’s take some of the obvious candidates for indicators:

Research:  This is an easy metric, and one which doesn’t rely on local universities’ ability to provide data.  And, no surprise, both US News and the Times Higher Ed have based 100% of their rankings on this measure.  But that’s ludicrous for a couple of reasons.  The first is that most MENA universities have literally no interest in research.  Outside the Gulf (i.e. Oman, Kuwait, Qatar, Bahrain, UAE, and Saudi Arabia) there’s no money available for it.  Within the Gulf, most universities are staffed by expats teaching 4 or even 5 classes per term, with no time or mandate for research.  The only places where serious research is happening are at one or two of the foreign universities that are part of Education City in Doha, and at some of the larger Saudi universities.  Of course the problem with Saudi universities, as we know, is that at least some of the big ones are furiously gaming publication metrics precisely in order to climb the rankings, without actually changing university cultures very much (see for example this eyebrow-raising piece).

Expenditures:  This is a classic input variable used in many rankings.  However, an awful lot of Gulf universities are private and won’t want to talk about their expenditures for commercial reasons.  Additionally, some are personal creations of local rulers who spend lavishly on them (for example, Sharjah and Khalifa Universities in the UAE); they’d be mortified if the data showed them to be spending less than the Sheikh next door.  Even in public universities, the issue isn’t straightforward.  Transparency in government spending isn’t universal in the region, either; I suspect that getting financial data out of an Egyptian university would be a pretty unrewarding task.  Finally, for many Gulf universities, cost data will be massively wonky from one year to the next because of the way compensation works.  Expat teaching staff (in the majority at most Gulf unis) are paid partly in cash and partly through free housing, the cost of which swings enormously from one year to the next based on changes in the rental market.

Student Quality: In Canada, the US, and Japan, rankings often focus on how smart the students are based on average entering grades, SAT scores, etc.  But those simply don’t work in a multi-national ranking, so those are out.

Student Surveys: In Europe and North America, student surveys are one way to gauge quality.  However, if you are under the impression that there is a lot of appetite among Arab elites to allow public institutions to be rated by public opinion then I have some lakeside property in the Sahara I’d like to sell you.

Graduate Outcomes:  This is a tough one.  Some MENA universities do have graduate surveys, but what do you measure?  Employment?  How do you account for the fact that female labour market participation varies so much from country to country, and that many female graduates are either discouraged or forbidden by their families from working? 

What’s left?  Not much.  You could try class size data, but my guess is most universities outside the Gulf wouldn’t have an easy way of working this out.  Percent of professors with PhDs might be a possibility, as would the size of the institution’s graduate programs.  But after that it gets pretty thin.

To sum up: it’s easy to understand commercial rankers chasing money in the Gulf.  But given the lack of usable metrics, it’s unlikely their efforts will amount to anything useful, even by the relatively low standards of the rankings industry.

February 24

Offices

Here’s a stat I’d really like to see: how much time do professors spend in their offices?

There’s been an enormous shift in the way people work over the past thirty years.  Digitization of documents and the availability of remote access computing, the growth of email, the explosion of doctoral students available to do the research grunt work, the decreasing importance of collaborating with local colleagues, and increasing importance of collaborating with people around the world – it’s all given professors a lot more flexibility in deciding where to undertake their work.

Now, this flexibility likely hasn’t had an equal effect across disciplines.  In sciences – especially the wet ones – many professors have offices tied to their laboratories, and I don’t have the sense that they are spending any less time in their laboratories than they used to.  In social sciences and law, on the other hand, where outside consulting work is more common, and the means of academic communication is more journal-based than monograph-based, there’s a lot less reason to be tied to your office.

Other factors are at work, of course.  There are personal preferences.  Some people like working at home, and take advantage of flexibility to do so; others prefer keeping their home and work spaces quite separate.  Junior faculty probably have a greater interest in being seen at work than senior faculty.  And of course – this being Canada, and academic life being subject to collective agreements to a degree pretty much unknown anywhere else – some collective agreements will stipulate minimum office hours.

All of this is to say that it’s difficult to make generalizations about use of office space/time.  And I admit that I have no data on this at all, but I would guess that outside the sciences, there is a very significant portion of the faculty whose time in the office is fifteen hours per week or less.  And this makes me wonder: to the extent that this is true, why the hell do universities spend so much on office space?  If you think about a typical non-science faculty – Arts, Business, Education, etc. – and you divide up their total usable space (excluding things like washrooms, and hallways, and the like), offices probably take up about two-thirds of it.  And while many profs make copious use of their offices, in many cases these offices get used less than half the time.  Why?

It’s difficult to know what to do about this: taking away office space – even the little-used kind – would set off riots.  But if I were designing a university from scratch, I think I’d do everything I could to minimize the use of dedicated offices.  Provide as much shared office space as possible.  Have dedicated shared spaces for meetings with students.  Use modular walls to reconfigure spaces as necessary, and offer bonuses to those who use less space.  Pretty much anything to reduce the use of space and the utilities costs that go with it.

Profs are becoming more mobile, and there’s nothing wrong with that.  But the legacy costs of the days when most work was done in offices are quite significant.  Finding a way to reduce them over time – sans riots, of course – is worth a try.

February 23

Demand Sponges

If you’ve ever spent any time looking at the literature on private higher education around the world – from the World Bank, say, or the good folks at SUNY Albany who run the Program for Research on Private Higher Education (PROPHE) shop – you’ll know that private higher education is often referred to as “demand-absorbing”; that is, when the public sector is tapped out and, for structural reasons (read: government underfunding, unwillingness to charge tuition), can’t expand, private higher education comes to the rescue.

To readers who haven’t read such literature: in most of the world, there is such a thing as “private higher education”.  For the most part, these institutions don’t look like US private colleges in that (with the exception of a few schools in Japan, and maybe Thailand) they tend not to be very prestigious.  But they aren’t all bottom-feeding for-profits, either.  In fact, for-profits are fairly rare.  Yes, there are outfits like Laureate with chains of universities around the world, but most privates have either been set up by academics who didn’t want to work in the state system (usually the case in East-central Europe) or are religious institutions trying to do something for their community (the case in most of Africa).

Anyways, privates as “demand-absorbers” – that’s still the case in Africa and parts of Asia.  But what’s interesting is what’s happening to private education in countries heading into demographic decline, such as those in East-central Europe.  There, it’s quite a different story.

Let’s start with Poland, which is probably the country in the region that got private education regulation the least wrong.  There was a massive explosion of participation in Poland after the end of socialism, and not all of it could be handled by the public sector, even if it could charge tuition fees (which it sort of did, and sort of didn’t).  The private sector went from almost nothing in the mid-90s to over 650,000 students by 2007.  But since then, private enrolments have been in free-fall.  Overall, enrolments are down by 20%, or close to 400,000 students.  But that drop has been very unequally distributed: in public universities the drop was 10%; in privates, it was 40%.

Tertiary Enrolments by Sector, Poland, 1994-2013

In Romania, the big picture story is the same as in Poland, but as always, once you get into the details it’s a bit more on the crazy side – this is Romania, that’s the way it is.  Most of the rise in private enrolments from 2003 to 2008 was due to a single institution named Spiru Haret (after a great 19th century educational reformer), which eventually came to have 311,000 students, or over a third of the entire country’s enrolment.

Eventually – this is Romania, these things take time – it occurred to people in the quality assurance agency that perhaps Spiru Haret was closer to a degree mill than an actual university; they started cracking down on the institution, and enrolments plummeted.  And all this was happening at the same time as: i) the country was undergoing a huge demographic shift (abortion was illegal under Ceausescu; once the ban was lifted after his fall, the birthrate dropped by a third in 1990, a drop which began to affect university enrolments in 2008); and, ii) the national pass-rate on the baccalaureate (which governs entrance to university) was halved through a combination of a tougher exam and stricter anti-cheating provisions (which I described back here).  Anyways, the upshot of all this is that while public universities have lost a third of their peak enrolments, private universities have lost over 80% of theirs.

Tertiary Enrolments by Sector, Romania, 2003-2013

There’s a lesson here: just as private universities expanded quickly when demand was growing, they can contract quickly as demand shrinks.  The fact of the matter is that with only a few exceptions, they are low-prestige institutions and, given the chance, students will pick high-prestige institutions over low-prestige ones most of the time.  So as overall demand falls, demand at low-prestige institutions falls more quickly.  And when that happens, private institutions get run over.

So maybe it’s time to rethink our view of private institutions as demand-absorbing institutions.  They are actually more like sponges: they do absorb, but they can be wrung out to dry when their absorptive capacities are no longer required.

February 20

Performance-Based Funding (Part 4)

I’ve been talking about performance-based funding all week; today, I’ll try to summarize what I think the research and experience actually says.

Let’s return for a second to a point I made Tuesday.  When determining whether PBF “works”, what matters is to be able to show that incentivizing particular outcomes actually changes institutional behaviour, and leads to improvements in outcomes. However, no study to date has actually bothered to link quantifiable changes in funding with any policy outcomes.  Hillman and Tandberg – who found little-to-no positive effects – came closest to doing this, but they looked only at the incidence of PBF, and not the size of PBF; as such, their results can easily be read to suggest that the problem with PBF is that it needs to be bigger in order to work properly.  And indeed, that’s very likely: in over half of US states with PBFs, the proportion of operating income held for PBF purposes is 2.5%; in practice, the size of the re-distribution of funds from PBFs (that is, the difference between how that 2.5% is distributed now versus how it was distributed before PBFs were introduced) is probably a couple of orders of magnitude smaller still.
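
To see why the effective redistribution can be so much smaller than the headline percentage, here is a back-of-envelope sketch in Python; every number in it is invented purely for illustration and describes no particular state.

```python
# Back-of-envelope illustration only: all figures are invented.
operating_income = 1_000_000_000          # total operating funding in a hypothetical system
pbf_pool = 0.025 * operating_income       # 2.5% nominally run through the PBF ($25M)

# Suppose the PBF allocates that pool in shares that differ from the old
# formula's shares by at most one percentage point per institution.
max_share_shift = 0.01
money_actually_moved = pbf_pool * max_share_shift

print(money_actually_moved)                        # 250,000.0
print(money_actually_moved / operating_income)     # 0.00025, i.e. 0.025% of operating income
```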

I would argue that there’s a pretty simple reason why most PBFs in North America don’t actually change the distribution of funds: big and politically powerful universities tend to oppose changes that might “damage” them.  Therefore, to the extent that any funding formula results in something too far from the status quo (which tends to reward big universities for their size), they will oppose it.  The more money that suddenly becomes at risk, the more the big universities scream.  Therefore, the political logic of PBFs is that to have a chance of implementation they have to be relatively small, and not disturb the status quo too much.

Ah, you say: but what about Europe?  Surely the large size of PBF incentives must have caused outrage when they were introduced, right?  That’s a good question, and I don’t really have an answer.  It’s possible that, despite their size, PBF schemes did not actually change much more in terms of distribution than did their American counterparts.  I can come up with a few country-specific hypotheses about why that might be: the Danish taximeter system was introduced at a time when universities were still considered part of governments (and academics part of the civil service), the Polish system was introduced at a time of increasing government funding, etc.  But those are just guesses.  In any case, such lit as I can find on the subject certainly doesn’t mention much in terms of opposition.

So, I think we’re kind of back to square one.  I think the Hillman/Tandberg evidence tells us that simply having a PBF doesn’t mean much, and I think the European evidence suggests that at a sizeable enough scale, PBFs can  incentivize greater institutional efficiency.  But beyond that, I don’t think we’ve got much solid to go on.

For what it’s worth, I’d add one more thing based on work I did last year looking at the effect of private income on universities in nine countries: and that is, only incentivize things that don’t already carry prestige incentives.  Canadian universities are already biased towards activities like research; incentivizing them further through performance funding is like giving lighter fluid to a pyromaniac.

No, what you want to incentivize is the deeply unsexy stuff that’s hard to do.  Pay for Aboriginal completions in STEM subjects.  Pay for female engineering graduates.  Pay big money to the institution that shows the greatest improvement in the National Survey of Student Engagement (NSSE) every two years.  Offer a $20 million prize to the institution that comes up with the best plan for measuring – and then improving – learning, payable in installments to make sure they actually follow through (ok, that’s competitive funding rather than performance-based funding, but you get the idea).

Neither the pro- nor anti-camp can point to very much genuinely empirical evidence about efficacy; in the end it all comes down to whether one thinks institutions will respond to incentives.  I think it’s pretty likely that they do; the trick is selecting the right targets, and structuring the incentives in an intelligent way.  And that’s probably as much art as science.

February 19

Performance-Based Funding (Part 3)

As I noted yesterday, the American debate on PBF has more or less ignored evidence from beyond its shores; and yet, in Europe, there are several places that have very high levels of performance-based funding.  Denmark has had what it calls a “taximeter” system, which pays institutions on the basis of student progression and completion, for over 20 years now, and it currently makes up about 30% of all university income.  Most German Länder have some element of incentive-based funding on either student completion or time-to-completion; in some cases, they are also paid on the basis of the number of international students they attract (international students pay no tuition in Germany).  In the Netherlands, graduation-based funding makes up over 60% of institution operating grants (or, near as I can tell, about 30% of total institutional income).  The Czech Republic now gives out 20% of funding to institutions on a quite bewildering array of indicators, including internationalization, research, and student employment outcomes.

Given this, you’d think there might be a huge and copious literature about whether the introduction of these measures actually “worked” in terms of changing outcomes of the indicators in question.  But you’d be wrong.  There’s actually almost nothing.  That’s not to say these programs haven’t been evaluated.  The Danish taximeter system appears to have been evaluated four times (haven’t actually read these – Danish is fairly difficult), but the issue of dropouts doesn’t actually seem to have been at the core of any of them (for the record, Danish universities have relatively low levels of dropouts compared to other European countries, but it’s not clear if this was always the case or if it was the result of the taximeter policy).  Rather, what gets evaluated is the quite different question of: “are universities operating more efficiently?”

This is key to understanding performance indicators in Europe. In many European countries, public funding makes up as close to 100% of institutional income as makes no odds.  PBF has therefore often been a way of trying to introduce a quasi-market among institutions so as to induce competition and efficiency (and on this score, it usually gets fairly high marks).  In North America, where pressures for efficiency are exerted through a competitive market for students, the need for this is – in theory at least – somewhat less.  This largely explains the difference in the size of performance-based funding allocations; in Europe, these funds are often the only quasi-competitive mechanism in the system, and so (it is felt) they need to be on the scale of what tuition is in North America in order to achieve similar competitive effects.

Intriguingly, performance-based funding in Europe is at least as common with respect to research as it is to student-based indicators (a good country-by-country summary from the OECD is here).  Quite often, a portion of institutional operating funding will be based on the value of competitive research won, a situation made possible by the fact that many countries in Europe separate their institutional grants into funding for teaching and funding for research in a way that would give North American universities the screaming heebie-jeebies.  Basically: imagine if the provinces awarded a portion of their university grants on the same basis that Ottawa hands out the indirect research grants, only with less of the questionable favouritism towards smaller universities.  Again, this is less about “improving overall results” than it is about keeping institutions in a competitive mindset.

So, how to interpret the evidence of the past three days?  Tune in tomorrow.

February 18

Performance-Based Funding (Part 2)

So, as we noted yesterday, there are two schools of thought in the US about performance-based funding (where, it should be noted, about 30 states have some kind of PBF criteria built into their overall funding system, or are planning to do so).  Basically, one side says they work, and the other says they don’t.

Let’s start with the “don’t” camp, led by Nicholas Hillman and David Tandberg, whose key paper can be found here.  To determine whether PBFs affect institutional outcomes, they look mostly at a single output – degree completion.  This makes a certain amount of sense, since it’s the one most states try to incentivize, and they use a nice little quasi-experimental research design comparing changes in completion rates in states with PBF and those without.  Their findings, briefly, are: 1) no systematic benefits to PBF – in some places, results were better than in non-PBF systems, in other places they were worse; and, 2) where PBF is correlated with positive results, said results can take several years to kick in.
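
For readers unfamiliar with that kind of design, here is a stripped-down sketch of the general difference-in-differences idea behind such state-panel studies; it is not Hillman and Tandberg’s actual specification, and all variable names and figures below are invented.

```python
# Minimal sketch of a two-way fixed-effects (difference-in-differences) setup.
# Invented data: two states, four years, state A adopts a PBF in 2010.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "state":       ["A"] * 4 + ["B"] * 4,
    "year":        [2008, 2009, 2010, 2011] * 2,
    "pbf":         [0, 0, 1, 1,  0, 0, 0, 0],
    "completions": [50.0, 51.0, 52.5, 53.0,  48.0, 49.0, 49.5, 50.0],
})

# Regress completions on the PBF flag plus state and year fixed effects;
# the coefficient on "pbf" is the estimated treatment effect.
model = smf.ols("completions ~ pbf + C(state) + C(year)", data=df).fit()
print(model.params["pbf"])
```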

Given the methodology, there’s no real arguing with the findings here.  Where Hillman & Tandberg can be knocked, however, is that their methodology treats all PBF schemes as if they were the same “treatment”.  But as we noted yesterday, the existence of PBF is only one dimension of the issue.  The extent of PBF funding, and the extent to which it drives overall funding, must matter as well.  On this, Hillman and Tandberg are silent.

The HCM paper does in fact give this issue some space.  It turns out that of the 26 states examined, 18 have PBF systems which account for less than 5% of overall public funding.  Throw in tuition and other revenues, and the share of total institutional revenue accounted for by PBF drops by 50% or more, which suggests there are a lot of PBF states where it would simply be unrealistic to expect much in the way of effects.  Of the remainder, three are under 10%, and then there are five huge outliers: Mississippi at just under 55%, Ohio at just under 70%, Tennessee at 85%, Nevada at 96%, and North Dakota at 100% (note: Nevada essentially has one public university and North Dakota has two; clearly, whatever PBF arrangements exist there likely aren’t changing the distribution of funds very much).  The authors then point to a number of advances made in some of these states on a variety of metrics, such as “learning gains” (unclear what that means), greater persistence for at-risk students, shorter times-to-completion, and so forth.

But while the HCM report has a good summary of sensible design principles for performance-based funding, there is little that is scientific about it when it comes to linking policy to outcomes.  There’s nothing like Hillman and Tandberg’s quasi-experimental design at work here; instead, what you have is an unscientific collection of anecdotes about positive things that have occurred in places with PBF.  So as far as advancing the debate about what works in performance-based funding goes, it’s not up to much.

So what should we believe here?  The Hillman/Tandberg result is solid enough – but if most American PBF systems don’t change funding patterns much, then it shouldn’t be a surprise to anyone that institutional outcomes don’t change much either.  What we need is a much narrower focus on systems where a lot of institutional money is in fact at risk, to see if increasing incentives actually does matter.

Such places do exist – but oddly enough neither of these reports actually looks at them.  That’s because they’re not in the United States, they’re in Europe.  More on that tomorrow.

February 17

Performance-Based Funding (Part 1)

I was reading the Ontario Confederation of University Faculty Associations (OCUFA)’s position statement on a new funding formula for the province.  Two things caught my eye.  One, they want money to make sure Ontario universities can do world-class research and teaching; and two, they demand strict opposition to any kind of performance-based funding formula (PBF).  Put differently: OCUFA wants great teaching and research to be funded, but is adamantly opposed to rewarding anyone for actually doing it.

Except that’s slightly uncharitable.  OCUFA’s larger point seems to be that performance-based funding formulae (also known as output-based funding) “don’t actually achieve their goals”, and it points to work done by University of Wisconsin professor Nicholas Hillman and Florida State’s David Tandberg on the topic.  From a government-spending efficacy point of view, this objection is fair enough, but it’s a bit peculiar from an institutional or faculty standpoint; the Hillman/Tandberg evidence doesn’t indicate that institutions were actually harmed in any way by the introduction of said arrangements, so what’s the problem?

Anyways, last week HCM associates in Washington put out a paper taking a contrary view to Hillman/Tandberg, so we now have some live controversy to talk about.  Tomorrow, I’ll examine the Hillman/Tandberg and HCM evidence to evaluate the claims of each, but today I want to go through what output-based funding mechanisms can actually look like, and in the process show how difficult it is for meta-analyses – such as Hillman’s and HCM’s – to calculate potential impact.

At one level, PBF is simple: you pay for what comes out of universities rather than what goes in.  So: don’t pay for bums in seats, pay for graduates; don’t pay based on research grants earned, pay based on articles published in top journals, etc.  But the way these get paid out can vary widely, so their impacts are not all the same.

Take the number of graduates, which happens to be the simplest and most common indicator used in PBFs.  A government could literally pay a certain amount per graduate – or maybe per “weighted graduate”, to take account of different costs by field of study.  It could pay each institution based on its share of total graduates or weighted graduates.  It could give each institution a target number of graduates (based on size and current degree of selectivity, perhaps) and pay out 100% of an award if it hits the target, and 0% if it does not.  Or, it could set a target and then pay a pro-rated amount based on how well the institution did vis-à-vis the target.  And so on, and so forth.
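
To make those differences concrete, here is a minimal sketch of the four payout rules just described; the institutions, rates, and targets are all invented, and no actual provincial or state formula is implied.

```python
# Illustrative only: four ways a government might pay out PBF money based on
# (field-weighted) graduate counts. All institutions and numbers are invented.

def per_graduate(grads, rate_per_grad):
    """Pay a flat amount for every weighted graduate."""
    return {u: n * rate_per_grad for u, n in grads.items()}

def share_of_total(grads, envelope):
    """Split a fixed envelope according to each institution's share of graduates."""
    total = sum(grads.values())
    return {u: envelope * n / total for u, n in grads.items()}

def all_or_nothing(grads, targets, award):
    """Pay 100% of an award if the institution hits its target, 0% otherwise."""
    return {u: (award if grads[u] >= targets[u] else 0.0) for u in grads}

def pro_rated(grads, targets, award):
    """Pay in proportion to progress toward the target, capped at 100%."""
    return {u: award * min(grads[u] / targets[u], 1.0) for u in grads}

grads = {"University X": 3000, "University Y": 1000}    # hypothetical weighted graduates
targets = {"University X": 3200, "University Y": 900}   # hypothetical targets

print(per_graduate(grads, rate_per_grad=5_000))
print(share_of_total(grads, envelope=20_000_000))
print(all_or_nothing(grads, targets, award=10_000_000))
print(pro_rated(grads, targets, award=10_000_000))
```

Run on the same hypothetical inputs, the four rules produce visibly different allocations, which is the point: performance-based funding is a family of mechanisms, not a single treatment.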

Each of these methods of paying out PBF money plainly has different distributional consequences.  However, if  you’re trying to work out whether output-based funding actually affects institutional outcomes, then the distributional consequence is only of secondary importance.  What matters more is how different the distributional outcomes are from whatever distribution existed in the previous funding formula.

So, say the province of Saskatchewan moves from its current mix of historical grant and formula grant to a fully PBF system, where 100% of the funding is based on the number of (field-weighted) graduates produced.  Currently, the University of Saskatchewan gets around three times as much in total operating grants as the University of Regina.  If USask also produced three times as many (field-weighted) graduates as URegina, even the shift to a 100% PBF model wouldn’t change anything in terms of distribution, and hence would have limited consequences in terms of policy and (presumably) outputs.
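
A toy version of that Saskatchewan example (with invented figures standing in for the real grants and graduate counts) shows why nothing would move:

```python
# Invented figures: current grants in a 3:1 ratio, weighted graduates also 3:1.
current_grants = {"USask": 300_000_000, "URegina": 100_000_000}
weighted_grads = {"USask": 7_500, "URegina": 2_500}

envelope = sum(current_grants.values())
total_grads = sum(weighted_grads.values())

# A "100% PBF" that splits the whole envelope by graduate share...
pbf_grants = {u: envelope * n / total_grads for u, n in weighted_grads.items()}

# ...reproduces the status quo exactly, because grant shares already mirror
# graduate shares: no money is actually at risk.
print(pbf_grants)   # {'USask': 300000000.0, 'URegina': 100000000.0}
```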

In effect, the real question is: how much funding, which was formerly “locked-in”, becomes “at-risk” during the shift to PBF?  If the answer is zero, then it’s not much of a surprise that institutional behaviour doesn’t change either.

Tomorrow: a look at the duelling American research papers on PBF.

February 13

Meetings vs. Management

It’s always difficult to make accurate observations about differences in national higher education cultures.  But one thing I can tell you is absolutely not true: the perception that Canadian universities are suffering under some kind of unprecedented managerialist regime.  If anything, Canadian academics are among the least managed employees in the entire world.

When academics complain of over-management, they aren’t using that term in a way that workers in other fields would recognize.  They are not, for instance, required to be in any one place other than the six to nine hours per week they are teaching: it is simply understood that they are working, and working efficiently, at a place of their choosing.  The content of their work largely escapes scrutiny: no one checks in on their classes to see what is being taught (though Queen’s University may be wishing it had a bit more hands-on management after revelations of anti-vaxxing material in a health class last week).  Research topics are largely left to the individual researchers’ interests.  In other words, subject to contractual obligations around teaching, they mostly do what they want.  In most respects, they resemble a loosely connected set of independent contractors rather than actual employees.

Rather, what academics in Canada are actually complaining about when they talk about managerialism is three things:

1)      The existence (and growth) at universities of a class of managers that are almost as well paid as senior academics.  The fact that these people rarely impact the working life of academics is irrelevant; their mere presence is evidence of “managerialism”.

2)      The existence of apparently pointless bureaucracy around purchasing, reimbursement, and travel.  This annoyance is easy to understand, but it’s not clear to me that this problem is any worse at universities than it is at other organizations of similar size.

3)      Meetings.  Lots and lots of meetings.  Yet the thing about meetings in universities is that they are rarely decision-making affairs.  More often than not, in fact, they are decision-retarding events (or possibly even decision-preventing events), whose purpose is more about consultation than administration.

In a real managerial university, courses would be ruthlessly overseen, if for no other reason than to ensure that classes met minimum enrolment counts.  In a real managerial university, individual professors’ research programs would be reviewed continuously in order to ensure that they were attracting maximum funding.  In a real managerial university, the managers would know where employees were from 9 to 5 every day.  But almost none of that exists in Canada.  To really see that stuff you need to go to the UK or – to a lesser extent – Australia.

Professors are, of course, right to worry about managerialism, because UK universities sound pretty horrid.  But a dose of actual managerialism (as opposed to just having more meetings) probably wouldn’t hurt Canadian universities – particularly when it comes to ensuring curriculum coherence and enforcing class-size minima.
