Higher Education Strategy Associates

February 03

The Economics of Interdisciplinary Programs at Small Universities

A minor kerfuffle blew up yesterday in Sackville when the coordinator of Mount Allison University’s Women’s and Gender Studies program announced that, due to budget cuts, she had been informed that the university would no longer be offering classes in the program as of next fall.  Cue petitions, angry students, a BuzzFeed listicle, etc.

What follows here is a little explainer with respect to the economics of this situation:

Mount Allison is a small school.  Enrolment last year was 2,369, down 8.5% from four years earlier.  Not good.  Total projected operating revenue for the university this year, net of federal money for things like Canada Research Chairs, is a shade over $44 million, of which very slightly under 50% comes from tuition fees, with domestic students paying $746.50/course.  A similar amount comes from the provincial government in a lump sum, which is not formula-driven.

The Women’s and Gender Studies program is one of those typical interdisciplinary programs you see at Canadian universities.  It does not offer a major, only a minor.  In practice, it consists of four courses (one each at the 100, 200, 300, and 400 levels), plus some fourth-year independent study and “special topic” courses, which in practice don’t get taught much.  To obtain the minor, one must take each of the three lower-year courses, plus at least one of the fourth-year courses, and then another 12 credits from a selection of about forty related courses spread across a dozen or so disciplines (see program description here).

For quite a long time, the program seems to have had only a single dedicated academic staff person, who sadly died late last year.  The coordinator role has since passed to a faculty member in the Psychology Department, and all of the teaching responsibilities have passed to an Instructor (i.e. sessional/adjunct) who – if you think RateMyProfessor.com is of any value – gets rave reviews from her students.

Enrolment is reasonably healthy.  There appear to be roughly 190 course-enrolments across all four of the courses – or about 19 FTE students.  Now, how you turn that student count into revenue is a bit tricky.  In a formula-funded system you could just add per-credit tuition, plus per-student grant, and voilà!  In a block-funded system it’s trickier.  One could argue that this money simply doesn’t belong to any particular unit, because even if one program disappeared, those students (and that money) would still be in the institution.  So, if you count only tuition as revenue, this program earns $145,255; if you choose to count government grant money as being associated with specific enrolments, then you get double that, about $290,510.
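The revenue arithmetic above is easy to reproduce.  Here is a minimal back-of-the-envelope check, assuming a full-time load of ten courses per year (that conversion is my assumption; the tuition and revenue figures are the ones in this post):

```python
# Back-of-the-envelope revenue check for the program.
TUITION_PER_COURSE = 746.50   # domestic tuition per course (from the post)
COURSES_PER_FTE = 10          # assumed full-time course load per year

tuition_revenue = 145_255     # tuition-only revenue figure quoted above
course_enrolments = tuition_revenue / TUITION_PER_COURSE
fte_students = course_enrolments / COURSES_PER_FTE

print(round(course_enrolments))  # ~195, i.e. "roughly 190" course-enrolments
print(round(fte_students))       # ~19 FTE students
print(tuition_revenue * 2)       # 290510: tuition plus matched grant money
```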

Now, I don’t have access to expense data at Mount Allison, but it’s not hard to do a back-of-the-envelope estimation of program costs.  A sessional with a little bit of experience costs $10,000 per course at Mount Allison, give or take $1K (that’s cost to the institution, including payroll taxes, benefits, etc.); so a 4-course program like this would likely cost $40K a year, or so.  Coordinators usually also get some course-release, which implies another $10K to hire a sessional to cover this.  The program also shares an administrative assistant with two other departments.  I have no idea what the actual cost-sharing arrangement is, but let’s say it’s another $10,000, or so.  Throw in some other direct costs – phone, mail-outs, maybe a wine-and-cheese once a year, plus a guest speaker flown in – and you get to $70,000, give or take.
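That cost estimate can be sketched line by line.  All figures here are the rough guesses from the paragraph above, not audited numbers; the $10,000 lines for the admin share and miscellaneous direct costs in particular are illustrative:

```python
# Rough direct-cost estimate for a sessional-taught program.
sessional_per_course = 10_000          # cost to the institution, incl. benefits
teaching = 4 * sessional_per_course    # the four core courses
course_release = 10_000                # sessional cover for the coordinator's release
admin_share = 10_000                   # assumed share of the administrative assistant
other_direct = 10_000                  # phone, mail-outs, events, a guest speaker

total_cost = teaching + course_release + admin_share + other_direct
print(total_cost)  # 70000, "give or take"
```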

But that’s without overhead.  Now, how you count overhead on an academic department is a bit tricky.  It’s easy enough simply to take all costs – utilities, IT, student services, registrar, physical plant, and admin – and divide them across all students: according to CAUBO finance statistics, that would give you a number not far off $7,700 per student (or $146,300 total).  But on the other hand, there’s also the argument that this is money the university would pay anyway, even if the unit didn’t exist (i.e. the same argument for why you shouldn’t count the government block grant money, only in reverse).

For simplicity’s sake then, let’s not count either the government grant or the overhead costs.  We’ve got a program that appears to cost about $70,000, and brings in $145,255.  So, what’s the problem?

The problem is that this fantastic situation only works as long as a sessional is doing all the teaching.  If the teaching is done by an Associate Professor (as indeed it was until quite recently), the economics change completely.  The minimum salary this year for associate professors at Mount Allison is $85,568.  Add in the costs of benefits, pension, etc., and you’re looking at something in the range of $110,000 at the absolute minimum for compensation.  Then throw in any costs associated with hiring replacement faculty for research leave, sabbaticals, etc., and of course admin costs on top of that, and you’re very quickly back to about $130,000.  But that’s the minimum, assuming the lowest pay rung for an associate professor.  With annual pay rises, top-salary associate professors make almost $50,000 per year more than newbies.  In other words, the program might break even for a couple of years with a full-time prof, but would be unlikely to do so over the long term.
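To see why the economics flip, it helps to ask how many course-enrolments are needed for tuition alone to cover a given annual cost.  A hedged sketch, using the compensation figures above and the same tuition-only simplification as before:

```python
# Break-even course-enrolments at Mount Allison's $746.50/course tuition.
TUITION_PER_COURSE = 746.50

def breakeven_enrolments(annual_cost: float) -> int:
    """Course-enrolments needed for tuition alone to cover annual_cost."""
    return round(annual_cost / TUITION_PER_COURSE)

print(breakeven_enrolments(70_000))   # ~94: sessional-taught program
print(breakeven_enrolments(130_000))  # ~174: entry-level associate prof, all-in
print(breakeven_enrolments(180_000))  # ~241: top-of-scale associate, all-in
```

At roughly 190 course-enrolments, in other words, the program only barely clears an entry-level associate professor, and a senior one not at all.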

Let that sink in for a second: at Mount Allison – and many other universities – it takes more than 19 FTEs (or 190 course enrolments) to support a mid-career Associate Professor.  That’s what our combination of faculty salaries and tuition policies has brought us to.

Now, I haven’t spoken to anyone in the Mount Allison administration about this issue: but it seems to me the logic would go something like this:

i)  As an institution, we’re on seriously thin financial ice: our operating income per student is about $5,000 below that of U15 universities, and about $3,000 below Acadia’s;

ii)  We cannot sensibly run an entire program with nothing but sessional instructors;

iii)  This program will have difficulty breaking even over the long run unless it is taught by sessionals;

iv)  Maybe we shouldn’t offer this program anymore.

One could of course make the case that Women’s and Gender Studies is so important that it deserves cross-subsidies from elsewhere in the university.  And at larger and wealthier universities, this would be the case.  But at an institution as small and as cash-strapped as Mount Allison, it’s a tougher argument to make.  Most other departments are only just getting by, too.

Unpalatable choice, to be sure.  But that’s what running a university is all about these days.

February 02

Boards of Governors

One interesting piece of fallout from the UBC imbroglio is a newfound focus on governance.  A new group called Take Back #Tuumest (“Tuum est” being UBC’s Latin motto, meaning “it’s yours”) has started up, with the goal of reviewing how the university’s Board of Governors functions, and reducing the proportion of its government-appointed members (you can read their initial manifesto here).

So what should we make of this?  Is UBC’s Board too subservient to government, not attuned enough to actual campus issues?  To answer that, let’s take a quick tour of external governance around the world.

Board governance in Canada varies quite a bit from province to province.  As a general rule of thumb, the presence of government appointees on Boards increases as you head from east to west.  In many places in eastern Canada, institutions pre-date the province itself, and so never had government appointees to begin with (McGill, for example).  These Boards are, in effect, self-perpetuating oligarchies – similar to Boards at private US institutions.

In Canada, government appointments are given to friends of the government of the day.  Even so, Boards usually do not become overly partisan.  When governments change, the Board members appointed under previous administrations stay in their positions for awhile, and Governors of different political stripes get along reasonably well, reflecting a fairly wide consensus about how universities should be governed.  In most instances, political appointees are more or less free to act and vote on their own best judgement.  In the US, on the other hand, we are increasingly seeing state boards (often made up entirely of government appointees) acting like appendages of the Governor’s office, which makes them hyper-partisan.  This isn’t just bad for governance, it’s ridiculous – why have 100% government appointees when government is paying less than a third of the bill?

If you go further afield – say, to Europe where universities began – the tradition of external boards is not nearly as strong.  Indeed, there are some countries where governing boards are entirely free of external representation.  But the movement in much of Europe towards increased external oversight has intensified over the last two decades, or so: universities in Denmark and the UK are both required to have 50% plus one external governors (note: “external” does not necessarily mean government-appointed).  The reason?  Essentially, governments simply don’t trust universities to spend public money properly without external supervision.

The trade-off is essentially about what kind of relationship publicly-funded universities want to have with government.  Refusing government oversight through external board members just means government will try to re-impose control through other, more intrusive means – audits, budget control, greater control over procurement, you name it.  It is not, to be honest, a productive use of anyone’s time.

Is there a “magic proportion” of external governors – whether appointed by government or not – which is “right” for universities?  Not really.  There’s nothing particularly sacred about 50% plus one, other than it gives governments assurance that the lunatics (from their point of view) can’t start running the asylum.  At the University of Toronto, the proportion of externals on the Governing Board is considerably lower than 50%; though, in part, this is because the University’s anomalous unicameral system means that the Governing Board also acts as Senate.  And there’s nothing saying that external appointments have to be government appointments: McGill has proved a good steward of public money simply by appointing its own external overseers (direct government appointments in Quebec are arguably much less successful at doing this – see UQAM’s half-billion dollar construction fiasco).

But this observation cuts two ways: on one hand, there’s nothing particularly dangerous about #tuumest’s push for fewer government appointees; on the other, there’s nothing saying that altering the proportion of appointees is actually going to change much, either.  Boards are made up of people: some are good and some are bad.  Nobody gave much thought to the UBC Board’s composition until it made a decision with which many disagreed.  And it’s not clear that moving a board member or two around at the margins would have changed the outcome.

February 01

Questions and Answers about UBC

So, what happened last week?  On Monday, pursuant to a freedom-of-information request submitted last fall, UBC finally released documents – mainly emails – related to the events surrounding the departure of Arvind Gupta.  Much of it was redacted, including a flurry of fairly long exchanges from May and June.  On Wednesday, somebody figured out how to un-redact the document in Adobe, and all of a sudden everyone could see the crucial exchanges.  Then on Thursday, in view of the fact that the leak effectively voided the privacy clause of his non-disclosure agreement with the university, Gupta himself decided to give a couple of interviews to the press.

What did we actually learn from the documents, apart from the fact that folks at UBC are really bad at electronically redacting documents?  Less than you’d think.

We do have a better understanding of the timeline of where things went wrong.  A discussion about a proposed strategic plan stemming from the February Board meeting seems to have been the start of the deteriorating relationship between Gupta and at least a portion of the Board.  Clear-the-air talks about weaknesses in Gupta’s performance were held following the April board meeting.  And then downhill from there.  The documents make clear there were a lot of complaints within the Board about Gupta’s leadership: in particular, his relationship with his own leadership team and his handling of relationships with the Board.  Read the May 18th letter from Montalbano to Gupta: it’s rough.

Some of the specifics were new, but frankly there isn’t much surprising in there.  You didn’t need to know the details to realize that the heart of the whole affair was that Gupta lost the backing of the Board, and that this was something that probably happened gradually over time.

What has Gupta said in his interviews?  He has said, first, that the released documents provided a one-sided representation of the events of the spring, which is true enough.  Second, that despite having resigned because he had lost the confidence of the full Board, he now regrets not having pushed back harder and wishes he could have fought on, which is puzzling (if you’ve lost the confidence of a body, how would pushing back have aided anything?).  Third, that he doesn’t understand why the Board didn’t support him, because he had lots of support from professors, which seems to be a major instance of point-missing.  Fourth, that the whole push against him on the Board came from an ad-hoc, possibly self-selected sub-committee of the executive committee.

Wait, what?  There’s a lot of hand-wringing about the fact that much of the Board were bystanders to the interplay between Montalbano, a few other key Board members, and Gupta – look, it’s a cabal, they had it in for him, they hid it from the Board, etc.  But some of this is overwrought.  Generally speaking, a CEO’s performance review is handled by the Chair of the Board and a few others, rather than by the full Board.  The unanswered process question here is: what was the relationship of this group to the executive committee?  Was it duly constituted, or was it just a few people the Board Chair thought were “sound”?  In the grand scheme of things, this is kind of beside the point.  The fact that not a single other person on the Board has stepped forward and said “yeah, we were wrong about Gupta” suggests substantial unanimity on the key point: that even if something was amiss procedurally, any other procedure would have led to the same result.

(Similarly for the argument that there wasn’t “due process” for Gupta because he didn’t get the job performance evaluation that was in his contract: once the person/people responsible for evaluating a CEO decide the CEO needs to be replaced, what’s the point of a formal job evaluation?  If you were the CEO in question, wouldn’t you resign rather than go through a formal review where a negative outcome is certain?)

Is any of this going to change anyone’s mind about what happened?  I doubt it.  Gupta’s backers will say “it shows the Board had it in for him from the start”; any evidence that could be read as saying “gosh, maybe relations weren’t going so well” is simply regarded as “a pretext” so the mean old Board could stitch Gupta up.  A new set of rhetorical battle-lines seems to be forming: Gupta as champion of faculty (a point he himself seems keen to make) and the Board as the enemy of faculty.  There is little-to-no evidence this was actually the reason for Gupta’s departure, but it’s nevertheless the hill upon which a lot of other people want to believe he died.

That’s unfortunate, because it entirely misses the point of this affair.  Whether Gupta was popular with faculty, or whether he was a good listener and communicator with them, is irrelevant.  Presidents have to run a university to the satisfaction of a Board of Governors – some directly elected, some appointed by an elected government – whose job is to ensure that the public interest is being served.  Presidents have to do a large number of other things as well, but this is the really basic bit.  Whatever other beneficial things Gupta did or might have accomplished – and I think he might have done quite a lot – this wasn’t something he managed to achieve.  However nice or progressive he may have seemed in the other aspects of his job, that doesn’t change this fact.  And so he and the Board parted company.  End of story.

January 29

Asleep at the Switch…

… is the name of a new(ish) book by Bruce Smardon of York University, which looks at the history of federal research & development policies over the last half-century.  It is a book in equal measures fascinating and infuriating, but given that our recent change of government seems to be a time for re-thinking innovation policies, it’s a timely read if nothing else.

Let’s start with the irritating.  It’s fairly clear that Smardon is an unreconstructed Marxist (I suppose structuralist is the preferred term nowadays, but this is York, so anything’s possible), which means he has an annoying habit of dropping words like “Taylorism” and “Fordism” like crazy, until you frankly want to hurl the book through a window.  And it also means that there are certain aspects of Canadian history that don’t get questioned.  In Smardon’s telling, Canada is a branch-plant economy, always was a branch-plant economy, and ever shall be one until the moment where the state (and I’m paraphrasing a bit here) has the cojones to stand up to international capital and throw its weight around, after which it can intervene to decisively and effectively restructure the economy, making it more amenable to being knowledge-intensive and export-oriented.

To put it mildly, this thesis suffers from the lack of a serious counterfactual.  How exactly could the state decisively rearrange the economy so as to make us all more high-tech?  The best examples he gives are the United States (which achieved this feat through massive defense spending) and Korea (which achieved it by handing over effective control of the economy to a half-dozen chaebol).  Since Canada is not going to become a military superpower and is extremely unlikely to warm to the notion of chaebol, even if something like that could be transplanted here (it can’t), it’s not entirely clear to me how Smardon expects something like this to happen, in practice.  Occasionally, you get a glimpse of other solutions (why didn’t we subsidize the bejesus out of the A.V. Roe corporation back in the 1960s?  Surely we’d be an avionics superpower by now if we had!), but most of these seem to rely on some deeply unrealistic notions about the efficiency of government funding and procurement as a way to stimulate growth.  Anyone remember Fast Ferries?  Or Bricklin?

Also – just from the perspective of a higher education guy – Smardon’s near-exclusive focus on industrial research and development is puzzling.  In a 50-year discussion of R&D, Smardon essentially ignores universities until the mid-1990s, which seems to miss quite a bit of relevant policy.  Minor point.  I digress.

But now on to the fascinating bit: whatever you think of Smardon’s views about economic restructuring, his recounting of what successive Canadian governments have done over the past 50 years to make the Canadian economy more innovative and knowledge-intensive is really quite astounding.  Starting with the Glassco Commission in the early 1960s, literally every government drive to make the country more “knowledge-intensive” or “innovative” (the buzzwords change every decade or two) has taken the same view: if only publicly-funded researchers (originally this meant the NRC; now it means researchers in universities) could get their act together, talk to industry, and see what its problems are, we’d be in high-tech heaven in no time.  But the fact of the matter is that, apart from a few years in the 1990s when Nortel was rampant, Canadian industry has never seemed particularly interested in becoming more innovative, which is why we perennially lag the rest of the G7 with respect to business investment in R&D.

You don’t need to buy Smardon’s views about the potentially transformative role of the state to recognize that he’s on to something pretty big here.  One is reminded of the dictum about how the definition of insanity is doing the same thing over and over, and expecting a different result.  Clearly, even if better co-ordination of public and private research efforts is a necessary condition for swifter economic growth, it’s not a sufficient one.  Maybe there are other things we need to be doing that don’t fit into the Glassco framework.

At the very least, it seems to me that if we’re going to re-cast our R&D policies any time soon, this is a point worth examining quite thoroughly, and Smardon has done us all a favour by pointing it out.

Bon weekend.

January 28

The Future of Work (and What it Means for Higher Education), Part 2

Yesterday we looked at a few of the hypotheses out there about how IT is destroying jobs (particularly: good jobs).  Today we look at how institutions should react to these changes.

If I were running an institution, here’s what I’d do:

First, I’d ask every faculty to come up with a “jobs of the future report”.  This isn’t the kind of analysis that makes sense to do at an institutional level: trends are going to differ from one part of the economy (and hence, one set of fields of study) to another.  More to the point, curriculum gets managed at the faculty level, so it’s best to align the analysis there.

In their reports, all faculties would need to spell out: i) who currently employs their grads, and in what kinds of occupations (an answer of “we don’t know” is unacceptable – go find out); ii) what the long-term economic outlook is for those industries and occupations; iii) what the outlook is for those occupations with respect to tasks being susceptible to computerization (there are various places to look for this information, but this from two scholars at the University of Oxford is a pretty useful guide); and iv) how senior people in those industries and occupations see technology affecting employment – which means going out and talking to them.

This last point is important: although universities and colleges keep in touch with labour market trends through various types of advisory boards, the questions that tend to get asked are “how are our grads doing now?  What improvements could we make so that our next set of grads is better than the current one?”  The emphasis is clearly on the very short term; rarely if ever are questions posed about medium-range changes in the economy and what those might bring.  (Not that this is always front and centre in employers’ minds either – you might be doing them a favour by asking the question.)

The point of this exercise is not to “predict” jobs of the future.  If you could do that you probably wouldn’t be working in a university or college.  The point, rather, is to try to highlight certain trends with respect to how information technology is re-aligning work in different fields over the long-term.  It would be useful for each faculty to present their findings to others in the institution for critical feedback – what has been left out?  What other trends might be considered? Etc.

Then the real work begins: how should curriculum change in order to help graduates prepare for these shifts?  The answer in most fields of study would likely be “not much” in terms of mastery of content – a history program is going to be a history program, no matter what.  But what probably should change are the kinds of knowledge gathering and knowledge presentation activities that occur, and perhaps also the methods of assessment.

For instance, if you believe (as economist Tyler Cowen suggests in his book Average is Over) that employment advantage is going to come to those who can most effectively mix human creativity with IT, then in a statistics course, for instance, maybe put more emphasis on imaginative presentation of data, rather than on the data itself.  If health records are going to be electronic, shouldn’t your nursing faculty be developing a lot of new coursework involving the manipulation of information in databases?  If more and more work is being done in teams, shouldn’t every course have at least one group-based component?  If more work is going to happen across multinational teams, wouldn’t it be advantageous to increase language requirements in many different majors?

There are no “right” answers here.  In fact, some of the conclusions people will come to will almost certainly be dead wrong.  That’s fine.  Don’t sweat it.  Because if we don’t look forward at all, if we don’t change, then we’ll definitely be wrong.  And that won’t serve students at all.

January 27

The Future of Work (and What it Means for Higher Education), Part 1

Back in the 1990s when we were in a recession, Jeremy Rifkin wrote a book called The End of Work, which argued that unemployment would remain high forever because of robots, information technology, yadda yadda, whatever.  Cue the longest peacetime economic expansion of the century.

Now, we have a seemingly endless parade of books prattling on about how work is going to disappear: Brynjolfsson and McAfee’s The Second Machine Age, Martin Ford’s Rise of the Robots, Jerry Kaplan’s Humans Need Not Apply, Susskind and Susskind’s The Future of the Professions: How Technology Will Transform the Work of Human Experts (which deals specifically with how info tech and robotics will affect occupations such as law, medicine, architecture, etc.), and, from the World Economic Forum, Klaus Schwab’s The Fourth Industrial Revolution.  Some of these are insightful (such as the Susskinds’ effort, though their style leaves a bit to be desired); others are hysterical (Ford); while others are simply dreadful (Schwab: seriously, if this is what rich people find insightful, we are all in deep trouble).

So how should we evaluate claims about the imminent implosion of the labour market?  Well, first, as Martin Wolf says in this quite sober little piece in Foreign Affairs, we shouldn’t buy into the hype that “everything is different this time”.  Technology has been changing the shape of the labour market for centuries, sometimes quite rapidly.  We will go on changing.  The pace may accelerate a bit, but the idea that things are suddenly going to “go exponential” is simply wrong.  Just because we can imagine technology creating loads of radical disruption doesn’t mean it’s going to happen.  Remember the MOOC revolution, which was going to wipe out universities?  Exactly.

But just because the wilder versions of these stories are wrong doesn’t mean important things aren’t happening.  The key is to be able to see past the hype.  And to my mind, the surest way to do that is to clear your mind of the idea that advances in robotics or information technology “replace jobs”.  This is simply wrong; what they replace are tasks.

We get a bit confused by this because we remember all the jobs that were lost to technology in manufacturing.  But what we forget is that the century-old technology of the assembly line had long turned jobs into tasks, with each individual performing a single task, repetitively.  So in manufacturing, replacing tasks looked like replacing jobs.  But the same is not true of the service sector (which covers everything from shop assistants to lawyers).  This work is not, for the most part, systematic and routinized, and so while IT can replace tasks, it cannot replace “jobs”  per se.  Jobs will change as certain tasks get automated, but they don’t necessarily get wiped out.  Recall, for instance, the story I told about ATMs a few months ago: that although ATMs had become ubiquitous over the previous forty years, the number of bank tellers not only hadn’t decreased, but had actually increased slightly.  It’s just that, mainly, they were now doing a different set of tasks.

Where I think there are some real reasons for concern is that a lot of the tasks that are being routinized are precisely the ones we used to give to new employees.  Take law, for instance, where automation is really taking over document analysis – that is, precisely the stuff they used to get articling students to do.  So now what do we do for an apprenticeship path?

Working conditions always change over time in every industry, of course, but it seems reasonable to argue that jobs in white-collar industries – that is, the ones for which a university education is effectively an entry requirement – are going to change substantially over the next couple of decades.  Again, it’s not job losses; rather, it is job change.  And the question is: how are universities thinking through what this will mean for the way students are taught?  Too often, the answer is some variation on “well, we’ll muddle through the way we always do”.  Which is a pretty crap answer, if you ask me.  A lot more thought needs to go into this.  Tomorrow, I’ll talk about how to do that.

January 26

Tenure and Aboriginal Culture

You may or may not have noticed a story in the National Post over the weekend relating to a scholar at the University of British Columbia named Lorna June McCue, who has brought a human rights tribunal case against UBC for denying her tenure.  The basics of the story are that UBC didn’t think she’d produced enough – or indeed, any – peer-reviewed research to be awarded tenure in the Faculty of Law; Ms. McCue argues that since she adheres to an indigenous oral tradition (she is also a hereditary chief of the Ned’u’ten at Lake Babine, a few hundred kilometres northeast of Vancouver), she needs to be judged by a different standard.

Actually, Ms. McCue brought the case in the fall of 2012; UBC moved to have it dismissed; the hearing last week was on the motion to dismiss, which failed.  So now, 39 months later, the hearing can proceed (justice in Canada, Ladies and Gentlemen!  A big round of applause!).  Anyways, I have a feeling this story is going to run and run (and not just because of the glacial pace of the legal system), so I thought I would get some thoughts in early.

A couple of obvious points:

The spread of the university around the world, mainly in the 19th century, eliminated a lot of different knowledge preservation and communication traditions.  Universities basically wiped out the academy tradition in East Asia, and did a serious number on the madrassas of the Indian subcontinent and the Middle East (though, as we have seen, the latter are making a comeback in recent years in some fairly unfortunate ways).  And though universities around the world differ a lot in terms of finance and management, and to some extent in mission, there is no question that, thanks to the strength of the disciplines they house, they have had some extraordinarily isomorphic effects on the way we think and talk about knowledge.  So it’s not crazy for non-western cultures to once in a while say: look, there are other ways to construct and transmit knowledge, and we’d like a bit of space for them.  Maori have done this successfully with their Wānanga, or Maori polytechnics as they’re sometimes called.  Why not in Canada?

And there’s nothing immutable about the need for research as a professor.  Hell, 40 years ago in the humanities, research certainly wasn’t a hard prerequisite for tenure; even today, in the newer professional schools (I’m thinking of journalism, specifically), people often get a pass on publication if they were sufficiently distinguished before arriving at the university.  Different strokes, etc.

But of course, all that said, the fact is that accommodation for different knowledge paradigms is the kind of thing you work out with your employer before you start the tenure process, not afterwards.  It’s not as though McCue’s views render her incapable of writing: the university hired her on the basis of her 1998 LL.M. dissertation, which was a good 250 pages long, and presumably expected more work of similar quality.  And yes, it’s probably a good idea to have and fund institutions that more fully value Aboriginal ways of knowing, and that are prepared to take a broader view of what scholarship means (the relevant tenure criterion at First Nations University, for instance, is “consistently high achievement in research and scholarship useful to First Nations’ communities”).  But even if it is located on unceded Musqueam land, UBC ain’t that institution.

I have a hard time imagining this complaint will go anywhere, but human rights cases are funny things.  Keep an eye on it, anyway.

January 25

One In, One Out

I had a discussion a few months ago with a government official who was convinced she knew what was wrong with universities.  “They have no discipline,” she said.  “They just go out and create new programs all the time with no thought as to what the cost implications are or what the labour market implications are, and so costs just keep going up and up.”

I told her she was only half right.  It’s absolutely true that universities have no discipline when it comes to academic programs, but the problem really isn’t on the creation side.  When universities start a new program, it has to go through a process where enrolment is projected, labour market uptake estimated, and all that jazz.  And yes, there is a certain amount of creativity and outright bullshit in these numbers since no one really knows how to estimate this stuff in a cost-effective manner.  But basically, these things have a decent track record: they hit their enrolment targets often enough that they haven’t fallen into disrepute.

The problem is that these enrolment targets aren’t hit exclusively by attracting new students to the institution; there is always some cannibalization of students from existing programs involved.  So while each new program may succeed on its own terms, it does so partly by making every other program in the faculty slightly weaker.

And here’s where the lack of discipline comes in.  At some point, institutions need to sit back and take a look at existing programs, and be able to prune them judiciously.  When resources – particularly staffing resources – are static, if you keep trying to pile on new programs without getting rid of the old ones, all you get are a lot of weak programs (not to mention more courses staffed by sessionals).

And here’s one of the biggest, dirtiest secrets of academics: they suck at letting things go.  They are hoarders; nothing, once approved by Senate, must ever be taken away.  Prioritization exercises?  Never!  After all, something might be found not to be a priority.

Getting rid of academic programs is one of the purest examples of Mancur Olson’s collective action problem.  Closing any given program hurts a few people a lot, while the majority barely feels the benefit.  The advantage in political mobilization always goes to the side that perceives itself to have the most at stake, and so a program’s supporters are very often able to rally support and stop the cuts (a point made very well in Peter Eckel’s excellent book, Changing Course: Making the Hard Decisions to Eliminate Academic Programs).  But if you can never cut any programs, then over time the collective does start to hurt, because of the cumulative effect of wasted resources.

Of course, Olson’s theory also gives us a clue as to how to solve this problem: there need to be stronger incentives within institutions for people to support program closures.  One way to do that would be to introduce a one in, one out rule.  That is, every time Senate endorses a new program, it has to cut one somewhere else.  Such a rule would mean that pretty much anyone in the university who has an ambition to open a program at some point would have an incentive, if not to support specific program closures, then at least to support an effective process for identifying weak programs.

Might be worth a try, anyway.  Because this hoarding habit really needs to stop.

January 22

Higher Education in Developing Countries is Getting Harder

Here’s the thing about universities in developing countries: they were designed for a past age.  In Latin America, the dominant model was that of Napoleon’s Universite de France – a single university for an entire country – which was all the rage among progressives in the first half of the nineteenth century.  In Africa (and parts of Asia), it was a colonial model: whatever the University of London was doing in the late 1950s is basically what universities (the bigger ones, anyway) in Anglophone Africa are set up to do now.  We think of universities as being about teaching and research; by and large, in the global south, universities were about training a future governing elite and transmitting ideology.

Of course, for a long time now, governments and foreign donors have been trying to nudge institutions in the direction of modernization.  By and large, the preference seems to be something like a 1990s Anglo-American model: market-focused for undergraduate studies, more of an emphasis on knowledge creation, etc.  This has been a tough shift, and not just because of the usual academic foot-dragging.

The problems are manifold.  If you want research, you need PhDs.  In much of Africa and Latin America, less than half of full-time academics have them.  And because only PhDs can train PhDs, that’s a pretty serious bottleneck.  A few years ago, South Africa announced that it wanted to triple the number of PhDs in the country.  Great, said the universities.  Who’s going to train them?

And of course you need money, but that’s in exceedingly short supply.  Money for equipment, for instance (quick, how many electron microscopes are there in sub-Saharan African universities?  Take out South Africa, and I’m pretty sure the answer is zero).  But also money for materials, dissemination, conferences, etc.  In some African flagship universities, close to 80% of money for research comes from foreign donors.  That money is welcome, of course, but it means your research programs are totally at the whim of changing fads in international aid programs.

As for being market-focused: how does that work in countries where 80% of the formal economy is dominated by government and parastatals?  What’s even the point of building up a good reputation for graduating employable students when public sector HR managers aren’t allowed to discriminate between universities when hiring?

Now, making things worse are some fairly worrying macro-economic trends.  Not the commodities collapse, though that doesn’t help.  No, it’s the secular change in the way development is actually happening; specifically, that countries are starting to de-industrialize at ever lower levels of manufacturing intensity (a phenomenon that economist Dani Rodrik explains very well here).  To put it bluntly, countries are no longer going to be able to get rich through export-driven manufacturing.  There aren’t going to be any more Taiwans or Koreas.  In future, if countries are going to get rich, it’s going to be through services and knowledge-intensive products.

This, to put it mildly, places enormous pressure on countries to have institutions that are knowledge-intensive and market-oriented.  When human capital trained for services industries becomes the only route to development, universities become vital to national success in a way they simply are not in a society that already has a major manufacturing base.  Simply put: no good universities, no development.  And that’s a world first, because the developed world – including China – got rich before it got good universities.  It’s an unprecedented position for higher education anywhere.

But it’s a job for which these universities are simply not ready.  In Africa at least, even when the nature of the challenge is fully understood, universities are neither funded nor staffed adequately for the task; not only are their own internal cultures insufficiently entrepreneurial, but also they simply lack entrepreneurial partners with whom to work on knowledge and commercialization projects.

Getting a whole new set of challenges when you’ve barely got to grips with the old ones is a tall order. It’s a structural issue that international development and co-operation agencies need to think about, and invest in more than they currently do.

January 21

Marginal Costs, Marginal Revenue

Businesses have a pretty good way of knowing when to offer more or less of a good.  It’s encapsulated in the equation MC = MR, and shown in the graphic below.

[Figure: a U-shaped marginal cost curve crossing the marginal revenue curve at quantity Q1.]
Briefly, in the production of any good, unit costs fall at first as economies of scale kick in.  Eventually, however, if production expands far enough, you hit diseconomies of scale, and marginal cost begins to rise.  Where the marginal cost of producing one more unit of a good rises above the marginal revenue received from selling it (Q1 in the diagram above), that’s the point where you start losing money, and hence where you stop producing the good.

(This gets more complicated for products like software or apps where the marginal cost of production is pretty close to zero, but we’ll leave that aside for the moment.)
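To make the rule concrete, here’s a toy calculation in Python.  All the numbers are invented for illustration (a hypothetical U-shaped cost curve and a flat $40 marginal revenue); the point is just the stopping rule, not any real producer’s costs:

```python
def marginal_cost(q):
    # Hypothetical U-shaped MC curve: unit costs fall with early
    # economies of scale, then rise with diseconomies of scale.
    return 50 - 1.2 * q + 0.02 * q ** 2

MR = 40.0  # flat marginal revenue: every extra unit sells for $40

# MC bottoms out at q = 30 (where -1.2 + 0.04q = 0).  From the trough,
# keep producing while the next unit still costs no more than it earns;
# the stopping point is Q1.
q1 = 30
while marginal_cost(q1 + 1) <= MR:
    q1 += 1

print(q1)  # 50: the 51st unit would cost $40.82 to make but earn only $40
```

The same logic holds for any rising marginal cost schedule: production stops exactly where the cost of one more unit overtakes the revenue from selling it.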

Anyway, when it comes to delivering educational programs, you’d ideally like to think you’re not doing so at a loss (otherwise, you eventually have a bit of a problem paying employees).  You want each program, over time, to more or less come close to paying for itself.  It’s not the end of the world if they don’t; cross-subsidization of programs is, after all, a kind of core function of a university.  But it would be nice if they did.  In other words, you really want each program to have a production function where the condition MC = MR is fulfilled.

But here’s the problem.  Marginal revenue’s relatively easy to understand: it’s pretty close to average revenue, after all, though it gets a bit more complicated in places where government grants are not provided on a formula basis, and there’s some trickiness when you start calculating domestic fees vs. international fees, etc.  But the number of universities that genuinely understand marginal cost at a program level is pretty small.

Marginal costs in universities are a bit lumpy.  Let’s say you have a class of twenty-five students and a professor already paid to teach it.  The marginal cost of the twenty-sixth student is essentially zero – so grab that student!  Free money!  Maybe the twenty-seventh student, too.  But after awhile, costs do start to build.  Maybe with the thirtieth student there’s a collective bargaining provision that says the professor gets a TA, or assistance in marking.  Whoops!  Big spike in marginal costs.  Then when you get to forty, the class overfills and you need to split the course into two, with a new classroom and a new instructor, too.  The marginal cost of that forty-first student is astronomical.  But the forty-second is once again almost costless.  And so on, and so on.
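That lumpiness is easy to sketch.  Here’s a toy model of the class-size story above; the thresholds and dollar figures are invented for illustration, not drawn from any real collective agreement:

```python
TA_THRESHOLD = 30          # hypothetical CBA rule: a TA kicks in at 30 students
SECTION_CAP = 40           # hypothetical room limit: the class splits at 41
TA_COST = 6_000            # invented cost of one TA for the term
NEW_SECTION_COST = 15_000  # invented cost of a new instructor plus classroom

def marginal_cost_of_student(n):
    """Cost of admitting the n-th student to an already-staffed course."""
    if n == SECTION_CAP + 1:
        return NEW_SECTION_COST  # class overfills: open a second section
    if n == TA_THRESHOLD:
        return TA_COST           # crossing the TA provision
    return 0                     # the seat is already paid for: near-free

for n in (26, 30, 41, 42):
    print(n, marginal_cost_of_student(n))
# 26 → $0, 30 → $6,000, 41 → $15,000, 42 → $0
```

Note that averaged over those forty-two students the marginal cost is about $500 a head, even though any individual student costs either nothing or a small fortune – which is why the spikes only make sense when smoothed into averages.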

Now obviously, no one should measure marginal costs quite this way; in practice, it would make more sense to work out averages across a large number of classes, and work to a rule of thumb at the level of a department or a faculty.  The problem is very few universities even do that (my impression is that some colleges have a somewhat better record here, but the situation varies widely).  Partly, it’s because of a legitimate difficulty in allocating direct and indirect costs: how should things like light, heat, and the costs of student services, admissions, etc., be apportioned – and then there is the incredible annoyance of working out how to deal with things like cross-listed courses.  But mostly, I would argue, it’s because no one wants to know these numbers.  No one wants to make decisions based on the truth.  Easier to make decisions in the dark, and when something goes wrong, blame it on the Dean (or the Provost, or whoever).

Institutions that do not understand their own production functions are unlikely to be making optimal decisions about either admissions or hiring.  In an age of slow revenue growth, more institutions need to get a grip on these numbers, and use them in their planning.
