Higher Education Strategy Associates

Category Archives: Universities

April 23


Some of you have been calling and e-mailing over the last few weeks, asking me about the new global higher education rankings system called U-Multirank (full disclosure: I played a very minor “advisory” role in this project, in 2009).  To save everyone else a call, I thought I’d give you the skinny, via this blog.

U-Multirank is a creation of the European Union.  Stung by the THE and Shanghai rankings, which showed continental European (especially French) universities lagging badly, France took advantage of its EU Presidency in 2009 to announce that the EU would create a new and fairer global ranking – one which, in the words of the French Minister of Higher Education, would prove that French universities were the best.  (Yes, seriously.)

The people to whom this project was entrusted are among the smartest people in all of higher education: namely, the folks at the Centre for Higher Education (CHE) in Germany, and the Center for Higher Education Policy Studies (CHEPS) in the Netherlands.  The system they have created is what rankings would look like if universities had designed them themselves, rather than leaving the job to newspapers and magazines.  It includes indicators on teaching and learning, research, knowledge transfer, international orientation, and regional engagement, and it presents the data on each of these separately – no summing across indicators to produce a single league table with a single “winner”.  Read the project’s feasibility report, here – it’s an important piece of work which fundamentally re-imagines the notion of global rankings.
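To make the contrast with league tables concrete, here is a minimal sketch of the multi-dimensional approach – the institutions, scores, and band cut-offs below are entirely invented, and this is an illustration of the concept rather than U-Multirank’s actual methodology:

```python
# A sketch of the U-Multirank idea, using invented institutions and scores:
# each dimension is reported separately as a letter band, and nothing is
# ever summed into a single overall score.

def band(score, cutoffs=(80, 60, 40, 20)):
    """Map a 0-100 indicator score to a band from A (best) to E."""
    for letter, cutoff in zip("ABCD", cutoffs):
        if score >= cutoff:
            return letter
    return "E"

# Hypothetical data -- two institutions with very different profiles.
institutions = {
    "Univ X": {"teaching": 85, "research": 40, "regional engagement": 90},
    "Univ Y": {"teaching": 55, "research": 95, "regional engagement": 30},
}

# The "ranking" is a profile per institution, not a league table.
profiles = {
    name: {dim: band(score) for dim, score in scores.items()}
    for name, scores in institutions.items()
}
```

Univ X tops one dimension and Univ Y another, so there is no single “winner” to headline – which is precisely the point.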

It turns out, though, that not everyone likes the idea of rankings without winners.  In one of the most cynical pieces of university politics I’ve ever seen, the League of European Research Universities (LERU) announced a couple of months ago that it would not participate – ostensibly because the rankings system is “ill-conceived and badly designed”, but really because its members don’t like rankings in which they don’t come out on top.

As you can tell, I’m a fan of the U-Multirank concept; but I’m cautious about its overall prospects for success.  I think its ability to add non-European institutions will be limited, because many of its indicators are Euro-centric, and will require non-European institutions to incur some cost in data collection.  And I have my doubts about demand: after many years in this business, I’m increasingly convinced that, given the choice, consumers prefer the simplicity of league tables to the more accurate – but conceptually taxing – multi-dimensional rankings.

Still, if your institution is on the fence about participating, I urge you to give it a try.  This is a good faith effort to improve rankings; failure to support it means losing your moral right to kvetch about rankings ever again.

April 19

A Two-Tier Tuition Regime in Quebec?

Things are getting interesting in Quebec.  First Laval and now l’Université de Montréal are publicly threatening to leave the Conférence des recteurs et des principaux des universités du Québec (CREPUQ).  In the discreet and diplomatic world of Canadian university politics, this is like blowing a vuvuzela during a piano recital.

At one level, this is a delayed reaction to CREPUQ’s limp performance during last year’s tuition fee debate.  At the outset, all institutions agreed to take a common position and speak through CREPUQ, a strategy fatally undermined by CREPUQ’s subsequent decision to spend the crisis hiding under a blanket.  I don’t have any inside information, but reading between the lines, it seems that there was a split between the independent universities (McGill, Concordia, Bishop’s, Laval, Montreal, Sherbrooke) and the UQs, with the former mostly thinking the Charest government didn’t go far enough, and the latter – possibly with an eye on an incoming PQ government – being more ambivalent.  The result was a deafening and damaging silence from the reform’s key beneficiaries.

The lesson Laval and Montreal seem to have taken from this is that CREPUQ and their UQ colleagues are no longer to be trusted.  And so they are now out actively lobbying for a two-tier solution, which would promote their interests over those of the UQ system.  Specifically, they are arguing for a two-tier tuition structure which would allow research-intensive institutions to charge higher fees, while allowing the government to claim it is preserving access by giving students a low-fee option through the UQs.

I think there is some merit in a two-tiered solution.  Clearly, a lot of (mainly francophone) students have made it known that they value cheap universities over good universities.  So, fine, let those be the UQs.  For everyone else, there’s a better-resourced solution, funded by fees rather than government.

But the specific details of the plan are sketchy.  First of all, the link between tuition and research is a bit ridiculous.  What’s the value proposition: “pay us more, so we can pay less attention to you”?  Even if it weren’t ridiculous, the idea that it would apply to just Laval, Montreal, McGill, and Sherbrooke is nuts.  On any research measure other than “do you have a medical school”, Concordia kicks Sherbrooke’s behind; for it not to be on that list is a transparent piece of linguistic politics and institutional snobbery.

If you’re going down the two-tiered road, it seems to me that there’s a logically solid case for restricting it to just two universities (McGill and Montreal, genuinely world-class and special) or expanding it to six by including all the “independent” universities (i.e. including Concordia and Bishop’s).   Anything else seems arbitrary.

April 12

In Praise of Downward Mobility

One much-used trope among those wanting to bash higher education attacks the idea of “downward mobility”.  Typically, a journalist finds a kid from a nice middle-class family who is having a hard time making it in the labour market, and uses this as a platform for a string of Wente-isms:  “Higher education is supposed to be about upward mobility – but now graduates are downwardly mobile!  Won’t somebody please think of the children?” Etc., etc.

But upward mobility is greatly overrated.  Downward mobility is where our focus should be.  And here’s why:

Part of the problem with the notion of upward mobility is that, with respect to education, the term gets used in two distinct ways.  The first is a “rising-tide-lifts-all-boats” interpretation, where everyone is upwardly mobile in the sense that everyone’s purchasing power is rising.  Universities and colleges, through their enrichment of human capital and their contributions to the national innovation system, are seen as key actors in this process – though, obviously, many other things also go into economic growth.  Right now, this kind of upward mobility is in short supply.

But even where there is little or no economic growth, upward mobility in a second sense – that of people changing their position within the overall social hierarchy – can still exist.  But this type of mobility is a zero-sum game.  Upward mobility can only exist to the extent that downward mobility does.

The book I discussed yesterday, for example (Paying for the Party), is full of stories about downwardly mobile middle-class kids (albeit mostly ones who don’t work very hard at their studies).  That’s sad, but what’s truly appalling is the complete lack of downward mobility among the upper-class students.  No matter how useless they are academically, mom and dad are always there to help them avoid the consequences of their inaction.

A fair society, one where social position actually reflects effort and ability, requires more downward mobility, not less.  We need to be finding ways to take inherited privilege away, not reinforce it.  It’s why the rich need to pay more in tuition (and why the poor need grants to offset it).  It’s why legacy admissions, and merit scholarships that don’t take social origins into account, need to be fought.  It’s why all those unpaid internships in so-called “desirable” fields (mainly media and publishing) are not just illegal but also immoral, because they tilt the playing field toward the trustafarians who can afford them.

In a low-growth economy, allowing some to rise in social position means others must fall.  We in higher education have a vital role to play in this, and we shouldn’t be squeamish about it.

April 11

Paying For the Party

Paying for the Party: How College Maintains Inequality is a quite remarkable new work of ethnography by sociologists Elizabeth Armstrong and Laura Hamilton.  I recommend it unreservedly to student affairs professionals, or anyone interested in how university affects social mobility.

Embedded in a women’s dormitory at a large, unnamed Midwestern flagship state university (which, if I had to guess, is probably either Indiana or Illinois), the authors observed the girls on one floor for a year, and then conducted regular follow-up interviews with them for another four years.  The results are fascinating, in a horrifying sort of way.

The authors advance an argument that the culture of partying – specifically fraternity/sorority (or “Greek”) partying – is the key factor determining long-term success in college.  Rich kids can live the Greek life through a combination of massive parental subsidies and a proliferation of “business-lite” degrees, and their job prospects aren’t diminished, because their success in post-graduation job searches depends much more on parental connections than on academics.  Their less-wealthy and less-networked peers can either attempt to keep up (which can lead to much higher debt and/or reduced academic achievement, which hurts them more than it does their better-networked peers), or they end up feeling alienated and alone, which also has negative effects on completion rates.

The authors’ most interesting claim is that flagship public universities actively aid and abet the partying culture, both by providing the Greek system with legitimacy/prestige and by dumbing-down the curriculum with too many “business-lite” degrees.  (I’m usually skeptical about “dumbing-down” arguments, but some of this stuff shocked me – karate and ballroom dancing as for-credit courses?) Almost without exception, the kids in this book who studied hard turned out fine; the problem is that too many students are either distracted by other things, or not given institutional encouragement to study the right things.

Two small caveats about this book, though.  The first is that its arguments for state-level public policy change are much weaker than the ones it makes for campus reform.  Asking for more public funding because students party too much, and don’t study enough, just isn’t going to sell.

The second, simply, is that this is an American book.  Much of the really nasty stuff documented here occurs in Canada only in a very attenuated fashion.  The Greek system is less important here than it is in the US, and our class systems are different, too; so one shouldn’t assume the arguments translate directly.  Still, Armstrong and Hamilton’s message – about how social class affects the pathways one takes through higher education, and how those pathways affect post-graduation experiences – is a very important one; we should all pay much closer attention than we currently do.

April 10

Time for a New Duff-Berdahl?

Reading Peter C. Kent’s book on the Strax Affair at UNB – in which the case’s denouement was significantly affected by the then-recently-released report of the Duff-Berdahl commission – got me thinking about university governance.

In Canada, university governance has mostly run on a bicameral Senate/Board model for over a century.  In 1963, the Englishman Sir James Duff and the American Robert O. Berdahl were jointly appointed by AUCC and CAUT to look into how to modernize university governance, and to reduce the increasing number of conflicts between campuses and their governing boards.  Their fix was essentially to strengthen Senates and make them more responsive to internal constituencies – hence the origin of student representation on Senates, and of both student and faculty representatives on Boards.  By the early 1970s, essentially all Canadian institutions had moved in that direction, and governance has remained largely unchanged ever since.

It occurred to me while reading about Strax that Duff-Berdahl pre-dates the existence of the two most important forces on Canadian campuses today: professionalized administrations and faculty unions.  The latter don’t exist at all in the report, while the former get a very short chapter on administration called, “The President and his (sic) Administrative Group”, in which the central problem is whether or not the President uses administrators or Senate as his prime source of academic advice.

Union vs. Administration is not a modern equivalent of Senate vs. Board.   Administration has grown over the past fifty years for reasons both good (increased student numbers, growth of research, need for government, public, and alumni relations to boost income) and bad (empire building, reluctance of faculty to deliver pastoral services), but growth hasn’t been designed to increase the power of the Board.  Similarly, there are lots of good reasons for unions to have developed, but it wasn’t to deliver more power to the Senate.  Some even seem to dream of replacing the work of Senates with collective bargaining agreements, and at some universities, where unions have taken to grieving decisions of Senate (yes, really), we’re some ways down this road already.

CAUT’s line that having a unionized faculty “enhances collegiality” is pure codswallop.  Academic unions are both a symptom and a cause of diminished trust – and that loss of trust is making shared governance ever less workable.  The increasing politicization of simple administrative tasks (like producing a budget) is an example of this – see, for instance, the Dalhousie Faculty Association’s critique of university accounting practices.  If relatively objective things like accounting can’t be carried out without union-administration recrimination, an institution is in deep trouble.

Bicameralism needs trust in order to work, and current union-administration dynamics may have killed it for good.  Maybe it’s time for a new Duff-Berdahl?

April 05

No to “World-Class” Research in the Humanities

You often hear talk about how Canadian institutions need to do more research.  Better research.  “World-class” research, even.  Research that will prove how smart our professors are, how efficient they are with public resources, and, hence, justify a claim to an even greater share of those resources.

In medicine, the biological sciences, and engineering, this call is easy to understand.  Developments in these areas can – with the right environment for commercialization – lead to new products, which, in turn, have direct economic benefits to Canadians.  In the social sciences, too, it makes sense.  Most social sciences have (or should have) some relevance to public policy; thus, having world-class research in the social sciences can (or should) mean an improvement in that country’s governance, and its ability to promote a strong, healthy, and equitable society.

But what about in the humanities?  Is there a national public interest in promoting world-class research in the humanities?

My answer is no.  For two reasons.

The first is kind of technical.  When it comes to research, “world-class” status tends to get defined by bibliometrics.  In the sciences, scholarly conversations are, by their nature, global, and so a single standard of measurement makes sense.  But in the humanities, an awful lot of the conversations are, quite properly, local.  And so while bibliometric comparisons in the humanities, within a single country (say, between institutions), might say something important about relative scholarly productivity, comparisons between countries are, to a large degree, only measuring the relative importance of different national polities.  A strategy favouring world-class bibliometric scores in History, for instance, would de-emphasize Canadian History and Aboriginal studies, and instead focus on the Roman and British Empires, and the United States.  And that, obviously, would be nuts.

But there’s a bigger issue here: namely, why do we assume that the worth of the humanities has to be judged via research, in the same manner we judge the scientific disciplines?  Arguments in defence of the humanities – from people like Martha Nussbaum, Stanley Fish, etc. – stress that these disciplines’ value lies in encouraging students to think critically, to appreciate differences, and to create meaning.  And it’s not immediately obvious how research contributes to that.  Even if you completely buy the argument that “scholarly engagement is necessary to teaching”, can you really claim that an increased research load improves teaching?  Have students started thinking more critically since 3/3 teaching loads were cut to 2/2 in order to accommodate more research?

The real national public interest is in having a humanities faculty that can develop critical thinkers, promote understanding, and foster creativity.  Figuring out how to better support and celebrate those things is a lot more important than finding yet more ways for the humanities to ape the sciences.

April 04

More Data on Credit Transfer (Part 3)

So, yesterday we saw that, in fact, the vast majority of transfer students receive credit for their previous work, and in quite substantial amounts as well.  But what about the credits that didn’t get recognized?

There’s a pretty clear correlation between non-recognition and changing programs.  Overall, university transfer students said that more than 60% of their credits were accepted for transfer (among those who had any credit accepted, it was roughly 75%).  But as the figure below shows, the results were substantially better for students switching to a related program of study than for students who switched to an unrelated program.  This is important for understanding why credit recognition in colleges may actually be more restrictive than in universities, since students transferring into colleges are much more likely to be switching fields of study, compared with students who transfer into universities.

Figure 1 – Percentage of Credits Accepted for Transfer by Universities, by Relationship of Old Program of Study to New Program of Study

Another counter-intuitive finding is how open universities seem to be to accepting credit from colleges.  Among students transferring from another university, the average amount of previous credit awarded was 67%; for those transferring from colleges, the total was a remarkably high 58%.  Much of this result is driven by Alberta and BC, where credit-transfer systems have been designed specifically to smooth the path from colleges to universities.  However, in the other 8 provinces, the average amount of credit recognized was 43%, which points to a substantial willingness among universities to recognize college achievements, even in the absence of formal credit-transfer arrangements.

Institutions sometimes require transfer students to re-take a specific course, because the version taken at the previous institution doesn’t quite meet the new institution’s standard.  From students’ perspective this is pretty aggravating, and this phenomenon is always the focus of news coverage on credit transfer.

Despite the fact that, in general, Canadian institutions seem to have a pretty good record on credit transfer, this is an area which needs improvement.  Among students who transferred between two similar programs, 38% said they had to re-take a pre-requisite course. The elaborate systems of credit transfer in Alberta and British Columbia made this phenomenon much less prevalent, but still did not eliminate the problem, by any means.  Intriguingly, students transferring in from colleges were slightly less likely to have to re-take a pre-requisite, compared with students coming in from other universities.

Figure 2 – Percentage of Students Transferring From a Similar Program Who Were Required to Re-Take at Least one Pre-requisite Course

So there you have it.  The Canadian credit transfer system may look chaotic, but a lot of credit does in fact get transferred, and much of what doesn’t is excluded for the legitimate reason that students are switching programs.  The big irritant is universities being fussy about course pre-requisites.  Solve that, and we might just have a system to be proud of.

April 03

Actual Data on Transfer Credit (Part 2)

It’s easy to make transfer credit seem like a really big deal.  Outside of BC and Alberta, institutional credit transfer policies are pretty ad hoc, and there’s no shortage of anecdotes about students having to re-do courses they’ve already done.  But little data has hitherto been available to help us understand the extent to which credit transfer policies affect times-to-completion.

Until now.

Using HESA’s CanEd Student Panel, we examined this question from a couple of different angles.  Figure 1 gives an idea of the dimensions of the problem: one-in-six of the 1,876 university students we surveyed on this issue had actually transferred credit from one institution to another, while one-in-five had considered transferring, but had not done so.

Figure 1 – University Students’ Credit Transfer Experiences

The percentage who had transferred was highest in Alberta and British Columbia (30%) and lowest in Ontario (11%).  Some might take that as evidence of the obvious superiority of those provinces’ transfer systems, especially the bits that allow credit to be transferred from colleges to universities.  But a closer look at the evidence reveals a more nuanced picture.

Among those Ontario students who transferred, the proportion that did so from college to university (35%) was almost identical to that in Alberta (38%).  In total, just over a third of students transferring credit to Canadian universities are doing so from colleges.

Percentage of University Transferees Whose Previous Institution was a College

We don’t have perfectly equivalent data for colleges, but when we completed a similar survey for Colleges Ontario last year, 54% of transferees came from universities, and 46% from other colleges.

Now, here’s the interesting thing.  We asked university students in the CanEd panel, and Ontario college students, what percentage of credits earned in their previous program they were able to transfer.  Of students who transferred into universities, 89% were able to transfer at least some of their credits, and one-third were able to transfer all of their credits. On average, students who did transfer credit saw over 75% of their credits recognized.

Percentage of Previous Credits Recognized in Transfer Process, Canadian Universities

In Ontario colleges, we see similar numbers.  83% of students who transferred got transfer credit – but over three-quarters of those who did not get credit hadn’t even applied for it.  Fully 95% of credit requests were at least partially successful, and, additionally, a third of students who didn’t ask for credit were awarded it anyway.

That said, not all awarded credit necessarily helped to shorten the time-to-completion (on average, students said their studies were reduced by about one semester).  That’s because roughly three-quarters of transfer students in colleges were switching their area of study; simply having old credit recognized didn’t lessen the need to master all the material in the new program areas.

More tomorrow.

April 02

Understanding Credit Transfer (Part 1)

Every once in a while, the issue of credit transfer pops up.  Usually, it’s in the context of “learning efficiency” – some politician or deputy minister starts off with “why can’t my son/daughter/constituent get full credit for previous learning?”, and follows that with a diatribe about how universities and colleges “just don’t get it”, etc., etc.

Right now, this script is playing out in Alberta, where the Advanced Education Minister is asking institutions to create ten per cent more “seamless learner pathways”, whatever that means.

Now, it’s quite true that universities have incentives not to accept transfer credit.  The reasons are both financial (receiving universities can make more money if they accept fewer credits) and reputational (high-status receiving universities seem more exclusive if they accept fewer credits).  And there is certainly a public interest in reducing barriers of this kind.

The problem, though, is that there are some perfectly good reasons not to accept transfer credit.  Credits are not a universal currency; they need to be taken in particular sequences and combinations if they are to result in a degree.  One can’t take 120 History credits and expect to get a B.Eng.

Basically, a school accepting a transfer student needs to ask two questions: are this student’s credits of the right “level”?  And, are the credits relevant to the new program the student will be attending?  If the answer to either of those questions is no, then they are perfectly justified in rejecting the credit.

Most people focus on the issue of determining the “level” of credits – should it be done by bulky credit-transfer systems, like those in Alberta and BC? Or, should it be done through standards-based systems, like the European Credit Transfer System?  Those are important questions: the efficacy of the former seems to fall as the number of participating institutions rises, but nobody in Canada seems inclined to try the latter, because it’s too much work.

But, in fact, the bigger challenge is determining the relevance of old credits to new programs.  The whole point of specialized degree programs is that they offer something specific that others don’t; the more unique programs are, the harder it should be to transfer in credits.  This is why it’s completely baffling when politicians insist that institutions should simultaneously reduce program duplication and allow more transfer credit.  They’re two directives operating at cross-purposes.

And to make matters worse, the one thing usually ignored in this debate is actual data.  While n=1 might work for a feature story, it doesn’t prove there’s a generalized problem.

But never fear, we at HESA actually have some real data on this subject, both from colleges and universities.  More tomorrow.

March 05

Times Higher Education Reputation Rankings 2013

You’ll recall that yesterday, in reference to the orgy of hype that accompanies the annual release of the THE World Reputation Rankings, I made the point that universities’ reputation really doesn’t change all that much on a year-by-year basis, and that, therefore, perhaps said orgy was a wee bit overdone.

When all was said and done, five universities (Arizona, Indiana, Leeds, U Zurich, and Tel Aviv) fell out of the rankings, with a similar number replacing them (Monash, Moscow State, Freie Universität Berlin, New South Wales, and Maryland).  Now, this might sound eminently stable – a 95% retention rate!  But having five positions in the top 100 shift in a single year is actually relatively volatile.  What explains this?

Well, how about a change in the geographic balance of respondents?  Unlike baseball’s all-star ballot, the THE rankings don’t discount voting for home-town favourites.  And yes, stuff like that is happening.  How else can you explain Turkey’s Middle East Technical University regularly cracking the THE’s top 100 when it doesn’t even make the top 500 in the ARWU?

THE isn’t consistent about the way it reports respondent populations, but here’s my attempt to compare the 2012 and 2013 samples:

Now, I’m sorry, but when 10% of respondents in a sample – which is meant to be globally representative – are from Australia and New Zealand (possibly more, depending on how that 12% “unspecified” pans out), you have a skewed sample.  It beggars belief to know this and still claim with a straight face that Australia’s moving from 4 to 6 institutions in the top 100 actually means something.  The least they could do is weight the results.
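Weighting is not hard to do, either.  Here is a minimal sketch of post-stratification reweighting – all of the regional shares and vote counts below are invented for illustration, since the THE doesn’t publish data at this level of detail:

```python
# A minimal sketch of post-stratification weighting. All regional shares
# and vote counts here are invented for illustration.

sample_share = {"Oceania": 0.10, "Europe": 0.35, "North America": 0.30, "Asia": 0.25}
population_share = {"Oceania": 0.02, "Europe": 0.35, "North America": 0.28, "Asia": 0.35}

# Weight = region's share of the real population / its share of the sample.
weights = {region: population_share[region] / sample_share[region]
           for region in sample_share}

def weighted_votes(votes_by_region):
    """Total votes for an institution after regional reweighting."""
    return sum(n * weights[region] for region, n in votes_by_region.items())

# A home-town favourite whose votes come mostly from an over-sampled
# region shrinks accordingly: 90 Oceania votes count as only 18.
home_favourite = weighted_votes({"Oceania": 90, "Europe": 10})  # 28.0
```

An over-sampled region’s votes are discounted, an under-sampled region’s votes are scaled up, and home-town ballot-stuffing stops moving institutions up the table.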

One final treat before I leave this topic for a few months.  The keen-eyed among you may have noticed that the THE Reputation Ranking is based on two elements: a “research ranking” and a “teaching ranking”.  The even keener-eyed among you may have read the fine print and learned that the teaching reputation is in fact a misnomer: it’s actually a ranking of “graduate teaching”, which you’d have to figure is so correlated with research as to make no odds – and this is, in fact, the case.  I plotted each institution’s score for “Teaching” and “Research” reputation on a scatterplot, and here’s what I found:


Yes, you’re reading that right.  An R-square of .99.  Kind of makes you wonder why they bother, doesn’t it?
