The father of modern university rankings is James McKeen Cattell, a well-known early 20th-century psychologist, scientific editor (he ran the journals Science and Psychological Review) and eugenicist. In 1903, he began publishing American Men of Science, a semi-regular ranking of the country’s top scientists, as rated by university department chairs. He then hit on the idea of counting how many of these scientists were graduates of the nation’s various universities. Being a baseball enthusiast, he found it completely natural to arrange these results top to bottom, as in a league table. Rankings have never looked back.
Because of the league table format, reporting on rankings tends to mirror what we see in sports. Who’s up? Who’s down? Can we diagnose the problem from the statistics? Is it a problem attracting international faculty? Lower citation rates? A lack of depth in left-handed relief pitching? And so on.
The 2018 QS World University Rankings, released last night, are another occasion for this kind of analysis. The master narrative for Canada – if you want to call it that – is that “Canada is slipping”. The evidence for this is that the University of British Columbia fell out of the top 50 institutions in the world (down six places to 51st) and that we now have two fewer institutions in the top 200 than we used to (Calgary fell from 196th to 217th and Western from 198th to 210th).
People pushing various agendas will find solace in this. At UBC, blame will no doubt be placed on the institution’s omnishambular year of 2015-16. Nationally, people will try to link the results to problems of federal funding and argue how implementing the recommendations of the Naylor report would be a game-changer for rankings.
This is wrong for a couple of reasons. The first is that it is by no means clear that Canadian institutions are in fact slipping. Sure, we have two fewer in the top 200, but the number in the top 500 grew by one. Of those that made the top 500, nine rose in the rankings, nine slipped and one stayed constant. Even the one high-profile “failure” – UBC – only saw its overall score fall by one-tenth of a point; its drop in the rankings owed more to improvements at a clutch of Asian and Australian universities.
The second is that in the short-term, rankings are remarkably impervious to policy changes. For instance, according to the QS reputational survey, UBC’s reputation has taken exactly zero damage from l’affaire Gupta and its aftermath. Which is as it should be: a few months of communications hell doesn’t offset 100 years of scientific excellence. And new money for research may help less than people think. In Canada, institutional citations tend to track the number of grants received more than the dollar value of the grants. How granting councils distribute money is at least as important as the amount they spend.
And rightly so: universities are among the oldest institutions in society, and they don’t suddenly become noticeably better or worse over the course of twelve months. Observations over the span of a decade or so are more useful, but changes in ranking methodology make this difficult (McGill and Toronto are both down quite a few places since 2011, but a lot of that has to do with changes that reduced the impact of medical research relative to other fields of study).
So it matters that Canada has three universities which are genuinely top class, and another clutch (between four and ten, depending on your definition) that could be called “world-class”. It’s useful to know that, and to note whether any institutions show sustained, year-after-year movement either up or down. But this has yet to happen to any Canadian university.
What’s not as useful is to cover rankings like sports, and invest too much meaning in year-to-year movements. Most of the yearly changes are margin-of-error kind of stuff, changes that result from a couple of dozen papers being published in one year rather than another, or the difference between admitting 120 extra international students instead of 140. There is not much Moneyball-style analysis to be done when so many institutional outputs are – in the final analysis – pretty much the same.
In your last paragraph you mention “sports” as something not worth ranking. If you are referring to the “by discipline” category rankings in the QS rankings, then you should be aware that “sports-related” refers to the following subset of academic disciplines that comprise the category: kinesiology, exercise or sport sciences, sport psychology, and sport management. Kinesiology itself is a multi-disciplinary subject that comprises behavioural, biological, and social sciences, all focused, of course, on human movement, exercise, and sport. I just wanted to clarify in the event you thought the category in the QS rankings referred to university athletics programs.
I’m reminded, oddly, of the Elizabethan Homily against Peril of Idolatry. In it, Bishop Jewell notes that it’s so difficult to have statues in churches without eliciting their worship as idols that it’s best just not to have statues. The same could be said for university rankings: if we don’t want them to become fetishes, we’d better get rid of them.
I would like to add one other thing: the damage done by rankings isn’t just a problem of journalism, of understanding the situation. Rankings also insinuate themselves into departments, hiring decisions and tenure decisions, where the relevant authorities tend to internalize the rankings’ criteria. If Maclean’s weights its ranking heavily towards levels of research funding, then faculty will be urged to apply for money they don’t need, or hired because they’re in expensive fields. If it leans towards reputation, then faculty will be urged to set up blogs, or “demonstrate impact.” And so forth, all the way to Hell (which would be the UK, in this instance).