June 09

Why we should – and shouldn’t – pay attention to World Rankings

The father of modern university rankings is James McKeen Cattell, a well-known early 20th-century psychologist, scientific editor (he ran the journals Science and Psychological Review) and eugenicist.  In 1903, he began publishing American Men of Science, a semi-regular rating of the country’s top scientists, as judged by university department chairs.  He then hit on the idea of counting how many of these scientists were graduates of the nation’s various universities.  Being a baseball enthusiast, he found it completely natural to arrange these results top to bottom, as in a league table.  Rankings have never looked back.

Because of the league table format, reporting on rankings tends to mirror what we see in sports.  Who’s up?  Who’s down?  Can we diagnose the problem from the statistics?  Is it a problem attracting international faculty?  Lower citation rates?  A lack of depth in left-handed relief pitching?  And so on.

The 2018 QS World University Rankings, released last night, are another occasion for this kind of analysis.  The master narrative for Canada – if you want to call it that – is that “Canada is slipping”.  The evidence for this is that the University of British Columbia fell out of the top 50 institutions in the world (down six places to 51st) and that we also now have two fewer institutions in the top 200 than we used to (Calgary fell from 196th to 217th and Western from 198th to 210th).

People pushing various agendas will find solace in this.  At UBC, blame will no doubt be placed on the institution’s omnishambular year of 2015-16.  Nationally, people will try to link the results to problems with federal funding and argue that implementing the recommendations of the Naylor report would be a game-changer for rankings.

This is wrong for a couple of reasons.  The first is that it is by no means clear that Canadian institutions are in fact slipping.  Sure, we have two fewer in the top 200, but the number in the top 500 grew by one.  Of those that made the top 500, nine rose in the rankings, nine slipped and one stayed constant.  Even the one high-profile “failure” – UBC – only saw its overall score fall by one-tenth of a point; its fall in the rankings was due more to improvements among a clutch of Asian and Australian universities.

The second is that in the short-term, rankings are remarkably impervious to policy changes.  For instance, according to the QS reputational survey, UBC’s reputation has taken exactly zero damage from l’affaire Gupta and its aftermath.  Which is as it should be: a few months of communications hell doesn’t offset 100 years of scientific excellence.  And new money for research may help less than people think. In Canada, institutional citations tend to track the number of grants received more than the dollar value of the grants.  How granting councils distribute money is at least as important as the amount they spend.

And that’s exactly right.  Universities are among the oldest institutions in society and they don’t suddenly become noticeably better or worse over the course of twelve months.  Observations over the span of a decade or so are more useful, but changes in ranking methodology make this difficult (McGill and Toronto are both down quite a few places since 2011, but a lot of that has to do with changes which reduced the impact of medical research relative to other fields of study).

So it matters that Canada has three universities which are genuinely top class, and another clutch (between four and ten, depending on your definition) that could be called “world-class”.  It’s useful to know that, and to note whether any institutions show sustained, year-after-year movement either up or down.  But this has yet to happen to any Canadian university.

What’s not as useful is to cover rankings like sports, and invest too much meaning in year-to-year movements.  Most of the yearly changes are margin-of-error kind of stuff, changes that result from a couple of dozen papers being published in one year rather than another, or the difference between admitting 120 extra international students instead of 140.   There is not much Moneyball-style analysis to be done when so many institutional outputs are – in the final analysis – pretty much the same.
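To put a number on that margin-of-error point: when overall scores are bunched within a couple of points of each other, a one-tenth-of-a-point dip (the size of UBC’s) can coincide with a slide of several places if a handful of nearby institutions improve at the same time.  The sketch below is purely illustrative: the institution names and scores are invented, and it ignores how QS actually builds its composite score; it simply shows how rank amplifies tiny score movements in a tight pack.

```python
# Toy illustration: in a tightly bunched league table, a 0.1-point dip plus
# modest gains by a few close rivals turns into a multi-place slide.
# All institution names and scores below are invented.

def rank_table(scores):
    """Return {name: rank}, where rank 1 is the highest overall score."""
    ordered = sorted(scores, key=scores.get, reverse=True)
    return {name: position + 1 for position, name in enumerate(ordered)}

# Hypothetical overall scores, separated by a couple of tenths of a point.
scores = {
    "Uni A": 83.2, "Uni B": 82.0, "Uni C": 81.8, "Uni D": 81.6,
    "Uni E": 81.4, "Uni F": 81.2, "Uni G": 81.0, "Uni H": 80.8,
}
before = rank_table(scores)

# Uni B slips by a tenth of a point while three nearby rivals gain a few tenths.
scores["Uni B"] -= 0.1
scores["Uni C"] += 0.4
scores["Uni D"] += 0.5
scores["Uni E"] += 0.6

after = rank_table(scores)
print(f"Uni B: score down 0.1, rank {before['Uni B']} -> {after['Uni B']}")
# prints: Uni B: score down 0.1, rank 2 -> 5
```

Run the same movements in reverse and the institution “rises” three places without having done anything differently, which is exactly why year-to-year coverage of these tables tells you so little.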

October 30

Times Higher Rankings, Weak Methodologies, and the Vastly Overblown “Rise of Asia”

I’m about a month late with this one (apologies), but I did want to mention something about the most recent version of the Times Higher Education (THE) Rankings.  You probably saw it linked to headlines that read, “The Rise of Asia”, or some such thing.

As some of you may know, I am inherently suspicious about year-on-year changes in rankings.  Universities are slow-moving creatures.  Quality is built over decades, not months.  If you see huge shifts from one year to another, it usually means the methodology is flimsy.  So I looked at the data for evidence of this “rise of Asia”.

The evidence clearly isn’t there in the top 50.  Tokyo and Hong Kong are unchanged in their positions.  Tsinghua, Beijing and the National University of Singapore are all within a place or two of where they were last year.  In fact, if you just look at the top 50, you’d think Asia might be going backwards, since one of its big unis (Seoul National) fell out of the top 50, going from 44th to 52nd in a single year.

Well, what if you look at the top 100?  Not much different.  In Korea, KAIST is up a bit, but Pohang is down.  Both the Hong Kong University of Science and Technology and Nanyang were up sharply, though, which is a bit of a boost; however, only one new “Asian” university came into the rankings, and that was Middle East Technical University in Turkey, which rose spectacularly from the 201-225 band last year to 85th this year.

OK, what about the next 100?  Here it gets interesting.  There are bad news stories for Asian universities: National Taiwan and Osaka each fell 13 places, Tohoku fell 15, Tokyo Tech 16, the Chinese University of Hong Kong 20, and Yonsei University fell out of the top 200 altogether.  But there is good news too: Bogazici University in Turkey jumped 60 places to 139th, and five new universities – two from China, two from Turkey and one from Korea – entered the top 200 for the first time.

So here’s the problem with the THE narrative.  The bulk of the evidence for all this “rise of Asia” stuff rests on events in Turkey (which, like Israel, is often considered European rather than Asian – at least if membership in UEFA and Eurovision is anything to go by).  The only reason THE goes on with its “rise of Asia” tagline is that it has a lot of advertisers and a big conference business in East Asia, and it’s good business to flatter them, and damn the facts.

But there’s another issue here: how the hell did Turkey do so well this year, anyway?  Well, for that you need to check in with my friend Richard Holmes, who runs the University Ranking Watch blog.  He points out that a single paper (the one in Physics Letters B which announced the confirmation of the Higgs boson, and which immediately got cited in a bazillion places) was responsible for most of the movement in this year’s rankings.  And, because the paper had over 2,800 co-authors (including some from those suddenly big Turkish universities), and because THE doesn’t fractionally count multi-authored articles, and because THE’s methodology gives tons of bonus points to universities located in countries where scientific publication counts are low, this absolutely blew some schools’ numbers into the stratosphere.  Other examples include the Scuola Normale di Pisa, which came out of nowhere to be ranked 65th in the world, and Federico Santa Maria Technical University in Chile, which somehow became the 4th-ranked university in Latin America.
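To see why full counting does so much damage, it helps to sketch the arithmetic.  The snippet below is a deliberately crude model: the paper counts, citation counts and divide-by-co-authors rule are invented stand-ins, and it leaves out THE’s field and country normalisation entirely.  But it shows how a single heavily cited, 2,800-author paper swamps the citation average of a small publisher under full counting, while fractional counting barely moves it.

```python
# Crude sketch of full counting vs fractional counting of citations.
# All paper and citation counts are invented; THE's real indicator also
# applies field and country normalisation, which this ignores.

def citations_per_paper(papers, fractional=False):
    """papers: list of (citations, co_authors) tuples for one university."""
    if fractional:
        # Fractional counting: the university is credited with 1/n of each
        # paper and 1/n of its citations, where n is the co-author count.
        credit = sum(1 / n for _, n in papers)
        cites = sum(c / n for c, n in papers)
    else:
        # Full counting: the university gets the whole paper and every
        # citation, no matter how many co-authors it shares them with.
        credit = len(papers)
        cites = sum(c for c, _ in papers)
    return cites / credit

# A hypothetical small university: 200 ordinary papers (5 citations,
# 4 co-authors each), plus a single Higgs-style mega-paper with 2,800
# co-authors that racks up 4,000 citations.
ordinary = [(5, 4)] * 200
mega = [(4000, 2800)]

print(round(citations_per_paper(ordinary), 1))                           # 5.0
print(round(citations_per_paper(ordinary + mega), 1))                    # 24.9 (full counting)
print(round(citations_per_paper(ordinary + mega, fractional=True), 1))   # 5.0
```

On that kind of arithmetic, one well-timed co-authorship is worth more than years of ordinary output, which is Holmes’s point; the bonus for being in a low-publication country then amplifies the effect further for the Turkish and Chilean institutions mentioned above.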

So basically, this year’s “rise of Asia” story was based almost entirely on the fact that a few of the 2,800 co-authors on the “Observation of a new boson…” paper happened to work in Turkey.

THE needs a new methodology.  Soon.