As you likely noticed from the press generated by the release of the QS rankings: it’s now rankings season! Are you at a university that seems to care about global rankings? Are you not sure what the heck they all mean, or why institutions rank differently on different metrics? Here’s a handy cheat-sheet to understand what each of them does, and why some institutions swear by some, but not by others.
Academic Ranking of World Universities (ARWU): Also known as the Shanghai Rankings, this is the granddaddy of world rankings (disclaimer: I sit on the advisory board), having been first out of the gate back in 2003. It’s mostly bibliometric in nature, and places a pretty high premium on publication in a select few journals. It also, unusually, scores institutions on how many Nobel Prizes or Fields Medals their staff or alumni have won. It’s really best thought of as a way of measuring large deposits of scientific talent. There’s no adjustment for size or field (though it publishes separate ratings for six broad fields of study), which tends to favour institutions that are strong in fields like medicine and physics. As a result, it’s among the most stable rankings there are: only eleven institutions have ever been in ARWU’s top ten, and the top spot has always been held by Harvard.
Times Higher Education (THE) Rankings: As a rough guide, think of THE as ARWU with a prestige survey and some statistics on international students and staff tacked on. The survey is a mix of good and bad. They seem to take reasonable care in constructing the sample and, for the most part, questions are worded sensibly. However, the conceit that “teaching ability” is being measured this way is weird (especially since institutions’ “teaching” scores are correlated at .99 with their research scores). The bibliometrics differ from ARWU’s in three important ways, though. The first is that they are more about impact (i.e. citations) than publications. The second is that said citations are adjusted for field, which helps institutions that are strong in areas outside medicine and physics, like the social sciences. The third is that they are also adjusted for region, which gives a boost to universities outside Europe and North America. It also does a set of field rankings.
QS Rankings: QS did the rankings for THE until 2009, when the latter ended the partnership, but QS kept trucking on in the rankings game. It’s superficially similar to THE in the sense that it’s mostly a mix of survey and bibliometrics. The former is worth more, and is somewhat less technically sound than THE’s survey, and it gets regularly lambasted for that. The bibliometrics are a mix of publication and citation measures. Its two distinguishing features are: 1) data from a survey of employers soliciting their views on graduate employability; and, 2) it ranks ordinally down to position 500 (other rankings only group in tranches after the first hundred or so institutions). This latter feature is a big deal if you happen to be obsessed with minute changes in ranking order, and your institution regularly features in the 200-to-500 range. In New Zealand, for instance, QS gets used exclusively in policy discussions for precisely this reason.
U-Multirank: Unlike all the others, U-Multirank doesn’t provide data in a league-table format. Instead, it takes data provided by institutions and allows users to choose their own indicators to produce “personalized rankings”. That’s the upside. The downside is that not enough institutions actually provide data, so its usefulness is somewhat less than optimal.
Webometrics Rankings: As a rule of thumb: the bigger, more complicated, and more filled with rich data a university website is, the more important a university it is likely to be. Seriously. And it actually kind of works. In any case, Webometrics’ big utility is that it ranks something like 13,000 universities around the world, and so for many countries in the developing world, it’s the only chance their universities have to see how they compare against institutions elsewhere.