The Fracturing of Global Rankings

Interesting news out of China last week, as three major research universities – Lanzhou University, Renmin University, and Nanjing University (the last of which is a genuine global player) – announced that they would stop participating in annual rankings conducted by overseas rankings agencies. The purported reason? Apparently, they wanted to focus more on delivering “education with Chinese characteristics,” echoing a recent call from Xi Jinping for universities to avoid “copy(ing) foreign standards and models when we build world-class universities of our own…based on Chinese conditions and with Chinese characteristics”.

The question is: should we believe them?

Well, to be fair, Xi is not entirely without a point here. International rankings do create pressure for uniformity in global scholarship, in that they incentivize publication in the most highly-ranked journals, whose (mainly American and British) editors tend to have biases towards topics that matter to developed-world audiences. That these biases exist is not seriously in question, but the extent of the bias does seem to vary quite a bit by field. It is probably biggest in fields like medicine (where tropical medicine struggles to get equal treatment) and the social sciences (it’s *especially* bad in business, economics, and political science, where there is a pronounced bias in favour of articles that either deal with “American” subjects or with “global” ones, which put a high premium on theory and subtract points for specialist local knowledge). It is less of an issue in mathematics and the hard sciences. Curiously, these are precisely the areas in which Chinese universities tend to excel, so it’s not entirely clear what the problem is.

But perhaps we should not take these claims at face value.  Because saying institutions “will not take part in international rankings” is not the same thing as saying they will not *appear* in international rankings.  

A quick refresher on how international rankings work is in order.  For any set of rankings, there are only three possible data sources.  The first is data from independent third-party sources (mainly bibliometrics), the second is surveys (mainly asking profs what they think are good universities) and the third is data from the institutions themselves about things like how much money they make from outside businesses or how many international students they have.  Institutions cannot stop rankings agencies from using either of the first two data sources: they can only “withdraw” from rankings by choosing not to provide their own data.  This exact scenario happened in Canada fifteen years ago when universities decided to stop providing Maclean’s with such data; Maclean’s pivoted partly by using a different set of indicators and partly by trying to use Freedom of Information laws to get the same data institutions used to provide voluntarily (for reasons that should be reasonably obvious, the FOI route is not one that works in China).

Now, of the big three international rankings, the Times Higher Education World Rankings and the QS World Rankings both employ all three sources of data to populate their indicators, while the third – the Academic Ranking of World Universities (or Shanghai Rankings) – uses only third-party data. Thus, if a university says it won’t “participate” in rankings, what it effectively means is that it is withdrawing from Times Higher and QS, but not the Shanghai Rankings (technically, Lanzhou would only be withdrawing from QS, since as far as I can tell it has not submitted data to THE in quite some time, if ever). Because the Shanghai Rankings place all three of these institutions at levels similar to or higher than the foreign rankings do, one might be forgiven for suspecting that they can get similar levels of public kudos with a lot less work. And none of them really needs the rankings to drive international student flows: neither Renmin nor Lanzhou has many international students, and Nanjing doubtless feels it will get international students regardless, because of its status as one of the major centres of elite Chinese-language instruction.

So, the real question is: why now, and why not earlier? Well, I expect it has something to do with the fact that there are probably more bouquets than brickbats to be had in China right now for institutions seen to be thumbing their noses at the west. And really, when one looks at the real sources of difference between the three sets of rankings, what it comes down to is that while Shanghai focuses on research outputs, THE and QS are effectively popularity contests, with surveys used to measure prestige. Given various movements in the west to stigmatize Chinese universities as “not really universities” because they fall short of western notions of academic freedom – something I wrote about roughly a year ago – these universities may simply have said “who needs the hassle” and elected not to be judged by Westerners.

I’m not sure anyone can or should blame them for this.  No one needs to take part in rankings they don’t think portray them in a flattering or even a fair light.  Very few big research universities in the US or Europe (or China for that matter) choose to submit data to the Times Impact Rankings, precisely because they think they would not look as good as they do in solely research-based rankings.  There’s no difference in the impulses at work here.

And what, if anything, does this mean for rankings in general? If – and it’s still a big if at this point – this movement gains ground in China, it’s possible that QS and the Times will have to make some hard decisions about the kinds of indicators they use to populate their rankings, just as Maclean’s did fifteen years ago. But whether or not that happens, I think the bigger lesson here is that there are increasingly deep fractures across global academia about the larger purposes of higher education and how – if at all – institutional progress on those purposes should be measured. For a few brief years after the turn of the millennium, we had something like unity about the centrality of research production, but now that is gone. This doesn’t mean global rankings are going away – indeed, they may well continue to multiply – but it does mean that their meanings will be constructed on an increasingly regional and national basis.
