As I mentioned yesterday, I was at a conference of the International Rankings Expert Group (IREG) in Tashkent last week, hosted at the Tashkent Institute of Irrigation and Agricultural Mechanization Engineers. I’ve been going to these meetings for close to 20 years now (I even had a minor role in drafting the “Berlin Principles on Rankings” in 2006) and I highly recommend them. One learns a great deal about the differences in how rankings work, the ways higher education works differently around the world, and how the measurement of institutional outputs plays out in different contexts.
Broadly speaking, in “the west”, institutions face two sets of rankings: domestic and international. On the whole, domestic rankings tend to focus either directly on resources (e.g., dollars per student, or professors per student, which is tantamount to the same thing), on students’ secondary school entering averages (i.e., selectivity), or on some kind of measurement of student engagement that tends to be highly correlated with resources and selectivity. International rankings tend to focus on research output/citation impact or proxies for the same (e.g., total budget, or a survey of academics rating the best institutions in their field).
Not all countries in the west regard rankings in the same way: in particular, the relative amount of attention paid to domestic rankings (which tend to focus on issues related to undergraduate students) vs. international rankings (which tend to revolve around research and hence graduate students) differs quite a bit. But in pretty much all cases, a lot of rankings discourse revolves around the idea that rankings carry too many negative externalities: that is, they “pervert” the way universities work by creating incentives to do things they would not otherwise do. Where domestic rankings are dominant, as in the US, that usually means critiquing an exaggerated focus on increasing selectivity; where international rankings are dominant, it means critiquing an over-emphasis on publications.
(That is, of course, where the critique focuses on actual outcomes. Sometimes the critiques are related to the validity and reliability of results – which are fair game – and sometimes they are just about claiming that any comparison of institutions is invalid because any kind of measurement at the institutional level is simply jejune. I’m less well-disposed to the second argument, though to the extent the proposed alternative is better measurement at the field-of-study or program level, I’m here for it. The problem is that too often the anti-rankings argument is simply an argument against comparisons of any sort, which, you know, is ridiculous and comes off as either highly elitist – “who are these peasants who presume to measure our massive value to society?” – or simply in bad faith.)
In the East – and particularly in China – both domestic and international rankings are similar in that they tend to elevate the importance of research metrics. The simplicity and one-dimensionality of these metrics make them easy for university leadership to incentivize, and in truth they have been very successful at doing so over the years. If there is one undeniable truth about Chinese higher education in the last few years, it is that Chinese universities have become extraordinarily good at research and have overtaken those of many western nations, particularly in areas such as crystallography and materials science.
In general, China and the rest of East Asia have seen less push-back against rankings. Basically, all those worries about whether rankings pervert academic missions? It seems those are literally first-world problems. And over the last half-dozen years, those first-world problems have led to a big schism in the world of rankings, because of the way rankers have tried to respond to the criticisms. While in Asia rankings remain focused on research outputs, in the west, new rankings like the Times Higher Impact Rankings and the QS Sustainability Rankings have tried to put the spotlight on other aspects of universities’ missions, such as contributions to society or the environment. This outbreak of new rankings hasn’t occurred, as Richard Holmes claimed in UWN a couple of weeks ago, to “mask Western weakness in research” – frankly, rankers aren’t clever enough to come up with that idea. Rather, it comes down to a difference in how different parts of the world think about the value of universities and what is worth measuring.
So far, this is familiar ground for those who follow rankings, and territory I have covered before. But now I want you to think about a third perspective, one which is increasingly coming to the fore in world debates about rankings. And that is the perspective of the global south.
For the first few years of global rankings, one of the biggest criticisms levelled at them was how few institutions were ranked, resulting in large parts of the world being excluded. Over time, the different rankings have started to include a lot more institutions: most ranking agencies now rank at least 1,000 institutions globally, and 2,000 is not uncommon. Institutions in places like South Asia, Central Asia, Sub-Saharan Africa, and the Middle East, which were formerly shut out of global rankings, are starting to be admitted.
And who’s pushing for inclusion in these rankings? Universities themselves, partly. But to a large extent it’s governments in the global south who are driving the agenda. They want their institutions to be ranked, either domestically or internationally, not because they are besotted with neo-liberalism or managerialism or marketization, as the first-world ranking critics would have it, but because they know they need modern universities in order to try to compete with the North in the 21st century, and frankly they have no way other than rankings to gauge how close they are to catching up.
That is, in the global south, global rankings are seen as a useful tool in system management and, at a broader level, in economic modernization. And more to the point, they are used for benchmarking purposes rather than marketing ones.
Now, one could argue that rankings aren’t the best tool by which to measure the progress of universities, and they’d have a point. But any alternative to rankings is still going to have to use a set of indicators very similar to those used in rankings, particularly with respect to scientific output. At that point, we’d just be arguing about whether these indicators need to be considered individually, or whether they should be bundled into a weighted algorithm and considered collectively through a reductive but heuristically simple score/rank. The difference really isn’t that significant.
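To make that distinction concrete, here is a minimal sketch in Python of the two options. The indicator names, values, and weights are invented purely for illustration; they are not drawn from any actual ranking’s methodology.

```python
# Minimal sketch only: indicator names, values, and weights are invented for
# illustration, not taken from any real ranking's methodology.

# Three research-oriented indicators, already normalized to a 0-1 scale.
indicators = {
    "publications_per_faculty": 0.62,
    "field_weighted_citation_impact": 0.48,
    "international_co_authorship": 0.55,
}

# Option 1: report each indicator separately (the benchmarking view).
for name, value in indicators.items():
    print(f"{name}: {value:.2f}")

# Option 2: bundle the same indicators into one weighted composite score
# (the ranking view). The weights are an arbitrary choice made by the ranker.
weights = {
    "publications_per_faculty": 0.4,
    "field_weighted_citation_impact": 0.4,
    "international_co_authorship": 0.2,
}

composite = sum(weights[name] * value for name, value in indicators.items())
print(f"composite score: {composite:.2f}")  # institutions are then sorted on this number
```

Either way, the underlying data are the same; the composite simply collapses them into a single number that can be sorted.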
In practice, any “south-oriented” rankings – whether done internationally or domestically – are inevitably going to look more like Asian research rankings than North American/European “impact” or “sustainability” rankings. Quite simply, the data-collection burden of these new western rankings excludes most universities in the global south, very few of which have the institutional research capacity required to participate. You can’t be in a ranking if you can’t produce the data, so for poor countries, relatively simple rankings are the way to go.
Anyways, the discussions in Tashkent crossed all of these subject areas. Far from dwelling on the ins-and-outs of rankings that are oriented more to institutional marketing than anything else (e.g., the Times Higher rankings, or US News & World Report’s venerable US College Rankings), we were discussing how emerging institutions can become more visible and how countries can design national modernization programs using rankings as a form of benchmarking.
It was a blast. And it was nothing like what first-world ranking critics imagine discussions amongst rankers to be.