Cast your minds back, if you will, by about 15 years. Paul Martin had yet to show us why great finance ministers make lousy Prime Ministers. The ghastly Crocs fad was still three years away. And in China, Professor Nian Cai Liu had just released the inaugural Academic Ranking of World Universities, known more colloquially as the Shanghai Rankings.
While national rankings were old hat, the Shanghai Rankings’ global nature was something genuinely new. The sadly defunct magazine Asiaweek had tried a regional ranking in the late 1990s, but it failed because – like US News and Maclean’s – it relied on a methodology that required institutional participation, and once a few big institutions said no, the whole thing fell apart. But Professor Liu, who had developed the rankings in an attempt to answer the question “how will we know if Chinese universities are world-class?”, chose a different methodology. (Liu’s home university was one of the 30-plus institutions in the Chinese government’s Project 985, which aimed to develop “world-class universities,” so it was a pressing question at the time.) Using only bibliometric measures and the presence of major awards (Nobel Prizes, Fields Medals, etc.), he sidestepped the institutional veto while still directly comparing institutions around the globe.
This idea was so novel that it was immediately copied by QS and Times Higher Education (who at the time collaborated on a single set of rankings), though they chose to ask institutions for data and ran their own survey to measure reputation. But the results of the two exercises were broadly similar: the US looked really good, most of continental Europe was a long way behind, and non-OECD countries were essentially invisible. These results created enormous political waves, leading a number of countries to launch government initiatives that ploughed significant new money into research. These initiatives were often executed in ways that arguably increased institutional stratification within those countries.
What’s difficult to remember, 15 years on, is why on earth anyone took these things seriously (which, initially, outside the Americas, most people did). There were very good reasons to reject them. Why, for instance, are universities the right unit of analysis? Why not individual programs within an institution? Or, going the other way, national systems of research? France and Germany have argued the latter point quite effectively, since the Shanghai approach eliminates from view the work of CNRS, Max Planck, etc. Or, alternatively, why accept an exercise that ranked universities solely on their research? (Yes, the QS/THE rankings had some other indicators in there, but these were so highly correlated with research intensity that it amounted to the same thing.)
These objections went largely unvoiced because, at a gut level, a lot of people believed the rankings. The rankings had external validity of a sort: back in 2003-04, the American economy was considered the envy of the world. Dotcom bust? Major terrorist attacks and two wars in the Middle East? These had little apparent effect on the astonishing growth/prosperity engine that was the America of 2004. And so people around the world looked at these rankings and said, “hey… maybe their economy is great… because they have great universities?” As you can imagine, this was an argument that appealed to universities, and it perhaps dulled their critical faculties a bit. And in Europe, where the Lisbon agenda (“make Europe the world’s most innovative economy by 2010”) had been agreed in 2000, spending money on universities seemed like a magic way of reaching that goal without doing all the tedious things that actually make economies more innovative, like implementing thoroughgoing competition reform and (in some places, anyway) shrinking the state from gargantuan to merely comprehensive. So there was a coalition of forces, particularly in Europe, which saw some benefit in the policy implications of the rankings, if not in the rankings themselves.
Now, what if Professor Liu had done this five years later? What if he had released his findings in the fall of 2008 instead of 2003? Those key pieces of external validation wouldn’t have been there. No one would have said to themselves, “research universities are the cause of American prosperity”; they would have said, “American prosperity is built on a whole bunch of bad cheques and the worst type of casino capitalism” (which is not true, but in 2008 it was hard to see beyond that). They might well have cast a more skeptical eye on rankings that placed so many American schools – even some mediocre ones – in the world’s top 500.
My point here isn’t that rankings are incorrect or that they have only pernicious effects. Yes, I wish those early rankings had been more precise and called themselves global research rankings rather than global university rankings – we’d have saved ourselves a lot of nonsense on all sides of the debate, frankly. And yes, rankings have had some pernicious effects, but they also sent a lot of extra money into basic research and in some countries they have measurably improved the quality of education (Russia springs to mind here).
My point, rather, is that the near-total acceptance of these rankings in the policy world and the deluge of national-level policy initiatives that followed were by no means a given. They were both a product of a particular moment in the global political economy, one which largely disappeared within a few years. At another moment they might easily have been ignored. On vagaries such as this, entire policy fields may pivot.
Surely a crucial factor was the rankings’ adoption by prospective international students as an international comparison of universities’ relative positional value (Hirsch, 1976), a comparison that is still not otherwise available.
Hirsch, Fred (1976). The Social Limits to Growth. London: Routledge & Kegan Paul.