The Future of Rankings is Excellent

I've been in Europe for most of the past two weeks on a number of rankings-related projects. And as a result of these travels, I'm more optimistic about international rankings than I have been for a long time. Here's why.

First of all, we are getting a lot of new data at the international level.  There are two primary sources for this. The first is the THE rankings – in particular their new European Teaching Rankings, which use surveys to look at student engagement and the student experience.  This is excellent.  It’s not the first time this has been attempted – U-Multirank has done this for a couple of years now – but THE taking up this data brings this approach into some of the world’s largest universities (THE also did something similar in its US rankings). I think there’s some calibration required to do this properly at a global level – for instance, surveys critiquing teachers may be a different experience, culturally, for western students vs. ones in Confucian-influenced systems – but man, if THE ever starts attaching this kind of survey to its global rankings, things could get interesting.  (THE is also exploring institutional indicators based on UN sustainable development goals, which I am more skeptical about, but their desire to explore and innovate is highly commendable).

The second is U-Multirank, which has been slower to gain acceptance than its sponsors originally hoped, but which is now getting reasonably high-quality data on a whole range of topics from over 1,400 institutions worldwide. In a sense, U-Multirank is getting institutions across the globe to up their game on internal data collection and – very slowly – fostering the emergence of international standards around data collection on things like on-time graduation rates, student internships, and study-abroad opportunities. This, too, is excellent.

Another excellent thing is the increasing amount of discourse in non-OECD countries critiquing the mainly research-based nature of rankings. Obviously, that discourse has been there from the start in places like Latin America, and there have been attempts to create alternatives such as Universitas Indonesia's "GreenMetric" University Sustainability Rankings. More recently, there is the new Moscow University Rankings, which focus (in part) on the "Third Mission" of universities (this is a European term for what we in North America call "service" or "outreach" and is equally amorphous/multifaceted). In the United States, the Washington Monthly rankings have long fulfilled much the same role – but there are increasing calls to mainstream this approach. Just last Monday, six Democratic Senators, including possible Presidential aspirants Cory Booker and Kamala Harris, wrote a public letter to US News & World Report asking that they include indicators of social mobility and inclusion (of which there are now quite a lot – see last week's blog on this subject here) in their overall rankings.

(Aside: I'm not convinced that folding new inclusion/third mission/student experience indicators into conventional ranking systems that tend to privilege research intensity and selectivity is the way to go. If you do that, Harvard still always wins, and the impact of the new indicators is lost. I think it's probably better in the long run just to have mission-specific rankings: rank everyone separately on research, experience, third mission, etc. Clearer that way.)

More broadly, what I’m starting to see is a refocussing of the whole discourse around rankings.  It’s not really about ranking qua ranking anymore.  To an ever-greater extent it’s about benchmarking and, more importantly, about data availability and comparability.  There is a large and growing constituency for genuinely comparable institutional data in areas beyond bibliometrics.  This, too, is excellent.

The end goal here is laudable: that any institution in the world should be able to get high-quality comparable data about similar institutions around the globe which can help it benchmark and improve its performance.  We’re still decades away from this.  Developing this kind of data on topics other than research takes a lot of time and a lot of conversations.  But the trend is now moving in this direction much more quickly than it was even a couple of years ago.

Reaching this final goal will mean jumping one last hurdle: making the data more or less open. There's an obvious case for doing so: right now, institutions simply hand THE their data, which THE then turns around and sells back to institutions for a hefty fee. I don't think this monopoly will last forever, and I suspect that institutions outside the OECD, which can't afford THE's fees, may lead the way in creating some kind of open repository of ranking data. (Equally, some kind of open, common data set such as the one which exists in the US may arise simply because too many rankers want the same data and institutions will get tired of dealing with them all.) This may not happen overnight, and if THE's business model is ever destroyed this way we'll lose a major innovator in the field, but I do believe it will happen in the long run.

Bottom line: globally, the rankings discussion is finally reaching a level of sophistication which makes more interesting discussions possible.  In the past, rankings have had a lot of pernicious effects on higher education; I’m a lot more optimistic about the role they will play in the future.
