Times Higher Education (THE) is putting out its brand spanking new “Impact Rankings” tomorrow morning in North America (it’s an evening launch at an event in Korea but timed to hit the papers at lunch time in Europe and for the early news cycle over here). Today, I want to go through a little bit of background to these new rankings: tomorrow (Wednesday), the blog will be delayed a few hours so I can get you some analysis of the rankings as they are released. My apologies to those of you who prefer your higher education snark over morning coffee.
Now, I know for many of you the news of another global ranking will be met with cries of “Oh sweet Jesus, not another one.” But give THE some credit here. They have listened to complaints that rankings systematically measure the wrong things, privileging research over teaching and ignoring institutions’ wider contributions to society. I’m not sure there is much they can do about the teaching piece, but there is increasing interest in the contributions-to-society piece. For instance, there are the Washington Monthly College Rankings, which are put together by my colleague Robert Kelchen and are an interesting alternative to the status quo, and also the new-ish Moscow International University Rankings “The Three University Missions” (yes, it’s a clunky name…it works better in Russian). THE has decided to head into this territory too, and that’s probably not a bad thing.
THE has based this ranking on the UN’s Sustainable Development Goals (SDGs). There are seventeen of these in total; THE has built rankings around eleven of them, scoring each institution on its contributions to: Health & Well-Being, Quality Education, Gender Equality, Decent Work & Economic Growth, Industry, Innovation and Infrastructure, Reduced Inequalities, Sustainable Cities and Communities, Responsible Consumption and Production, Climate Action, Peace, Justice & Strong Institutions, and Partnerships.
If you really want to see the whole list of 47 metrics, you can browse them here. They are…ambitious. Basically, there are three types of indicators. The first is bibliometrics (contain your shock and surprise). In each of the eleven areas, institutions will be scored based on their research outputs (it’s not clear at the moment whether this will mean output or citations or some mix of the two) in sub-fields adjacent to those areas. So, for instance, your law output will be measured for the Justice domain, your output in certain medical fields named in the SDGs (e.g. HIV, tropical medicine, etc.) will be measured for the Health & Well-Being domain, and so on. To the extent that there is innovation here, it is that specific research sub-fields (and I would argue: specific applied research sub-fields) will be ranked rather than fields as a whole.
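If it helps to picture how the bibliometric piece might work, here is a minimal sketch of the kind of tallying I imagine happening behind the scenes: map each paper’s sub-field to an SDG domain and count. The sub-field-to-SDG mapping below and the decision to count raw output rather than citations are my assumptions for illustration, not anything THE has confirmed.

```python
# Illustrative only: the mapping and the counting-of-output approach are my
# guesses at the method, not THE's published methodology.
from collections import Counter

SUBFIELD_TO_SDG = {
    "law": "Peace, Justice & Strong Institutions",
    "hiv_research": "Health & Well-Being",
    "tropical_medicine": "Health & Well-Being",
}

def sdg_output_counts(publications):
    """Tally an institution's papers per SDG domain, given each paper's sub-field."""
    counts = Counter()
    for paper in publications:
        sdg = SUBFIELD_TO_SDG.get(paper["subfield"])
        if sdg:
            counts[sdg] += 1
    return counts

papers = [
    {"title": "Access to justice in rural areas", "subfield": "law"},
    {"title": "HIV prophylaxis uptake", "subfield": "hiv_research"},
]
print(sdg_output_counts(papers))
# Counter({'Peace, Justice & Strong Institutions': 1, 'Health & Well-Being': 1})
```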
The second is institutionally-provided data such as “number of graduates in health fields”, “number of graduates with primary school teaching qualifications”, “percentage of staff with disabilities”, “spending on local arts and heritage”, etc. Much of this is data that most universities genuinely don’t have, which I suspect means a lot of them are either going to make stuff up or submit data with a lot of holes in it. I hope it is the latter, because THE do have an interesting work-around for that, which I will describe in a moment.
The third type of data is something new in rankings, and something which might be considered a bit dodgy, data-wise. In the background information THE has released to date, they are calling this data source “pick lists”. So, for instance, under “make cities safe and inclusive”, institutions are judged on whether or not they have nine different types of policies: a point for measures and targets on sustainable commuting, a point for providing affordable housing for staff, a point for building on brownfield sites, a point for allowing telecommuting, etc. Basically, institutions get points simply for having written policies, rather than for getting results or even for having good policies. I understand why they are doing it: they want to address certain issues which are not easily measurable. I’m just not sure this is the right way to do it.
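For the mechanically minded, the pick-list scoring seems to boil down to something like the sketch below: a checklist, one point per documented policy. The policy names here are paraphrased from THE’s background material, and the checklist logic is my reading of it, not their published method.

```python
# A minimal sketch, assuming one point per policy the institution can document,
# regardless of quality or results. Policy names are paraphrased examples.
CITIES_PICK_LIST = [
    "sustainable_commuting_targets",
    "affordable_staff_housing",
    "builds_on_brownfield_sites",
    "allows_telecommuting",
    # ...plus the remaining items on THE's real list
]

def pick_list_score(policies_held: set) -> int:
    """One point for each listed policy the institution reports having."""
    return sum(1 for policy in CITIES_PICK_LIST if policy in policies_held)

print(pick_list_score({"allows_telecommuting", "affordable_staff_housing"}))  # 2
```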
(There is possibly a fourth source named in the background document: namely, a YouGov survey of influencers on whether institutions are seen as partners on sustainable development. I have a hard time believing anyone would spend money acquiring data that trivial and unreliable, but you never know.)
In theory, every institution will be scored on each of the 11 research areas; you can do that through Elsevier databases easily enough. Every institution will also be capable of answering the “pick list” questions; that’s another 13 or so indicators. The remaining 20-odd indicators will need to be filled in by institutions, but as I noted above, my guess is most institutions don’t collect data on a majority of the indicators being asked about, at least not in a consistent and reliable way. But here’s the interesting bit: the overall ranking will depend on an institution’s best four SDG domain results. So not having any data in an area means you will not look good on the sub-ranking for that development goal, but it doesn’t necessarily hurt you on the overall score, since only your top four scores count.
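Here is a quick sketch of that “best four count” rule as I read it. How the four scores actually get combined (a straight average? something weighted?) isn’t spelled out in the background material, so the plain average below is just my assumption for illustration.

```python
# A sketch of the "best four SDGs count" aggregation described above.
# The simple average is an assumption; THE hasn't published the combination rule.
def overall_score(sdg_scores: dict) -> float:
    """Average an institution's four highest SDG domain scores (0-100 each)."""
    best_four = sorted(sdg_scores.values(), reverse=True)[:4]
    return sum(best_four) / len(best_four)

scores = {
    "Health & Well-Being": 92.0,
    "Quality Education": 88.5,
    "Gender Equality": 71.0,
    "Climate Action": 64.2,
    "Partnerships": 40.0,  # weak or missing domains simply fall outside the top four
}
print(round(overall_score(scores), 1))  # 78.9
```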
In one way, this is a neat trick in terms of being inclusive in a global measurement system where data coverage is inconsistent: you don’t need to have data in every category in order to do well. On the other hand, it throws up the possibility that some very odd institutions are going to look pretty good on this. It depends in part on how each indicator gets weighted within each SDG (if research gets weighted heavily, you’re going to see a lot of the usual Anglo-American suspects doing really well). But I can see a few categories where it’s obvious some unusual institutions or countries are going to do *really* well. The indicators for the Health and Education SDGs favour sheer size, so mega-institutions like UNAM or UBA will do really well (I kind of wonder how they will adjust for institutions like those in India and Pakistan, where each university has a whole bunch of affiliated “colleges” whose hundreds of thousands of students take their exams despite never setting foot on the main campus). The access metrics, which reward first-generation students and the percentage of students coming from underdeveloped countries, are mostly going to privilege institutions in underdeveloped countries (e.g. almost everyone in Uganda is a first-generation student, and to the extent Makerere University has international students, they are from countries as poor as Uganda, so it would do *really* well on this metric, if it can find the data). And as long as a few institutions can luck into three or four categories where they score well by this kind of happenstance, you may indeed get some odd institutions looking pretty good on this measure.
In any case: this new ranking is a big deal because it means something other than money and research is being taken seriously. It may also turn out to be a schmozzle because they’re working with very incomplete data. We probably won’t know exactly how much of a schmozzle it is because I suspect they will stick to their usual practice of never publishing individual indicator data (because then they couldn’t sell it later on).
But, you know, we’ll see. Tune in tomorrow around lunch (EDT) for the full story.