I was in Athens this past June, at an EU-sponsored conference on rankings, which included a very intriguing discussion about the use of reputation indicators that I thought I would share with you.
Not all rankings have reputational indicators; the Shanghai (ARWU) rankings, for instance, eschew them completely. But the QS and Times Higher Education (THE) rankings both weight them pretty heavily (50% for QS, 35% for THE), and this data isn't entirely transparent. THE, which releases its World University Rankings tomorrow, hides the actual reputational survey results for teaching and research by combining each of them with other indicators (THE has 13 indicators, but it only publishes 5 composite scores). The reasons for doing this are largely commercial: if, each September, THE actually showed all the results individually, it wouldn't be able to reassemble the indicators in a different way and release an entirely separate "Reputation Rankings" six months later (with concomitant advertising and event sales) using exactly the same data. Nor would its data collection partner, Thomson Reuters, be able to sell the data back to institutions as part of its Global Institutional Profiles Project.
Now, I get it: rankers have to cover their (often substantial) costs somehow, and this re-sale of hidden data is one way to do it (disclosure: we at HESA did this with our Measuring Academic Research in Canada ranking). But given the impact that rankings have on universities, there is an obligation to get this data right. And the problem is that neither QS nor THE publishes enough information about its reputation survey to make a real judgement about the quality of the data – and in particular about the reliability of the "reputation" voting.
We know that THE allows survey recipients to nominate up to 30 institutions as being "the best in the world" for research and for teaching, respectively (15 from one's home continent, and 15 worldwide); QS allows 40 (20 from one's own country, 20 worldwide). But we have no real idea how many respondents are actually ticking the boxes for any given university.
In any case, an analyst at an English university recently reverse-engineered the published data for UK universities to work out voting totals. The resulting estimate is that, among institutions in the 150-200 range of the THE rankings, the average number of votes obtained for either research or teaching is in the range of 30 to 40, at best. Which is astonishing, really. Given that reputation counts for roughly one-third of an institution's total score, there is enormous scope for year-to-year variation: get 40 votes one year and 30 the next, and significant swings in ordinal rankings could result. It also makes a complete mockery of the "Top Under 50" rankings, where 85% of institutions rank well below the top 200 in the main rankings and are therefore likely garnering only a couple of votes apiece. If true, this is a serious methodological problem.
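To see why counts that small matter, here is a quick back-of-the-envelope simulation in Python. Everything in it is invented for illustration – the 50 hypothetical institutions, the assumption that the non-reputation two-thirds of the score is stable, the way votes are normalised into a score – so it is a sketch of the sampling-noise problem, not a reconstruction of THE's method. The point is simply that random noise on 30-40 votes, with nothing else changing, is enough to shuffle ordinal positions from one year to the next.

```python
import random

random.seed(42)

N = 50  # hypothetical institutions in the 150-200 band

# Illustrative numbers only: each institution gets a stable non-reputation
# score and a "true" expected vote count in the 30-40 range estimated above.
stable = [70 - 0.3 * i for i in range(N)]        # the ~2/3 of the score that barely moves
true_votes = [40 - 0.2 * i for i in range(N)]    # expected reputation votes per institution

def one_year():
    """Simulate one year's total scores: a stable component plus a reputation
    component built from a noisy vote count (Poisson-like noise approximated
    with a sqrt-scaled Gaussian, reasonable at counts of 30-40)."""
    totals = []
    for s, v in zip(stable, true_votes):
        votes = max(0.0, random.gauss(v, v ** 0.5))   # ~30-40 votes, +/- 5-6
        rep_score = 100 * votes / 60                  # arbitrary normalisation to 0-100
        totals.append((2 / 3) * s + (1 / 3) * rep_score)
    return totals

def ranks(scores):
    """Ordinal ranks, 1 = highest score."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    out = [0] * len(scores)
    for r, i in enumerate(order, 1):
        out[i] = r
    return out

# Two consecutive "years" drawn from identical underlying reputations.
r1, r2 = ranks(one_year()), ranks(one_year())
moves = [abs(a - b) for a, b in zip(r1, r2)]
print(f"mean rank change: {sum(moves) / len(moves):.1f}, max: {max(moves)}")
```

Run it a few times with different seeds and institutions routinely jump several places purely on survey noise, which is the kind of volatility the vote estimates above imply.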
For commercial reasons, it's unrealistic to expect THE to open its data completely. But given the enormous influence its rankings have, it would be irresponsible of it – especially since it is allegedly a journalistic enterprise – not to at least allow some third party to inspect the data and give users a better sense of its reliability. To do otherwise reduces the THE's ranking exercise to sham social science.