There are three types of rankings and ratings out there. The first are the ones published by The Globe and Mail, U.S. News and World Report, and Maclean’s – essentially consumer guides that focus on aspects of the undergraduate experience. Then there are the very quantitatively-oriented research rankings, from places like Shanghai Jiao Tong, Leiden and HEEACT.
And then there are the beauty contests, like the Times Higher Education’s World University Reputation Rankings, which was issued last week. Or rather, re-issued, for, as my colleague Kris Olds pointed out, the data for this ranking is the same as that used for the reputation indicator in their fall World University Rankings – all they’ve done is extract it and issue it as a separate product.
There actually is a respectable argument to be made for polling academics about “best” universities. Gero Federkeil of the Centrum für Hochschulentwicklung in Gütersloh noted a few years ago that if you ask professors which institution in their country is “the best” in their field of study, you get a 0.8 correlation with scholarly publication output. Why bother with all the tedious mucking around in bibliometrics when a survey gets you the same thing?
Two reasons, actually. One is that there’s no evidence this effect carries over to the international arena (could you name the best Chinese university in your discipline?); the other is that there’s no evidence it carries over beyond an academic’s own field of study (could you name the best Canadian university for mechanical engineering?).
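To make Federkeil’s check concrete, it amounts to a plain rank correlation between nomination counts and publication counts for the institutions in one country and one field. The sketch below uses Python with scipy, and every number in it is invented purely for illustration.

```python
# Sketch of the Federkeil-style check: survey "best" nominations vs. publication
# output for institutions in one country and one field.
# All numbers below are hypothetical, for illustration only.
from scipy.stats import spearmanr

# Hypothetical nomination counts from a "which institution is the best?" survey
nominations = [182, 141, 97, 64, 40, 22, 15, 9]
# Hypothetical scholarly publication counts for the same eight institutions
publications = [5400, 4900, 3100, 2600, 2900, 1200, 900, 1100]

rho, p_value = spearmanr(nominations, publications)
print(f"Spearman rank correlation: {rho:.2f} (p = {p_value:.3f})")
```

Whether you use a Spearman or Pearson coefficient barely matters at this level of precision; the point is simply that, within a country and a field, a crude survey and a bibliometric count end up telling much the same story.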
So, while the Times makes a big deal about having a globally-balanced sample frame of academics (and of having translated the instrument into nine languages), the fact that it doesn’t bother to tell us who actually answered the questionnaire is a problem. Do McGill and UBC do better on this survey than on more quantitatively-oriented research measures because of abnormally high participation rates among Canadian academics? Did Waterloo fall out of the top 100 because fewer computer scientists, engineers and mathematicians responded this year? In neither case can we know for sure.
A reporter asked me last week how institutions could improve their standing in this ranking. The answer is that these stats are tough to juke, because you never know who’s going to answer the survey. This ranking’s methodology is such that (unlike, say, the Shanghai rankings) substantial volatility from one year to the next is guaranteed unless you’re in the top ten or so. All you can really do is put your head down, produce excellent, impactful research, and hope that virtue is eventually rewarded.
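To see why that volatility is baked in, here is a toy simulation – emphatically not THE’s actual methodology. Each institution gets a fixed underlying visibility, and each year the respondent pool tilts every institution’s chance of being nominated by a random factor, standing in for more Canadians answering one year and fewer computer scientists the next. All parameters are invented for illustration.

```python
# A minimal sketch (not the THE methodology) of why a reputation survey with an
# unknown, shifting respondent pool produces volatile ranks outside the very top.
# Assumptions: "true" visibility follows a steep Zipf-like curve, and each year's
# respondent mix multiplies every institution's nomination probability by a
# random tilt before votes are drawn.
import numpy as np

rng = np.random.default_rng(42)
n_inst, n_years, n_votes = 200, 10, 10_000

true_weight = 1.0 / np.arange(1, n_inst + 1) ** 1.2   # steep, Zipf-like visibility
ranks = np.empty((n_years, n_inst), dtype=int)

for year in range(n_years):
    tilt = rng.lognormal(mean=0.0, sigma=0.35, size=n_inst)  # shifting respondent mix
    p = true_weight * tilt
    votes = rng.multinomial(n_votes, p / p.sum())
    ranks[year] = np.argsort(np.argsort(-votes)) + 1          # rank 1 = most votes

spread = ranks.max(axis=0) - ranks.min(axis=0)                # rank range over the decade
order = np.argsort(ranks.mean(axis=0))
print("avg 10-year rank range, institutions 1-10:  ", spread[order[:10]].mean())
print("avg 10-year rank range, institutions 50-100:", spread[order[49:100]].mean())
```

Under these invented assumptions, the top handful of institutions barely move from year to year while those in the 50 to 100 band swing by dozens of places – which is exactly the problem: without knowing who answered, you can’t tell a real change in standing from churn in the respondent pool.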