Okay, everybody take a Valium.
Quick recap: Times Higher Education, for a change, kept its methodology stable two years in a row. That means that for once, it’s okay to compare data across years. But – shock! Horror! – Canada’s three standard-bearers all fell in the rankings – U of T from 19th to 21st, UBC from 22nd to 30th and McGill from 28th to 34th. Cue the usual suspects seizing on the opportunity to talk about underfunding. “This is a wake-up call, we need to invest more in select institutions, we’re falling behind, fifth horseman of the apocalypse,” etc. etc.
But if anyone bothered to actually look at the data, they’d see it’s almost all nonsense. Both Toronto and McGill actually scored higher in the rankings this year than last; UBC fell, but only by a tenth of a percentage point. The reason the three fell is that others rose faster.
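(If the distinction between score and rank seems slippery, here’s a toy sketch of the arithmetic. Every number in it is invented purely for illustration – these are not THE scores.)

```python
# Toy illustration: a school's overall score can rise while its rank falls,
# because rank depends on everyone else's scores too. All numbers invented.

def rank_of(school, scores):
    """Rank = 1 + number of schools with a strictly higher score."""
    return 1 + sum(1 for v in scores.values() if v > scores[school])

last_year = {"School A": 75.0, "School B": 74.0, "School C": 72.0}
this_year = {"School A": 75.4, "School B": 74.1, "School C": 76.0}  # C rose faster

for label, scores in [("last year", last_year), ("this year", this_year)]:
    print(label, {s: (v, rank_of(s, scores)) for s, v in scores.items()})

# School A's score improved (75.0 -> 75.4) but its rank slipped (1st -> 2nd),
# simply because School C improved faster.
```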
In total, nine institutions moved up in the rankings and passed at least one of Canada’s big three: one each in the U.K. (Edinburgh), Japan (Tokyo), Singapore (National University of Singapore, or “NUS”) and Australia (Melbourne), along with five in the U.S. (Carnegie Mellon, Cornell, Washington, Northwestern, Texas-Austin). Almost all of the movement for these nine institutions came on two indicator groups – “teaching” (a 9% jump) and “research” (a 10.5% jump). There was almost no movement on any of the other indicators, and on “citations” – which you’d think would be a pretty good overall indicator of research strength – five of the nine actually fell. Only Tokyo and Texas-Austin actually improved on all four indicators.
It just so happens that the teaching and research indicator groups are made up mostly of survey-based data. Every year, Thomson Reuters sends out a ton of surveys to academics in different parts of the world as part of this exercise, and this year it got a little over 16,000 responses. The reputation responses count for 50% of the teaching score and 60% of the research score.
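With reputation making up half of the teaching composite and 60% of the research composite, swings in the survey feed through to those indicators almost directly. Here’s a rough sketch of that arithmetic: the reputation weights come from THE’s published methodology, but the “other” term is just a placeholder lumping together the non-survey sub-indicators (whose individual weights we can’t see), and the sample scores are invented.

```python
# Rough sketch of how the reputation survey feeds the two composite indicators.
# Reputation weights (50% of teaching, 60% of research) are from THE's
# methodology; "other" stands in for all remaining sub-indicators combined.

def teaching_score(reputation, other):
    return 0.50 * reputation + 0.50 * other

def research_score(reputation, other):
    return 0.60 * reputation + 0.40 * other

# Hypothetical example: a 10-point swing in research reputation, with the
# non-survey components unchanged, moves the research composite by 6 points.
print(research_score(reputation=70, other=65))  # 68.0
print(research_score(reputation=80, other=65))  # 74.0
```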
Though the THE ranking is based on 13 measures, no one is allowed to see anything beyond the five “capstone” indicators. We therefore don’t know how much of our nine schools’ movement was due to changes in survey results and how much was due to changes in other teaching and research indicators, such as the percentage of the student body in graduate programs, student-teacher ratios, etc. But it’s hard to imagine any of the other indicators showing much volatility or upward bias, especially when so many of these nine institutions have themselves been undergoing significant budget cuts (Northwestern, Carnegie Mellon and NUS are the only ones that are unambiguously better off over the last couple of years). The strong suspicion, therefore, must be that the reputational survey is what’s moving the numbers.
And why might that be? Have large numbers of academics around the world suddenly woken up to the wonders of Northwestern’s teaching and learning style? Well, no. A much simpler explanation for the year-over-year change is that a lot more Americans and a lot fewer Canadians answered the survey this year. But we can’t tell for certain, because THE doesn’t release any statistics on the geographical distribution of survey respondents.
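To see why the respondent mix matters, suppose – plausibly, but hypothetically – that academics are far more likely to name institutions in their own country. Then the same school can lose reputation “votes” from one year to the next purely because the survey pool shifted. All the numbers in the sketch below are invented; THE publishes neither respondent counts by country nor anything about voting propensities.

```python
# Hypothetical sketch: how a change in WHO answers the survey can move
# reputation scores with no change in institutional performance.
# All numbers (respondent counts, vote propensities) are invented.

def expected_votes(respondents_by_country, vote_prob_by_country):
    """Expected number of reputation 'votes' a school receives."""
    return sum(n * vote_prob_by_country.get(country, 0.0)
               for country, n in respondents_by_country.items())

# Suppose a Canadian school is named by 20% of Canadian respondents but only
# 2% of American ones (pure home-field familiarity, hypothetical figures).
vote_prob = {"Canada": 0.20, "US": 0.02}

pool_last_year = {"Canada": 800, "US": 5000}   # invented respondent counts
pool_this_year = {"Canada": 500, "US": 6500}   # fewer Canadians, more Americans

print(expected_votes(pool_last_year, vote_prob))  # 260.0
print(expected_votes(pool_this_year, vote_prob))  # 230.0
# Same school, same quality, but fewer expected votes, purely because the
# composition of the survey pool changed.
```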
In sum: the top Canadian institutions are doing better, not worse. The likeliest explanation for why other institutions seem to have a faster rate of improvement lies not with better funding, but with changing survey response patterns. There’s really nothing to see here.