Higher Education Strategy Associates (HESA)

Category Archives: rankings

March 07

Those Times Higher Education Reputation Rankings, 2014

The Times Higher Education (THE) Reputation Rankings came out yesterday. Compared to previous years, there was very little fanfare for the release this time. And that’s probably because the results weren’t especially interesting.

The thing to understand about rankings like this is that they are both profoundly true and profoundly trivial. A few universities are undoubtedly seen as global standards, and so will always be at the top of the pile. Previous THE rankings have shown that there is a “Big Six” in terms of reputation: Harvard, Stanford, Berkeley, MIT, Cambridge, and Oxford – this year’s results again show that no one else comes close to them in terms of reputation. Then there are another thirty or so who can more or less hold their position at the top from year-to-year.

After that, though, results are capricious.  Below 50th position, the Times neither assigns specific ranks (it presents data in tens, i.e., 51st-60th, 61st-70th, etc.), nor publishes the actual reputation score, because even they don’t think the scores are reliable.  Just for kicks, I divided this year’s top 100 into those groups of ten – a top ten, a next ten, a third ten, and so on – to see how many institutions were in the same group last year.  Here’s what I got:

Number of Institutions in Each Ten-Place Grouping in THE Reputation Rankings, Which Remained in Same Grouping, 2013 and 2014

You’d expect a little movement from group-to-group – someone 71st last year rising to 69th this year, for instance – but this is just silly. Below about 40th spot, there’s a lot of essentially random survey noise, because the scores are so close together that even small variations can move an institution several places.
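For anyone who wants to replicate this kind of persistence check, the calculation is trivial. Here is a minimal sketch in Python; the institution names and ranks below are made up, and in practice you would paste in the THE’s published 2013 and 2014 lists.

```python
# Minimal sketch of the ten-place grouping check described above.
# Ranks below are invented placeholders, not actual THE results.
from collections import Counter

ranks_2013 = {"Univ A": 7, "Univ B": 14, "Univ C": 55, "Univ D": 78}
ranks_2014 = {"Univ A": 9, "Univ B": 23, "Univ C": 51, "Univ D": 95}

def band(rank):
    """Return the ten-place band a rank falls into (1 = top ten, 2 = 11th-20th, ...)."""
    return (rank - 1) // 10 + 1

stayed = Counter()
for inst, r14 in ranks_2014.items():
    r13 = ranks_2013.get(inst)
    if r13 is not None and band(r13) == band(r14):
        stayed[band(r14)] += 1

for b in sorted(stayed):
    print(f"Band {(b - 1) * 10 + 1}-{b * 10}: {stayed[b]} institution(s) stayed in the same band")
```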

A few American universities rose spectacularly this year – Purdue came in at 49th, despite not even cracking the top 100 in the previous year’s rankings; overall, there were 47 American universities in the top 100 (up 3 from last year). Seoul National University was the biggest riser within the top 50, going from 41st to 26th, which may suggest that people are noticing quality in Korean universities (Yonsei also cracked the top 100 for the first time), or it may just mean more Koreans responded to the survey (within limits, national response rates do matter – THE re-weights responses by region, but not by country; if you’re in a region with a lot of countries, like Europe or Asia, and your numbers go up, it can tilt the balance a bit). Surprisingly, Australian universities tanked in the survey.
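To see why country-level response rates can matter even when results are re-weighted only at the regional level, consider a toy calculation. The weighting scheme and the numbers here are purely illustrative assumptions, not the THE’s actual procedure; the point is simply that when one country’s respondents multiply, its universities capture a larger share of a region’s fixed weight.

```python
# Toy illustration (not the THE's actual method): each region's responses are scaled to a
# fixed share of the total, but within a region every vote counts equally, so a surge of
# respondents from one country shifts the balance inside that region.
ASIA_WEIGHT = 0.25  # hypothetical fixed share allotted to the region

def regional_score(votes_for_univ, total_regional_votes):
    """University's contribution = its share of the region's votes times the region's weight."""
    return ASIA_WEIGHT * votes_for_univ / total_regional_votes

# Year 1: 100 Korean and 300 other Asian respondents; Korean respondents mostly back SNU.
print(regional_score(votes_for_univ=80, total_regional_votes=400))   # 0.05

# Year 2: Korean respondents triple, other responses unchanged; SNU's score doubles.
print(regional_score(votes_for_univ=240, total_regional_votes=600))  # 0.1
```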

The American result will sound odd to anyone who regularly reads the THE and believes its editorial line about the rise of the East and the decline of the West in higher education. But what do you expect? Reputation is a lagging indicator. Why anyone thinks it’s worth measuring annually is a bit mysterious.

February 06

When the Times Higher Education Rankings Fail The Fall-Down-Laughing Test

You may have noted the gradual proliferation of rankings at the Times Higher Education over the last few years.  First the World University Rankings, then the World Reputation Rankings (a recycling of reputation survey data from the World Rankings), then the “100 under 50” (World Rankings, restricted to institutions founded since the early 60s, with a methodological twist to make the results less ridiculous), then the “BRICS Rankings” (World Rankings results, with developed countries excluded, and similar methodological twists).

Between actual rankings, the Times Higher staff can pull stuff out of the database, and turn small bits of analysis into stories.   For instance, last week, the THE came out with a list of the “100 most international” universities in the world.  You can see the results here.  Harmless stuff, in a sense – all they’ve done is take the data from the World University Rankings on international students, foreign faculty, and international research collaborations, and turned it into its own standalone list.  And of course, using those kinds of metrics, geographic and political realities mean that European universities – especially those from the really tiny countries – always come out first (Singapore and Hong Kong do okay, too, for similar reasons).

But when their editors start tweeting stuff – presumably as clickbait – about how shocking it is that only ONE American university (MIT, if it matters to you) makes the top 100, you have to wonder if they’ve started drinking their own Kool-Aid. Read that list of 100 again, take a look at who’s on the list, and think about who’s not. Taken literally, the THE is saying that places like the National University of Ireland Maynooth, the University of Tasmania, and King Abdulaziz University are more international than Harvard, Yale, and Stanford.

Here’s the thing about rankings: there’s no way to do validity testing other than what I call the “fall-down-laughing test”. Like all indicator systems, rankings are meant to proxy reality, rather than represent it absolutely. But since there’s no independent standard of “excellence” or “internationalization” in universities, the only way you can determine whether or not the indicators and their associated weights actually “work” is by testing them in the real world, and seeing if they look “mostly right” to the people who will use them. In most international ranking systems (including the THE), this means ensuring that either Harvard or Stanford comes first: if your rankings come up with, say, Tufts, or Oslo, or something as #1, it fails the fall-down-laughing test, because “everybody knows” Harvard and Stanford are 1-2.

The THE’s “most international” ranking comprehensively fails the fall-down-laughing test. In no world would sane academics agree that Abdulaziz and Maynooth are more international than Harvard. The only way you could possibly believe this is if you’ve reached the point where you believe that specifically chosen indicators actually *are* reality, rather than proxies for it. The Times Higher has apparently now gone down that particular rabbit hole.

November 15

Ten Years of Global University Rankings

Last week, I had the honour of chairing a session at the Conference on World-Class Universities in Shanghai. The conference was held on the 10th anniversary of the release of the first global rankings (both the Shanghai rankings and the Times Higher Ed Rankings – then run by QS – appeared for the first time in 2003), and so it was a time for reflection: what have we learned over the past decade?

The usual well-worn criticisms were aired: international rankings privilege the measurable (research) over the meaningful (teaching), they exalt the 1% over the 99%, they are a function of money not quality, they distort national priorities… you’ve heard the litany. And these criticisms are no less true just because they’re old. But there’s another side to the story.

In North America, the reaction to the global rankings phenomenon was muted – that’s because, fundamentally, these rankings measure how closely institutions come to aping Harvard and Stanford.  We all had a reasonably good idea of our pecking order.  What shocked Asian and European universities, and higher education ministries, to the core was to discover just how far behind America they were.  The first reactions, predictably, were anger and denial.  But once everyone had worked through these stages, the policy reaction was astonishingly strong.

It’s hard to find many governments in Europe or Asia that didn’t adopt policy initiatives in response to rankings.  Sure, some – like the empty exhortations to get X institutions into the top 20/100/500/whatever – were shallow and jejune.  Others – like institutional mergers in France and Scandinavia, or Kazakhstan setting up its own rankings to spur its institutions to greater heights – might have been of questionable value.

However, as a Dutch colleague of mine pointed out, rankings have pushed higher education to the front of the policy agenda in a way that nothing else – not even the vaunted Bologna Process – has done. Country after country – Russia, Germany, Japan, Korea, Malaysia, and France, to name but a few – has poured money into excellence initiatives as a result of rankings. We can quibble about whether the money could have been better spent, of course, but realistically, if that money hadn’t been spent on research, it would have gone to health or defence – not higher education.

But just as important, perhaps, is the fact that higher education quality is now a global discussion.  Prior to rankings, it was possible for universities to claim any kind of nonsense about their relative global pre-eminence (“no, really, Uzbekistan National U is just like Harvard”).  Now, it’s harder to hide.  Everybody has had to focus more on outputs.  Not always the right ones, obviously, but outputs nonetheless.  And that’s worth celebrating.  The sector as a whole, and on the whole, is better for it.

November 05

Owning the Podium

I’m sure many of you saw Western President Amit Chakma’s op-ed in the National Post last week, suggesting that Canadian universities need more government assistance to reach new heights of excellence, and “own the podium” in global academia. I’ve been told that Chakma’s op-ed presages a new push by the U-15 for a dedicated set of “excellence funds” which, presumably, would end up mostly in the U-15’s own hands (for what is excellence if not research done by the U-15?). All I can say is that the argument needs some work.

The piece starts out with scare metrics to show that Canada is “falling behind”. Australia has just two-thirds of our population, yet has seven institutions in the QS top 100, compared to Canada’s five! Why anyone should care about this specific cut-off (use the top-200 in the QS rankings and Canada beats Australia 9 to 8), or this specific ranking (in the THE rankings, Canada and Australia each have 4 spots), Chakma never makes clear.

The piece then moves on to make the case that, “other countries such as Germany, Israel, China and India are upping their game” in public funding of research (no mention of the fact that Canada spends more public dollars on higher education and research than any of these countries), which leads us to the astonishing non-sequitur that, “if universities in other jurisdictions are beating us on key academic and research measures, it’s not surprising that Canada is also being out-performed on key economic measures”.

This proposition – that public funding of education is a leading indicator of economic performance – is demonstrably false.  Germany has just about the weakest higher education spending in the OECD, and it’s doing just fine, economically.  The US has about the highest, and it’s still in its worst economic slowdown in over seventy-five years.  Claiming that there is some kind of demonstrable short-term link is the kind of thing that will get universities into trouble.  I mean, someone might just say, “well, Canada has the 4th-highest level of public funding of higher education as a percentage of GDP in the OECD – doesn’t that mean we should be doing better?  And if that’s indeed true, and our economy is so mediocre, doesn’t that give us reason to suspect that maybe our universities aren’t delivering the goods?”

According to Chakma, Canada has arrived at its allegedly wretched state by virtue of having a funding formula which prioritizes bums-in-seats instead of excellence. But that’s a tough sell. Most countries (including oh-so-great Australia) have funding formulae at least as demand-oriented as our own – and most are working with considerably fewer dollars per student as well. If Australia is in fact “beating” us (a debatable proposition), one might reasonably suspect that it has at least as much to do with management as it does with money.

Presumably, though, that’s not a hypothesis the U-15 wants to test.

November 04

Concentration vs. Distribution

I’m spending part of this week in Shanghai at the biennial World-Class Universities conference, which is put on by the good folks who run the Shanghai Jiao Tong Rankings. I’ll be telling you more about this conference later, but today I wanted to pick up on a story from the last set of Shanghai rankings in August. You’d be forgiven for missing it – Shanghai doesn’t make the news the way the Times Higher Education rankings do, because its methodology doesn’t allow for much change at the top.

The story had to do with Saudi Arabia. As recently as 2008, it had no universities in the top 500; now it has four, largely because of the way its universities are strategically hiring highly-cited scientists (on a part-time basis, one assumes, but I don’t know that for sure). King Saud University, which only entered the rankings in 2009, has now cracked the top-200, making it by far the fastest rise of any institution in the history of any set of rankings. But since this doesn’t line up with the “East Asian tigers overtaking Europe/America” line that everyone seems eager to hear, no one ran the story.

You see, we’re addicted to this idea that if you have great universities, then great economic development will follow. There were some surprised comments on Twitter about the lack of a German presence in the rankings. But why? Who ever said that having a few strong top universities is the key to success?

Strong universities benefit their local economies – that’s been clear for decades. And if you tilt the playing field more towards those institutions, there’s no question – as David Naylor argued in a very good talk last spring – that it will pay some returns in terms of discovery and innovation. But the issue is one of opportunity costs: would such a concentration of resources create more innovation and spill-over benefits than other possible distributions of funds? Those who make the argument for concentration (see, for instance, HEQCO’s recent paper on differentiation) seem to take this as given, but I’m not convinced their case is right.

Put it this way: if some government had a spare billion lying around, and the politics of regional envy wasn’t an issue, and they wanted to spend it in higher education, which investment would have the bigger impact: putting it all into a single, “world-class” university? Spreading it across maybe a half-dozen “good” universities? Or spreading it across all institutions? Concentrating the money might do a lot of good for the country (not to mention the institution at which it was concentrated) – but maybe dispersing it would do more. As convincing as Naylor’s speech was, this issue of opportunity costs wasn’t addressed.

Or, to go back to Shanghai terminology: if it were up to you to choose, do you think Canada would be better served with one institution in the top ten worldwide (currently – none) or seven in the top 100 (currently – four) or thirty-five in the top 500 (currently – twenty-three)? And what arguments would you make to back up your decision? I’m curious to hear your views.

October 04

Those Times Higher Education World Rankings (2013-14 Edition)

So, yesterday saw the release of the latest round of the THE rankings.  They were a bit of a damp squib, and not just for Canadian schools (which really didn’t move all that much).  The problem with actually having a stable methodology, as the Times is finding, is that there isn’t much movement at the top from year-to-year. So, for instance, this year’s top ten is unchanged from last year’s, with only some minor swapping of places.

(On the subject of the THE’s top ten: am I the only one who finds it completely and utterly preposterous that Harvard’s score for industry funding per professor is less than half of what it is at Beijing and Basel?  Certainly, it leads to suspicions that not everyone is filling out their forms using the same definitions.)

The narrative of last year’s THE rankings was “the rise of Asia” because of some good results from places like Korea and China.  This year, though, that story wasn’t tenable.  Yes, places like NUS did well (up 5 places to 25th); but the University of Hong Kong was down 8 spots to 43rd, and Korea’s Postech was down 10 spots to 60th.  And no other region obviously “won”, either.  But that didn’t stop the THE from imposing a geographic narrative on the results, with Phil Baty claiming that European flagship universities were “listing” – which is only true if you ignore Scandinavia and the UK, and see things like Leuven finishing 61st, as opposed to 58th, as significant rather than as statistical noise.

This brings us to the University of Basel story.  The THE doesn’t make a big deal out of it, but a university jumping from 134th to 68th says a lot.  And not about the University of Basel.  That the entirety of its jump can be attributed to changes in its scores on teaching and research – both of which are largely based on survey results – suggests that there’s been some weirdness in the pattern of survey responses.  All the other big movers in the top 100 (i.e. Paris Sud, U Zurich, and Lund, who fell 22, 31, and 41 places, respectively) also had huge changes in exactly these two categories.

So what’s going on here?  The obvious suspicion is that there were fewer French and Swiss respondents this year, thus leading to fewer positive responses for those schools.  But since the THE is cagey about disclosing details on survey response patterns, it’s hard to tell if this was the case.

And this brings us to what should really be the lead story about these rankings: for an outfit that bleats about transparency, too much of what drives the final score is opaque.  That needs to change.

June 24

The THE’s Top 100 under 50

So, last week the Times Higher put out one of its two subsidiary sets of rankings called the Top 100 under 50 – that is, the best “young” (under 50 years old) universities in the world. The premise of these rankings is that young universities don’t really get a fair shake in regular rankings, where the top universities, almost all of them founded before World War I, dominate based on prestige scores alone. That the THE would admit this is both excellent and baffling. Excellent, because it’s quite true that age and prestige seem to be correlated, and any attempt to get rid of bias can only be a Good Thing. Baffling, because once you admit your main annual rankings exercise is strongly affected by a factor like age, which an institution can do nothing about – well, what’s the point? At least with the Under-50s a good score probably tells us something about how well-managed an institution is; what the main rankings tell us is more of a mystery.

These rankings are interesting because they highlight a relatively homogeneous group of institutions. Few of these are North American, because most of our institutions were built before 1963 (from Canada, only Calgary, SFU, and Vic make this list, and they’ll all be too old in a couple of years). They’re an intriguing look into the world’s up-and-coming universities: they’re mostly European (and exactly none of them are from the People’s Republic of China).

But there is some serious wonkiness in the statistics behind this year’s rankings which bears some scrutiny. Oddly enough, it doesn’t come from the reputational survey, which is the most obvious source of data wonkiness. Twenty-two percent of an institution’s score in this ranking comes from the reputation survey; and yet in the THE’s reputation rankings (which use the same data), not a single one of the universities listed here had a reputational score high enough that the THE felt comfortable releasing the data. To put this another way: the THE seemingly does not believe that the differences in institutional scores among the Under-50 crowd are actually meaningful. Hmmm.

No, the real weirdness in this year’s rankings comes in citations, the one category which should be invulnerable to institutional gaming. These scores are based on field-normalized, 5-year citation averages; the resulting institutional scores are then themselves standardized (technically, they are what are known as z-scores). By design, they just shouldn’t move that much in a single year. So what to make of the fact that the University of Warwick’s citation score jumped 31% in a single year, Nanyang Technological University’s by 58%, or UT Dallas’ by a frankly insane 93%? For that last one to be true, Dallas would have needed five times as many citations in 2011 as it had in 2005. I haven’t checked or anything, but unless the whole faculty is on stims, that probably didn’t happen. So there’s something funny going on here.
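As a reminder of what that standardization involves, here is a generic sketch of turning field-normalized citation averages into z-scores. The numbers are invented and this is not the THE’s exact procedure; it simply shows that a large one-year jump in a standardized score implies a big shift in the underlying citation data, in the distribution it is measured against, or both.

```python
# Generic z-score standardization of (invented) field-normalized citation averages.
# Not the THE's exact procedure; just the standard formula z = (x - mean) / standard deviation.
import statistics

citation_impact = {"Univ A": 1.10, "Univ B": 0.95, "Univ C": 1.40, "Univ D": 0.80}

mean = statistics.mean(citation_impact.values())
sd = statistics.stdev(citation_impact.values())

z_scores = {univ: (x - mean) / sd for univ, x in citation_impact.items()}
for univ, z in z_scores.items():
    print(f"{univ}: z = {z:+.2f}")
```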

A final point: the geographical distribution of top schools will surprise many. Twelve schools from the PIGS countries (Portugal, Ireland, Greece, Spain) made the list, but only one school (the State University of Campinas) from the BRIC countries did. That tells us that good young universities are probably a seriously lagging indicator of economic growth – not a category to which many aspire.

May 28

Some Developments in Rankings

I was in Warsaw the week before last for the International Rankings Expert Group (IREG) Forum. The forum is designed both for those interested in rankings and for rankers themselves – the principals behind the US News & World Report rankings, the Shanghai Jiao Tong rankings, Germany’s CHE rankings, and the Quacquarelli Symonds rankings are all regular participants. It’s always been an interesting place to hear firsthand how rankings are evolving. When it first started nearly a decade ago, there was a certain degree of rivalry between the main rankers. Everyone was eager to prove the superiority of their own methodology. There is much less of that now. Among those who rank, there is an acceptance that there are many different ways to rank, and that the desirability of any given system is largely dependent on the intended audience and the availability of data (which institutions themselves tend to control). Nowadays, the Forum is interesting as a way to see how people are trying to refine indicators of institutional activity.

The main item of interest in Warsaw, though, was the inauguration of IREG’s system of quality certificates. Seven years ago, the IREG group (including me) put together something called the Berlin Principles, a statement of good practice in rankings. About three years ago, IREG moved to turn the Berlin Principles into the basis of a quality assurance system; that is, it would offer to certify ranking systems as being Berlin-compliant. In Asia and Eastern Europe, where rankings have become a de facto method of quality assurance, there seems to be great demand for external quality assurance of the rankers themselves.

In any event, the rankings done by the Perspektywy Education Foundation and QS (the subject rankings) were the first two to volunteer for the process, and their audit groups were chaired by well-known higher education experts, such as Jamil Salmi (ex-head of the Tertiary Education Group at the World Bank) and Tom Parker (ex-Executive Director of the Institute for Higher Education Policy in Washington).  The process was essentially about compliance with the Berlin principles – most notably, the bits involving data integrity.  Passing an audit is meant to be the equivalent of an ISO 9000 certification, and both QS and Perspektywy passed, becoming the first to qualify for these certificates.

It remains to be seen, of course, if this certification will actually mean anything in terms of how people view rankings (will people be more drawn to the QS rankings now that they have an external stamp of approval?).  Regardless, this is a reasonably big step forward for rankings, generally.  Rankers are starting to show the kind of transparency that they demand of others, and their attention to quality and sound methodology is being rewarded.  It’s a step forward that everyone should applaud.

May 13

The Universitas 21 National Rankings: a Spotlight on Canada

Though it made very little news in Canada when it was released in Vancouver last week, the Universitas 21 Network (of which UBC and McGill are members) has published the second edition of its Rankings of National Higher Education Systems. There’s nothing really new in the 2013 ranking: the methodology is largely unchanged (there was a small redistribution of indicator weightings), as are the results – the top ten remains the same, and Canada stays 4th overall. But it’s still an opportunity to reflect on what the data tells us about Canadian higher education in an international context.

The U21 rankings consist of 22 indicators in four broad categories: financial resources, “environment” (a mishmash of gender, transparency, and regulatory quality issues), “connectivity” (international students, research collaboration, webometrics), and “output” (mostly research, with some student and graduate measures thrown in). Overall, about 50% of the indicators relate directly or indirectly to research.
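For readers unfamiliar with how system-level rankings like this are assembled, the mechanics are just a weighted sum of normalized category scores. The sketch below uses placeholder weights and invented scores, not U21’s actual figures, purely to show the arithmetic.

```python
# Weighted aggregation of category scores into an overall system score.
# Weights and scores are placeholders, not U21's actual values.
CATEGORY_WEIGHTS = {"resources": 0.25, "environment": 0.20, "connectivity": 0.20, "output": 0.35}

def overall_score(category_scores):
    """Weighted sum of category scores (each assumed to be normalized to a 0-100 scale)."""
    return sum(CATEGORY_WEIGHTS[cat] * score for cat, score in category_scores.items())

canada = {"resources": 92, "environment": 70, "connectivity": 78, "output": 81}  # invented
print(round(overall_score(canada), 1))
```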

So, how did Canada do in each of those categories?  The most important result is that we came second (behind the US) on the resources category.  Ponder that for a moment.  Second.  In the entire world.  This is something to bring up next time you hear any of the usual suspects talk about “underfunding”.

In the environment category, Canada came 30th, just behind China. Seriously. The knock seems to be that we don’t have enough private institutions, or a national quality assurance agency; why that matters, I can’t say. In connectivity, Canada ranks 16th, but is much further off the pace in an absolute sense. We do OK on research collaboration, but are miles behind countries like Switzerland on international students, and the United States on webometrics. On outputs, we do well in most areas, but several categories are (bizarrely) not normed for size, so the US creams everyone.

As you can tell, I have a few reservations about the indicators used in this ranking (and even more about the fact they don’t publish the actual indicator data), but I think there is a basic truth contained here.  Canada isn’t the best at anything, but we’re still in the top third in the OECD on most of the indicators that matter.  There are precious few countries – the US, perhaps, the Nordic countries, and the Netherlands – who can say the same.

It’s not because we’re doing anything special – it’s about what you’d expect from the second best-funded system in the world. But if we’re going to take that next step up – or at least keep our position during a period of cutbacks – what we really need to do is learn from countries like Switzerland and the Netherlands, and discover how they manage to match us for achievement with substantially less money at their disposal.

April 25

The Leiden Rankings 2013

Though they were passed over in silence here in Canada, the new Leiden university research rankings made a bit of a splash elsewhere last week. I gave a brief overview of the Leiden rankings last year. Based on five years’ worth of Web of Science publication and citation data (2008-2012), they are by some distance the best way to compare institutions’ current research output and performance. The Leiden rankings have always allowed comparisons along a number of dimensions of impact and collaboration; what’s new – and fabulous – this year is that the results can be disaggregated into five broad areas of study (biomedical sciences, life & earth sciences, math & computer science, natural sciences & engineering, and social sciences & humanities).

So how did Canadian universities do?

The big news is that the University of Toronto is #2 in the world (Harvard = #1) in terms of publications, thanks mainly to its gargantuan output in biomedical sciences. But when one starts looking at impact, the story is not quite as good. American universities come way out in front on impact in all five areas of study – natural, since they control the journals, and they read and cite each other’s work more often than they do that of foreigners. The UK is second in all categories (except math & computer science); third place in most fields belongs to the Dutch (seriously – their numbers are stunning), followed by the Germans and Chinese, followed (at a distance) by Canada and Australia. Overall, if you look at each country’s half-dozen or so best universities, sixth or seventh is probably where Canada ranks, both within the individual sub-fields and overall.

Also of interest is the data on collaboration, and specifically the percentage of publications which have an international co-author.  That Canada ranks low on this measure shouldn’t be a surprise: Europeans tend to dominate this measure because there are so many countries cheek by jowl.  But the more interesting finding is just how messy international collaboration is as a measure of anything.  Sure, there are some good schools with high levels of international collaboration (e.g. Caltech).  But any indicator where the top schools are St. Petersburg State and King Saud University probably isn’t a clear-cut measure of quality.

Among Canadian schools, there aren’t many big surprises.  Toronto, UBC, and McGill are the big three; Alberta does well in terms of volume of publications, but badly in terms of impact; and Victoria and Simon Fraser lead the way on international collaborations.

If you have even the slightest interest in bibliometrics, do go and play around with the customizable data on the Leiden site.  It’s fun, and you’ll probably learn something.
