HESA

Higher Education Strategy Associates

Category Archives: rankings

September 30

The Problem with Global Reputation Rankings

I was in Athens this past June, at an EU-sponsored conference on rankings, which included a very intriguing discussion about the use of reputation indicators that I thought I would share with you.

Not all rankings have reputational indicators; the Shanghai (ARWU) rankings, for instance, eschew them completely.  The QS and Times Higher Education (THE) rankings, however, both weight them heavily (50% for QS, 35% for THE), and this data isn’t entirely transparent.  THE, which releases its World University Rankings tomorrow, hides the actual reputational survey results for teaching and research by combining each of them with other indicators (THE has 13 indicators, but it shows only 5 composite scores).  The reasons for doing this are largely commercial: if, each September, THE actually showed all the results individually, it wouldn’t be able to reassemble the indicators in a different way for an entirely separate “Reputation Rankings” release six months later (with concomitant advertising and event sales) using exactly the same data.  Nor would its data collection partner, Thomson Reuters, be able to sell the data back to institutions as part of its Global Institutional Profiles Project.

Now, I get it: rankers have to cover their (often substantial) costs somehow, and this re-sale of hidden data is one way to do it (disclosure: we at HESA did this with our Measuring Academic Research in Canada ranking).  But given the impact that rankings have on universities, there is an obligation to get this data right.  And the problem is that neither QS nor THE publishes enough information about its reputation survey to make a real judgement about the quality of the data – and in particular about the reliability of the “reputation” voting.

We know that the THE allows survey recipients to nominate up to 30 institutions as being “the best in the world” for research and teaching, respectively (15 from one’s home continent, and 15 worldwide); the QS allows 40 (20 from one’s own country, 20 worldwide).  But we have no real idea how many people are actually ticking the box for each university.

In any case, an analyst at an English university recently reverse-engineered the published data for UK universities to work out voting totals.  The resulting estimate is that, among institutions in the 150-200 range of the THE rankings, the average number of votes obtained for either research or teaching is in the range of 30 to 40, at best.  Which is astonishing, really.  Given that reputation counts for roughly a third of an institution’s total score, there is enormous scope for year-to-year variation: get 40 votes one year and 30 the next, and significant swings in ordinal rankings could result.  It also makes a complete mockery of the “100 Under 50” rankings, where 85% of institutions rank well below the top 200 in the main rankings, and are therefore likely garnering only a couple of votes apiece.  If true, this is a serious methodological problem.
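
To get a sense of how little it takes, here is a quick back-of-the-envelope simulation (my own illustration with invented numbers, not THE’s actual methodology): sixty institutions whose “true” level of support works out to a few dozen votes apiece from roughly 10,000 respondents, sampled for two hypothetical years and then ranked.

import random
random.seed(1)

# Invented numbers for illustration: 60 institutions whose "true" support
# implies roughly 25-45 votes each from about 10,000 survey respondents.
n_respondents = 10_000
true_rates = [random.uniform(0.0025, 0.0045) for _ in range(60)]

def simulate_votes(rates):
    # One year's vote counts: each respondent names an institution
    # independently with that institution's "true" probability.
    return [sum(random.random() < r for _ in range(n_respondents)) for r in rates]

def ordinal_ranks(votes):
    # Rank 1 = most votes.
    order = sorted(range(len(votes)), key=lambda i: -votes[i])
    return {i: pos + 1 for pos, i in enumerate(order)}

r1 = ordinal_ranks(simulate_votes(true_rates))
r2 = ordinal_ranks(simulate_votes(true_rates))
moves = [abs(r1[i] - r2[i]) for i in range(60)]
print("average places moved between 'years':", sum(moves) / len(moves))
print("largest single swing:", max(moves))

Run it a few times: with vote counts that low, institutions routinely move several places on sampling noise alone, before anything about their actual reputation has changed.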

For commercial reasons, it’s unrealistic to expect the THE to throw its data completely open.  But given the ridiculous amount of influence its rankings have, it would be irresponsible of it – especially since it is allegedly a journalistic enterprise – not to at least allow some third party to inspect the data and give users a better sense of its reliability.  To do otherwise reduces the THE’s ranking exercise to sham social science.

May 29

May ’14 Rankings Round-Up

I’ve been remiss  the last month or so in not keeping you up-to-date with some of the big international rankings releases, namely the Leiden Rankings, the Times Top 100 Under 50 rankings, and the U21 Ranking of National Higher Education Systems.

Let’s start with Leiden (previous articles on Leiden can be found here and here), a multidimensional bibliometric ranking that looks at various types of publication and impact metrics.  Because of the nature of the data it uses, and the way it displays results, the rankings are both stable and hard to summarize.  I encourage everyone interested in bibliometrics to take a look and play around with the data themselves to see how the rankings work.  In terms of Canadian institutions, our Big Three (Toronto, UBC, McGill) do reasonably well, as usual (though the sheer volume of publications from Toronto is a bit of a stunner).  Perhaps more surprising is how Victoria outperforms most of the U-15 on some of these measures.

Next, there’s the U21 National Systems Rankings (which, again, I have previously profiled, back here and here).  This is an attempt to rank not individual institutions, but rather whole national higher education systems, based on Resources, Environment, Connectivity, and Output.  The US comes top, Sweden 2nd, and Canada 3rd overall – we climb a place from last year.  We do this mostly on the basis of being second in the world in terms of resources (that’s right, folks: complain as we all do about funding, and about nasty governments merely maintaining budgets in real dollars, only Denmark has a better-resourced system than our own), and third in terms of “outputs” (which are mostly research-based).

We do less well, though, in other areas, notably “Environment”, where we come 33rd (behind Bulgaria, Thailand, and Serbia, among others).  That’s mostly because the ranking effectively penalizes us for: a) being a federation without certain types of top-level national organizations (Germany suffers on this score as well); b) having a system that is too public (yes, really); and c) having Statscan data on higher education that is either unavailable or totally impenetrable to outsiders.  If you were to ignore some of this weirder stuff, we’d be ranked second.

The innovation in this year’s U21 rankings is the normalization of national scores by per capita GDP.  Canada falls to seventh on this measure (though the Americans fall further, from first to fifteenth).  The Scandinavians end up looking even better than they usually do, but so – interestingly enough – does Serbia, which ranks fourth overall in this version of the ranking.
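
Mechanically, this kind of adjustment is simple.  Here’s a toy version (invented countries, invented numbers, and my own formula – not U21’s published method) in which each country’s raw score is deflated by its income relative to the group average:

# Toy GDP-per-capita normalization: invented countries, invented numbers,
# and my own formula -- not U21's published method.
countries = {
    "Richland": (90.0, 60_000),   # (raw score, GDP per capita in USD)
    "Midland":  (75.0, 40_000),
    "Thriftia": (60.0, 20_000),
}
avg_gdp = sum(gdp for _, gdp in countries.values()) / len(countries)
adjusted = {name: score / (gdp / avg_gdp) for name, (score, gdp) in countries.items()}
for name, adj in sorted(adjusted.items(), key=lambda kv: -kv[1]):
    print(f"{name:8s} adjusted score: {adj:6.1f}")

Dividing out income flips the order: the poorest system in the example comes out on top despite having the lowest raw score, which is the general mechanism behind Serbia’s leap and the Americans’ slide.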

Finally, there’s the Times Higher Top 100 Under 50, a fun ranking despite some obvious methodological limitations, which I pointed out back here and won’t rehash again.  This ranking changes significantly every year because the institutions at the top tend to be close to the 50-year cut-off, and so get rotated out as new ones take their place.  Asian universities took four of the top five spots globally (Postech and KAIST in Korea, HKUST in Hong Kong, and Nanyang in Singapore).  Calgary, in 19th place, was the best Canadian performer; Simon Fraser came 24th, and three other Canadian universities made the list for the first time: Guelph (73rd), UQAM (84th), and Concordia (96th).

Even if you don’t take rankings overly seriously, all three rankings provide ample amounts of thought-provoking data.  Poke around and you’re sure to find at least a few surprises.

May 15

Does More Information Really Solve Anything?

One of the great quests in higher education over the past two decades has been to make the sector more “transparent”.  Higher education is a classic example of a “low-information” economy.  As in medicine, consumers have very limited information about the quality of providers, and so “poor performers” cannot easily be identified.  If only there were some way to provide individuals with better information, higher education would come closer to the ideal of “perfect information” (a key part of “perfect competition”), and poor performers would come under pressure from declining enrolments.

For many people, the arrival of university league table rankings held a lot of promise.  At last, some data tools with simple heuristics that could help students make distinctions with respect to quality!  While some people still hold this view, others have become more circumspect, having come to realize that most rankings simply replicate the existing prestige hierarchy because they rely on metrics like income and research intensity, which tend to be correlated with institutional age and size.  Still, many hold out hope for other types of information tools.  In Europe, the great hope is U-Multirank; in the UK, it’s the “Key Information Set”; and in Korea, it’s the Major Indicators System.  In the US, of course, you see the same phenomenon at work with the White House’s proposed college ratings system.

What unites all of these efforts is a belief that people will respond to information, if the right type of information is put in front of them in a manner they can easily understand and manipulate.  The arguments have tended to centre on what kind of information is useful and available, and on the right way to display and format the data.  But a study out last month from the Higher Education Funding Council for England (HEFCE) asked a much more profound question: is it possible that none of this stuff makes any difference at all?

Now, it’s not an empirical study of the use of information tools, so we shouldn’t get *too* excited about it.  Rather, it’s a literature review, but an uncommonly good one, drawing significantly from sources like Daniel Kahneman and Herbert Simon.  The two key findings (and I’m quoting from the press release here, because it’s way more succinct about this than I could be) are:

1) that the decision-making process is complex, personal and nuanced, involving different types of information, messengers and influences over a long time. This challenges the common assumption that people primarily make objective choices following a systematic analysis of all the information available to them at one time, and

2) that greater amounts of information do not necessarily mean that people will be better informed or be able to make better decisions. 

Now, because HEFCE and the UK government are among those who believe deeply in the “better data leads to better universities via competition” model, the study doesn’t actually say, “guys, your approach rests on some pretty whopping and likely incorrect assumptions” – but the report implies it pretty strongly.

It’s very much worth a read, if for no other reason than to remind oneself that even the best-designed, most well-meaning “interventions” won’t necessarily have the intended effects.

May 13

U-Multirank: Game On

Those of you who read this blog for the stuff about rankings will know that I have a fair bit of time for the U-Multirank project.  U-Multirank, for those in need of a quick refresher, is a form of alternative rankings that has been backed by the European Commission.  The rankings are based on a set of multi-dimensional, personalizable rankings data, and were pioneered by Germany’s Centre for Higher Education (CHE).

There is no league table here.  Nothing tells you who is “best”.  You just compare institutions (or programs, though program-level data are still pretty thin in this pilot year) on a variety of individual metrics.   The results are shown as a series of letter grades, meaning that, in practice, institutional results on each indicator are banded into five groups – so there is no spurious precision telling you which institution is 56th and which is 57th.
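
To make the banding idea concrete, here is a small sketch of quintile banding (the general idea only – U-Multirank’s actual cut-off rules may differ):

# Quintile banding sketch: turn a continuous indicator into five letter bands
# (A = top fifth ... E = bottom fifth). Illustrative only; U-Multirank's real
# cut-off rules may differ.
def band_scores(scores):
    ranked = sorted(scores, reverse=True)
    bands = {}
    for value in scores:
        fraction_above = ranked.index(value) / len(ranked)   # 0.0 = best score
        bands[value] = "ABCDE"[min(int(fraction_above * 5), 4)]
    return bands

citations_per_paper = [3.1, 2.4, 2.2, 1.9, 1.8, 1.5, 1.2, 1.1, 0.9, 0.6]
for score, grade in band_scores(citations_per_paper).items():
    print(f"{score:4.1f} -> band {grade}")

The institutions that would have been 56th and 57th on a league table simply land in the same band, which is the whole point.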

Another great feature is how global these rankings are.  There is no limiting to a top 200 or 400 in the world, which in practice restricts comparisons to a certain type of research university in a finite number of countries.  Because U-Multirank is much more about profiling institutions than about creating some sort of horse-race amongst them, it’s open to any number of institutions.  In the inaugural year, over 850 institutions from 70 countries submitted information to the rankings, including 19 from Canada.  That instantly makes it the largest of the world’s major rankings systems (excluding the webometrics rankings).

Of course, the problem with comparing this many schools is that there are a lot of apples-and-oranges in terms of institutional types.  The Big Three rankings (Shanghai, THE, QS) all sidestep this problem by focussing exclusively on research universities, but in an inclusive ranking like this one it’s a bit more difficult.  That’s why U-Multirank includes a filtering tool based on an earlier project called “U-MAP”, which helps to find “like” institutions based on institutional size, mission, discipline, profile, etc.

Why am I telling you all this?  Because the U-Multirank site just went live this morning.  Go look at it, here.  Play with it.  Let me know what you think.

Personally, while I love the concept, I think there’s still a danger that too many consumers – particularly in Asia – will prefer the precision (however spurious) and simplicity of THE-style league tables to the relativism of personalized rankings.  The worry here isn’t that a lack of users will create financial problems for U-Multirank – it’s financed more than sufficiently by the European Commission, so that’s not an issue; the potential worry is that low user numbers might make institutions – particularly those in North America – less keen to spend the person-hours collecting all the rather specialized information that U-Multirank demands.

But here’s hoping that’s not true.  U-Multirank is the ranking system academia would have developed itself had it had the smarts to get ahead of the curve on transparency instead of leaving that task to the Maclean’s of the world.  We should all wish it well.

March 07

Those Times Higher Education Reputation Rankings, 2014

The Times Higher Education (THE) Reputation Rankings came out yesterday.  Compared to previous years, there was very little fanfare for the release this time.  And that’s probably because the results weren’t especially interesting.

The thing to understand about rankings like this is that they are both profoundly true and profoundly trivial.  A few universities are undoubtedly seen as global standards, and so will always be at the top of the pile.  Previous THE rankings have shown that there is a “Big Six” in terms of reputation – Harvard, Stanford, Berkeley, MIT, Cambridge, and Oxford – and this year’s results again show that no one else comes close to them in terms of reputation.  Then there are another thirty or so institutions that can more or less hold their position at the top from year to year.

After that, though, results are capricious.  Below 50th position, the Times neither assigns specific ranks (it presents data in bands of ten, i.e., 51st-60th, 61st-70th, etc.) nor publishes the actual reputation score, because even they don’t think the scores are reliable.  Just for kicks, I divided this year’s top 100 into those groups of ten – a top ten, a next ten, a third ten, and so on – to see how many institutions were in the same group as last year.  Here’s what I got:

Number of Institutions in Each Ten-Place Grouping in THE Reputation Rankings, Which Remained in Same Grouping, 2013 and 2014
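
The underlying calculation is easy enough to reproduce.  Here is a sketch, with placeholder names standing in for the actual 2013 and 2014 top-100 lists:

# Count how many institutions stay in the same ten-place grouping between two
# years. Placeholder names; substitute the real 2013 and 2014 top-100 lists.
def grouping(ordered_names):
    # Band 0 = ranks 1-10, band 1 = ranks 11-20, and so on.
    return {name: position // 10 for position, name in enumerate(ordered_names)}

ranks_2013 = [f"University_{i:03d}" for i in range(100)]        # placeholder order
ranks_2014 = sorted(ranks_2013, key=lambda name: hash(name))    # placeholder reshuffle

g2013, g2014 = grouping(ranks_2013), grouping(ranks_2014)
stayers = [0] * 10
for name in ranks_2013:
    if g2013[name] == g2014.get(name):
        stayers[g2013[name]] += 1
for band, count in enumerate(stayers):
    print(f"ranks {band * 10 + 1:3d}-{(band + 1) * 10:3d}: {count} stayed in the same group")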

You’d expect a little movement from group to group – someone 71st last year rising to 69th this year, for instance – but this is just silly.  Below about 40th spot, there’s a lot of essentially random survey noise, because the scores are so tightly bunched that even small variations can move an institution several places.

A few American universities rose spectacularly this year – Purdue came in at 49th, despite not even cracking the top 100 in the previous year’s rankings – and overall there were 47 American universities in the top 100, up three from last year.  Seoul National University was the biggest riser within the top 50, going from 41st to 26th, which may suggest that people are noticing quality in Korean universities (Yonsei also cracked the top 100 for the first time), or it may just mean that more Koreans responded to the survey.  Within limits, national response rates do matter: THE re-weights responses by region, but not by country, so if you’re in a region with a lot of countries, like Europe or Asia, and your country’s numbers go up, it can tilt the balance a bit.  Surprisingly, Australian universities tanked in the survey.
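
On the re-weighting point, here is a rough sketch of how region-level re-weighting can leave country-level surges intact (my own construction – THE does not publish its exact procedure):

# Rough sketch of region-level re-weighting (my own construction; THE does not
# publish its exact procedure). Responses are weighted so each region hits a
# target share of the total, but countries *within* a region are not rebalanced.
from collections import Counter

target_share = {"Asia": 0.35, "Europe": 0.35, "Americas": 0.30}   # invented targets

# (region, institution named) -- invented data, including a surge of Korean responses
responses = (
    [("Asia", "Seoul National")] * 60 + [("Asia", "Tokyo")] * 40 +
    [("Europe", "Leuven")] * 50 + [("Europe", "Oxford")] * 50 +
    [("Americas", "Harvard")] * 80
)

region_counts = Counter(region for region, _ in responses)
total = len(responses)

weighted_votes = Counter()
for region, institution in responses:
    weight = target_share[region] / (region_counts[region] / total)
    weighted_votes[institution] += weight

for institution, votes in weighted_votes.most_common():
    print(f"{institution:15s} {votes:6.1f}")

Even after Asia as a whole is pulled back to its target share, the extra Korean responses still tilt the within-region balance towards Seoul National.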

The American result will sound odd to anyone who regularly reads the THE and believes its editorial line about the rise of the East and the decline of the West in higher education.  But what do you expect?  Reputation is a lagging indicator.  Why anyone thinks it’s worth measuring annually is a bit of a mystery.

February 06

When the Times Higher Education Rankings Fail The Fall-Down-Laughing Test

You may have noted the gradual proliferation of rankings at the Times Higher Education over the last few years.  First the World University Rankings, then the World Reputation Rankings (a recycling of reputation survey data from the World Rankings), then the “100 under 50” (World Rankings, restricted to institutions founded since the early 60s, with a methodological twist to make the results less ridiculous), then the “BRICS Rankings” (World Rankings results, with developed countries excluded, and similar methodological twists).

Between actual rankings, the Times Higher staff can pull stuff out of the database, and turn small bits of analysis into stories.   For instance, last week, the THE came out with a list of the “100 most international” universities in the world.  You can see the results here.  Harmless stuff, in a sense – all they’ve done is take the data from the World University Rankings on international students, foreign faculty, and international research collaborations, and turned it into its own standalone list.  And of course, using those kinds of metrics, geographic and political realities mean that European universities – especially those from the really tiny countries – always come out first (Singapore and Hong Kong do okay, too, for similar reasons).

But when their editors start tweeting – presumably as clickbait – about how shocking it is that only ONE American university (MIT, if it matters to you) makes the top 100, you have to wonder if they’ve started drinking their own Kool-Aid.  Read that list of 100 again, take a look at who’s on it, and think about who’s not.  Taken literally, the THE is saying that places like the National University of Ireland Maynooth, the University of Tasmania, and King Abdulaziz University are more international than Harvard, Yale, and Stanford.

Here’s the thing about rankings: there’s no way to do validity testing other than what I call the “fall-down-laughing test”.  Like all indicator systems, rankings are meant to proxy reality, rather than represent it absolutely.  But since there’s no independent standard of “excellence” or “internationalization” in universities, the only way you can determine whether the indicators and their associated weights actually “work” is by testing them in the real world, and seeing if they look “mostly right” to the people who will use them.  In most international ranking systems (including the THE’s), this means ensuring that either Harvard or Stanford comes first: if your rankings come up with, say, Tufts, or Oslo, or something similar as #1, they fail the fall-down-laughing test, because “everybody knows” Harvard and Stanford are 1-2.

The THE’s ranking of “most international” universities comprehensively fails the fall-down-laughing test.  In no world would sane academics agree that Abdulaziz and Maynooth are more international than Harvard.  The only way you could possibly believe this is if you’ve reached the point where you believe that specifically chosen indicators actually *are* reality, rather than proxies for it.  The Times Higher has apparently now gone down that particular rabbit hole.

November 15

Ten Years of Global University Rankings

Last week, I had the honour of chairing a session at the Conference on World-Class Universities in Shanghai.  The conference was held on the 10th anniversary of the release of the first global rankings (the Shanghai rankings appeared for the first time in 2003, with the Times Higher Ed rankings – then run with QS – following shortly thereafter).  And so it was a time for reflection: what have we learned over the past decade?

The usual well-worn criticisms were aired: international rankings privilege the measurable (research) over the meaningful (teaching), they exalt the 1% over the 99%, they are a function of money rather than quality, they distort national priorities… you’ve heard the litany.  And these criticisms are no less true just because they’re old.  But there’s another side to the story.

In North America, the reaction to the global rankings phenomenon was muted – that’s because, fundamentally, these rankings measure how closely institutions come to aping Harvard and Stanford.  We all had a reasonably good idea of our pecking order.  What shocked Asian and European universities, and higher education ministries, to the core was to discover just how far behind America they were.  The first reactions, predictably, were anger and denial.  But once everyone had worked through these stages, the policy reaction was astonishingly strong.

It’s hard to find many governments in Europe or Asia that didn’t adopt policy initiatives in response to rankings.  Sure, some – like the empty exhortations to get X institutions into the top 20/100/500/whatever – were shallow and jejune.  Others – like institutional mergers in France and Scandinavia, or Kazakhstan setting up its own rankings to spur its institutions to greater heights – might have been of questionable value.

However, as a Dutch colleague of mine pointed out, rankings have pushed higher education to the front of the policy agenda in a way that nothing else – not even the vaunted Bologna Process – has done.  Country after country – Russia, Germany, Japan, Korea, Malaysia, and France, to name but a few – has poured money into excellence initiatives as a result of rankings.  We can quibble about whether the money could have been better spent, of course, but realistically, if that money hadn’t been spent on research, it would have gone to health or defence – not higher education.

But just as important, perhaps, is the fact that higher education quality is now a global discussion.  Prior to rankings, it was possible for universities to claim any kind of nonsense about their relative global pre-eminence (“no, really, Uzbekistan National U is just like Harvard”).  Now, it’s harder to hide.  Everybody has had to focus more on outputs.  Not always the right ones, obviously, but outputs nonetheless.  And that’s worth celebrating.  The sector as a whole, and on the whole, is better for it.

November 05

Owning the Podium

I’m sure many of you saw Western President Amit Chakma’s op-ed in the National Post last week, suggesting that Canadian universities need more government assistance to reach new heights of excellence and “own the podium” in global academia.  I’ve been told that Chakma’s op-ed presages a new push by the U-15 for a dedicated set of “excellence funds”, which, presumably, would end up mostly in the U-15’s own hands (for what is excellence if not research done by the U-15?).  All I can say is that the argument needs some work.

The piece starts out with scare metrics to show that Canada is “falling behind”: Australia has just two-thirds of our population, yet has seven institutions in the QS top 100, compared to Canada’s five!  Why anyone should care about this specific cut-off (use the top 200 in the QS rankings and Canada beats Australia 9 to 8), or this specific ranking (in the THE rankings, Canada and Australia each have four spots), Chakma never makes clear.

The piece then moves on to make the case that, “other countries such as Germany, Israel, China and India are upping their game” in public funding of research (no mention of the fact that Canada spends more public dollars on higher education and research than any of these countries), which leads us to the astonishing non-sequitur that, “if universities in other jurisdictions are beating us on key academic and research measures, it’s not surprising that Canada is also being out-performed on key economic measures”.

This proposition – that public funding of education is a leading indicator of economic performance – is demonstrably false.  Germany has just about the weakest higher education spending in the OECD, and it’s doing just fine, economically.  The US has about the highest, and it’s still in its worst economic slowdown in over seventy-five years.  Claiming that there is some kind of demonstrable short-term link is the kind of thing that will get universities into trouble.  I mean, someone might just say, “well, Canada has the 4th-highest level of public funding of higher education as a percentage of GDP in the OECD – doesn’t that mean we should be doing better?  And if that’s indeed true, and our economy is so mediocre, doesn’t that give us reason to suspect that maybe our universities aren’t delivering the goods?”

According to Chakma, Canada has arrived at its allegedly wretched state by virtue of having a funding formula that prioritizes bums in seats instead of excellence.  But that’s a tough sell.  Most countries (including oh-so-great Australia) have funding formulae at least as demand-oriented as our own – and most are working with considerably fewer dollars per student as well.  If Australia is in fact “beating” us (a debatable proposition), one might reasonably suspect that it has at least as much to do with management as with money.

Presumably, though, that’s not a hypothesis the U-15 wants to test.

November 04

Concentration vs. Distribution

I’m spending part of this week in Shanghai at the biennial World-Class Universities conference, which is put on by the good folks who run the Shanghai Jiao Tong Rankings.  I’ll be telling you more about the conference later, but today I wanted to pick up on a story from the last set of Shanghai rankings, released in August.  You’d be forgiven for missing it – Shanghai doesn’t make the news the way the Times Higher Education rankings do, because its methodology doesn’t allow for much change at the top.

The story had to do with Saudi Arabia.  As recently as 2008, it had no universities in the top 500; now it has four, largely because its universities are strategically hiring highly cited scientists (on a part-time basis, one assumes, but I don’t know that for sure).  King Saud University, which only entered the rankings in 2009, has now cracked the top 200, making it by far the fastest-rising institution in the history of any set of rankings.  But since this doesn’t line up with the “East Asian tigers overtaking Europe/America” line that everyone seems eager to hear, nobody reported it.

You see, we’re addicted to the idea that if you have great universities, great economic development will follow.  There were some surprised comments on Twitter about the lack of a German presence in the rankings.  But why?  Who ever said that having a few strong top universities is the key to success?

Strong universities benefit their local economies – that’s been clear for decades.  And if you tilt the playing field more towards those institutions – as David Naylor argued in a very good talk last spring – there’s no question that it will pay some returns in terms of discovery and innovation.  But the issue is one of opportunity costs: would such a concentration of resources create more innovation and spill-over benefits than other possible distributions of funds?  Those who make the argument for concentration (see, for instance, HEQCO’s recent paper on differentiation) seem to take this as a given, but I’m not convinced their case is right.

Put it this way: if some government had a spare billion lying around, and the politics of regional envy weren’t an issue, and it wanted to spend that money on higher education, which investment would have the bigger impact: putting it all into a single “world-class” university?  Spreading it across maybe a half-dozen “good” universities?  Or spreading it across all institutions?  Concentrating the money might do a lot of good for the country (not to mention for the institution at which it was concentrated) – but maybe dispersing it would do more.  As convincing as Naylor’s speech was, this issue of opportunity costs wasn’t addressed.

Or, to go back to Shanghai terminology: if it were up to you to choose, do you think Canada would be better served with one institution in the top ten worldwide (currently: none), or seven in the top 100 (currently: four), or thirty-five in the top 500 (currently: twenty-three)?  And what arguments would you make to back up your decision?  I’m curious to hear your views.

October 04

Those Times Higher Education World Rankings (2013-14 Edition)

So, yesterday saw the release of the latest round of the THE rankings.  They were a bit of a damp squib, and not just for Canadian schools (which really didn’t move all that much).  The problem with actually having a stable methodology, as the Times is finding, is that there isn’t much movement at the top from year-to-year. So, for instance, this year’s top ten is unchanged from last year’s, with only some minor swapping of places.

(On the subject of the THE’s top ten: am I the only one who finds it completely and utterly preposterous that Harvard’s score for industry funding per professor is less than half of what it is at Beijing and Basel?  Certainly, it leads to suspicions that not everyone is filling out their forms using the same definitions.)

The narrative of last year’s THE rankings was “the rise of Asia” because of some good results from places like Korea and China.  This year, though, that story wasn’t tenable.  Yes, places like NUS did well (up 5 places to 25th); but the University of Hong Kong was down 8 spots to 43rd, and Korea’s Postech was down 10 spots to 60th.  And no other region obviously “won”, either.  But that didn’t stop the THE from imposing a geographic narrative on the results, with Phil Baty claiming that European flagship universities were “listing” – which is only true if you ignore Scandinavia and the UK, and see things like Leuven finishing 61st, as opposed to 58th, as significant rather than as statistical noise.

This brings us to the University of Basel story.  The THE doesn’t make a big deal out of it, but a university jumping from 134th to 68th says a lot – and not about the University of Basel.  That the entirety of its jump can be attributed to changes in its scores on teaching and research – both of which are largely based on survey results – suggests that there has been some weirdness in the pattern of survey responses.  All the other big movers in the top 100 (i.e., Paris Sud, U Zurich, and Lund, which fell 22, 31, and 41 places, respectively) also had huge changes in exactly these two categories.

So what’s going on here?  The obvious suspicion is that there were fewer French and Swiss respondents this year, thus leading to fewer positive responses for those schools.  But since the THE is cagey about disclosing details on survey response patterns, it’s hard to tell if this was the case.

And this brings us to what should really be the lead story about these rankings: for an outfit that bleats about transparency, too much of what drives the final score is opaque.  That needs to change.
