Higher Education Strategy Associates

Category Archives: rankings

March 24

Banning the Term “Underfunding”

Somehow I missed this when the OECD’s Education at a Glance 2014 came out, but apparently Canada’s post-secondary system is now officially the best funded in the entire world.

I know, I know.  It’s a hard idea to accept when Presidents of every student union, faculty association, university, and college have been blaming “underfunding” for virtually every ill in post-secondary education since before Air Farce jokes started taking the bus to get to the punchline.  But the fact is, we’re tops.  Numero uno.  Take a look:

Figure 1: Percentage of GDP Spent on Higher Education Institutions, Select OECD Countries, 2011

For what I believe is the first time ever, Canada is outstripping both the US (2.7%) and Korea (2.6%).  At 2.8% of GDP, spending on higher education is nearly twice what it is in the European Union.

Ah, you say, that’s probably because so much of our funding comes from private sources.  After all, don’t we always hear that tuition is at, or approaching, 50% of total funding in universities?  Well, no.  That stat only applies to operating expenditures (not total expenditures), and is only valid in Nova Scotia and Ontario.  Here’s what happens if we look only at public spending in all those countries:

Figure 2: Percentage of GDP Spent on Higher Education Institutions from Public Sources, Select OECD Countries, 2011

While it’s true that Canada does have a high proportion of funds coming from private sources, public sector support to higher education still amounts to 1.6% of GDP, which is substantially above the OECD average.  In fact, our public expenditure on higher education is the same as in Norway and Sweden; among all OECD countries, only Finland and Denmark (not included in graph) are higher.

And this doesn't even consider the fact that Statscan and CMEC don't include expenditures like Canada Education Savings Grants and tax credits, which together are worth another 0.2% of GDP, because the OECD doesn't really have a reporting category for oddball expenditures like that.  Counting them wouldn't change our total expenditure, but it would shift the public/private balance.  Instead of 1.6% of GDP public and 1.2% private, the split is probably more like 1.8% or 1.9% public, which again would put us at the absolute top of the world ranking.
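(If you want to check the arithmetic yourself, here's a minimal sketch in Python.  The 2.8%, 1.6%, and 0.2% figures are the ones cited above; treating the savings grants and tax credits as a reclassification from the private to the public side of the ledger is my reading of the argument, not an official OECD calculation.)

```python
# Back-of-envelope check on the public/private split discussed above.
# Figures are the ones cited in the post; the "reclassification" is an assumption.
total = 2.8               # total higher education spending, % of GDP
public = 1.6              # publicly sourced spending, % of GDP
private = total - public  # 1.2% of GDP from private sources

reclassified = 0.2        # CESGs + tax credits, % of GDP, counted here as public support
public_adjusted = public + reclassified    # ~1.8% of GDP
private_adjusted = private - reclassified  # ~1.0% of GDP

print(f"Adjusted split: {public_adjusted:.1f}% public / {private_adjusted:.1f}% private of GDP")
```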

So it’s worth asking: when people say we are “underfunded”, what do they mean?  Underfunded compared to who?  Underfunded for what?  If we have more money than anyone else, and we still feel there isn’t enough to go around, maybe we should be looking a lot more closely at *how* we spend the money rather than at *how much* we spend.

Meantime, I think there should be a public shaming campaign against use of the term “underfunding” in Canada.  It’s embarrassing, once you know the facts.

February 25

Rankings in the Middle East

If you follow rankings at all, you’ll have noticed that there is a fair bit of activity going on in the Middle East these days.  US News & World Report and Quacquarelli Symonds (QS) both published “Best Arab Universities” rankings last year; this week, the Times Higher Education (THE) produced a MENA (Middle East and North Africa) ranking at a glitzy conference in Doha.

The reason for this sudden flurry of Middle East-oriented rankings is pretty clear: Gulf universities have a lot of money they'd like to use on advertising to bolster their global status, and this is one way to do it.  Both THE and QS tried to tap this market by making up “developing world” or “BRICS” rankings, but frankly most Arab universities didn't do too well on those metrics, so there was a niche market for something more focused.

The problem is that rankings make considerably less sense in MENA than they do elsewhere. In order to come up with useful indicators, you need accurate and comparable data, and there simply isn’t very much of this in the region.  Let’s take some of the obvious candidates for indicators:

Research:  This is an easy metric, and one which doesn't rely on local universities' ability to provide data.  And, no surprise, both US News and the Times Higher Ed have based 100% of their rankings on this measure.  But that's ludicrous for a couple of reasons.  The first is that most MENA universities have literally no interest in research.  Outside the Gulf (i.e. Oman, Kuwait, Qatar, Bahrain, UAE, and Saudi Arabia) there's no available money for it.  Within the Gulf, most universities are staffed by expats teaching 4 or even 5 classes per term, with no time or mandate for research.  The only places where serious research is happening are at one or two of the foreign universities that are part of Education City in Doha, and in some of the larger Saudi universities.  The second is that the research numbers which do exist can't always be trusted: as we know, at least some of the big Saudi universities are furiously gaming publication metrics precisely in order to climb the rankings, without actually changing university cultures very much (see, for example, this eyebrow-raising piece).

Expenditures:  This is a classic input variable used in many rankings.  However, an awful lot of Gulf universities are private and won't want to talk about their expenditures for commercial reasons.  Additionally, some are personal creations of local rulers who spend lavishly on them (for example, Sharjah and Khalifa Universities in the UAE); they'd be mortified if the data showed them to be spending less than the Sheikh next door.  Even in public universities, the issue isn't straightforward.  Transparency in government spending isn't universal in the region, either; I suspect that getting financial data out of an Egyptian university would be a pretty unrewarding task.  Finally, for many Gulf universities, cost data will be massively wonky from one year to the next because of the way compensation works.  Expat teaching staff (in the majority at most Gulf unis) are paid partly in cash and partly through free housing, the cost of which swings enormously from one year to the next based on changes in the rental market.

Student Quality: In Canada, the US, and Japan, rankings often focus on how smart the students are, based on average entering grades, SAT scores, etc.  But such measures simply don't work in a multi-national ranking, so they're out.

Student Surveys: In Europe and North America, student surveys are one way to gauge quality.  However, if you are under the impression that there is a lot of appetite among Arab elites to allow public institutions to be rated by public opinion then I have some lakeside property in the Sahara I’d like to sell you.

Graduate Outcomes:  This is a tough one.  Some MENA universities do have graduate surveys, but what do you measure?  Employment?  How do you account for the fact that female labour market participation varies so much from country to country, and that many female graduates are either discouraged or forbidden by their families from working? 

What’s left?  Not much.  You could try class size data, but my guess is most universities outside the Gulf wouldn’t have an easy way of working this out.  Percent of professors with PhDs might be a possibility, as would the size of the institution’s graduate programs.  But after that it gets pretty thin.

To sum up: it’s easy to understand commercial rankers chasing money in the Gulf.  But given the lack of usable metrics, it’s unlikely their efforts will amount to anything useful, even by the relatively low standards of the rankings industry.

October 30

Times Higher Rankings, Weak Methodologies, and the Vastly Overblown “Rise of Asia”

I’m about a month late with this one (apologies), but I did want to mention something about the most recent version of the Times Higher Education (THE) Rankings.  You probably saw it linked to headlines that read, “The Rise of Asia”, or some such thing.

As some of you may know, I am inherently suspicious of year-on-year changes in rankings.  Universities are slow-moving creatures.  Quality is built over decades, not months.  If you see huge shifts from one year to another, it usually means the methodology is flimsy.  So I looked at the data for evidence of this “rise of Asia”.

The evidence clearly isn't there in the top 50.  Tokyo and Hong Kong are unchanged in their positions.  Tsinghua, Beijing, and the National University of Singapore are all within a place or two of where they were last year.  In fact, if you just look at the top 50, you'd think Asia might be going backwards, since one of its big unis (Seoul National) fell out of the top 50, going from 44th to 52nd in a single year.

Well, what about if you look at the top 100?  Not much different.  In Korea, KAIST is up a bit, but Pohang is down.  Both the Hong Kong University of Science and Technology and Nanyang were up sharply, though, which is a bit of a boost; however, only one new “Asian” university came into the rankings, and that was Middle East Technical University in Turkey, which rose spectacularly from the 201-225 band last year to 85th this year.

OK, what about the next 100?  Here it gets interesting.  There are bad news stories for Asian universities.  National Taiwan and Osaka each fell 13 places. Tohoku fell 15, Tokyo Tech 16, Chinese University Hong Kong 20, and Yonsei University fell out of the top 200 altogether.  But there is good news too: Bogazici University in Turkey jumped 60 places to 139th, and five new universities – two from China, two from Turkey and one from Korea – entered the top 200 for the first time.

So here’s the problem with the THE narrative.  The best part of the evidence for all this “rise of Asia” stuff rests on events in Turkey (which, like Israel, is often considered as being European rather than Asian – at least if membership in UEFA and Eurovision is anything to go by).  The only reason THE goes on with its “rise of Asia” tagline is because it has a lot of advertisers and a big conference business in East Asia, and its good business to flatter them, and damn the facts.

But there’s another issue here: how the hell did Turkey do so well this year, anyway?  Well, for that you need to check in with my friend Richard Holmes, who runs the University Ranking Watch blog.  He points out that a single paper (the one in Physics Letters B, which announced the confirmation of the Higgs Boson, and which immediately got cited in a bazillion places) was responsible for most of the movement in this year’s rankings.  And, because the paper had over 2,800 co-authors (including from those suddenly big Turkish universities), and because THE doesn’t fractionally count multiple-authored articles, and because THE’s methodology gives tons of bonus points to universities located in countries where scientific publications are low, this absolutely blew some schools’ numbers into the stratosphere.  Other examples of this are Scuola Normale di Pisa, which came out of nowhere to be ranked 65th in the world, or Federica Santa Maria Technical University in Chile, which somehow became the 4th ranked university in Latin America.

So basically, this year’s “rise of Asia” story was based almost entirely on the fact that a few of the 2,800 co-authors on the “Observation of a new boson…” paper happened to work in Turkey.

THE needs a new methodology.  Soon.

September 30

The Problem with Global Reputation Rankings

I was in Athens this past June, at an EU-sponsored conference on rankings, which included a very intriguing discussion about the use of reputation indicators that I thought I would share with you.

Not all rankings have reputational indicators; the Shanghai (ARWU) rankings, for instance, eschew them completely.  But the QS and Times Higher Education (THE) rankings both weight them pretty heavily (50% for QS, 35% for THE).  And this data isn't entirely transparent.  THE, which releases its World University Rankings tomorrow, hides the actual reputational survey results for teaching and research by combining each of them with some other indicators (THE has 13 indicators, but it only shows 5 composite scores).  The reasons for doing this are largely commercial; if, each September, THE actually showed all the results individually, it wouldn't be able to reassemble the indicators in a different way to have an entirely separate “Reputation Rankings” release six months later (with concomitant advertising and event sales) using exactly the same data.  Also, its data collection partner, Thomson Reuters, wouldn't be able to sell the data back to institutions as part of its Global Institutional Profiles Project.

Now, I get it: rankers have to cover their (often substantial) costs somehow, and this re-sale of hidden data is one way to do it (disclosure: we at HESA did this with our Measuring Academic Research in Canada ranking).  But given the impact that rankings have for universities, there is an obligation to get this data right.  And the problem is that neither QS nor THE publishes enough information about its reputation survey to make a real judgement about the quality of the data – and in particular about the reliability of the “reputation” voting.

We know that the THE allows survey recipients to nominate up to 30 institutions as being “the best in the world” for research and teaching, respectively (15 from one's home continent, and 15 worldwide); the QS allows 40 (20 from one's own country, 20 worldwide).  But we have no real idea how many people are actually ticking the boxes on each university.

In any case, an analyst at an English university recently reverse-engineered the published data for UK universities to work out voting totals.  The resulting estimate is that, among institutions in the 150-200 range of the THE rankings, the average number of votes obtained for either research or teaching is in the range of 30 to 40, at best.  Which is astonishing, really.  Given that reputation counts for one third of an institution's total score, it means there is enormous scope for year-to-year variation – get 40 one year and 30 the next, and significant swings in ordinal rankings could result.  It also makes a complete mockery of the “Top Under 50” rankings, where 85% of institutions rank well below the top 200 in the main rankings, and are therefore likely only garnering a couple of votes apiece.  If true, this is a serious methodological problem.
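If you want a feel for how noisy a score built on 30-40 votes is, a crude simulation does the job (entirely made-up assumptions: a fixed pool of respondents, each nominating a given institution independently with a small fixed probability; the actual survey design is more complicated than this):

```python
# Crude simulation of year-to-year wobble in a reputation vote count
# when an institution only attracts a few dozen nominations.  Illustrative only.
import random

random.seed(1)

RESPONDENTS = 10000   # hypothetical number of survey respondents
P_NOMINATE = 0.0035   # chance any one respondent nominates this institution (~35 votes expected)

def votes_one_year():
    return sum(1 for _ in range(RESPONDENTS) if random.random() < P_NOMINATE)

yearly = [votes_one_year() for _ in range(10)]
print("Votes across ten simulated 'years':", yearly)
print("Largest swing:", max(yearly) - min(yearly))
```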

For commercial reasons, it's unrealistic to expect the THE to completely open up its data.  But given the ridiculous amount of influence its rankings have, it would be irresponsible of it – especially since it is allegedly a journalistic enterprise – not to at least allow some third party to inspect the data and give users a better sense of its reliability.  To do otherwise reduces the THE's ranking exercise to sham social science.

May 29

May ’14 Rankings Round-Up

I’ve been remiss  the last month or so in not keeping you up-to-date with some of the big international rankings releases, namely the Leiden Rankings, the Times Top 100 Under 50 rankings, and the U21 Ranking of National Higher Education Systems.

Let’s start with Leiden (previous articles on Leiden can be found here, and here), a multidimensional bibliometric ranking that looks at various types of publication and impact metrics.  Because of the nature of the data it uses, and the way it displays results, the rankings are both stable and hard to summarize.  I encourage everyone interested in bibliometrics to take a look and play around with the data themselves to see how the rankings work. In terms of Canadian institutions, our Big Three (Toronto, UBC, McGill) do reasonably well, as usual (though the sheer volume of publications from Toronto is a bit of a stunner), perhaps more surprising is how Victoria outperforms most of the U-15 on some of these measures.

Next, there’s the U21 National Systems Rankings (which, again, I have previously profiled, back here and here).  This is an attempt to rank not individual institutions, but rather whole national higher education systems based on Resources, Environments, Connectivity, and Outputs.  The US comes tops, Sweden 2nd, and Canada 3rd overall – we climb a place from last year.  We do this mostly on the basis of being second in the world in terms of resources (that’s right, folks: complain as we all do about funding, and how nasty governments are here to merely maintain budgets in real dollars, only Denmark has a better-resources system than our own), and third in terms of “outputs” (mostly research-based).

We do less well, though, in other areas, notably “Environment”, where we come 33rd (behind Bulgaria, Thailand, and Serbia, among others).  That's mostly because of the way the ranking effectively penalizes us for: a) being a federation without certain types of top-level national organizations (Germany suffers on this score as well); b) our system being too public (yes, really); and c) Statscan data on higher education being either unavailable or totally impenetrable to outsiders.  If you were to ignore some of this weirder stuff, we'd have been ranked second.

The innovation in this year’s U21 rankings is the normalization of national scores by per capita GDP.  Canada falls to seventh on this measure (though the Americans fall further, from first to fifteenth).  The Scandinavians end up looking even better than they usually do, but so – interestingly enough – does Serbia, which ranks fourth overall in this version of the ranking.

Finally, there’s the Times Higher Top 100 Institutions Under 50, a fun ranking despite some of the obvious methodological limitations (which I pointed out back here) and won’t rehash again.  This ranking always changes significantly each year because the institutions at the top tend to be close to 50 years out, and as such get rotated out and new ones take their place.  Asian universities took four of the top five spots globally (Postech and KAIST in Korea, HKUST in Hong Kong, and Nanyang in Singapore).  Calgary, in 19th place was the best Canadian performer, but Simon Fraser made 24th and three other Canadian universities took their place for the first time: Guelph (73) UQAM (84) and Concordia (96).

Even if you don’t take rankings overly seriously, all three rankings provide ample amounts of thought-provoking data.  Poke around and you’re sure to find at least a few surprises.

May 15

Does More Information Really Solve Anything?

One of the great quests in higher education over the past two decades has been to make the sector more “transparent”.  Higher education is a classic example of a “low-information” economy.  As in medicine, consumers have very limited information about the quality of providers, and so “poor performers” cannot easily be identified.  If only there were some way to provide individuals with better information, higher education would come closer to the ideal of “perfect information” (a key part of “perfect competition”), and poor performers would come under pressure from declining enrolments.

For many people, the arrival of university league table rankings held a lot of promise.  At last, some data tools with simple heuristics that could help students make distinctions with respect to quality!  While some people still hold this view, others have become more circumspect, and have come to realize that most rankings simply replicate the existing prestige hierarchy, because they rely on metrics like income and research intensity, which tend to be correlated with institutional age and size.  Still, many hold out hope for other types of information tools to do the job.  In Europe, the big hope is U-Multirank; in the UK, it's the “Key Information Set”; and in Korea, it's the Major Indicators System.  In the US, of course, you see the same phenomenon at work with the White House's proposed college ratings system.

What unites all of these efforts is a belief that people will respond to information, if the right type of information is put in front of them in a manner they can easily understand/manipulate.  The arguments have tended to centre on what kind of information is useful/available, and the right way to display/format the data, but a study out last month from the Higher Education Funding Council for England (HEFCE) asked a much more profound question: is it possible that none of this stuff makes any difference at all?

Now, it’s not an empirical study of the use of information tools, so we shouldn’t get *too* excited about it.  Rather, it’s a literature review, but an uncommonly good one, drawing significantly from sources like Daniel Kahneman and Herbert Simon.  The two key findings (and I’m quoting from the press release here, because it’s way more succinct about this than I could be) are:

1) that the decision-making process is complex, personal and nuanced, involving different types of information, messengers and influences over a long time. This challenges the common assumption that people primarily make objective choices following a systematic analysis of all the information available to them at one time, and

2) that greater amounts of information do not necessarily mean that people will be better informed or be able to make better decisions. 

Now, because HEFCE and the UK government are among those who believe deeply in the “better data leads to better universities via competition” model, the study doesn't actually say “guys, your approach rests on some pretty whopping and likely incorrect assumptions” – but the report implies it pretty strongly.

It’s very much worth a read, if for no other reason than to remind oneself that even the best-designed, most well-meaning “interventions”, won’t necessarily have the intended effects.

May 13

U-Multirank: Game On

Those of you who read this blog for the stuff about rankings will know that I have a fair bit of time for the U-Multirank project.  U-Multirank, for those in need of a quick refresher, is an alternative form of ranking backed by the European Commission.  The rankings are built on multi-dimensional, personalizable data, using an approach pioneered by Germany's Centre for Higher Education (CHE).

There is no league table here.  Nothing tells you who is “best”.  You just compare institutions (or programs, though in this pilot year these are still pretty thin) on a variety of individual metrics.  The results are shown as a series of letter grades, meaning that, in practice, institutional results on each indicator are banded into five groups – so there's no spurious precision telling you which institution is 56th and which is 57th.
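If it helps to picture what “banded into five groups” means, here's a minimal sketch.  It simply splits institutions into fifths on one indicator and hands out letter grades; U-Multirank's actual grouping rules are more sophisticated, so treat this purely as an illustration of the idea:

```python
# Illustrative only: replace ordinal ranks with five broad letter-grade bands (A-E)
# on a single indicator.  U-Multirank's real banding method differs.

def band_scores(scores):
    """Assign each institution a grade based on which fifth of the
    score distribution it falls into (top fifth = 'A', bottom fifth = 'E')."""
    ordered = sorted(scores, key=scores.get, reverse=True)
    n = len(ordered)
    return {inst: "ABCDE"[min(i * 5 // n, 4)] for i, inst in enumerate(ordered)}

example = {"Inst 1": 92.0, "Inst 2": 91.5, "Inst 3": 74.0, "Inst 4": 60.2, "Inst 5": 59.9,
           "Inst 6": 41.0, "Inst 7": 40.8, "Inst 8": 22.5, "Inst 9": 21.0, "Inst 10": 5.0}
print(band_scores(example))  # with ten entries, two institutions land in each band
```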

Another great feature is how global these rankings are.  There's no cut-off at a top 200 or 400 in the world, which in practice restricts comparisons to a certain type of research university in a finite number of countries.  Because U-Multirank is much more about profiling institutions than about creating some sort of horse-race amongst them, it's open to any number of institutions.  In the inaugural year, over 850 institutions from 70 countries submitted information to the rankings, including 19 from Canada.  That instantly makes it the largest of the world's major rankings systems (excluding the webometrics rankings).

Of course, the problem with comparing this many schools is that there are a lot of apples-and-oranges comparisons in terms of institutional types.  The Big Three rankings (Shanghai, THE, QS) all sidestep this problem by focussing exclusively on research universities, but in an inclusive ranking like this one it's a bit more difficult.  That's why U-Multirank includes a filtering tool based on an earlier project called “U-MAP”, which helps to find “like” institutions based on institutional size, mission, disciplinary profile, etc.

Why am I telling you all this?  Because the U-Multirank site just went live this morning.  Go look at it, here.  Play with it.  Let me know what you think.

Personally, while I love the concept, I think there’s still a danger that too many consumers – particularly in Asia – will prefer the precision (however spurious) and simplicity of THE-style league tables to the relativism of personalized rankings.  The worry here isn’t that a lack of users will create financial problems for U-Multirank – it’s financed more than sufficiently by the European Commission, so that’s not an issue; the potential worry is that low user numbers might make institutions – particularly those in North America – less keen to spend the person-hours collecting all the rather specialized information that U-Multirank demands.

But here’s hoping that’s not true.  U-Multirank is the ranking system academia would have developed itself had it had the smarts to get ahead of the curve on transparency instead of leaving that task to the Maclean’s of the world.  We should all wish it well.

March 07

Those Times Higher Education Reputation Rankings, 2014

The Times Higher Education (THE) Reputation Rankings came out yesterday.  Compared to previous years, there was very little fanfare for the release this time.  And that's probably because the results weren't especially interesting.

The thing to understand about rankings like this is that they are both profoundly true and profoundly trivial.  A few universities are undoubtedly seen as global standards, and so will always be at the top of the pile.  Previous THE rankings have shown that there is a “Big Six” in terms of reputation: Harvard, Stanford, Berkeley, MIT, Cambridge, and Oxford – and this year's results again show that no one else comes close to them in terms of reputation.  Then there are another thirty or so who can more or less hold their position at the top from year-to-year.

After that, though, results are capricious.  Below 50th position, the Times neither assigns specific ranks (it presents data in tens, i.e., 51st-60th, 61st-70th, etc.), nor publishes the actual reputation score, because even they don’t think the scores are reliable.  Just for kicks, I divided this year’s top 100 into those groups of ten – a top ten, a next ten, a third ten, and so on – to see how many institutions were in the same group last year.  Here’s what I got:

Number of Institutions in Each Ten-Place Grouping of the THE Reputation Rankings That Remained in the Same Grouping, 2013 and 2014

You’d expect a little movement from group-to-group – someone 71st last year rising to 69th this year, for instance – but this is just silly.  Below about 40th spot, there’s a lot of essentially random survey noise because the scores are so tight together that even small variations can move an institution several places.

A few American universities rose spectacularly this year – Purdue came in at 49th, despite not even cracking the top 100 in the previous year's rankings; overall, there were 47 American universities in the top 100, up three from last year.  Seoul National University was the biggest riser within the top 50, going from 41st to 26th, which may suggest that people are noticing quality in Korean universities (Yonsei also cracked the top 100 for the first time), or it may just mean more Koreans responded to the survey (within limits, national response rates do matter – THE re-weights responses by region, but not by country; if you're in a region with a lot of countries, like Europe or Asia, and your numbers go up, it can tilt the balance a bit).  Surprisingly, Australian universities tanked in the survey.

The American result will sound odd to anyone who regularly reads the THE and believes its editorial line about the rise of the East and decline of the West in higher education.  But what do you expect?  Reputation is a lagging indicator.  Why anyone thinks it's worth measuring annually is a bit of a mystery.

February 06

When the Times Higher Education Rankings Fail The Fall-Down-Laughing Test

You may have noted the gradual proliferation of rankings at the Times Higher Education over the last few years.  First the World University Rankings, then the World Reputation Rankings (a recycling of reputation survey data from the World Rankings), then the “100 under 50” (World Rankings, restricted to institutions founded since the early 60s, with a methodological twist to make the results less ridiculous), then the “BRICS Rankings” (World Rankings results, with developed countries excluded, and similar methodological twists).

Between actual rankings, the Times Higher staff can pull stuff out of the database, and turn small bits of analysis into stories.   For instance, last week, the THE came out with a list of the “100 most international” universities in the world.  You can see the results here.  Harmless stuff, in a sense – all they’ve done is take the data from the World University Rankings on international students, foreign faculty, and international research collaborations, and turned it into its own standalone list.  And of course, using those kinds of metrics, geographic and political realities mean that European universities – especially those from the really tiny countries – always come out first (Singapore and Hong Kong do okay, too, for similar reasons).

But when their editors start tweeting stuff – presumably as clickbait – about how shocking it is that only ONE American university (MIT, if it matters to you) makes the top 100, you have to wonder if they've started drinking their own Kool-Aid.  Read that list of 100 again, take a look at who's on the list, and think about who's not.  Taken literally, the THE is saying that places like the National University of Ireland, Maynooth, the University of Tasmania, and King Abdulaziz University are more international than Harvard, Yale, and Stanford.

Here’s the thing about rankings: there’s no way to do validity testing other than what I call the, “fall-down-laughing test”.  Like all indicator-systems, they are meant to proxy reality, rather than represent it absolutely.  But since there’s no independent standard of “excellence” or “internationalization” in universities, the only way you can determine whether or not the indicators and their associated weights actually “work” is by testing them in the real word, and seeing if they look “mostly right” to the people who will use them.  In most international ranking systems (including the THE), this means ensuring that either Harvard or Stanford comes first: if your rankings come up with, say, Tufts, or Oslo, or something as #1, it fails the fall-down-laughing test, because “everybody knows” Harvard and Stanford are 1-2.

The THE’s ranking on “international schools” comprehensively fails the fall-down-laughing test. In no world would sane academics agree that Abdulaziz and Maynooth are more international than Harvard.  The only way one could possibly believe this is if you’ve reached the point where you believe that specifically chosen indicators actually *are* reality, rather than proxies for it.  The Times Higher has apparently now gone down that particular rabbit hole.

November 15

Ten Years of Global University Rankings

Last week, I had the honour of chairing a session at the Conference on World-Class Universities in Shanghai.  The conference was held on the 10th anniversary of the release of the first global rankings (the Shanghai rankings appeared for the first time in 2003, with the Times Higher Ed rankings – then run with QS – following a year later), and so it was a time for reflection: what have we learned over the past decade?

The usual well-worn criticisms were aired: international rankings privilege the measurable (research) over the meaningful (teaching); they exalt the 1% over the 99%; they are a function of money, not quality; they distort national priorities… you've heard the litany.  And these criticisms are no less true just because they're old.  But there's another side to the story.

In North America, the reaction to the global rankings phenomenon was muted – that’s because, fundamentally, these rankings measure how closely institutions come to aping Harvard and Stanford.  We all had a reasonably good idea of our pecking order.  What shocked Asian and European universities, and higher education ministries, to the core was to discover just how far behind America they were.  The first reactions, predictably, were anger and denial.  But once everyone had worked through these stages, the policy reaction was astonishingly strong.

It’s hard to find many governments in Europe or Asia that didn’t adopt policy initiatives in response to rankings.  Sure, some – like the empty exhortations to get X institutions into the top 20/100/500/whatever – were shallow and jejune.  Others – like institutional mergers in France and Scandinavia, or Kazakhstan setting up its own rankings to spur its institutions to greater heights – might have been of questionable value.

However, as a Dutch colleague of mine pointed out, rankings have pushed higher education to the front of the policy agenda in a way that nothing else – not even the vaunted Bologna Process – has done.  Country after country – Russia, Germany, Japan, Korea, Malaysia, and France, to name but a few – has poured money into excellence initiatives as a result of rankings.  We can quibble about whether the money could have been better spent, of course, but realistically, if that money hadn't been spent on research, it would have gone to health or defence – not higher education.

But just as important, perhaps, is the fact that higher education quality is now a global discussion.  Prior to rankings, it was possible for universities to claim any kind of nonsense about their relative global pre-eminence (“no, really, Uzbekistan National U is just like Harvard”).  Now, it’s harder to hide.  Everybody has had to focus more on outputs.  Not always the right ones, obviously, but outputs nonetheless.  And that’s worth celebrating.  The sector as a whole, and on the whole, is better for it.
