Tag Archives: Shanghai ARWU

September 01

McMaster > McGill?

The Shanghai Rankings (technically, the Academic Ranking of World Universities) came out a couple of weeks ago.  This is the granddaddy of all international rankings: the one that started it all, and still perceived as the most stable and reliable of the lot.  Essentially, it measures large concentrations of scientific talent.  And there were some very interesting results for Canada, the most intriguing of which is that McGill has fallen out of Canada's "top 3", replaced by McMaster.

So, first of all the big picture: Toronto was up four places to 23rd in the world (and 10th among publics, if you consider Oxford, Cambridge and Cornell to be public), while UBC rose three places to 31st.  McMaster and McGill rounded out the Canadian institutions in the top 100 (more on them in a second).  Below that, the University of Alberta stayed steady in the 101-150 bracket, while the Université de Montréal was joined by Calgary and Ottawa in the 151-200 bracket, bringing the national total in the top 200 to eight.  Overall, the country held steady at 19 institutions in the top 500, though the Université du Québec dropped out and was replaced by Concordia; that puts the country behind the US, the UK, China, Germany, Australia and France, but ahead of everyone else (including, surprisingly, Japan, which has been doing terribly in various rankings of late).

But the big story – in Canada, anyway – is that McMaster rose 17 places to 66th overall while McGill dropped four places to 67th.  This is the first time in any ranking (so far as I can recall) that McGill has not been considered one of the country's top three institutions, and so it raises some significant questions.  Is it a matter of McGill's reputation going down?  An echo of l'Affaire Potter?  A consequence of long-term funding decline?  What, exactly?

The answer is: none of those things.  Alone among the major rankings, Shanghai does not survey academics or anyone else about institutions, so the result has nothing to do with image, reputation, prestige or anything of that sort.  Nor, by the way, is funding a credible suspect.  Although we're always hearing about how hard done by McGill is at the hands of the Quebec government, the fact of the matter is that McGill has done as well as or better than McMaster in terms of expenditures per student.

Figure 1: Total Expenditure per FTE Student, 2000-01 to 2015-16

Source: Statistics Canada’s Financial Information of Colleges and Universities & Post-Secondary Student Information System, various years

So what happened?  It's pretty simple, actually.  20% of the Shanghai rank is based on what is called the "HiCi list" – the list of Highly Cited Researchers put out annually by Clarivate (formerly Thomson Reuters), which you can peruse here.  But Clarivate has changed its HiCi methodology in the last couple of years, which has had a knock-on effect on the Shanghai rankings as well.  Basically, the old method rewarded older researchers whose publications had gathered lots of citations over time; the new methodology only counts citations from the past ten years, and therefore privileges newer, "hotter" research papers and their authors (there's a longer explanation here if you want all the gory details).

Anyway, the effect of this appears to be significant: McGill had five highly-cited researchers in both 2015 and 2016, while McMaster went from ten to fifteen – all in the Faculty of Health Sciences, if you can believe it – putting it at the top in Canada.  Those extra five researchers were enough, in a ranking which is highly sensitive to the presence of really top scholars, to move McMaster above McGill.
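If you want to see how five extra names can matter that much, here is a minimal sketch of the arithmetic.  It assumes the simplified scoring rule ARWU describes (the top-scoring institution on each indicator gets 100, everyone else a proportional share), with no further statistical adjustment, and a purely hypothetical world-leading count of 80 HiCi researchers; only the McGill and McMaster counts come from the actual lists.

```python
# Illustrative only: a simplified, ARWU-style calculation of the HiCi component.
# Assumes the published rule (top scorer on an indicator = 100, others proportional)
# with no further statistical adjustment, and a purely hypothetical world-leading
# count of 80 HiCi researchers.

HICI_WEIGHT = 0.20  # HiCi carries 20% of the overall ARWU score

def hici_contribution(hici_count, top_count):
    """Weighted contribution of the HiCi indicator to the overall score."""
    return HICI_WEIGHT * 100 * hici_count / top_count

TOP_COUNT = 80  # hypothetical count for the world-leading institution

for school, before, after in [("McGill", 5, 5), ("McMaster", 10, 15)]:
    change = hici_contribution(after, TOP_COUNT) - hici_contribution(before, TOP_COUNT)
    print(f"{school}: {hici_contribution(before, TOP_COUNT):.2f} -> "
          f"{hici_contribution(after, TOP_COUNT):.2f} weighted points ({change:+.2f})")
```

Around 66th and 67th place, total scores are tightly bunched, so a shift of a point or so on a single component is plausibly enough to swap two institutions' positions.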

So let's not read anything more into this ranking: it's not about funding or reputation; it's about a cluster of extraordinary research excellence which, in this instance, is giving a halo effect to an entire university.  C'est tout.

September 28

International Rankings Round-Up

So, the international rankings season is now more or less at an end.  What should everyone take away from it?  Well, here's how Canadian universities did in the three main rankings (the Shanghai Academic Ranking of World Universities, the QS Rankings, and the Times Higher Rankings).

[Table: Canadian universities' results in the three main international rankings, this year vs. last]

Basically, you can paint any picture you want out of that.  Two rankings say UBC is better than last year and one says it is worse.  At McGill and Toronto, it's 2-1 the other way.  Universities in the top 200?  One says we dropped from 8 to 7, another says we grew from 8 to 9, and a third says we stayed stable at 6.  All three agree we have fewer universities in the top 500, but they disagree as to which ones are out (ARWU figures it's Carleton, QS says it's UQ and Guelph, and for the Times Higher it's Concordia).

Do any of these changes mean anything?  No.  Not a damn thing.  Most year-to-year changes in these rankings are statistical noise; this year, with all three rankings making small methodological changes to their bibliometric measures, the year-to-year comparisons are especially fraught.

I know rankings sometimes get accused of tinkering with methodology in order to get new results and hence generate new headlines, but in all cases this year's changes made the rankings better: either more difficult to game, more reflective of the breadth of academia, or better at handling outlier publications and genuine challenges in bibliometrics.  Yes, the THE rankings threw up some pretty big year-to-year changes and the odd goofy result (do read my colleague Richard Holmes' comments on the subject here), but I think on the whole the enterprise is moving in the right direction.

The basic picture is the same across all of them.  Canada has three serious world-class universities (Toronto, UBC, McGill), and another handful which are pretty good (McMaster, Alberta, Montreal, and then possibly Waterloo and Calgary).  Sixteen institutions make everyone's top 500 (the U-15 plus Victoria and Simon Fraser, but minus Manitoba, which doesn't quite make the grade on QS), and then there are a few more on the bubble, making it into some rankings' top 500 but not others (York, Concordia, Quebec, Guelph, Manitoba).  In other words, pretty much exactly what you'd expect in a global ranking.  It's also almost exactly what we here at HESA Towers found when doing our domestic research rankings four years ago.  So: no surprises, no blown calls.

Which is as it should be: universities are gargantuan, slow-moving, predictable organizations.  Relative levels of research output and prestige change very slowly; the most obvious sign of a bad university ranking is rapid changes in position from year to year.  Paradoxically, of course, this makes better rankings less newsworthy.

More globally, most of the rankings show rises for Chinese universities, which is not surprising given the extent to which their research budgets have expanded in the past decade.  The Times threw up two big surprises: first, by declaring Oxford the top university in the world when no other ranker, international or domestic, has it in first place in the UK; and second, by excluding Trinity College Dublin from the rankings altogether because it had submitted some dodgy data.

The next big date on the rankings calendar is the Times Higher Education's attempt to break into the US market.  It's partnering with the Wall Street Journal to create an alternative to the US News and World Report rankings.  The secret sauce of these rankings appears to be a national student survey, which has never been used in the US before.  However, getting a statistically significant sample (say, the 210-student-per-institution minimum we used to use in the annual Globe and Mail Canadian University Report) at every institution currently covered by USNWR would imply an astronomically large sample – likely north of a million students.  I can pretty much guarantee THE does not have this kind of sample.  So I doubt that we're going to see students reviewing their own institution; rather, I suspect the survey is simply going to ask students which institutions they think are "the best", which amounts to an enormous pooling of ignorance.  But I'll be back with a more detailed review once this one is released.
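For what it's worth, here is the back-of-envelope arithmetic behind that guess; the institution count and response rate are my own assumptions for illustration, not anything published by THE, the Wall Street Journal, or USNWR.

```python
# Back-of-envelope arithmetic for the survey-size claim above.  The institution
# count and response rate are assumptions for illustration only.

institutions = 1500        # hypothetical number of US institutions a ranking might cover
completes_needed = 210     # per-institution minimum cited above
response_rate = 0.25       # hypothetical: one in four invited students responds

completed_responses = institutions * completes_needed
students_to_contact = completed_responses / response_rate

print(f"Completed responses required: {completed_responses:,}")                   # 315,000
print(f"Students to invite at a 25% response rate: {students_to_contact:,.0f}")   # 1,260,000
```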

November 05

World-Class Universities in the Great Recession: Who’s Winning the Funding Game?

Governments always face a choice between access and excellence: does it make more sense to focus resources on a few institutions in order to make them more “world-class”, or does it make sense to build capacity more widely and increase access?  During hard times, these choices become more acute.  In the US, for instance, the 1970s were a time when persistent federal budget deficits as a result of the Vietnam War, combined with a period of slow growth, caused higher education budgets to contract.  Institutions often had to choose between their access function and their research function, and the latter did not always win.

My question today (excerpted from the paper I gave in Shanghai on Monday) is: how are major OECD countries handling that same question in the post-2008 landscape?

Below, I have assembled data on real institutional expenditures per student in higher education in ten countries: Canada, the US, the UK, Australia, Sweden, Switzerland, France, Germany, the Netherlands, and Japan.  I use expenditures rather than income because the latter tends to be less consistent, and is prone to sudden swings.  Insofar as is possible, and in order to reduce the potential impact of different reporting methods and definitions of classes of expenditure, I use the most encompassing definition of expenditures given the available data.  The availability of data across countries is uneven; I'll spare you the details, but it's reasonably good in the US, the UK, Canada, Australia, and Sweden, decent in Switzerland, below par in Japan, the Netherlands, and Germany, and godawful in France.  In the six countries with the best data, I can differentiate with reasonable confidence between "top" universities (as per yesterday, I'm defining "top" as being among the top 100 of the Academic Ranking of World Universities, or ARWU-100 for short) and the rest; in the other four, I have only partial data, which nevertheless leads me to believe that the results for "top" universities are not substantially different from what happened to all institutions.
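For readers who want the mechanics, here is a minimal sketch of the calculation, with invented numbers standing in for the national data described above.

```python
# A minimal sketch of the calculation behind the cross-country comparison, using
# invented numbers.  Nothing here reproduces the actual national data sources.
import pandas as pd

df = pd.DataFrame({
    "country":  ["Canada", "Canada", "Australia", "Australia"],
    "group":    ["ARWU-100", "All", "ARWU-100", "All"],
    "exp_2008": [100.0, 100.0, 100.0, 100.0],  # nominal expenditure, hypothetical units
    "exp_2015": [125.0, 118.0, 110.0, 112.0],  # nominal expenditure, later year
    "cpi_2008": [1.00, 1.00, 1.00, 1.00],      # price deflator, base year
    "cpi_2015": [1.10, 1.10, 1.14, 1.14],      # price deflator, later year
    "fte_2008": [1.00, 1.00, 1.00, 1.00],      # FTE enrolment, indexed
    "fte_2015": [1.05, 1.12, 1.15, 1.10],      # FTE enrolment, indexed
})

# Real, per-student expenditure in each year, then the change since the base year.
real_ps_2008 = df["exp_2008"] / df["cpi_2008"] / df["fte_2008"]
real_ps_2015 = df["exp_2015"] / df["cpi_2015"] / df["fte_2015"]
df["pct_change_since_2008"] = (real_ps_2015 / real_ps_2008 - 1) * 100

print(df[["country", "group", "pct_change_since_2008"]].round(1))
```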

Figure 1 basically summarizes the findings:

Figure 1: Changes in Real Per-Student Funding Since 2008 for ARWU-100 and All Universities, Selected OECD Countries

Here’s what you can take from that figure:

1)  Since 2008, total per-student expenditures have risen in only three countries: the UK, Sweden, and Japan.  In the UK, the increase comes from the massive new tuition fees introduced in 2012.  In Sweden, a lot of the per-student growth comes from the fact that enrolments are decreasing rapidly (more on that in a future blog).  In Germany, per-student expenditure is down since 2008, but way up since 2007.  The reason?  The federal-länder “higher education pact” raised institutional incomes enormously in 2008, but growth in student numbers (a desired outcome of the pact) meant that this increase was gradually whittled away.

2)  "Top" institutions do better than the rest of the university sector in the US, Canada, and Switzerland (though for different reasons in each), and worse in Sweden and Australia.  Some of this has to do with differences in income patterns, but an awful lot has to do with changes in enrolment patterns, which are going in different directions in different countries.

3)  Australian universities are getting hammered.  Seriously.  Since 2008, their top four universities have seen their per-student expenditures fall by 15% in real terms.  A small portion of that seems to be the result of some odd accounting that elevated expenditures in 2008 and hence inflated the base year, exaggerating the apparent decline; but even without that, it's a big drop.  You can see why they want higher fees.

4)  Big swings in funding don't make much short-term difference in rankings – at least at the top.  Since 2008, top-100 universities in the US have increased their per-student expenditure by 10%, while Australian universities have seen theirs fall by 15% – a swing of 25 percentage points.  And yet there has been almost no relative movement between the two in any major ranking.  When we think about great universities, we need to think more about stocks of assets like professors and laboratories, and less about flows of funds.

So there’s no single story around the world, but there are some interesting national policy choices out there.

If anyone’s interested in the paper, I will probably post it sometime next week after I fix up a couple of graphs: if you can’t wait, just email me (ausher@higheredstrategy.com), and I’ll send you a draft.

November 04

How Canadian Universities Got Both Big and Rich

Earlier this week, I gave a speech in Shanghai on whether countries are choosing to focus higher education spending on top institutions as a response to the scarcity of funds since the start of the global financial crisis.  I thought some of you might be interested in this, so over the next two days I’ll be sharing some of the data from that presentation.  The story I want to tell today is about how exceptional the Canadian story has been among the top countries in higher education.

(A brief aside before I get started on this: there is nothing like a quick attempt to find financial information on universities in other countries to put our own gripes – Ok, my gripes – about institutional transparency into some perspective.  Seriously, you could fill the Louvre with what French universities don’t publish about their own activities.)

For the purpose of this exercise, I compare what is happening to universities generally in a country with what is happening at its "top" universities.  To keep things simple, I define as a "top" university any university that makes the top 100 of the Shanghai Academic Ranking of World Universities (ARWU).  In Canada, that means UBC, Toronto, McGill, and McMaster (yes, it's an arbitrary criterion, but it happens to work internationally).  I use expenditures rather than income because fluctuations in endowment income make income numbers too noisy.  Figure 1 shows the evolution of funding at Canadian universities in real (i.e. inflation-adjusted) dollars.

Figure 1: Real Change in Expenditures, Canadian Universities 2000-01 to 2012-13, Indexed to 2000-01 (Source: Statistics Canada/CAUBO Financial Information of Universities and Colleges Survey)

So this is actually a big deal.  On aggregate, Canadian universities saw their expenditures grow by nearly 70% in real dollars between 2000 and 2010.  For “top” universities, the figure was a little over 80%  (the gap, for the most part, is explained by more research dollars).  Very few countries in the developed world saw this kind of growth.  It’s really quite extraordinary.
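For anyone curious about the mechanics, here is a minimal sketch of how an indexed series like Figure 1 gets built, again with invented numbers rather than the actual CAUBO/Statistics Canada figures.

```python
# A minimal sketch of how an indexed series like Figure 1 is built: deflate
# nominal spending to real dollars, then express each year relative to 2000-01.
# All numbers below are invented; the real data come from the CAUBO/Statistics
# Canada survey cited in the caption.

years = ["2000-01", "2004-05", "2008-09", "2012-13"]
nominal_spending = [20.0, 27.0, 36.0, 42.0]  # $ billions, hypothetical
cpi = [1.00, 1.08, 1.17, 1.25]               # hypothetical deflator, 2000-01 = 1.00

real = [spend / deflator for spend, deflator in zip(nominal_spending, cpi)]
indexed = [100 * value / real[0] for value in real]  # 2000-01 = 100

for year, value in zip(years, indexed):
    print(f"{year}: {value:.0f}")
```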

But a lot of that money went not to “improvement”, per se, but rather to expanding access.  Here are the same figures, adjusted for growth in student numbers.

Figure 2: Real Change in Per-Student Expenditures, Canadian Universities 2000-01 to 2012-13, Indexed to 2000-01

Once you account for the big increase in student numbers, the picture looks a little bit different.  At the "top" universities, real per-student expenditure is up 20% since 2000, but roughly flat since the start of the financial crisis; universities as a whole are up about 8% since 2000, but down by nearly 10% since the start of the financial crisis.

This tells us a couple of things.  First, Canadians have put a ton of money, both collectively and as individuals, into higher education over the past 15 years.  Anyone who says we under-invest in higher education deserves hours of ridicule.  But second, it’s also indicative of just how much Canadian universities – including the big prestigious ones – have grown over the past decade.  Figure 3 provides a quick look at changes in total enrolment at those top universities.

Figure 3: Changes in Enrolments at Highly-Ranked Canadian Universities, 2000-01 to 2012-13, Indexed to 2000-01

In China, the top 40 or so universities were told not to grow during the country's massive expansion of access, because the government thought growth would compromise quality.  US private universities have mostly kept enrolment growth quite minimal.  But chez nous, McGill's increase – the most modest of the bunch – is 30%.  Toronto's increase is 65%, and McMaster's is a mind-boggling 80%.

Michael Crow, the iconoclastic President of Arizona State University, often says that where American research universities get it wrong is in not growing more, and offering more spaces to more students – especially disadvantaged students.  Well, Canadian universities, even our research universities, have been doing exactly that.  What we’ve bought with our money is not just access, and not just excellence, but accessible excellence.

That’s pretty impressive. We might consider tooting our own horn a bit for things like that.

November 15

Ten Years of Global University Rankings

Last week, I had the honour of chairing a session at the Conference on World-Class Universities in Shanghai.  The conference was held on the 10th anniversary of the release of the first global ranking (the Shanghai rankings appeared in 2003, with the Times Higher Ed rankings, then run by QS, following shortly afterwards), and so it was a time for reflection: what have we learned over the past decade?

The usual well-worn criticisms were aired: international rankings privilege the measurable (research) over the meaningful (teaching), they exalt the 1% over the 99%, they are a function of money rather than quality, they distort national priorities… you've heard the litany.  And these criticisms are no less true just because they're old.  But there's another side to the story.

In North America, the reaction to the global rankings phenomenon was muted – that’s because, fundamentally, these rankings measure how closely institutions come to aping Harvard and Stanford.  We all had a reasonably good idea of our pecking order.  What shocked Asian and European universities, and higher education ministries, to the core was to discover just how far behind America they were.  The first reactions, predictably, were anger and denial.  But once everyone had worked through these stages, the policy reaction was astonishingly strong.

It’s hard to find many governments in Europe or Asia that didn’t adopt policy initiatives in response to rankings.  Sure, some – like the empty exhortations to get X institutions into the top 20/100/500/whatever – were shallow and jejune.  Others – like institutional mergers in France and Scandinavia, or Kazakhstan setting up its own rankings to spur its institutions to greater heights – might have been of questionable value.

However, as a Dutch colleague of mine pointed out, rankings have pushed higher education to the front of the policy agenda in a way that nothing else – not even the vaunted Bologna Process – has done.  Country after country – Russia, Germany, Japan, Korea, Malaysia, and France, to name but a few – has poured money into excellence initiatives as a result of rankings.  We can quibble about whether the money could have been better spent, of course, but realistically, if that money hadn't been spent on research, it would have gone to health or defence – not higher education.

But just as important, perhaps, is the fact that higher education quality is now a global discussion.  Prior to rankings, it was possible for universities to claim any kind of nonsense about their relative global pre-eminence (“no, really, Uzbekistan National U is just like Harvard”).  Now, it’s harder to hide.  Everybody has had to focus more on outputs.  Not always the right ones, obviously, but outputs nonetheless.  And that’s worth celebrating.  The sector as a whole, and on the whole, is better for it.

November 04

Concentration vs. Distribution

I'm spending part of this week in Shanghai at the biennial World-Class Universities conference, which is put on by the good folks who run the Shanghai Jiao Tong Rankings.  I'll be telling you more about this conference later, but today I wanted to pick up on a story from the last set of Shanghai rankings in August.  You'd be forgiven for missing it – Shanghai doesn't make the news the way the Times Higher Education rankings do, because its methodology doesn't allow for much change at the top.

The story had to do with Saudi Arabia.  As recently as 2008, it had no universities in the top 500; now it has four, largely because its universities have been strategically hiring highly-cited scientists (on a part-time basis, one assumes, but I don't know that for sure).  King Saud University, which only entered the rankings in 2009, has now cracked the top 200, making it by far the fastest riser in the history of any set of rankings.  But since this doesn't line up with the "East Asian tigers overtaking Europe/America" line that everyone seems eager to hear, no one reported it.

You see, we're addicted to this idea that if you have great universities, then great economic development will follow.  There were some surprised comments on Twitter about the lack of a German presence in the rankings.  But why?  Whoever said that having a few strong top universities is the key to success?

Strong universities benefit their local economies – that's been clear for decades.  And if you tilt the playing field more towards those institutions – as David Naylor argued in a very good talk last spring – there's no question that it will pay some returns in terms of discovery and innovation.  But the issue is one of opportunity costs: would such a concentration of resources create more innovation and spill-over benefits than other possible distributions of funds?  Those who make the argument for concentration (see, for instance, HEQCO's recent paper on differentiation) seem to take this as given, but I'm not convinced their case is right.

Put it this way: if some government had a spare billion lying around, and the politics of regional envy weren't an issue, and it wanted to spend that money on higher education, which investment would have the bigger impact: putting it all into a single "world-class" university?  Spreading it across maybe a half-dozen "good" universities?  Or spreading it across all institutions?  Concentrating the money might do a lot of good for the country (not to mention the institution at which it was concentrated) – but maybe dispersing it would do more.  As convincing as Naylor's speech was, this issue of opportunity costs wasn't addressed.

Or, to go back to Shanghai terminology: if it were up to you to choose, do you think Canada would be better served with one institution in the top ten worldwide (currently – none), or seven in the top 100 (currently – four), or thirty-five in the top 500 (currently – twenty-three)?  And what arguments would you make to back up your decision?  I'm curious to hear your views.

November 29

Rankings Indigestion

The easiest knock on rankings like those produced by Shanghai Jiao Tong University is that they only measure research, and that universities are about much more than just research.  That's absolutely true, of course, but to my mind it also reflects a general unwillingness to come to grips with what an odd hybrid of an organization higher education really is.

Go back two hundred years and universities were nearly irrelevant as institutions.  The decline of the church had robbed the academy of much of its traditional purpose.  Napoleon thought universities so useless that he closed them all and created a set of grandes écoles instead.  Similarly, in Germany, universities at the start of the nineteenth century were seen as contributing so little to national priorities that they were completely remodelled along research lines by Wilhelm von Humboldt.

The idea of a research mission is so ingrained in our understanding of a university that it’s hard to imagine them without it – but historically, it’s a fairly recent development. In the early 1800s, nearly all scientific research was done outside universities. The spread of the German model in the nineteenth century changed that a bit, but in many ways it was only the two World Wars of the twentieth century and the persuasive arguments of Vannevar Bush that really convinced governments to (a) spend on scientific research and (b) over time, concentrate that spending in universities. Nowadays, there’s very little discovery-oriented research that doesn’t occur in universities.

In other words, over the course of the last two centuries, as part of a long-term quest to become more relevant, the university (writ large) ate science.

That has consequences.  Teaching isn't really much of a prestige activity, and it has an almost exclusively local impact and role; science, by contrast, wants to be global.  To use a neurological metaphor, individual scientists or labs are like neurons, always seeking to send out dendrites to find and link up with other related neurons, with information passing between them to create positive feedback loops.  One of the things that research rankings (and the bibliometric studies on which they are based) do, at a very high level at least, is provide some indication to scientists as to where to send out their dendrites.  In that sense, they are an essential tool in the globalization of science.

In sum: rankings are useful to science, but rankings irritate universities. Given that universities gorged themselves on science and reaped major benefits as a result, it’s not unreasonable to think of rankings as a form of indigestion after a very fine meal.

November 01

More Shanghai Needed

I'm in Shanghai this week, a guest of the Center for World-Class Universities at Shanghai Jiao Tong University for their biennial conference.  It's probably the best spot on the international conference circuit to watch how governments and institutions are adapting to a world in which their performance is being measured, compared, and ranked on a global scale.

In discussions like these, the subject of rankings is never far away – all the more so at this meeting, because its convenor, Professor Nian Cai Liu, is also the originator of the Academic Ranking of World Universities, also known as the Shanghai Rankings.  This is one of the three main competing world rankings in higher education, the others being the Times Higher Education Supplement (THES) rankings and the QS World Rankings.

The THES and QS rankings are both commercially-driven exercises. QS actually used to do rankings for THES, but the two parted ways a couple of years ago when QS’s commercialism was seen to have gotten a little out of hand. After the split, THES got a little ostentatious about wanting to come up with a “new way” of doing rankings, but in reality, the two aren’t that different: they both rely to a considerable degree on institutions submitting unverified data and on surveys of “expert” opinion. Shanghai, on the other hand, eschews surveys and unverified data, and instead relies entirely on third-party data (mostly bibliometrics).

In terms of reliability, there's really no comparison.  If you look at the correlations between the indicators used in each of the rankings, those in the THES and QS are very weak (meaning that the final results are highly sensitive to the weightings), while those in the Shanghai rankings are very strong (meaning the results are more robust).  What that means is that, while the Shanghai rankings are an excellent rule-of-thumb indicator of concentrations of scientific talent around the world, the QS and THES rankings in many respects are simply measuring reputation.
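If you want to see why that matters, here is a toy simulation with entirely synthetic data (a hypothetical set of 200 institutions and two indicators; it is not a re-analysis of any actual ranking).

```python
# A toy illustration of why weakly correlated indicators make a composite ranking
# sensitive to its weights.  Entirely synthetic data.
import numpy as np

rng = np.random.default_rng(0)
N_INSTITUTIONS = 200  # hypothetical

def average_rank_shift(correlation):
    """Average change in rank when the weights on two indicators are swapped."""
    cov = [[1.0, correlation], [correlation, 1.0]]
    indicators = rng.multivariate_normal([0.0, 0.0], cov, size=N_INSTITUTIONS)
    score_a = 0.7 * indicators[:, 0] + 0.3 * indicators[:, 1]  # weighting scheme A
    score_b = 0.3 * indicators[:, 0] + 0.7 * indicators[:, 1]  # weighting scheme B
    rank_a = np.argsort(np.argsort(-score_a))  # 0 = top-ranked institution
    rank_b = np.argsort(np.argsort(-score_b))
    return float(np.mean(np.abs(rank_a - rank_b)))

print("Weakly correlated indicators (r = 0.2): avg rank shift =", round(average_rank_shift(0.2), 1))
print("Strongly correlated indicators (r = 0.9): avg rank shift =", round(average_rank_shift(0.9), 1))
```

When the two indicators move together, it barely matters how you weight them; when they do not, the weighting scheme largely determines the final order.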

(I could be a bit harsher here, but since QS are known to threaten academic commentators with lawsuits, I’ll be circumspect.)

Oddly, QS and THES get a lot more attention in the Canadian press than do the Shanghai rankings.  I'm not sure whether this is because of a lingering anglophilia or because we do slightly better in those rankings (McGill, improbably, ranks in the THES's top 20).  Either way, it's a shame, because the Shanghai rankings are a much better gauge of comparative research output, and with their more catholic inclusion policy (500 institutions ranked, compared to the THES's 200), they allow more institutions to compare themselves to the best in the world – at least as far as research is concerned.