Higher Education Strategy Associates

Category Archives: Rankings

Including Times Higher Ed, QS, Shanghai, U Multirank, etc.

September 01

McMaster > McGill?

The Shanghai Rankings (technically, the Academic Ranking of World Universities) came out a couple of weeks ago.  This is the granddaddy of all international rankings: the one that started it all, and the one still perceived as the most stable and reliable measure of scientific hubs; essentially, it measures large concentrations of scientific talent.  And there were some very interesting results for Canada, the most intriguing of which is that McGill has fallen out of Canada’s “top 3”, replaced by McMaster.

So, first, the big picture: Toronto was up four places to 23rd in the world (and 10th among publics, if you consider Oxford, Cambridge and Cornell to be public), while UBC rose three places to 31st.  McMaster and McGill rounded out the Canadian institutions in the top 100 (more on them in a second).  Below that, the University of Alberta stayed steady in the 101-150 bracket, while Université de Montréal was joined by Calgary and Ottawa in the 151-200 bracket, bringing the national total in the top 200 to eight.  Overall, the country stayed steady at 19 institutions in the top 500, though Université du Québec dropped out and was replaced by Concordia; that puts the country behind the US, the UK, China, Germany, Australia and France but ahead of everyone else (including, surprisingly, Japan, which has been doing terribly in various rankings of late).

But the big story – in Canada, anyway – is that McMaster rose 17 places to 66th overall while McGill dropped four places to 67th.  This is the first time in any ranking (so far as I can recall) that McGill has not been considered one of the country’s top three institutions, and so it raises some significant questions.  Is it a matter of McGill’s reputation going down?  An echo of l’Affaire Potter?  A consequence of long-term funding decline?  What, exactly?

The answer is that it’s none of those things.  Alone among the major rankings, Shanghai does not survey academics or anyone else about institutions, so it has nothing to do with image, reputation, prestige or anything of the sort.  Nor, by the way, is funding a credible suspect.  Although we’re always hearing about how McGill is hard done by the Quebec government, the fact of the matter is that McGill has done as well as or better than McMaster in terms of expenditures per student.

Figure 1: Total Expenditure per FTE Student, 2000-01 to 2015-16

Source: Statistics Canada’s Financial Information of Colleges and Universities & Post-Secondary Student Information System, various years

So what happened?  It’s pretty simple, actually.  20% of the Shanghai rank is based on what is called the “HiCi list” – the list of Highly Cited Researchers put out annually by Clarivate (formerly Thomson Reuters), which you can peruse here.  But Clarivate has changed its HiCi methodology in the last couple of years, which has had a knock-on effect on the Shanghai rankings as well.  Basically, the old method rewarded older researchers whose publications had gathered lots of citations over time; the new methodology only counts citations from the past ten years, and therefore privileges newer, “hotter” research papers and their authors (there’s a longer explanation here if you want all the gory details).

Anyway, the effect of this appears to be significant: McGill had five highly cited researchers in both 2015 and 2016, while McMaster went from ten to fifteen – all in the Faculty of Health Sciences, if you can believe it – putting it top in Canada.  Those extra five researchers were enough, in a ranking which is highly sensitive to the presence of really top scholars, to move McMaster above McGill.
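
To see the mechanics, here is a toy sketch of how a fixed 20% weight on the HiCi count turns a handful of extra researchers into overall points.  It assumes indicator scores scale linearly against a hypothetical leader with 40 HiCi researchers; ARWU’s actual scaling and leader values differ, so treat this purely as an illustration of sensitivity, not a reconstruction of the real scores.

```python
# Toy illustration only: NOT ARWU's actual formula or data.
# Assumes each indicator is scored linearly against a hypothetical
# leader, and that HiCi carries its published 20% weight.

HICI_WEIGHT = 0.20
LEADER_HICI = 40  # hypothetical top institution's HiCi count

def hici_indicator(count: int, leader: int = LEADER_HICI) -> float:
    """Indicator score out of 100, scaled linearly to the leader."""
    return 100 * count / leader

# Figures from the post: McGill stayed at 5 HiCi researchers,
# McMaster went from 10 to 15.
for name, before, after in [("McGill", 5, 5), ("McMaster", 10, 15)]:
    gain = HICI_WEIGHT * (hici_indicator(after) - hici_indicator(before))
    print(f"{name}: change in weighted HiCi contribution = {gain:+.1f} points")
```

A couple of points either way is trivial in absolute terms, but more than enough to reorder two institutions sitting within a place or two of each other.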

So let’s not read anything more into this ranking: it’s not about funding or reputation; it’s about a cluster of extraordinary research excellence which, in this instance, is giving a halo effect to an entire university.  C’est tout.

June 09

Why we should – and shouldn’t – pay attention to World Rankings

The father of modern university rankings is James McKeen Cattell, a well-known early 20th-century psychologist, scientific editor (he ran the journals Science and Psychological Review) and eugenicist.  In 1903, he began publishing American Men of Science, a semi-regular listing of the country’s top scientists, as rated by university department chairs.  He then hit on the idea of counting how many of these scientists were graduates of the nation’s various universities.  Being a baseball enthusiast, he found it completely natural to arrange these results top to bottom, as in a league table.  Rankings have never looked back.

Because of the league table format, reporting on rankings tends to mirror what we see in sports.  Who’s up?  Who’s down?  Can we diagnose the problem from the statistics?  Is it a problem attracting international faculty?  Lower citation rates?  A lack of depth in left-handed relief pitching?  And so on.

The 2018 QS World University Rankings, released last night, are another occasion for this kind of analysis.  The master narrative for Canada – if you want to call it that – is that “Canada is slipping”.  The evidence for this is that the University of British Columbia fell out of the top 50 institutions in the world (down six places to 51st) and that we now have two fewer institutions in the top 200 than we used to (Calgary fell from 196th to 217th and Western from 198th to 210th).

People pushing various agendas will find solace in this.  At UBC, blame will no doubt be placed on the institution’s omnishambular year of 2015-16.  Nationally, people will try to link the results to problems of federal funding and argue how implementing the recommendations of the Naylor report would be a game-changer for rankings.

This is wrong for a couple of reasons.  The first is that it is by no means clear that Canadian institutions are in fact slipping.  Sure, we have two fewer in the 200, but the number in the top 500 grew by one.  Of those who made the top 500, nine rose in the rankings, nine slipped and one stayed constant.  Even the one high-profile “failure” – UBC –  only saw its overall score fall by one-tenth of a point; the fall in the rankings was more due to an improvement in a clutch of Asian and Australian universities.

The second is that in the short-term, rankings are remarkably impervious to policy changes.  For instance, according to the QS reputational survey, UBC’s reputation has taken exactly zero damage from l’affaire Gupta and its aftermath.  Which is as it should be: a few months of communications hell doesn’t offset 100 years of scientific excellence.  And new money for research may help less than people think. In Canada, institutional citations tend to track the number of grants received more than the dollar value of the grants.  How granting councils distribute money is at least as important as the amount they spend.

And that’s exactly right.  Universities are among the oldest institutions in society and they don’t suddenly become noticeably better or worse over the course of twelve months.  Observations over the span of a decade or so are more useful, but changes in ranking methodology make this difficult (McGill and Toronto are both down quite a few places since 2011, but a lot of that has to do with changes which reduced the impact of medical research relative to other fields of study).

So it matters that Canada has three universities which are genuinely top class, and another clutch (between four and ten, depending on your definition), which could be called “world-class”.  It’s useful to know that, and to note if any institutions have sustained, year-after-year changes either up or down.  But this has yet to happen to any Canadian university.

What’s not as useful is to cover rankings like sports, and invest too much meaning in year-to-year movements.  Most of the yearly changes are margin-of-error kind of stuff, changes that result from a couple of dozen papers being published in one year rather than another, or the difference between admitting 120 extra international students instead of 140.   There is not much Moneyball-style analysis to be done when so many institutional outputs are – in the final analysis – pretty much the same.

February 20

Canada’s Rankings Run-up

Canada did quite well out of a couple of university rankings which have come out in the last month or so: the Times Higher Education’s “Most International Universities” ranking, and the QS “Best Student Cities” ranking.  But there’s actually less to this success than meets the eye.  Let me explain.

Let’s start with the THE’s “Most International” ranking.  I have written about this before, saying it does not pass the “fall-down-laughing” test, which is really the only method of testing a ranking’s external validity.  In previous years, the ranking was entirely about which institutions had the most international students, faculty and research collaborations.  Since these kinds of criteria inevitably favour institutions in small countries with big neighbours and disfavour those in big countries with few neighbours, it was no surprise that places like the University of Luxembourg and Qatar University topped the list, and that the United States struggled to put an institution in the top 100.  In other words, the chosen indicators generated a really superficial standard of “internationalism” that lacked credibility (Times readers were pretty scathing about the “Qatar #1” result).

Now, as a result of this, the Times changed its methodology.  Drastically.  They didn’t make a big deal of doing so (presumably not wishing to draw more attention to the rankings’ earlier superficiality), but basically, i) they added a fourth set of indicators (worth 25% of the total) for international reputation, based on THE’s annual survey of academics, and ii) they excluded any institution which didn’t receive at least 100 votes in said academic survey (check out Angel Calderon’s critique of the new rules here for more details, if that sort of thing interests you).  That last one is a big one: in practice it means the universe for this ranking is only about 200 institutions.

On the whole, I think the result is a better ranking, and one that conforms more closely to what your average academic on the street thinks of as an “international” university.  Not surprisingly, places like Qatar and Luxembourg suddenly vanished from the rankings.  Indeed, as a result of those changes, fully three-quarters of the institutions that were ranked in 2016 disappeared from the rankings in 2017.  Not surprisingly, Canadian universities suddenly shot up as a result.  UBC jumped from 40th to 12th, McGill went from 76th to 23rd, Alberta from 110th to 31st, Toronto from 128th to 32nd, and so on.

Cue much horn-tooting on social media from those respective universities for these huge jumps in “internationality”.  But guys, chill.  It’s a methodology change.  You didn’t do that: the THE’s methodologists did.

Now, over to the second set of rankings, the QS “Best Student Cities”, the methodology for which is here.  The ranking comprises 22 indicators spread over six areas: university quality (i.e. how highly ranked, according to QS, are the institutions in that city); “student mix”, which is a composite of total student numbers, international student numbers and some kind of national tolerance index; “desirability”, which is a mix of data about pollution, safety, livability (some index made up by the Economist), corruption (again, a piece of national-level data) and students’ own ratings of the city (QS surveys students on various things); “employer activity”, which is mostly based on an international survey of employers about institutional quality; “affordability”; and “student view” (again, from QS’s own proprietary data).

Again, Montreal coming #1 is partly the result of a methodology change.  This is the first year QS added student views to the mix, and Montreal does quite well on that front; eliminate those scores and Montreal comes third.  And while the inclusion of student views in any ranking is to be applauded, you have to wonder about the sample size.  QS says they get 18,000 responses globally.  Canada represents about 1% of the world’s students, and Montreal institutions represent 10-15% of Canadian students, so if the responses were evenly distributed, there might be 20 responses from Montreal in the sample (there are probably more than that because responses won’t be evenly distributed, but my point is that we’re talking small numbers).  So I have my doubts about the stability of that score.  Ditto on the employer ratings, where Montreal somehow comes top among Canadian cities, which I am sure is news to most Canadians.  After all, where Montreal really wins big is on things like “livability” and “affordability”, which is another way of saying the city’s not in especially great shape economically.
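
For what it’s worth, the back-of-envelope arithmetic behind that estimate looks like this (a minimal sketch using the rough shares quoted above; the even-distribution assumption is deliberately crude and is the whole point of the exercise):

```python
# Back-of-envelope check of the Montreal sample-size estimate above.
# The shares are the rough figures quoted in the post, not QS data.

global_responses = 18_000          # QS's reported global response count
canada_share = 0.01                # Canada ~1% of the world's students
montreal_share_of_canada = 0.125   # Montreal ~10-15% of Canadian students (midpoint)

montreal_responses = global_responses * canada_share * montreal_share_of_canada
print(f"Implied Montreal responses if evenly distributed: ~{montreal_responses:.0f}")  # ~23
```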

So, yeah, some good rankings headlines for Canada: but let’s understand that nearly all of it stems from methodology changes.  And what methodologists give, they can take away.

November 11

The New WSJ/Times Higher Education Rankings

Almost the moment I hit send on my last post about rankings, the inaugural Wall Street Journal/Times Higher Education rankings of US universities hit the stands.  It didn’t make a huge splash mainly because the WSJ inexplicably decided to put the results behind their paywall (which is, you know, BANANAS) but it’s worth looking at because I think in many ways it points the way to the future of rankings in many countries.

So the main idea behind these rankings is to try to do something different from the US News & World Report (USNWR) rankings, which are a lot like Maclean’s rankings (hardly a surprise, since the latter was explicitly modelled on the former back in 1991).  In part, the WSJ/THE went down the same road as Money Magazine in terms of looking at output data – graduate outcomes like earnings and indebtedness – except that they were able to exploit the huge new database of institutional-level data on these things that the Obama administration made available.  In addition to that, they went a little bit further and created their own student survey to get evidence about student satisfaction and engagement.

Now, this last thing may seem like old hat in Canada: after all, the Globe and Mail ran a ranking based on student surveys from 2003 to 2012 (we at HESA were involved from 2006 onwards and ran the survey directly for the last couple of years).  It’s also old hat in Europe, where a high proportion of rankings depend at least in part on student surveys.  But in the US, it’s an absolute novelty.  Surveys usually require institutional co-operation, and organizing this among more than a thousand institutions simply isn’t easy: “top” institutions would refuse to participate, just as they won’t do the CLA, NSSE, AHELO or any measurement system which doesn’t privilege money.

So what the Times Higher team did was effectively what the Globe did in Canada thirteen years ago: find students online, independent of their institutions, and survey them there.  The downside is that the minimum number of responses per institution is quite low (50, compared with the 210 we used to use at the Globe); the very big upside is that students’ voices are being heard and we get some data about engagement.  The result was more or less what you’d expect from the Canadian data: smaller colleges and religious institutions tend to do extremely well on engagement measures (the top three for Engagement were Dordt College, Brigham Young and Texas Christian).

So, I give the THE/WSJ high marks for effort here.  Sure, there are problems with the data.  The “n” is low and the resulting numbers have big error margins.  The income figures are only for those who have student loans, and include both those who graduated and those who did not.  But it’s still a genuine attempt to shift rankings away from inputs and towards processes and outputs.

The problem?  It’s still the same institutions coming in at the top.  Stanford, MIT, Columbia, Penn, Yale… heck, you don’t even hit a public institution (Michigan) until 24th position.  Even when you add all this process and outcome stuff, it’s still the rich schools that dominate.  And the reason for this is pretty simple: rich universities can stay relatively small (giving them an advantage on engagement) and take their pick of students, who then tend to have better outcomes.  Just because you’re not weighting resources at 100% of the ranking doesn’t mean you’re not weighting items strongly correlated to resources at 100%.

Is there a way around this?  Yes, two, but neither is particularly easy.  The first is to use some seriously contrarian indicators.  The annual Washington Monthly rankings do this, measuring things like the percentage of students receiving Pell Grants, student participation in community service, etc.  The other way is to use indicators similar to those used by THE/WSJ, but to normalize them based on inputs like income and incoming SATs (see the sketch below).  The latter is relatively easy to do in the sense that the data (mostly) already exists in the public domain, but frankly there’s no market for it.  Sure, wonks might like to know which institutions perform best on some kind of value-added measure, but parents are profoundly uninterested in this.  Given a choice between sending their kids to a school that efficiently gets students from the 25th percentile up to the 75th percentile and sending them to a school with top students and lots of resources, finances permitting they’re going to take the latter every time.  In other words, this is a problem, but it’s a problem much bigger than these particular rankings.
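
As a rough sketch of what that second approach would look like in practice: regress an outcome measure on the input measures, and rank institutions on the residual.  The code below uses entirely invented data and a plain least-squares fit, just to show the mechanics; it is not anyone’s actual methodology.

```python
# Illustrative only: invented data, plain OLS. The idea is to rank
# institutions on how much better their outcomes are than their
# intake (incoming SAT, family income) would predict.

import numpy as np

rng = np.random.default_rng(42)
n = 200  # hypothetical institutions

sat = rng.normal(1150, 120, n)           # average incoming SAT
family_income = rng.normal(80, 25, n)    # average family income ($000s)
earnings = 20 + 0.03 * sat + 0.10 * family_income + rng.normal(0, 4, n)

# Fit graduate earnings on the input measures...
X = np.column_stack([np.ones(n), sat, family_income])
beta, *_ = np.linalg.lstsq(X, earnings, rcond=None)

# ...and treat the residual as a crude "value-added" score.
value_added = earnings - X @ beta
top_value_added = np.argsort(-value_added)[:5]
print("Institutions that most out-perform their intake (by index):", top_value_added)
```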

My biggest quibble with these rankings?  WSJ inexplicably put them behind a paywall, which did much to kill the buzz.  After a lag of three weeks, THE made them public too, but it was too little, too late.  A missed opportunity.  But still, they point the way to the future, because a growing number of national-level rankings are starting to pay attention to outcomes (American rankings, remarkably, are not pioneers here: in fact, the Bulgarian National Rankings got there several years ago, and with much better data).  Unfortunately, because these kinds of outcomes data are not available everywhere, and are not entirely comparable even where they are, we aren’t going to see these data sources inform international rankings any time soon.  Which is why, mark my words, literally all the interesting work in rankings over the next couple of years is going to happen in national rankings, not international ones.

November 10

Measuring Innovation

Yesterday, I described how the key sources of institutional prestige were beginning to shift away from pure research and publication towards research and collaboration with industry.  Or, to put it another way, the kudos now come not solely from doing research, but from participating in the process of turning discoveries into meaningful and commercially viable products.  Innovation, in other words (though that term is not unproblematic).  But while we all have a pretty good grasp on the various ways to measure research output, figuring out how to measure an institution’s performance in terms of innovation is a bit trickier.  So today I want to look at a couple of emerging attempts to do just that.

First out of the gate in this area is Reuters, which has already published two editions of a “top 100 innovative universities” list.  The top three won’t surprise anyone (Stanford, MIT, Harvard) but the next three – Texas, Washington and the Korea Advanced Institute of Science and Technology – might:  it’s a sign at least that some non-traditional indicators are being put in the mix. (Obligatory CanCon section: UBC 50th, Toronto 57th and that’s all she wrote.)

So what is Reuters actually measuring?  Mostly, it’s patents: patents filed, success rates of patents filed, the percentage of patents for which coverage was sought in all three of the main patent offices (US, Europe, Japan), patent citations, patent citation impact… you get the idea.  It’s a pretty one-dimensional view of innovation.  The bibliometric bits are slightly more interesting – the percentage of articles co-written with industry partners, citations in articles originating in industry – but that maybe gets you to one and a half dimensions, tops.

Meanwhile, the THE may be inching towards an innovation ranking.  Last year, it released a set of four “innovation indicators”, but only published the top 15 in each indicator (and included some institutions not usually thought of as universities, such as Wright-Patterson Air Force Base, the Scripps Research Institute and the Danish Cancer Society), which suggests this was a pretty quick rip-and-grab from the Scopus database rather than a long, thoughtful, detailed inquiry into the subject.  Two of the four indicators, “resources from industry” and “industry contribution” (i.e. resources from industry as a percentage of total research budget), are based on data from the THE’s annual survey of institutions, and while they may be reasonable indicators of innovation, for reasons I pointed out back here, you should intensely distrust the data.  The other two indicators are both bibliometric: “patent citations” and “industry collaboration” (i.e. co-authorships).  On the whole, THE’s effort is slightly better than Reuters’, but it is still quite narrow.

The problem is that the ways in which universities support innovation in an economic sense are really tough to measure.  One might think that counting spin-offs would be possible, but the definition of a spin-off varies quite a bit from place to place (and it’s tough to know if you’ve caught 100% of said activity).  Co-working space (that is, space where firms and institutions interact) would be another way to measure things, but it’s also very difficult to capture.  Economic activity in university tech parks is another, but not all economic activity in tech parks is necessarily university- or even science-based (this is an issue in China and many developing countries as well).  The number of students engaged in firm-based work-integrated learning (WIL) activities would be great, but a) there is no common international definition of WIL, and b) almost no one measures this anyway.  Income from patent licensing is easily obtainable in some countries but not others.

What you’d really want, frankly, is a summary of unvarnished opinions from the businesses themselves about the quality of their partnerships with universities, perhaps weighted by the size of the businesses involved (an 8 out of 10 at Yale probably means more than a 9 out of 10 at Bowling Green State).  We can get these at a national level through the World Economic Forum’s annual competitiveness survey, but not at an institutional level, which is presumably where it matters most.  And that’s to say nothing of the value of finding ways to measure how institutions support innovation other than through industry collaboration.

Anyway, these problems are not insoluble.  They just take imagination and work.  If I were in charge of metrics in Ontario, say, I could think of many ways – some quantitative, some qualitative – that we might use to evaluate this.  Not many of them would translate easily into international comparisons; for that to happen, a genuine international common data set would have to emerge.  That’s unlikely to happen any time soon, but it’s no reason to throw up our hands.  It would be unimaginably bad if, at the outset of an era in which institutions are judged on their ability to be economic collaborators, we allowed patent counts to become the standard way of measuring success.  It’s vitally important that thoughtful people in higher education put some thought into this topic.

September 28

International Rankings Round-Up

So, the international rankings season is now more or less at an end.  What should everyone take away from it?  Well, here’s how Canadian universities did in the three main rankings (the Shanghai Academic Ranking of World Universities, the QS Rankings and the Times Higher Rankings).

[Table: Canadian universities’ year-over-year results in the Shanghai (ARWU), QS and Times Higher rankings]

Basically, you can paint any picture you want out of that.  Two rankings say UBC is better than last year and one says it is worse.  At McGill and Toronto, it’s 2-1 the other way.  Universities in the top 200?  One says we dropped from 8 to 7, another says we grew from 8 to 9, and a third says we stayed stable at 6.  All three agree we have fewer universities in the top 500, but they disagree as to which ones are out (ARWU figures it’s Carleton, QS says it’s UQ and Guelph, and for the Times Higher it’s Concordia).

Do any of these changes mean anything?  No.  Not a damn thing.  Most year-to-year changes in these rankings are statistical noise, but this year, with all three rankings making small methodological changes to their bibliometric measures, the year-to-year comparisons are especially fraught.

I know rankings sometimes get accused of tinkering with methodology in order to get new results and hence generate new headlines, but in all cases this year’s changes made the rankings better: more difficult to game, more reflective of the breadth of academia, or better at handling outlier publications and genuine challenges in bibliometrics.  Yes, the THE rankings threw up some pretty big year-to-year changes and the odd goofy result (do read my colleague Richard Holmes’ comments on the subject here), but I think on the whole the enterprise is moving in the right direction.

The basic picture is the same across all of them.  Canada has three serious world-class universities (Toronto, UBC, McGill), and another handful which are pretty good (McMaster, Alberta, Montreal, and then possibly Waterloo and Calgary).  Sixteen institutions make everyone’s top 500 (the U-15 plus Victoria and Simon Fraser, but minus Manitoba, which doesn’t quite make the grade on QS), and then there are a few more on the bubble, making it into some rankings’ top 500 but not others (York, Concordia, Quebec, Guelph, Manitoba).  In other words, pretty much exactly what you’d expect in a global ranking.  It’s also almost exactly what we here at HESA Towers found when doing our domestic research rankings four years ago.  So: no surprises, no blown calls.

Which is as it should be: universities are gargantuan, slow-moving, predictable organizations.  Relative levels of research output and prestige change very slowly; the most obvious sign of a bad university ranking is rapid changes of position from year to year.  Paradoxically, of course, this makes better rankings less newsworthy.

More globally, most of the rankings are showing rises for Chinese universities, which is not surprising given the extent to which their research budgets have expanded in the past decade.  The Times threw up two big surprises: first, by declaring Oxford the top university in the world when no other ranker, international or domestic, has it in first place in the UK; and second, by excluding Trinity College Dublin from the rankings altogether because it had submitted some dodgy data.

The next big date on the rankings calendar is the Times Higher Education’s attempt to break into the US market.  It’s partnering with the Wall Street Journal to create an alternative to the US News and World Report rankings.  The secret sauce of these rankings appears to be a national student survey, which has never been used in the US before.  However, in order to get a statistically significant sample (say, the 210-student-per-institution minimum we used to use in the annual Globe and Mail Canadian University Report) at every institution currently covered by USNWR would imply an astronomically large sample size – likely north of a million students.  I can pretty much guarantee THE does not have this kind of sample.  So I doubt that we’re going to see students reviewing their own institutions; rather, I suspect the survey is simply going to ask students which institutions they think are “the best”, which amounts to an enormous pooling of ignorance.  But I’ll be back with a more detailed review once this one is released.

May 30

The 2016 U21 Rankings

Universitas 21 is one of the higher-prestige university alliances out there (McGill, Melbourne and the National University of Singapore are among its members).  Now, like a lot of university alliances, it doesn’t actually do much: the presidents or their alternates meet every year or so, it runs some moderately useful inter-institution mobility schemes, that kind of thing.  But the one thing it does which gets a lot of press is that it issues a ranking every year.  Not of universities, of course (membership organizations which try to rank their own members tend not to last long), but rather of higher education systems.  The latest one is available here.

I have written about the U21 rankings before, but I think it’s worth another look this year because there have been some methodological changes and also because Canada has fallen quite a ways in the rankings.  So let’s delve into this a bit.

The U21 rankings are built around four broad concepts: Resources (which makes up 20% of the final score), Environment (20%), Connectivity (20%) and Output (40%), each of which is measured through a handful of variables (25 in all).  The simplest category is Resources, because all the data is available through OECD documentation.  Denmark comes top of this list – this is before any of the cuts I talked about back here kick in, so we can expect it to fall in coming years.  Then, in a tight bunch, come Singapore, the US, Canada and Sweden.

Next comes “Environment”, which is a weird hodge-podge of indicators around regulatory issues, institutional financial autonomy, the percentages of students and academic staff who are female, a survey of businesses’ views of higher education quality and – my favourite – how good each country’s education data is.  Now, I’m all for giving Canada negative points for Statscan’s uselessness, but there’s something deeply wrong with any indicator of university quality which ranks Canada (34th) and Denmark (31st) behind Indonesia (29th) and Thailand (21st).  Since most of these scores come from survey responses, I think it would be instructive to publish those responses, because the results flat-out do not pass the fall-down-laughing test.

The Connectivity element is pretty heavily weighted to things like the percentage of foreign students and staff and the percentage of articles co-authored with foreign scholars.  For structural and geographical reasons, European countries (especially the titchy ones) tend to do very well on this measure, and so they take all of the top nine spots.  New Zealand comes tenth, Canada eleventh.  The Output measure combines research outputs and measures of access, plus an interesting new one on employability.  However, because not all of these measures are normalized for system size, the US always runs away with this category (though, due to some methodological tweaks, less so than it used to).  Canada comes seventh on this measure.

Over the last three years, Canada has dropped from third to ninth overall.  The table below shows why this is the case.

Canada’s U21 Ranking Scores by Category, 2012-2016


In 2015, when Canada dropped from 3rd to 6th, it was because we lost points on “environment” and “connectivity”.  It’s not entirely clear to me why we lost points on the latter, but it is notable that on the former there was a methodological change to include the dodgy survey data I mentioned earlier, so this drop may simply reflect a methodological change.  This year, we lost points on resources, which frankly isn’t surprising given controls on tuition and real declines in government funding in Canada.  But it’s important to note that, the way this is scored, what matters is not whether resources (or resources per student) are going up or down; it’s whether they are going up or down relative to the category leader – i.e. Denmark.  So even with no change in our funding levels, we could expect our scores to rise over the next few years as Denmark’s cuts kick in.
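
The mechanics of that last point are worth spelling out.  The sketch below uses invented numbers and assumes each country’s score is simply its value expressed as a percentage of the category leader’s – a simplified stand-in for U21’s actual scoring – which is enough to show how a flat Canadian figure can still produce a rising score once Denmark’s number falls.

```python
# Simplified illustration of scoring relative to the category leader.
# Numbers are invented; the point is the mechanism, not the data.

def relative_score(value: float, leader_value: float) -> float:
    """Country score as a percentage of the category leader's value."""
    return 100 * value / leader_value

canada_spend = 30.0                          # hypothetical per-student resources, unchanged
denmark_before, denmark_after = 40.0, 36.0   # leader's spending before/after its cuts

print(relative_score(canada_spend, denmark_before))  # 75.0
print(relative_score(canada_spend, denmark_after))   # 83.3 -- rises with no change in Canada
```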

May 20

The Times Higher Education “Industry Income” Rankings are Bunk

A few weeks ago, the Times Higher Education published a ranking of “top attractors of industry funds”.  It’s actually just a re-packaging of data from its major fall rankings exercise: “industry dollars per professor” is one of its thirteen indicators, and this is just that indicator published as a standalone ranking.  What’s fascinating is how at odds the results are with published data available from the institutions themselves.

Take Ludwig-Maximilians University in Munich, the top university for industry income according to THE.  According to the ranking, the university collects a stonking $392,800 in industry income per academic.  But a quick look at the university’s own facts and figures page reveals a different story.  The institution says it receives €148.4 million in “outside funding”.  But over 80% of that is from the EU, the German government, or a German government agency.  Only €26.7 million comes from “other sources”.  This is at a university which has 1,492 professors.  I make that out to be 17,895 euros per prof.  Unless the THE gets a much different $/€ rate than I do, that’s a long way from $392,800 per professor.  In fact, the only way the THE number makes sense is if you count the entire university budget as “external funding” (1,492 profs times $392,800 equals roughly $600 million, which is pretty close to the €579 million figure which the university claims as its entire budget).

Or take Duke, second on the THE list.  According to the rankings, the university collects $287,100 in industry income per faculty member.  Duke’s Facts and Figures page says Duke has 3,428 academic staff.  Multiply that out and you get a shade over $984 million.  But Duke’s financial statements indicate that the total amount of “grants, contracting and similar agreements” from non-government sources is just under $540 million, which would come to $157,000 per prof, or only 54% of what the Times says it is.

The third-placed school, the Korea Advanced Institute of Science and Technology (KAIST), is difficult to examine because it seems not to publish financial statements or have a “facts & figures” page in English.  However, assuming Wikipedia’s estimate of 1,140 academic staff is correct, and if we generously interpret the graph on the university’s research statistics page as telling us that 50 of the 279 billion won in total research expenditures comes from industry, then at current exchange rates that comes to a shade over $42 million, or $37,000 per academic – one-seventh of what the THE says it is.

I can’t examine the fourth-placed institution, because Johns Hopkins’ financial statements don’t break out its grant funding by public and private sources.  But tied for fifth place is my absolute favourite, Anadolu University in Turkey, which allegedly has $242,500 in industry income per professor.  This is difficult to check because Turkish universities appear not to publish their financial documents.  But I can tell you right now that this is simply not true.  On its facts and figures page, the university claims to have 2,537 academic staff (if you think that’s a lot, keep in mind that Anadolu’s claim to fame is as a distance-ed university: it has 2.7 million registered students in addition to the 30,000 or so on its physical campus, roughly half of whom are “active”).  For both numbers to be true, Anadolu would have to be pulling in $615 million a year in private funding, and that simply strains credulity.  Certainly, Anadolu does do quite a bit of business – a University World News article from 2008 suggests that it was pulling in $176 million per year in private income (impressive, but less than a third of what is implied by the THE numbers) – but much of that seems to come from what we would typically call “ancillary enterprises” (that is, businesses owned by the university) rather than external investment from the private sector.

I could go through the rest of the top ten, but you get the picture.  If only a couple of hours of googling on my part can throw up questions like this, then you have to wonder how bad the rest of the data is.  In fact, the only university in the top ten where the THE number might be close to legit is Wageningen University in the Netherlands.  This university lists €101.7 million in “contract research”, and has 587 professors.  That comes out to a shade over €173,000 (or about $195,000 per professor), which is at least within spitting distance of the $242,000 claimed by THE.  The problem is, it’s not clear from any Wageningen documentation I’ve been able to find how much of that contract research is actually private-sector.  So it may be close to accurate, or it may be completely off.
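
The arithmetic behind these checks is simple enough to script: divide each institution’s publicly reported figure by its reported academic staff count and compare the result with the THE number quoted above.  The euro-to-dollar conversion below is an assumed round rate (not whatever rate THE used), so treat the ratios as order-of-magnitude checks only.

```python
# Back-of-envelope reproduction of the checks above: reported income
# divided by reported staff count, compared with THE's per-professor figure.
# The exchange rate is an assumed round number, not the rate THE used.

EUR_TO_USD = 1.10

checks = [
    # (institution, reported relevant income in USD, academic staff, THE per-prof figure)
    ("LMU Munich", 26.7e6 * EUR_TO_USD, 1492, 392_800),   # "other sources" funding
    ("Duke",       540e6,               3428, 287_100),   # non-government grants/contracts
    ("Wageningen", 101.7e6 * EUR_TO_USD, 587, 242_000),   # contract research
]

for name, income, staff, the_figure in checks:
    per_prof = income / staff
    print(f"{name:11s} implied ${per_prof:9,.0f}/prof "
          f"vs THE's ${the_figure:,} ({per_prof / the_figure:.0%})")
```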

The problem here is one common to many rankings systems.  It’s not that the Times Higher is making up data, and it’s not that institutions are (necessarily) telling fibs.  It’s that if you hand out a questionnaire to a couple of thousand institutions which, for reasons of local administrative practice, define and measure data in many different ways, and ask for data on indicators which do not have a single obvious response (think “number of professors”: do you include clinicians?  Part-time profs?  Emeritus professors?), you’re likely to get data which isn’t really comparable.  And if you don’t take the time to verify and check these things (which the THE doesn’t; it just gets the university to sign a piece of paper “verifying that all data submitted are true”), you’re going to end up printing nonsense.

Because THE publishes this data as a ratio of two indicators (industry income and academic staff) but does not publish the indicators themselves, it’s impossible for anyone to work out where the mistakes might be. Are universities overstating certain types of income, or understating the number of professors?  We don’t know.  There might be innocent explanations for these things – differences of interpretation that could be corrected over time.  Maybe LMU misunderstood what was meant by “outside revenue”.  Maybe Duke excluded medical faculty when calculating its number of academics.  Maybe Anadolu excluded its distance ed teachers and included ancillary income.  Who knows? 

The problem is that the Times Higher knows that these are potential problems but does nothing to rectify them.  It could be more transparent and publish the source data so that errors could be caught and corrected more easily, but it won’t do that because it wants to sell the data back to institutions.  It could spend more time verifying this data, but it has chosen to hide instead behind sworn statements from universities. To do more would be to reduce profitability. 

The only way this is ever going to be solved is if institutions themselves start making their THE submissions public, and create a fully open database of institutional characteristics.  That’s unlikely to happen because institutions appear to be at least as fearful of full transparency as the THE.  As a result, we’re likely to be stuck with fantasy numbers in rankings for quite some time yet.

November 05

World-Class Universities in the Great Recession: Who’s Winning the Funding Game?

Governments always face a choice between access and excellence: does it make more sense to focus resources on a few institutions in order to make them more “world-class”, or does it make sense to build capacity more widely and increase access?  During hard times, these choices become more acute.  In the US, for instance, the 1970s were a time when persistent federal budget deficits as a result of the Vietnam War, combined with a period of slow growth, caused higher education budgets to contract.  Institutions often had to choose between their access function and their research function, and the latter did not always win.

My question today (excerpted from the paper I gave in Shanghai on Monday) is: how are major OECD countries handling that same question in the post-2008 landscape?

Below, I have assembled data on real institutional expenditures per student in higher education in ten countries: Canada, the US, the UK, Australia, Sweden, Switzerland, France, Germany, the Netherlands, and Japan.  I use expenditures rather than income because the latter tends to be less consistent, and is prone to sudden swings.  Insofar as is possible, and in order to reduce the potential impact of different reporting methods and definitions of classes of expenditure, I use the most encompassing definition of expenditures given the available data.  The availability of data across countries is uneven; I’ll spare you the details, but it’s reasonably good in the US, the UK, Canada, Australia, and Sweden, decent in Switzerland, below-par in Japan, the Netherlands, and Germany, and godawful in France.  In the six countries with the best data, I can differentiate with reasonable confidence between “top” universities and the rest (as per yesterday, I’m defining “top” as being among the top 100 of the Academic Ranking of World Universities, or ARWU-100 for short); in the other four, I have only partial data, which nevertheless leads me to believe that the results for “top” universities are not substantially different from what happened to all institutions.

Figure 1 basically summarizes the findings:

Figure 1: Changes in Real Per-Student Funding Since 2008 for ARWU-100 and All Universities, Selected OECD Countries


Here’s what you can take from that figure:

1)  Since 2008, total per-student expenditures have risen in only three countries: the UK, Sweden, and Japan.  In the UK, the increase comes from the massive new tuition fees introduced in 2012.  In Sweden, a lot of the per-student growth comes from the fact that enrolments are decreasing rapidly (more on that in a future blog).  In Germany, per-student expenditure is down since 2008, but way up since 2007.  The reason?  The federal-länder “higher education pact” raised institutional incomes enormously in 2008, but growth in student numbers (a desired outcome of the pact) meant that this increase was gradually whittled away.

2)  “Top” institutions do better than the rest of the university sector in the US, Canada, and Switzerland (but for different reasons), and worse in Sweden and Australia.  Some of this has to do with differences in income patterns, but an awful lot has to do with changes in enrolment patterns, which are going in different directions in different countries.

3)  Australian universities are getting hammered.  Seriously.  Since 2008, their top four universities have seen their per-student income fall by 15% in real terms.  A small portion of that seems to be an issue of some odd accounting that elevated expenditures in 2008, and hence exaggerates expenses in the base year; but even without that, it’s a big drop.  You can see why they want higher fees.

4)  Big swings in funding don’t make much short-term difference in rankings – at least at the top.  Since 2008, top-100 universities in the US have increased their per-student expenditure by 10%, while Australian unis have fallen by 15%.  That’s a 25% swing in total.  And yet there has been almost no relative movement between the two in any major rankings.  When we think about great universities, we need to think more about stocks of assets like professors and laboratories, and less about flows of funds.

So there’s no single story around the world, but there are some interesting national policy choices out there.

If anyone’s interested in the paper, I will probably post it sometime next week after I fix up a couple of graphs: if you can’t wait, just email me (ausher@higheredstrategy.com), and I’ll send you a draft.

September 25

University Rankings and the Eugenics Movement

Over the course of writing a book chapter, I’ve come up with a delightful little nugget: some of the earliest rankings of universities originated in the Eugenics movement.

The story starts with Francis Galton.  A cousin of Charles Darwin, Galton was the inventor of the weather map, standard deviation, the regression line (and the explanation of regression towards the mean), fingerprinting, and composite photography.  In other words, pretty much your textbook definition of a genius.

At some point (some believe it was after reading On the Origin of Species), Galton came to believe that genius is born, not made.  And so in 1869, he wrote a book called Hereditary Genius in which, using biographical dictionaries called “Men of Our Time” (published by Routledge, no less), he traced back “eminent men” to see if they had eminent fathers or grandfathers.  Eventually, he concluded that they did.  This led him into a lifelong study of heredity.  In 1874, Galton published English Men of Science, in which he explored all sorts of heritable and non-heritable traits and experiences in order to better understand the basis of scientific genius; one of the questions he asked was whether each man had gone to university (not actually a given at the time), and if so, where he had gone.

Galton soon had imitators, who began looking more seriously at education as part of the “genius” phenomenon.  In 1904, Havelock Ellis – like Galton, an eminent psychologist (his field was sexuality, and he was one of the first scientists to write on homosexuality and transgender psychology) – published A Study of British Genius.  This work examined all of the entries in all of the (then) sixty-six volumes of the Dictionary of National Biography, eliminated those who were there solely by virtue of birth (i.e. the royals and most of the nobility/aristocracy), and then classified the rest by a number of characteristics.  One of the characteristics was university education, and unsurprisingly he found that most had gone to either Cambridge or Oxford (with a smattering from Edinburgh and Trinity).  Though it was not claimed as a ranking, it did list institutions in rank order; or rather two rank orders, as it had separate listings for British and foreign universities.

Not-so-coincidentally, it was also around this time when the first annual edition of American Men of Science appeared.  This series attempted to put the study of great men on a more scientific footing.  The author, James McKeen Cattell (a distinguished scientist who was President of the American Psychological Association in 1895, and edited both Science and Psychological Review), did a series of annual peer surveys to see who were the most respected scientists in the nation.  In the first edition, the section on psychologists contained a tabulation of the number of top people in the field, organized by the educational institution from which they graduated; at the time, it also contained an explicit warning that this was not a measure of quality.  However, by 1906 Cattell was producing tables showing changes in the number of graduates from each university in his top 1,000, and by 1910 he was producing tables that explicitly ranked institutions according to their graduates (with the value of each graduate weighted according to one’s place in the rankings).  Cattell’s work is, in many people’s view, the first actual ranking of American universities.

What’s the connection with eugenics?  Well, Galton’s obsession with heredity directly led him to the idea that “races” could be improved upon by selective breeding (and, conversely, that they could become “degenerate” if one wasn’t careful).  Indeed, it was Galton himself who coined the term “eugenics”, and was a major proponent of the idea.  For his part, Ellis would ultimately end up as President of the Galton Institute in London, which promoted eugenics (John Maynard Keynes would later sit on the Institute’s Board); in America, Cattell wound up as President of the American Eugenics Society. 

In effect, none of them remotely believed that one’s university made the slightest difference to eventual outcomes.  In their minds, it was all about heredity.  However, one could still infer something about universities from the fact that “Men of Genius” (and I’m sorry to keep saying “men” in this piece, but it’s pre-WWI, and they really were almost all men) chose to go there.  At the same time, these rankings were the precursors to the various reputational rankings that were in vogue in the US from the 1920s right through to the early 1980s.  And it’s worth noting that the idea of ranking institutions according to their alumni has made a comeback in recent years through the Academic Ranking of World Universities (also known as the Shanghai rankings), which scores institutions, in part, on the number of Nobel Prizes and Fields Medals won by their alumni.

Anyway, just a curio I thought you’d all enjoy.
