Higher Education Strategy Associates

Category Archives: rankings

May 30

The 2016 U21 Rankings

Universitas 21 is one of the higher-prestige university alliances out there (McGill, Melbourne and the National University of Singapore are among its members).  Now, like a lot of university alliances, it doesn't actually do much.  The Presidents or their alternates meet every year or so, they have some moderately useful inter-institution mobility schemes, that kind of thing.  But the one thing it does that gets a lot of press is issue a ranking every year.  Not of universities, of course (membership organizations that try to rank their own members tend not to last long), but rather of higher education systems.  The latest one is available here.

I have written about the U21 rankings before, but I think it's worth another look this year because there have been some methodological changes, and also because Canada has fallen quite a ways in the rankings.  So let's delve into this a bit.

The U21 rankings are built around four broad concepts: Resources (which makes up 20% of the final score), Environment (20%), Connectivity (20%) and Output (40%), each of which is measured through a handful of variables (25 in all).  The simplest category is Resources, because all the data is available through OECD documentation.  Denmark comes top of this list – this is before any of the cuts I talked about back here kick in, so we can expect it to fall in coming years.  Then, in a tight bunch, come Singapore, the US, Canada and Sweden.

Next comes "Environment", which is a weird hodge-podge of indicators around regulatory issues, institutional financial autonomy, percentages of students and academic staff who are female, a survey of businesses' views of higher education quality and – my favourite – how good each country's education data is.  Now, I'm all for giving Canada negative points for Statscan's uselessness, but there's something deeply wrong with any indicator of higher education quality which ranks Canada (34th) and Denmark (31st) behind Indonesia (29th) and Thailand (21st).  Since most of these scores come from survey responses, I think it would be instructive to publish those responses, because they flat-out do not meet the fall-down-laughing test.

The Connectivity element is pretty heavily weighted towards things like the percentage of foreign students and staff, and the percentage of articles co-authored with foreign scholars.  For structural and geographical reasons, European countries (especially the titchy ones) tend to do very well on this measure, and so they take all of the top nine spots.  New Zealand comes tenth, Canada eleventh.  The Output measure combines research outputs and measures of access, plus an interesting new one on employability.  However, because not all of these measures are normalized for system size, the US always runs away with this category (though, due to some methodological tweaks, less so than it used to).  Canada comes seventh on this measure.

Over the last three years, Canada has dropped from third to ninth in the overall rankings.  The table below shows why this is the case.

Canada’s U21 Ranking Scores by Category, 2012-2016


In 2015, when Canada dropped from 3rd to 6th, it was because we lost points on "environment" and "connectivity".  It's not entirely clear to me why we lost points on the latter, but it is notable that the former saw a methodological change to include the dodgy survey data I mentioned earlier, so that drop may simply be an artefact of the change.  This year, we lost points on resources, which frankly isn't surprising given controls on tuition and real declines in government funding in Canada.  But it's important to note that, the way this is scored, what matters is not whether resources (or resources per student) are going up or down, but whether they are going up or down relative to the category leader – i.e. Denmark.  So even with no change in our funding levels, we could expect our scores to rise over the next few years as Denmark's cuts take effect.
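For readers who like to see the mechanics, here is a minimal sketch (in Python, with invented numbers purely for illustration – U21 does not publish its calculations in this form) of how leader-relative scoring plus the 20/20/20/40 category weights behave.  The key point is that a country's score can rise simply because the category leader falls.

```python
# Illustrative sketch of U21-style scoring. All raw values are invented;
# this is not U21's actual data or exact formula.

WEIGHTS = {"resources": 0.2, "environment": 0.2, "connectivity": 0.2, "output": 0.4}

def leader_relative(raw):
    """Express each country's raw value as a percentage of the category leader."""
    leader = max(raw.values())
    return {country: 100 * value / leader for country, value in raw.items()}

def composite(category_scores):
    """Weighted sum of a country's four category scores (each out of 100)."""
    return sum(WEIGHTS[cat] * score for cat, score in category_scores.items())

# Invented "resources" values, indexed so the leader (Denmark) = 100:
before = {"Denmark": 100, "Canada": 80}
after = {"Denmark": 90, "Canada": 80}   # leader cuts funding; Canada unchanged

print(leader_relative(before)["Canada"])   # 80.0
print(leader_relative(after)["Canada"])    # ~88.9 -- a "gain" with no new money

# The overall score is just the weighted combination of the category scores:
canada = {"resources": 80.0, "environment": 60.0, "connectivity": 85.0, "output": 75.0}
print(composite(canada))                   # 75.0
```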

May 20

The Times Higher Education “Industry Income” Rankings are Bunk

A few weeks ago, the Times Higher Education published a ranking of "top attractors of industry funds".  It's actually just a re-packaging of data from its major fall rankings exercise: "industry dollars per professor" is one of its thirteen indicators, and this is simply that indicator published as a standalone ranking.  What's fascinating is how at odds the results are with data published by the institutions themselves.

Take Ludwig-Maximilians University in Munich, the top university for industry income according to THE.  According to the ranking, the university collects a stonking $392,800 in industry income per academic.  But a quick look at the university's own facts and figures page reveals a different story.  The institution says it receives €148.4 million in "outside funding".  But over 80% of that is from the EU, the German government, or a German government agency.  Only €26.7 million comes from "other sources".  This is at a university which has 1,492 professors.  I make that out to be 17,895 euros per prof.  Unless THE gets a much different $/€ rate than I do, that's a long way from $392,800 per professor.  In fact, the only way the THE number makes sense is if you count the entire university budget as "external funding" (1,492 profs times $392,800 equals roughly $586 million, which is pretty close to the €579 million figure the university claims as its entire budget).
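The arithmetic in this and the following paragraphs is simple enough to script.  Here is a rough sanity-check sketch using the LMU figures quoted above; the euro-to-dollar rate is my own round-number assumption, since THE doesn't say what it uses.

```python
# Sanity check of THE's "industry income per academic" figure for LMU,
# using the numbers quoted above. The EUR->USD rate is an assumption.

EUR_TO_USD = 1.10  # assumed round figure

def per_prof_usd(income_eur, n_profs, rate=EUR_TO_USD):
    """Industry income per academic, converted to US dollars."""
    return income_eur * rate / n_profs

lmu_other_sources_eur = 26_700_000   # "other sources" line in LMU's own figures
lmu_profs = 1492

implied = per_prof_usd(lmu_other_sources_eur, lmu_profs)
print(f"Implied by LMU's documents: ${implied:,.0f} per professor")   # ~$19,700
print("Claimed by THE:             $392,800 per professor")

# Working backwards from THE's number instead:
total_implied_by_the = 392_800 * lmu_profs
print(f"Total implied by THE: ${total_implied_by_the / 1e6:,.0f} million")  # ~$586M, close to LMU's whole budget
```

The same back-of-the-envelope check, in the relevant currency, is what produces the Duke, KAIST, Anadolu, and Wageningen figures below.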

Or take Duke, second on the THE list.  According to the rankings, the university collects $287,100 in industry income per faculty member.  Duke’s Facts and Figures page says Duke has 3,428 academic staff.  Multiply that out and you get a shade over $984 million.  But Duke’s financial statements indicate that the total amount of “grants, contracting and similar agreements” from non-government sources is just under $540 million, which would come to $157,000 per prof, or only 54% of what the Times says it is.

The 3rd-place school, the Korea Advanced Institute of Science and Technology (KAIST), is difficult to examine because it seems not to publish financial statements or a "facts & figures" page in English.  However, assuming Wikipedia's estimate of 1,140 academic staff is correct, and if we generously interpret the graph on the university's research statistics page as telling us that 50 billion of the 279 billion won in total research expenditures comes from industry, then at current exchange rates that comes to a shade over $42 million, or $37,000 per academic – roughly one-seventh of what THE says it is.

I can't examine the fourth-placed institution, because Johns Hopkins' financial statements don't break out its grant funding by public and private sources.  But tied for fifth place is my absolute favourite, Anadolu University in Turkey, which allegedly has $242,500 in industry income per professor.  This is difficult to check because Turkish universities appear not to publish their financial documents.  But I can tell you right now that it is simply not true.  On its facts and figures page, the university claims to have 2,537 academic staff (if you think that's a lot, keep in mind that Anadolu's claim to fame is as a distance-ed university: it has 2.7 million registered students, roughly half of whom are "active", in addition to the 30,000 or so on its physical campus).  For both numbers to be true, Anadolu would have to be pulling in $615 million per year in private funding, and that simply strains credulity.  Certainly, Anadolu does do quite a bit of business – a University World News article from 2008 suggests it was pulling in $176 million per year in private income (impressive, but less than a third of what the THE numbers imply) – but much of that seems to come from what we would typically call "ancillary enterprises" – that is, businesses owned by the university – rather than external investment from the private sector.

I could go through the rest of the top ten, but you get the picture.  If only a couple of hours of googling on my part can throw up questions like this, you have to wonder how bad the rest of the data is.  In fact, the only university in the top ten where the THE number might be close to legit is Wageningen University in the Netherlands.  This university lists €101.7 million in "contract research", and has 587 professors.  That comes out to a shade over €173,000 (or about $195,000) per professor, which is at least within spitting distance of the $242,000 claimed by THE.  The problem is, it's not clear from any Wageningen documentation I've been able to find how much of that contract research is actually private-sector.  So it may be close to accurate, or it may be completely off.

The problem here is one common to many ranking systems.  It's not that the Times Higher is making up data, and it's not that institutions are (necessarily) telling fibs.  It's that if you hand out a questionnaire to a couple of thousand institutions which, for reasons of local administrative practice, define and measure data in many different ways, and ask for data on indicators which do not have a single obvious answer (think "number of professors": do you include clinicians?  Part-time profs?  Emeritus professors?), you're likely to get data which isn't really comparable.  And if you don't take the time to verify and check these things (which THE doesn't: it just gets each university to sign a piece of paper "verifying that all data submitted are true"), you're going to end up printing nonsense.

Because THE publishes this data as a ratio of two indicators (industry income and academic staff) but does not publish the indicators themselves, it’s impossible for anyone to work out where the mistakes might be. Are universities overstating certain types of income, or understating the number of professors?  We don’t know.  There might be innocent explanations for these things – differences of interpretation that could be corrected over time.  Maybe LMU misunderstood what was meant by “outside revenue”.  Maybe Duke excluded medical faculty when calculating its number of academics.  Maybe Anadolu excluded its distance ed teachers and included ancillary income.  Who knows? 

The problem is that the Times Higher knows these are potential problems but does nothing to rectify them.  It could be more transparent and publish the source data so that errors could be caught and corrected more easily, but it won't do that because it wants to sell the data back to institutions.  It could spend more time verifying the data, but it has chosen instead to hide behind sworn statements from universities.  To do more would be to reduce profitability.

The only way this is ever going to be solved is if institutions themselves start making their THE submissions public, and create a fully open database of institutional characteristics.  That’s unlikely to happen because institutions appear to be at least as fearful of full transparency as the THE.  As a result, we’re likely to be stuck with fantasy numbers in rankings for quite some time yet.

September 25

University Rankings and the Eugenics Movement

Over the course of writing a book chapter, I've turned up a delightful little nugget: some of the earliest rankings of universities originated in the Eugenics movement.

The story starts with Francis Galton. A first cousin to Charles Darwin, Galton was the inventor of the weather map, standard deviation, the regression line (and the explanation of regression towards the mean), fingerprinting, and composite photography.  In other words, pretty much your textbook definition of a genius.

At some point (some believe it was after reading On the Origin of Species), Galton came to believe that genius is born, not made.  And so in 1869, he wrote a book called Hereditary Genius in which, using biographical dictionaries called "Men of Our Time" (published by Routledge, no less), he traced "eminent men" back to see if they had eminent fathers or grandfathers.  Eventually, he concluded that they did.  This led him into a lifelong study of heredity.  In 1874, Galton published English Men of Science, where he explored all sorts of heritable and non-heritable traits and experiences in order to better understand the basis of scientific genius; one of the questions he asked was whether each subject had gone to university (not actually a given at the time) and, if so, where they had gone.

Galton soon had imitators who began looking more seriously at education as part of the "genius" phenomenon.  In 1904, Havelock Ellis – like Galton, an eminent psychologist (his field was sexuality, and he was one of the first scientists to write on homosexuality and transgender psychology) – published A Study of British Genius.  This work examined all of the entries in all of the (then) sixty-six volumes of the Dictionary of National Biography, eliminated those who were there solely by accident of birth (i.e. the royals and most of the nobility/aristocracy), and then classified the rest by a number of characteristics.  One of those characteristics was university education, and unsurprisingly he found that most had gone to either Cambridge or Oxford (with a smattering from Edinburgh and Trinity).  Though it was not claimed as a ranking, the work did list institutions in rank order; or rather two rank orders, as it had separate listings for British and foreign universities.

Not-so-coincidentally, it was also around this time that the first edition of American Men of Science appeared.  This series attempted to put the study of great men on a more scientific footing.  The author, James McKeen Cattell (a distinguished scientist who was President of the American Psychological Association in 1895, and edited both Science and Psychological Review), ran a series of annual peer surveys to identify the most respected scientists in the nation.  In the first edition, the section on psychologists contained a tabulation of the number of top people in the field, organized by the institution from which they graduated – along with an explicit warning that this was not a measure of quality.  However, by 1906 Cattell was producing tables showing changes in the number of graduates from each university in his top 1,000, and by 1910 he was producing tables that explicitly ranked institutions according to their graduates (with the value of each graduate weighted according to their place in the rankings).  Cattell's work is, in many people's view, the first true ranking of American universities.

What’s the connection with eugenics?  Well, Galton’s obsession with heredity directly led him to the idea that “races” could be improved upon by selective breeding (and, conversely, that they could become “degenerate” if one wasn’t careful).  Indeed, it was Galton himself who coined the term “eugenics”, and was a major proponent of the idea.  For his part, Ellis would ultimately end up as President of the Galton Institute in London, which promoted eugenics (John Maynard Keynes would later sit on the Institute’s Board); in America, Cattell wound up as President of the American Eugenics Society. 

To be clear, none of them believed for a moment that one's university made the slightest difference to eventual outcomes.  In their minds, it was all about heredity.  However, one could still infer something about universities from the fact that "Men of Genius" (and I'm sorry to keep saying "men" in this piece, but it's pre-WWI, and they really were almost all men) chose to go there.  At the same time, these rankings are the precursors of the various reputational rankings that came into vogue in the US from the 1920s right through to the early 1980s.  And it's worth noting that the idea of ranking institutions according to their alumni has made a comeback in recent years through the Academic Ranking of World Universities (also known as the Shanghai rankings), which scores institutions, in part, on the number of Nobel Prizes and Fields Medals won by their alumni.

Anyway, just a curio I thought you’d all enjoy.

September 17

A Global Higher Education Rankings Cheat Sheet

As you likely noticed from the press generated by the release of the QS rankings: it’s now rankings season!  Are you at a university that seems to care about global rankings?  Are you not sure what the heck they all mean, or why institutions rank differently on different metrics?  Here’s a handy cheat-sheet to understand what each of them does, and why some institutions swear by some, but not by others.

Academic Ranking of World Universities (ARWU): Also known as the Shanghai Rankings, this is the granddaddy of world rankings (disclaimer: I sit on the advisory board), having been first out of the gate back in 2003.  It's mostly bibliometric in nature, and places a pretty high premium on publication in a select few journals.  It also, unusually, scores institutions on how many Nobel Prizes or Fields Medals their staff or alumni have won.  It's really best thought of as a way of measuring large deposits of scientific talent.  There's no adjustment for size or field (though it publishes separate ratings for six broad fields of study), which tends to favour institutions that are strong in fields like medicine and physics.  As a result, it's among the most stable rankings there are: only eleven institutions have ever been in ARWU's top ten, and the top spot has always been held by Harvard.

Times Higher Education (THE) Rankings: As a rough guide, think of THE as ARWU with a prestige survey and some statistics on international students and staff tacked on.  The survey is a mix of good and bad.  They seem to take reasonable care in constructing the sample and, for the most part, the questions are worded sensibly.  However, the conceit that "teaching ability" is being measured this way is weird (especially since institutions' "teaching" scores are correlated at .99 with their research scores).  The bibliometrics differ from ARWU's in three important ways, though.  First, they are more about impact (i.e. citations) than publications.  Second, those citations are adjusted for field, which helps institutions that are strong in areas outside medicine and physics, like the social sciences.  Third, they are also adjusted for region, which gives a boost to universities outside Europe and North America.  THE also does a set of field rankings.

QS Rankings: QS used to do rankings for THE until 2009, when the latter ended the partnership, but QS kept trucking on in the rankings game.  It's superficially similar to THE in the sense that it's mostly a mix of survey and bibliometrics.  The survey is weighted more heavily than THE's, and is somewhat less technically sound, for which QS gets regularly lambasted.  The bibliometrics are a mix of publication and citation measures.  Its two distinguishing features are: 1) data from a survey of employers soliciting their views on graduate employability; and 2) it ranks ordinally down to position 500 (other rankings only group institutions in tranches after the first hundred or so).  This latter feature is a big deal if you happen to be obsessed with minute changes in ranking order and regularly feature in the 200-to-500 range.  In New Zealand, for instance, QS gets used exclusively in policy discussions for precisely this reason.

U-Multirank: Unlike all the others, U-Multirank doesn't provide data in a league-table format.  Instead, it takes data provided by institutions and allows users to choose their own indicators to create "personalized rankings".  That's the upside.  The downside is that not enough institutions actually provide data, so its usefulness is somewhat less than optimal.

Webometrics Rankings: As a rule of thumb, the bigger, more complicated, and more filled with rich data a university's website is, the more important the university is likely to be.  Seriously.  And it actually kind of works.  In any case, Webometrics' big utility is that it ranks something like 13,000 universities around the world, so for many institutions in the developing world it's the only chance to see how they compare against other universities.

May 21

AHELO: Universities Behaving Badly

So there's some excitement being generated this month with respect to the OECD's Assessment of Higher Education Learning Outcomes (AHELO).  Roughly speaking, AHELO is the higher education equivalent of the Programme for International Student Assessment (PISA) or the Programme for the International Assessment of Adult Competencies (PIAAC).  It consists of a general test of critical thinking skills (based on the Collegiate Learning Assessment), plus a couple of subject-matter tests that assess competencies in specific disciplines.  AHELO completed its pilot phase a couple of years ago, and the OECD is now looking to turn it into a full-blown regular survey.

Not everyone is keen on this.  In fact, the OECD appears to be moving ahead despite extremely tepid support among OECD education ministers, which is somewhat unusual.  Critics make a number of points against AHELO, which mostly boil down to: a) it's too expensive, and it's taking resources away from other, more important OECD efforts in higher education; b) low-stakes testing generally is BS; and c) trying to measure student outcomes internationally is intrinsically invalid because curricula vary so much from place to place.

The critics have half a point.  It's true that AHELO is expensive and is crowding out other OECD activities.  And it's not entirely clear why the OECD is burning this much political capital on a project with so little ministerial support.  But while there is some credibility to the low-stakes testing issue, it hasn't stopped either PISA or PIAAC from being huge successes, helping to inform policy around the world.  And as for different curricula: that's the point.  Part of what governments want to know is whether or not what is being taught in universities is bringing students up to international standards of competency.

But what's notable about the charge against AHELO is who is leading it.  In the main, it's associations of universities in rich countries, such as the American Council on Education, Universities Canada, and their counterparts in the UK and Europe.  And make no mistake: they are not doing so because they think there are better ways to compare outcomes; quite simply, these folks do not want comparisons to be made.

Now, this wouldn’t be a terrible position to take if it were done because universities dislike comparisons based on untested or iffy social science.  But of course, that’s not the case, is it?  Top universities are more than happy to play ball with rankings organizations like Times Higher Education, where the validity of the social science is substantially more questionable than AHELO’s.

Institutional opposition to AHELO, for the most part, plays out the same way as opposition to U-Multirank: it's a defence of privilege.  Top universities know they will do well on comparisons of prestige and research intensity; they don't know how they will do on comparisons of teaching and learning.  So they oppose it, and don't even bother to suggest ways to improve the comparisons.

Is AHELO perfect?  Of course not.  But it’s better than nothing – certainly, it would be an improvement over the mainly input-based rankings that universities participate in now – and could be improved over time.  The opposition of top universities (and their associations) to AHELO is shameful, hypocritical, and self-serving.  They think they can get away with this obstructionism because the politicking is all done behind the scenes – but they deserve to be held to account.

May 20

Who Wins and Who Loses in the "Top 100 Under 50" Rankings

The annual Times Higher Education "Top 100 Under 50" ranking came out a few weeks ago.  Australians were crowing about their success, and a few people in Canada noticed that Canada didn't do so well – only four spots: Calgary 22nd, Simon Fraser 27th, UQAM 85th, and Concordia 96th.  So, today, we ask the question: why do young Canadian universities not fare well in these rankings?

Well, one way to look at this is to ask: “who does well at these rankings?”  And the answer, as usual, is that the ones that make it to the very, very top are some pretty specialized, ludicrously well-funded places, which have no obvious Canadian equivalent.  For example:

  • ETH Lausanne (top school) has 5,000 undergraduates and 4,500 graduate students, making it Harvard-like in its student balance.  It does this despite, in theory, having an open-access admissions system for domestic students; in practice, weaker students self-select out of Lausanne because the first-year failure rate is so high (roughly 50%, higher in math and physics).  It may be the only university in the world to operate not just a nuclear reactor but a fusion reactor as well.  The institution has base (i.e. operating) funding of slightly over $800 million Canadian, which works out to a ludicrous $80,000 per student.
  • Pohang University of Science and Technology (POSTECH) (2nd place) has an even more ridiculous 1,300 undergraduates and 2,100 graduate students.  Its budget is a slightly smaller $250 million US (still over $60K per student), but it has a $2 billion endowment from its founder, POSCO (a major Korean steel manufacturer), as well as a heavy applied-research tie-up with POSCO.  (A good history of POSTECH can be found here.)  Again, no Canadian university had those kinds of advantages growing up.

You get the picture.  It's a similar deal at Korea's KAIST and Singapore's Nanyang (both in the top five).  UC Irvine and UC Santa Cruz are also in the top ten, and while the California system is currently experiencing some problems, these institutions continue to feed off the massive support the UCs used to receive.  And since the THE rankings heavily privilege money and research intensity, you can see why these kinds of institutions score well ahead of Canadian schools, where implicit rules prevent any single institution from reaching those levels of research intensity.

But look again at that Australian article I linked to above.  No fewer than 16 Australian universities made the top 100, and the reasons cited for their successes – public funding, stable regulation, English language, other cultural factors – all of these factors exist in Canada.  So why does Australia succeed where Canada doesn’t?

The explanation is actually pretty simple: on average, our universities are substantially older than Australia's.  Even among the four Canadian schools on the list, two arguably don't actually meet the "under 50" criterion (Calgary was founded in 1945, though it did not become an independent institution until 1966; Concordia dates from 1974, but the two colleges that merged to form it date back to 1896 and 1926, respectively).  Outside of those four, the only Canadian institutions founded after 1965 with over 10,000 residential students are Lethbridge, Kwantlen, and Fraser Valley (though, depending on how you define "founding date", you could presumably also make a case for Regina, MacEwan, and Mount Royal).  In Australia, by contrast, only one-third of universities had degree-granting status before 1965.

The "under-50" designation effectively screens out most institutions in countries that were early adopters of mass higher education.  The US, for instance, has only seven institutions on the THE list, five of which are in the late-developing West and South, and none of which are in the traditional higher education heartland of the Northeast.  It's an arbitrary cut-off, drawn expressly in such a way as to put Asian universities in a better light.  That's worth keeping in mind when examining the results.

March 24

Banning the Term “Underfunding”

Somehow I missed this when the OECD’s Education at a Glance 2014 came out, but apparently Canada’s post-secondary system is now officially the best funded in the entire world.

I know, I know.  It’s a hard idea to accept when Presidents of every student union, faculty association, university, and college have been blaming “underfunding” for virtually every ill in post-secondary education since before Air Farce jokes started taking the bus to get to the punchline.  But the fact is, we’re tops.  Numero uno.  Take a look:

Figure 1: Percentage of GDP Spent on Higher Education Institutions, Select OECD Countries, 2011


For what I believe is the first time ever, Canada is outstripping both the US (2.7%) and Korea (2.6%).  At 2.8% of GDP, spending on higher education is nearly twice what it is in the European Union.

Ah, you say, that’s probably because so much of our funding comes from private sources.  After all, don’t we always hear that tuition is at, or approaching, 50% of total funding in universities?  Well, no.  That stat only applies to operating expenditures (not total expenditures), and is only valid in Nova Scotia and Ontario.  Here’s what happens if we look only at public spending in all those countries:

Figure 2: Percentage of GDP Spent on Higher Education Institutions from Public Sources, Select OECD Countries, 2011


While it’s true that Canada does have a high proportion of funds coming from private sources, public sector support to higher education still amounts to 1.6% of GDP, which is substantially above the OECD average.  In fact, our public expenditure on higher education is the same as in Norway and Sweden; among all OECD countries, only Finland and Denmark (not included in graph) are higher.

And this doesn't even consider the fact that Statscan and CMEC don't report expenditures like Canada Education Savings Grants and tax credits – together worth another 0.2% of GDP – because the OECD doesn't really have a reporting category for oddball expenditures like that.  The omission doesn't change our total expenditure, but it does affect the public/private balance.  Instead of the split being 1.6% of GDP public and 1.2% private, it's probably more like 1.8% or 1.9% public, which again would put us at the absolute top of the world ranking.

So it’s worth asking: when people say we are “underfunded”, what do they mean?  Underfunded compared to who?  Underfunded for what?  If we have more money than anyone else, and we still feel there isn’t enough to go around, maybe we should be looking a lot more closely at *how* we spend the money rather than at *how much* we spend.

Meantime, I think there should be a public shaming campaign against use of the term “underfunding” in Canada.  It’s embarrassing, once you know the facts.

February 25

Rankings in the Middle East

If you follow rankings at all, you’ll have noticed that there is a fair bit of activity going on in the Middle East these days.  US News & World Report and Quacquarelli Symonds (QS) both published “Best Arab Universities” rankings last year; this week, the Times Higher Education (THE) produced a MENA (Middle East and North Africa) ranking at a glitzy conference in Doha.

The reason for this sudden flurry of Middle East-oriented rankings is pretty clear: Gulf universities have a lot of money they’d like to use on advertising to bolster their global status, and this is one way to do it.  Both THE and QS tried to tap this market by making up “developing world” or “BRICs” rankings, but frankly most Arab universities didn’t do too well on those metrics, so there was a niche market for something more focused.

The problem is that rankings make considerably less sense in MENA than they do elsewhere. In order to come up with useful indicators, you need accurate and comparable data, and there simply isn’t very much of this in the region.  Let’s take some of the obvious candidates for indicators:

Research:  This is an easy metric, and one which doesn't rely on local universities' ability to provide data.  And, no surprise, both US News and the Times Higher Ed have based 100% of their rankings on this measure.  But that's ludicrous for a couple of reasons.  First, most MENA universities have literally no interest in research.  Outside the Gulf (i.e. Oman, Kuwait, Qatar, Bahrain, the UAE, and Saudi Arabia), there's no money available for it.  Within the Gulf, most universities are staffed by expats teaching four or even five classes per term, with no time or mandate for research.  The only places where serious research is happening are one or two of the foreign universities that are part of Education City in Doha, and some of the larger Saudi universities.  The problem with Saudi universities, of course, as we know, is that at least some of the big ones are furiously gaming publication metrics precisely in order to climb the rankings, without actually changing university cultures very much (see, for example, this eyebrow-raising piece).

Expenditures:  This is a classic input variable used in many rankings.  However, an awful lot of Gulf universities are private and won't want to talk about their expenditures for commercial reasons.  Additionally, some are personal creations of local rulers who spend lavishly on them (for example, Sharjah and Khalifa Universities in the UAE); they'd be mortified if the data showed them to be spending less than the Sheikh next door.  Even in public universities, the issue isn't straightforward.  Transparency in government spending isn't universal in the region, either; I suspect that getting financial data out of an Egyptian university would be a pretty unrewarding task.  Finally, for many Gulf universities, cost data will be massively wonky from one year to the next because of the way compensation works.  Expat teaching staff (the majority at most Gulf unis) are paid partly in cash and partly through free housing, the cost of which swings enormously from one year to the next based on changes in the rental market.

Student Quality: In Canada, the US, and Japan, rankings often focus on how smart the students are, based on average entering grades, SAT scores, etc.  But those measures simply don't work in a multi-national ranking, so they're out.

Student Surveys: In Europe and North America, student surveys are one way to gauge quality.  However, if you are under the impression that there is a lot of appetite among Arab elites to allow public institutions to be rated by public opinion, then I have some lakeside property in the Sahara I'd like to sell you.

Graduate Outcomes:  This is a tough one.  Some MENA universities do have graduate surveys, but what do you measure?  Employment?  How do you account for the fact that female labour market participation varies so much from country to country, and that many female graduates are either discouraged or forbidden by their families from working? 

What’s left?  Not much.  You could try class size data, but my guess is most universities outside the Gulf wouldn’t have an easy way of working this out.  Percent of professors with PhDs might be a possibility, as would the size of the institution’s graduate programs.  But after that it gets pretty thin.

To sum up: it’s easy to understand commercial rankers chasing money in the Gulf.  But given the lack of usable metrics, it’s unlikely their efforts will amount to anything useful, even by the relatively low standards of the rankings industry.

October 30

Times Higher Rankings, Weak Methodologies, and the Vastly Overblown “Rise of Asia”

I’m about a month late with this one (apologies), but I did want to mention something about the most recent version of the Times Higher Education (THE) Rankings.  You probably saw it linked to headlines that read, “The Rise of Asia”, or some such thing.

As some of you may know, I am inherently suspicious about year-on-year changes in rankings.  Universities are slow-moving creatures.  Quality is built over decades, not months.  If you see huge shifts from one year to another, it usually means the methodology is flimsy.  So I looked at the data for evidence of this “rise of Asia”.

The evidence clearly isn't there in the top 50.  Tokyo and Hong Kong are unchanged in their positions.  Tsinghua, Beijing, and the National University of Singapore are all within a place or two of where they were last year.  In fact, if you just look at the top 50, you'd think Asia might be going backwards, since one of its big universities (Seoul National) fell out of the top 50, going from 44th to 52nd in a single year.

Well, what about the top 100?  Not much different.  In Korea, KAIST is up a bit, but Pohang is down.  Both the Hong Kong University of Science and Technology and Nanyang were up sharply, which is a bit of a boost; however, only one new "Asian" university came into the top 100, and that was Middle East Technical University in Turkey, which rose spectacularly from the 201-225 band last year to 85th this year.

OK, what about the next 100?  Here it gets interesting.  There are bad-news stories for Asian universities: National Taiwan and Osaka each fell 13 places, Tohoku fell 15, Tokyo Tech 16, the Chinese University of Hong Kong 20, and Yonsei University fell out of the top 200 altogether.  But there is good news too: Bogazici University in Turkey jumped 60 places to 139th, and five new universities – two from China, two from Turkey, and one from Korea – entered the top 200 for the first time.

So here's the problem with the THE narrative.  The better part of the evidence for all this "rise of Asia" stuff rests on events in Turkey (which, like Israel, is often considered European rather than Asian – at least if membership in UEFA and Eurovision is anything to go by).  The only reason THE runs with its "rise of Asia" tagline is that it has a lot of advertisers and a big conference business in East Asia, and it's good business to flatter them, and damn the facts.

But there's another issue here: how the hell did Turkey do so well this year, anyway?  For that, you need to check in with my friend Richard Holmes, who runs the University Ranking Watch blog.  He points out that a single paper (the one in Physics Letters B which announced the observation of the Higgs boson, and which immediately got cited in a bazillion places) was responsible for most of the movement in this year's rankings.  Because the paper had over 2,800 co-authors (including some at those suddenly big Turkish universities), because THE doesn't fractionally count multi-authored articles, and because THE's methodology gives tons of bonus points to universities located in countries where scientific publication counts are low, this one paper blew some schools' numbers into the stratosphere.  Other examples of this are Scuola Normale di Pisa, which came out of nowhere to be ranked 65th in the world, and Federico Santa María Technical University in Chile, which somehow became the 4th-ranked university in Latin America.

So basically, this year’s “rise of Asia” story was based almost entirely on the fact that a few of the 2,800 co-authors on the “Observation of a new boson…” paper happened to work in Turkey.
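Holmes's point about fractional counting is easy to illustrate.  Here is a minimal sketch, with invented numbers (this is not THE's actual citation formula), of the difference between whole and fractional counting for one hyper-authored paper:

```python
# Whole vs. fractional counting of a single hyper-authored, hyper-cited paper.
# Numbers are illustrative; this is not THE's actual methodology.

citations = 3000       # citations to the paper (illustrative)
total_authors = 2800   # co-authors on the paper
local_authors = 3      # co-authors affiliated with one small university

# Whole counting: the university gets full credit for every citation.
whole_credit = citations

# Fractional counting: credit is shared in proportion to authorship.
fractional_credit = citations * local_authors / total_authors

print(whole_credit)                    # 3000
print(round(fractional_credit, 1))     # 3.2

# For an institution that otherwise collects a few hundred citations a year,
# whole counting of one such paper swamps everything else it publishes --
# especially once THE's country-level adjustment scales the score up further.
```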

THE needs a new methodology.  Soon.

September 30

The Problem with Global Reputation Rankings

I was in Athens this past June, at an EU-sponsored conference on rankings, which included a very intriguing discussion about the use of reputation indicators that I thought I would share with you.

Not all rankings have reputational indicators; the Shanghai (ARWU) rankings, for instance, eschew them completely.  But the QS and Times Higher Education (THE) rankings both weight them pretty heavily (50% for QS, 35% for THE).  And this data isn't entirely transparent.  THE, which releases its World University Rankings tomorrow, hides the actual reputational survey results for teaching and research by combining each of them with other indicators (THE has 13 indicators, but it only shows 5 composite scores).  The reasons for doing this are largely commercial: if, each September, THE actually showed all the results individually, it wouldn't be able to reassemble the indicators in a different way and release an entirely separate "Reputation Rankings" six months later (with concomitant advertising and event sales) using exactly the same data.  Nor would its data-collection partner, Thomson Reuters, be able to sell the data back to institutions as part of its Global Institutional Profiles Project.

Now, I get it: rankers have to cover their (often substantial) costs somehow, and this re-sale of hidden data is one way to do it (disclosure: we at HESA did this with our Measuring Academic Research in Canada ranking).  But given the impact that rankings have on universities, there is an obligation to get this data right.  And the problem is that neither QS nor THE publishes enough information about its reputation survey to allow a real judgement about the quality of the data – and in particular about the reliability of the "reputation" voting.

We know that the THE allows survey recipients to nominate up to 30 institutions as being “the best in the world” for research and teaching, respectively (15 from one’s home continent, and 15 worldwide); the QS allows 40 (20 from one’s own country, 20 world-wide).  But we have no real idea about how many people are actually ticking the boxes on each university.

In any case, an analyst at an English university recently reverse-engineered the published data for UK universities to work out voting totals.  The resulting estimate is that, among institutions in the 150-200 range of the THE rankings, the average number of votes obtained for either research or teaching is in the range of 30-to-40, at best.  Which is astonishing, really.  Given that reputation counts for one-third of an institution's total score, it means there is enormous scope for year-to-year variation – get 40 one year and 30 the next, and significant swings in ordinal rankings could result.  It also makes a complete mockery of the "Top 100 Under 50" rankings, where 85% of institutions rank well below the top 200 in the main rankings, and are therefore likely garnering only a couple of votes apiece.  If true, this is a serious methodological problem.
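To get a feel for how much noise a 30-to-40-vote base implies, here is a quick simulation sketch.  The assumption about how votes arrive is mine, not the analyst's or THE's; the point is only that, at these counts, sampling variation alone produces double-digit percentage swings from year to year.

```python
# Quick simulation of year-to-year noise when an institution's expected
# reputation-vote count is ~35. The vote-arrival model is an assumption.

import random

random.seed(1)

def yearly_votes(expected=35, years=10, pool=3500):
    """Approximate Poisson(expected) counts as Binomial(pool, expected/pool)."""
    p = expected / pool
    return [sum(random.random() < p for _ in range(pool)) for _ in range(years)]

counts = yearly_votes()
print(counts)

swings = [abs(b - a) / a * 100 for a, b in zip(counts, counts[1:])]
print([f"{s:.0f}%" for s in swings])   # double-digit swings are routine
```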

For commercial reasons, it's unrealistic to expect THE to be completely open about its data.  But given the ridiculous amount of influence its rankings have, it would be irresponsible – especially for what is allegedly a journalistic enterprise – not to at least allow some third party to inspect the data and give users a better sense of its reliability.  To do otherwise reduces the THE's ranking exercise to sham social science.
