Higher Education Strategy Associates

Category Archives: rankings

September 25

University Rankings and the Eugenics Movement

Over the course of writing a book chapter, I’ve come up with a delightful little nugget: some of the earliest rankings of universities originated in the Eugenics movement.

The story starts with Francis Galton. A first cousin to Charles Darwin, Galton was the inventor of the weather map, standard deviation, the regression line (and the explanation of regression towards the mean), fingerprinting, and composite photography.  In other words, pretty much your textbook definition of a genius.

At some point (some believe it was after reading On the Origin of Species), Galton came to believe that genius is born, not made.  And so in 1869 he wrote a book called Hereditary Genius in which, using biographical dictionaries called “Men of Our Time” (published by Routledge, no less), he traced “eminent men” back to see whether they had eminent fathers or grandfathers.  Eventually, he concluded that they did.  This led him into a lifelong study of heredity.  In 1874, Galton published English Men of Science, in which he explored all sorts of heritable and non-heritable traits and experiences in order to better understand the basis of scientific genius; one of the questions he asked was whether each of his subjects had gone to university (by no means universal at the time) and, if so, where.

Galton soon had imitators, who began looking more seriously at education as part of the “genius” phenomenon.  In 1904, Havelock Ellis – like Galton, an eminent psychologist (his field was sexuality, and he was one of the first scientists to write on homosexuality and transgender psychology) – published A Study of British Genius.  This work examined all of the entries in all of the (then) sixty-six volumes of the Dictionary of National Biography, eliminated those who were there solely by accident of birth (i.e. the royals and most of the nobility/aristocracy), and then classified the rest by a number of characteristics.  One of those characteristics was university education, and unsurprisingly he found that most had gone to either Cambridge or Oxford (with a smattering from Edinburgh and Trinity).  Though it was not claimed as a ranking, the work did list institutions in rank order; or rather in two rank orders, as it had separate listings for British and foreign universities.

Not so coincidentally, it was also around this time that the first edition of American Men of Science appeared.  This series attempted to put the study of great men on a more scientific footing.  The author, James McKeen Cattell (a distinguished scientist who was President of the American Psychological Association in 1895, and edited both Science and Psychological Review), did a series of peer surveys to see who were the most respected scientists in the nation.  In the first edition, the section on psychologists contained a tabulation of the number of top people in the field, organized by the educational institution from which they had graduated; at the time, it also carried an explicit warning that this was not a measure of quality.  However, by 1906 Cattell was producing tables showing changes in the number of graduates from each university in his top 1,000, and by 1910 he was producing tables that explicitly ranked institutions according to their graduates (with the value of each graduate weighted according to their place in the rankings).  Cattell’s work is, in many people’s view, the first actual ranking of American universities.

What’s the connection with eugenics?  Well, Galton’s obsession with heredity directly led him to the idea that “races” could be improved upon by selective breeding (and, conversely, that they could become “degenerate” if one wasn’t careful).  Indeed, it was Galton himself who coined the term “eugenics”, and he was a major proponent of the idea.  For his part, Ellis would ultimately end up as President of the Galton Institute in London, which promoted eugenics (John Maynard Keynes would later sit on the Institute’s Board); in America, Cattell wound up as President of the American Eugenics Society.

To be clear, none of them remotely believed that one’s university made the slightest difference to eventual outcomes.  In their minds, it was all about heredity.  However, one could still infer something about universities from the fact that “Men of Genius” (and I’m sorry to keep saying “men” in this piece, but it’s pre-WWI, and they really were almost all men) chose to go there.  At the same time, these rankings were the precursors of the various reputational rankings that were in vogue in the US from the 1920s right through to the early 1980s.  And it’s worth noting that the idea of ranking institutions according to their alumni has made a comeback in recent years through the Academic Ranking of World Universities (also known as the Shanghai rankings), which scores institutions, in part, on the number of Nobel Prizes and Fields Medals won by their alumni.

Anyway, just a curio I thought you’d all enjoy.

September 17

A Global Higher Education Rankings Cheat Sheet

As you likely noticed from the press generated by the release of the QS rankings: it’s now rankings season!  Are you at a university that seems to care about global rankings?  Are you not sure what the heck they all mean, or why institutions rank differently on different metrics?  Here’s a handy cheat sheet explaining what each of them does, and why some institutions swear by some rankings but not others.

Academic Ranking of World Universities (ARWU): Also known as the Shanghai Rankings, this is the granddaddy of world rankings (disclaimer: I sit on the advisory board), having been first out of the gate back in 2003.  It’s mostly bibliometric in nature, and places a pretty high premium on publication in a select few journals.  It also, unusually, scores institutions on how many Nobel Prizes or Fields Medals their staff or alumni have won.  It’s really best thought of as a way of measuring large deposits of scientific talent.  There’s no adjustment for size or field (though it publishes separate ratings for six broad fields of study), which tends to favour institutions that are strong in fields like medicine and physics.  As a result, it’s among the most stable rankings there are: only eleven institutions have ever been in ARWU’s top ten, and the top spot has always been held by Harvard.

Times Higher Education (THE) Rankings: As a rough guide, think of THE as ARWU with a prestige survey and some statistics on international students and staff tacked on.  The survey is a mix of good and bad.  They seem to take reasonable care in constructing the sample and, for the most part, the questions are worded sensibly.  However, the conceit that “teaching ability” is being measured this way is weird (especially since institutions’ “teaching” scores are correlated at .99 with their research scores).  The bibliometrics are different from ARWU’s in three important ways, though.  The first is that they are more about impact (i.e. citations) than publications.  The second is that said citations are adjusted for field, which helps institutions that are strong in areas outside medicine and physics, like the social sciences.  The third is that they are also adjusted for region, which gives a boost to universities outside Europe and North America.  THE also does a set of field rankings.
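
To make the field-adjustment point concrete, here is a minimal sketch of the general idea – each paper’s citations divided by the world-average citation rate for its field – using invented numbers.  THE’s actual calculation is more elaborate and, as noted above, also adjusts for region, so treat this purely as an illustration.

```python
# Minimal sketch of field-normalized citation impact (illustrative only;
# not THE's actual method).  Each paper's citations are divided by the
# world-average citations for its field, so a well-cited social-science
# paper can count for as much as a well-cited physics paper.

# Hypothetical world-average citations per paper, by field
FIELD_BASELINE = {"medicine": 12.0, "physics": 10.0, "social_science": 4.0}

# Hypothetical papers from one institution
papers = [
    {"field": "medicine", "citations": 24},
    {"field": "physics", "citations": 10},
    {"field": "social_science", "citations": 8},
]

raw_impact = sum(p["citations"] for p in papers) / len(papers)
normalized_impact = sum(
    p["citations"] / FIELD_BASELINE[p["field"]] for p in papers
) / len(papers)

print(f"Raw citations per paper: {raw_impact:.2f}")         # 14.00
print(f"Field-normalized impact: {normalized_impact:.2f}")  # 1.67
```

In this toy example the social-science paper, which looks modest on raw counts, weighs exactly as much as the medicine paper once the lower citation norms of its field are factored in – which is roughly why field adjustment helps institutions that are strong outside medicine and physics.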

QS Rankings: QS used to do the rankings for THE until 2009, when the latter ended the partnership, but QS kept trucking on in the rankings game.  It’s superficially similar to THE in the sense that it’s mostly a mix of survey and bibliometrics.  The survey is worth more of the total score, and is somewhat less technically sound than THE’s, for which it gets regularly lambasted.  The bibliometrics are a mix of publication and citation measures.  Its two distinguishing features are: 1) data from a survey of employers soliciting their views on graduate employability; and, 2) it ranks ordinally down to position 500 (other rankings only group institutions in tranches after the first hundred or so).  This latter feature is a big deal if you happen to be obsessed with minute changes in ranking order and regularly feature in the 200-to-500 range.  In New Zealand, for instance, QS gets used exclusively in policy discussions for precisely this reason.

U-Multirank: Unlike all the others, U-Multirank doesn’t provide data in a league-table format.  Instead, it takes data provided by institutions and allows users to choose their own indicators to produce “personalized rankings”.  That’s the upside.  The downside is that not enough institutions actually provide data, so its usefulness is somewhat less than optimal.

Webometrics Rankings: As a rule of thumb, the bigger, more complicated, and more filled with rich data a university’s website is, the more important the university is likely to be.  Seriously.  And it actually kind of works.  In any case, Webometrics’ big utility is that it ranks something like 13,000 universities around the world, so for many universities in the developing world, it’s their only chance to see how they compare against others.

May 21

AHELO: Universities Behaving Badly

So there’s some excitement being generated this month with respect to the OECD’s Assessment of Higher Education Learning Outcomes (AHELO).  Roughly speaking, AHELO is the higher education equivalent of the Programme for International Student Assessment (PISA) or the Programme for the International Assessment of Adult Competencies (PIAAC).  It consists of a general test of critical thinking skills (based on the Collegiate Learning Assessment), plus a couple of subject-matter tests that assess competencies in specific disciplines.  AHELO completed its pilot phase a couple of years ago, and OECD is now looking to move it to a full-blown regular survey.

Not everyone is keen on this.  In fact, OECD appears to be moving ahead with this despite extremely tepid support among OECD education ministers, which is somewhat unusual.  Critics make a number of points against AHELO, which mostly boil down to: a) it’s too expensive, and it’s taking resources away from other, more important OECD efforts in higher education; b) low-stakes testing generally is BS; and, c) intrinsically, trying to measure student outcomes internationally is invalid because curricula vary so much from place to place.

The critics have half a point.  It’s true that AHELO is expensive and is crowding out other OECD activities.  And it’s not entirely clear why OECD is burning this much political capital on a project with so little ministerial support.  But while there is some credibility to the low-stakes testing issue, it hasn’t stopped either PISA or PIAAC from being huge successes, helping to inform policy around the world.  And as for different curricula: that’s the point.  Part of what governments want to know is whether or not what is being taught in universities is bringing students up to international standards of competency.

But what’s notable about the charge against AHELO is who is making it.  In the main, it’s associations of universities in rich countries, such as the American Council on Education, Universities Canada, and their counterparts in the UK and Europe.  And make no mistake, they are not doing so because they think there are better ways to compare outcomes; quite simply, these folks do not want comparisons to be made.

Now, this wouldn’t be a terrible position to take if it were done because universities dislike comparisons based on untested or iffy social science.  But of course, that’s not the case, is it?  Top universities are more than happy to play ball with rankings organizations like Times Higher Education, where the validity of the social science is substantially more questionable than AHELO’s.

Institutional opposition to AHELO, for the most part, plays out the same way as opposition to U-Multirank: it’s a defence of privilege.  Top universities know they will do well on comparisons of prestige and research intensity.  They don’t know how they will do on comparisons of teaching and learning.  And so they oppose it, and don’t even bother to suggest ways to improve comparisons.

Is AHELO perfect?  Of course not.  But it’s better than nothing – certainly, it would be an improvement over the mainly input-based rankings that universities participate in now – and could be improved over time.  The opposition of top universities (and their associations) to AHELO is shameful, hypocritical, and self-serving.  They think they can get away with this obstructionism because the politicking is all done behind the scenes – but they deserve to be held to account.

May 20

Who Wins and Who Loses in the “Top 100 Under 50” Rankings

The annual Times Higher Education “Top 100 Under 50” ranking came out a few weeks ago.  Australians were crowing about their success, and a few people in Canada noticed that Canada didn’t do so well – only four spots: Calgary 22nd, Simon Fraser 27th, UQAM 85th, and Concordia 96th.  So, today, we ask the question: why do young Canadian universities not fare well in these rankings?

Well, one way to look at this is to ask: “who does well at these rankings?”  And the answer, as usual, is that the ones that make it to the very, very top are some pretty specialized, ludicrously well-funded places, which have no obvious Canadian equivalent.  For example:

  • ETH Lausanne (the top school) has 5,000 undergraduates and 4,500 graduate students, making it Harvard-like in its student balance.  It manages this despite, in theory, having an open-access system in place for domestic students; in practice, weaker students self-select out of Lausanne because the failure rate in year 1 is so high (roughly 50%, and higher in Math and Physics).  It may be the only university in the world to operate not just a nuclear reactor but a fusion reactor as well.  The institution has base (i.e. operations) funding of slightly over $800 million Canadian, which works out to a ludicrous $80,000 or so per student (see the quick arithmetic after this list).
  • Pohang University of Science and Technology (POSTECH) (2nd place) has an even more ridiculous 1,300 undergraduates and 2,100 graduate students.  Its budget is a smaller $250 million US (still over $60K per student), but it has a $2 billion endowment from its founder, POSCO (a major steel manufacturer in Korea), as well as a heavy tie-up with POSCO for applied research.  (A good history of POSTECH can be found here.)  Again, no Canadian university had those kinds of advantages growing up.
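
The per-student figures in those bullets are just budget divided by headcount.  Here is the quick arithmetic, using the approximate numbers quoted above (currencies left as quoted, not converted):

```python
# Rough per-student operating funding, using the approximate figures quoted above.
# Currencies are left as quoted (CAD for Lausanne, USD for POSTECH), not converted.
institutions = {
    "Lausanne": {"budget": 800_000_000, "students": 5_000 + 4_500},
    "POSTECH":  {"budget": 250_000_000, "students": 1_300 + 2_100},
}

for name, d in institutions.items():
    print(f"{name}: ~${d['budget'] / d['students']:,.0f} per student")
# Lausanne: ~$84,211 per student
# POSTECH: ~$73,529 per student
```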

You get the picture.  It’s a similar deal at Korea’s KAIST and Singapore’s Nanyang (both in the top five).  UC Irvine and UC Santa Cruz are also in the top ten, and while the California system is currently experiencing some problems, these institutions continue to feed off the massive support the UCs used to receive.  And since the THE rankings heavily privilege money and research intensity, you can see why these kinds of institutions score well ahead of Canadian schools, where implicit rules prevent any institution from reaching these degrees of research intensity.

But look again at that Australian article I linked to above.  No fewer than 16 Australian universities made the top 100, and the reasons cited for their success – public funding, stable regulation, the English language, other cultural factors – all exist in Canada.  So why does Australia succeed where Canada doesn’t?

The explanation is actually pretty simple: on average, our universities are substantially older than Australia’s.  Even among the four Canadian schools listed, two arguably don’t meet the “under 50” criterion (Calgary was founded in 1945, though it did not become an independent institution until 1966; Concordia dates from 1974, but the two colleges that merged to form it date back to 1896 and 1926, respectively).  Outside of those four, the only Canadian institutions with over 10,000 residential students founded after 1965 are Lethbridge, Kwantlen, and Fraser Valley (though, depending on how you define “founding date”, you could presumably also make a case for Regina, MacEwan, and Mount Royal).  In Australia, only one-third of universities had degree-granting status before 1965.

The “under-50” designation effectively screens out most institutions in countries that were early adopters of mass higher education.  The US, for instance, has only seven institutions on the THE list, five of which are in the late-developing West and South, and none of which are in the traditional higher education heartland of the Northeast.  It’s an arbitrary cut-off, expressly drawn in a way that puts Asian universities in a better light.  It’s worth keeping that in mind when examining the results.

March 24

Banning the Term “Underfunding”

Somehow I missed this when the OECD’s Education at a Glance 2014 came out, but apparently Canada’s post-secondary system is now officially the best funded in the entire world.

I know, I know.  It’s a hard idea to accept when Presidents of every student union, faculty association, university, and college have been blaming “underfunding” for virtually every ill in post-secondary education since before Air Farce jokes started taking the bus to get to the punchline.  But the fact is, we’re tops.  Numero uno.  Take a look:

Figure 1: Percentage of GDP Spent on Higher Education Institutions, Select OECD Countries, 2011

For what I believe is the first time ever, Canada is outstripping both the US (2.7%) and Korea (2.6%).  At 2.8% of GDP, spending on higher education is nearly twice what it is in the European Union.

Ah, you say, that’s probably because so much of our funding comes from private sources.  After all, don’t we always hear that tuition is at, or approaching, 50% of total funding in universities?  Well, no.  That stat only applies to operating expenditures (not total expenditures), and is only valid in Nova Scotia and Ontario.  Here’s what happens if we look only at public spending in all those countries:

Figure 2: Percentage of GDP Spent on Higher Education Institutions from Public Sources, Select OECD Countries, 2011

While it’s true that Canada does have a high proportion of funds coming from private sources, public sector support to higher education still amounts to 1.6% of GDP, which is substantially above the OECD average.  In fact, our public expenditure on higher education is the same as in Norway and Sweden; among all OECD countries, only Finland and Denmark (not included in graph) are higher.

And this doesn’t even consider the fact that Statscan and CMEC don’t include expenditures like Canada Education Savings Grants and tax credits, which together are worth another 0.2% of GDP, because OECD doesn’t really have a reporting category for oddball expenditures like that.  The omission doesn’t change our total expenditure, but it does affect the public/private balance.  Instead of being 1.6% of GDP public, and 1.2% of GDP private, it’s probably more like 1.8% or 1.9% public, which again would put us at the absolute top of the world ranking.
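
The reclassification is simple arithmetic.  Here is a quick sketch, using the approximate GDP shares above, and treating the 0.2% of GDP in savings grants and tax credits as money that currently shows up in the private column (my reading of the argument, not an official OECD adjustment):

```python
# Back-of-envelope reclassification of Canadian higher-education spending, 2011.
# Shares of GDP are the approximate figures quoted above; the 0.2% in education
# savings grants and tax credits is treated as currently sitting in the private column.
public_share = 1.6     # % of GDP reported as public spending
private_share = 1.2    # % of GDP reported as private spending
reclassified = 0.2     # % of GDP shifted from private to public

print(f"Total:            {public_share + private_share:.1f}% of GDP")  # 2.8, unchanged
print(f"Adjusted public:  {public_share + reclassified:.1f}% of GDP")   # 1.8
print(f"Adjusted private: {private_share - reclassified:.1f}% of GDP")  # 1.0
```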

So it’s worth asking: when people say we are “underfunded”, what do they mean?  Underfunded compared to whom?  Underfunded for what?  If we have more money than anyone else, and we still feel there isn’t enough to go around, maybe we should be looking a lot more closely at *how* we spend the money rather than at *how much* we spend.

Meantime, I think there should be a public shaming campaign against use of the term “underfunding” in Canada.  It’s embarrassing, once you know the facts.

February 25

Rankings in the Middle East

If you follow rankings at all, you’ll have noticed that there is a fair bit of activity going on in the Middle East these days.  US News & World Report and Quacquarelli Symonds (QS) both published “Best Arab Universities” rankings last year; this week, the Times Higher Education (THE) produced a MENA (Middle East and North Africa) ranking at a glitzy conference in Doha.

The reason for this sudden flurry of Middle East-oriented rankings is pretty clear: Gulf universities have a lot of money they’d like to spend on advertising to bolster their global status, and this is one way to do it.  Both THE and QS tried to tap this market by making up “developing world” or “BRICs” rankings, but frankly most Arab universities didn’t do too well on those metrics, so there was a niche market for something more focused.

The problem is that rankings make considerably less sense in MENA than they do elsewhere. In order to come up with useful indicators, you need accurate and comparable data, and there simply isn’t very much of this in the region.  Let’s take some of the obvious candidates for indicators:

Research:  This is an easy metric, and one which doesn’t rely on local universities’ ability to provide data.  And, no surprise, both US News and the Times Higher Ed have based 100% of their rankings on this measure.  But that’s ludicrous for a couple of reasons.  The first is that most MENA universities have literally no interest in research.  Outside the Gulf (i.e. Oman, Kuwait, Qatar, Bahrain, UAE, and Saudi Arabia), there’s no available money for it.  Within the Gulf, most universities are staffed by expats teaching 4 or even 5 classes per term, with no time or mandate for research.  The only places where serious research is happening are one or two of the foreign universities that are part of Education City in Doha, and some of the larger Saudi universities.  Of course the problem with Saudi universities, as we know, is that at least some of the big ones are furiously gaming publication metrics precisely in order to climb the rankings, without actually changing university cultures very much (see, for example, this eyebrow-raising piece).

Expenditures:  This is a classic input variable used in many rankings.  However, an awful lot of Gulf universities are private and won’t want to talk about their expenditures for commercial reasons.  Additionally, some are personal creations of local rulers who spend lavishly on them (for example, Sharjah and Khalifa Universities in the UAE); they’d be mortified if the data showed them to be spending less than the Sheikh next door.  Even in public universities, the issue isn’t straightforward.  Transparency in government spending isn’t universal in the region, either; I suspect that getting financial data out of an Egyptian university would be a pretty unrewarding task.  Finally, for many Gulf universities, cost data will be massively wonky from one year to the next because of the way compensation works.  Expat teaching staff (in the majority at most Gulf unis) are paid partly in cash and partly through free housing, the cost of which swings enormously from one year to the next based on changes in the rental market.

Student Quality: In Canada, the US, and Japan, rankings often focus on how smart the students are, based on average entering grades, SAT scores, etc.  But measures like those simply don’t work in a multi-national ranking, so they’re out.

Student Surveys: In Europe and North America, student surveys are one way to gauge quality.  However, if you are under the impression that there is a lot of appetite among Arab elites for allowing public institutions to be rated by public opinion, then I have some lakeside property in the Sahara I’d like to sell you.

Graduate Outcomes:  This is a tough one.  Some MENA universities do have graduate surveys, but what do you measure?  Employment?  How do you account for the fact that female labour market participation varies so much from country to country, and that many female graduates are either discouraged or forbidden by their families from working? 

What’s left?  Not much.  You could try class size data, but my guess is most universities outside the Gulf wouldn’t have an easy way of working this out.  Percent of professors with PhDs might be a possibility, as would the size of the institution’s graduate programs.  But after that it gets pretty thin.

To sum up: it’s easy to understand commercial rankers chasing money in the Gulf.  But given the lack of usable metrics, it’s unlikely their efforts will amount to anything useful, even by the relatively low standards of the rankings industry.

October 30

Times Higher Rankings, Weak Methodologies, and the Vastly Overblown “Rise of Asia”

I’m about a month late with this one (apologies), but I did want to mention something about the most recent version of the Times Higher Education (THE) Rankings.  You probably saw it linked to headlines that read, “The Rise of Asia”, or some such thing.

As some of you may know, I am inherently suspicious about year-on-year changes in rankings.  Universities are slow-moving creatures.  Quality is built over decades, not months.  If you see huge shifts from one year to another, it usually means the methodology is flimsy.  So I looked at the data for evidence of this “rise of Asia”.

The evidence clearly isn’t there in the top 50.  Tokyo and Hong Kong are unchanged in their positions.  Tsinghua, Beijing, and the National University of Singapore are all within a place or two of where they were last year.  In fact, if you just looked at the top 50, you’d think Asia might be going backwards, since one of its big unis (Seoul National) fell out of the top 50, going from 44th to 52nd in a single year.

Well, what about the top 100?  Not much different.  In Korea, KAIST is up a bit, but Pohang is down.  Both the Hong Kong University of Science and Technology and Nanyang were up sharply, which is a bit of a boost; however, only one new “Asian” university entered the top 100, and that was Middle East Technical University in Turkey, which rose spectacularly from the 201-225 band last year to 85th this year.

OK, what about the next 100?  Here it gets interesting.  There are bad-news stories for Asian universities: National Taiwan and Osaka each fell 13 places, Tohoku fell 15, Tokyo Tech 16, Chinese University of Hong Kong 20, and Yonsei University fell out of the top 200 altogether.  But there is good news too: Bogazici University in Turkey jumped 60 places to 139th, and five new universities – two from China, two from Turkey, and one from Korea – entered the top 200 for the first time.

So here’s the problem with the THE narrative.  The best part of the evidence for all this “rise of Asia” stuff rests on events in Turkey (which, like Israel, is often considered European rather than Asian – at least if membership in UEFA and Eurovision is anything to go by).  The only reason THE runs with its “rise of Asia” tagline is that it has a lot of advertisers and a big conference business in East Asia, and it’s good business to flatter them, and damn the facts.

But there’s another issue here: how the hell did Turkey do so well this year, anyway?  Well, for that you need to check in with my friend Richard Holmes, who runs the University Ranking Watch blog.  He points out that a single paper (the one in Physics Letters B which announced the confirmation of the Higgs Boson, and which immediately got cited in a bazillion places) was responsible for most of the movement in this year’s rankings.  Because the paper had over 2,800 co-authors (including some from those suddenly big Turkish universities), because THE doesn’t fractionally count multi-authored articles, and because THE’s methodology gives tons of bonus points to universities located in countries where scientific publication counts are low, this absolutely blew some schools’ numbers into the stratosphere.  Other examples of this are the Scuola Normale Superiore di Pisa, which came out of nowhere to be ranked 65th in the world, and Federico Santa María Technical University in Chile, which somehow became the 4th-ranked university in Latin America.
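
The fractional-counting point is easiest to see with a toy example.  Under full counting, every co-author institution gets the paper’s whole citation count added to its profile; under fractional counting, the credit is divided among the contributors.  The numbers below are invented, and this is only a sketch of the general principle, not of THE’s actual normalization:

```python
# Toy comparison of full vs. fractional counting for one mega-authored paper.
# All figures are invented for illustration.
paper_citations = 3_000       # a single, massively cited paper
n_contributors = 300          # institutions sharing authorship credit

# A small university's output before the mega-paper is added
own_papers, own_citations = 40, 200

def impact(extra_citations, extra_papers):
    """Citations per paper once the extra credit is added."""
    return (own_citations + extra_citations) / (own_papers + extra_papers)

baseline   = impact(0, 0)                              # 5.0
full       = impact(paper_citations, 1)                # ~78.0
fractional = impact(paper_citations / n_contributors,  # ~5.2
                    1 / n_contributors)

print(f"Baseline citations per paper: {baseline:.1f}")
print(f"With full counting:           {full:.1f}")
print(f"With fractional counting:     {fractional:.1f}")
```

Under full counting, one mega-authored paper can multiply a small institution’s citation impact many times over; divide the credit among the contributors and the effect all but disappears.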

So basically, this year’s “rise of Asia” story was based almost entirely on the fact that a few of the 2,800 co-authors on the “Observation of a new boson…” paper happened to work in Turkey.

THE needs a new methodology.  Soon.

September 30

The Problem with Global Reputation Rankings

I was in Athens this past June, at an EU-sponsored conference on rankings, which included a very intriguing discussion about the use of reputation indicators that I thought I would share with you.

Not all rankings have reputational indicators; the Shanghai (ARWU) rankings, for instance, eschew them completely.  But the QS and Times Higher Education (THE) rankings both weight them pretty heavily (50% for QS, 35% for THE), and this data isn’t entirely transparent.  THE, which releases its World University Rankings tomorrow, hides the actual reputational survey results for teaching and research by combining each of them with some other indicators (THE has 13 indicators, but it only shows 5 composite scores).  The reasons for doing this are largely commercial: if, each September, THE actually showed all the results individually, it wouldn’t be able to reassemble the indicators in a different way for an entirely separate “Reputation Rankings” release six months later (with concomitant advertising and event sales) using exactly the same data.  Also, its data-collection partner, Thomson Reuters, wouldn’t be able to sell the data back to institutions as part of its Global Institutional Profiles Project.

Now, I get it: rankers have to cover their (often substantial) costs somehow, and this re-sale of hidden data is one way to do it (disclosure: we at HESA did this with our Measuring Academic Research in Canada ranking).  But given the impact that rankings have on universities, there is an obligation to get this data right.  And the problem is that neither QS nor THE publishes enough information about its reputation survey to allow a real judgement about the quality of the data – and in particular about the reliability of the “reputation” voting.

We know that THE allows survey recipients to nominate up to 30 institutions as being “the best in the world” for research and teaching, respectively (15 from one’s home continent, and 15 worldwide); QS allows 40 (20 from one’s own country, 20 worldwide).  But we have no real idea how many people are actually ticking the box for each university.

In any case, an analyst at an English university recently reverse-engineered the published data for UK universities to work out voting totals.  The resulting estimate is that, among institutions in the 150-200 range of the THE rankings, the average number of votes obtained for either research or teaching is in the range of 30 to 40, at best.  Which is astonishing, really.  Given that reputation counts for one third of an institution’s total score, it means there is enormous scope for year-to-year variation – get 40 votes one year and 30 the next, and significant swings in ordinal rankings could result.  It also makes a complete mockery of the “Top 100 Under 50” rankings, where 85% of institutions rank well below the top 200 in the main rankings, and are therefore likely garnering only a couple of votes apiece.  If true, this is a serious methodological problem.
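
To get a feel for how counts that small translate into ordinal swings, here is a crude simulation.  The assumptions are entirely mine – each institution’s annual vote count is drawn from a Poisson distribution around a fixed “true” level, and nothing else in the score changes – so this illustrates the sampling noise rather than modelling THE’s actual survey:

```python
# Crude illustration of rank volatility when reputation rests on 30-40 votes.
# Each institution's annual vote count is drawn from a Poisson distribution
# around a fixed "true" level -- an assumption for illustration only.
import math
import random

random.seed(1)

def poisson(lam):
    """Knuth's algorithm; avoids needing numpy for a toy example."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        k += 1
        p *= random.random()
        if p <= threshold:
            return k - 1

# 50 institutions whose "true" annual vote counts sit in the 30-40 range
true_levels = [30 + (i % 11) for i in range(50)]

def annual_ranks():
    votes = [poisson(level) for level in true_levels]
    order = sorted(range(len(votes)), key=lambda i: -votes[i])
    ranks = [0] * len(votes)
    for position, inst in enumerate(order, start=1):
        ranks[inst] = position
    return ranks

year1, year2 = annual_ranks(), annual_ranks()
avg_swing = sum(abs(a - b) for a, b in zip(year1, year2)) / len(year1)
print(f"Average rank change between two simulated 'years': {avg_swing:.1f} places")
```

Nothing real changes between the two simulated “years”; every place gained or lost is pure sampling noise.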

For commercial reasons, it’s unrealistic to expect THE to completely open the kimono on its data.  But given the ridiculous amount of influence its rankings have, it would be irresponsible of it – especially since it is allegedly a journalistic enterprise – not to at least allow some third party to inspect the data and give users a better sense of its reliability.  To do otherwise reduces the THE’s ranking exercise to sham social science.

May 29

May ’14 Rankings Round-Up

I’ve been remiss over the last month or so in not keeping you up to date with some of the big international rankings releases, namely the Leiden Rankings, the Times Top 100 Under 50 rankings, and the U21 Ranking of National Higher Education Systems.

Let’s start with Leiden (previous articles on Leiden can be found here, and here), a multidimensional bibliometric ranking that looks at various types of publication and impact metrics.  Because of the nature of the data it uses, and the way it displays results, the rankings are both stable and hard to summarize.  I encourage everyone interested in bibliometrics to take a look and play around with the data to see how the rankings work.  In terms of Canadian institutions, our Big Three (Toronto, UBC, McGill) do reasonably well, as usual (though the sheer volume of publications from Toronto is a bit of a stunner); perhaps more surprising is how Victoria outperforms most of the U-15 on some of these measures.

Next, there’s the U21 National Systems Rankings (which, again, I have previously profiled, back here and here).  This is an attempt to rank not individual institutions, but rather whole national higher education systems, based on Resources, Environment, Connectivity, and Outputs.  The US comes tops, Sweden 2nd, and Canada 3rd overall – we climb a place from last year.  We do this mostly on the basis of being second in the world in terms of resources (that’s right, folks: complain though we all do about funding, and about how nasty governments here are for merely maintaining budgets in real dollars, only Denmark has a better-resourced system than our own), and third in terms of “outputs” (mostly research-based).

We do less well, though, in other areas, notably “Environment”, where we come 33rd (behind Bulgaria, Thailand, and Serbia, among others).  That’s mostly because of the way the ranking effectively penalizes us for: a) being a federation without certain types of top-level national organizations (Germany suffers on this score as well); b) our system being too public (yes, really); and c) Statscan data on higher education being either unavailable or totally impenetrable to outsiders.  If you were to ignore some of this weirder stuff, we’d have been ranked second.

The innovation in this year’s U21 rankings is the normalization of national scores by per capita GDP.  Canada falls to seventh on this measure (though the Americans fall further, from first to fifteenth).  The Scandinavians end up looking even better than they usually do, but so – interestingly enough – does Serbia, which ranks fourth overall in this version of the ranking.
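
I don’t know exactly what formula U21 uses for this adjustment, but the general idea of normalizing scores by per capita GDP can be sketched as follows (the scores, GDP figures, and country names below are all invented):

```python
# Sketch of normalizing a country's ranking score by per capita GDP,
# so richer countries need proportionally better results to hold their position.
# U21's actual formula may differ; all numbers here are invented.
countries = {
    # name: (raw score out of 100, GDP per capita in USD)
    "Richland": (90, 55_000),
    "Midland":  (75, 30_000),
    "Thriftia": (60, 15_000),
}

reference_gdp = 30_000  # benchmark income level for the adjustment

for name, (score, gdp_pc) in countries.items():
    adjusted = score * (reference_gdp / gdp_pc)
    print(f"{name}: raw {score}, GDP-adjusted {adjusted:.1f}")
# Richland: raw 90, GDP-adjusted 49.1
# Midland: raw 75, GDP-adjusted 75.0
# Thriftia: raw 60, GDP-adjusted 120.0
```

On raw scores the richest country wins easily; adjust for income and the ordering flips – which is roughly how a country like Serbia can climb to fourth while the US slides to fifteenth in the GDP-adjusted version.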

Finally, there’s the Times Higher Top 100 Institutions Under 50, a fun ranking despite some obvious methodological limitations (which I pointed out back here, and won’t rehash again).  This ranking always changes significantly from year to year because the institutions at the top tend to be close to the 50-year cut-off, and so get rotated out as new ones take their place.  Asian universities took four of the top five spots globally (Postech and KAIST in Korea, HKUST in Hong Kong, and Nanyang in Singapore).  Calgary, in 19th place, was the best Canadian performer, but Simon Fraser made 24th, and three other Canadian universities made the list for the first time: Guelph (73), UQAM (84), and Concordia (96).

Even if you don’t take rankings overly seriously, all three rankings provide ample amounts of thought-provoking data.  Poke around and you’re sure to find at least a few surprises.

May 15

Does More Information Really Solve Anything?

One of the great quests in higher education over the past two decades has been to make the sector more “transparent”.  Higher education is a classic example of a “low-information” economy: as in medicine, consumers have very limited information about the quality of providers, and so “poor performers” cannot easily be identified.  If only there were some way to provide individuals with better information, higher education would come closer to the ideal of “perfect information” (a key part of “perfect competition”), and poor performers would come under pressure from declining enrolments.

For many people, the arrival of university league-table rankings held a lot of promise.  At last, data tools with simple heuristics that could help students make distinctions with respect to quality!  While some people still hold this view, others have become more circumspect, and have come to realize that most rankings simply replicate the existing prestige hierarchy because they rely on metrics like income and research intensity, which tend to be correlated with institutional age and size.  Still, many hold out hope for other types of tools to provide this kind of information.  In Europe, the great hope is U-Multirank; in the UK it’s the “Key Information Set”; and in Korea it’s the Major Indicators System.  In the US, of course, you see the same phenomenon at work with the White House’s proposed college ratings system.

What unites all of these efforts is a belief that people will respond to information if the right type of information is put in front of them in a manner they can easily understand/manipulate.  The arguments have tended to centre on what kind of information is useful/available, and on the right way to display/format the data.  But a study out last month from the Higher Education Funding Council for England asked a much more profound question: is it possible that none of this stuff makes any difference at all?

Now, it’s not an empirical study of the use of information tools, so we shouldn’t get *too* excited about it.  Rather, it’s a literature review, but an uncommonly good one, drawing significantly from sources like Daniel Kahneman and Herbert Simon.  The two key findings (and I’m quoting from the press release here, because it’s way more succinct about this than I could be) are:

1) that the decision-making process is complex, personal and nuanced, involving different types of information, messengers and influences over a long time. This challenges the common assumption that people primarily make objective choices following a systematic analysis of all the information available to them at one time, and

2) that greater amounts of information do not necessarily mean that people will be better informed or be able to make better decisions. 

Now, because HEFCE and the UK government are among those who believe deeply in the “better data leads to better universities via competition” model, the study doesn’t actually say “guys, your approach implies some pretty whopping and likely incorrect assumptions” – but the report implies it pretty strongly.

It’s very much worth a read, if for no other reason than to remind oneself that even the best-designed, most well-meaning “interventions” won’t necessarily have the intended effects.
