The Country 100 Ranking

You may have seen a spate of headlines recently, such as this one in University Affairs, about how Canada is considered #5 globally in terms of higher education according to a new “Country 100” ranking done by some outfit called Measures HE. Cue minor celebrations: woo woo! Someone thinks we are top ten! Etc.

The troublemakers who make up the readership of this blog have questions, I am sure. Naturally, I am here to answer them.

First: who is “Measures HE”? It is a data consultancy formed by two guys who used to work for the Times Higher rankings. Looking at their website, they seem to do niche rankings as a way to attract attention to their consulting work, which is not a million miles from what I do when I publish reports like The State of Postsecondary Education in Canada, or The World of Higher Education – Year In Review (see what I did there?). In addition to the “Country 100” ranking there are also rankings on “Talent”, which looks at individual scientists’ “gravitas” (their somewhat idiosyncratic phrase), and “Journals”, which looks at global journal influence (#1 is Heritage Science, fwiw).

Is this the first time someone has tried to rank national systems rather than institutions? Nope. Back in the day (2012 to 2020), there was something called the U21 Rankings (which I wrote about here and here), named after the Universitas 21 Alliance that sponsored them. The rankings’ author, University of Melbourne economist Ross Williams, used 20-odd indicators to look at four pillars: Resources (i.e. money), Environment (a mishmash of indicators about gender equality, labour market responsiveness, and diversity of institutions), Connectivity (mostly indicators of research collaboration), and Outputs (a combination of research outputs and graduate attainment rates). It was an ambitious attempt to look at a multifaceted question, and as these things go it was a pretty solid effort. Canada was always near the top but never quite at the peak, mainly because we got knocked down for having poor data availability (thanks, Statscan!) and being too much of a monoculture (no private institutions…or at least none that Statscan bothers to acknowledge).

Is the Measures Ranking just the U21 Rankings redux? Again, no. Measures HE is a research consultancy and so, not surprisingly, this ranking is much more about research inputs and outputs than it is about system performance per se. In fact, although there are seven different “pillars”, effectively six of them, worth 90% of the total weighting, are about research.

So how does it work, exactly? You can read the full methodology here but I can save you the trouble. The seven pillars of this ranking are:

  • Research (4 indicators, 35% weighting), which is about measuring a country’s ability to be at the forefront of science. It eschews straightforward publication or citation counts, choosing instead some quite funky indicators which I suspect are largely correlated with those things. The most important of these indicators is something called “Research Gravitas” (yes, really – I’ll come back to this in a bit).
  • Sustainability (2 indicators, 7% weighting), where sustainability means the UN SDGs. Again, this is not for the most part measured using publications or citations but rather using a method similar to that for “Research Gravitas” (see above).
  • Openness (3 indicators, 10% weighting). This is measured using a more standard publications/citations methodology, only applied to the breadth of citations (are your researchers being cited around the world or just in a few places?), co-authorships with industry, and the % of national research which is available in Open Access form.
  • International Integration (3 indicators, 8% weighting). This is a mishmash of indicators: percentage of articles with international co-authorships, percentage of researchers from other countries (absolutely no idea how they got data for this, seems like a morass to me), and percentage of students who are international students. In principle at least, this looks pretty similar to measures of internationalization used by the QS and Times Higher rankings.
  • Global Standing (3 indicators, 20% weighting). This is straight up how well a country’s “top” institutions perform in the Times Higher and QS rankings (but strangely, not the Shanghai Rankings), plus another application of the “research gravitas” data.
  • Academic Integrity (3 indicators, 10% weighting). This is a genuinely interesting set of three indicators, which basically act to punish countries with high rates of research retraction or high authorial/institutional self-citation rates.
  • Demographics and Investment (7 indicators, 10% weighting), which is partly about money spent on higher education and partly about student attainment rates and gender parity rates, but also about the number of teachers and researchers, and gender parity among teachers. A lot of this data is based on submissions to UNESCO, which seems like an extremely dicey strategy to me given how slipshod most reporting to UNESCO is. But in any event, this is the one set of indicators which is not research-focused. (A rough sketch of how these pillar weights might roll up into a single score follows below the list.)

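For the curious: the methodology states the weightings in percentage terms but, as far as I can tell, does not spell out exactly how the pillar scores get combined. Here is a minimal sketch, assuming a simple weighted sum of pillar scores that have each been normalized to a 0–100 scale (my assumption, not theirs); the example numbers are invented.

```python
# Minimal sketch of a weighted composite score, assuming (my assumption)
# each pillar has already been normalized to a 0-100 scale.
PILLAR_WEIGHTS = {
    "research": 0.35,
    "sustainability": 0.07,
    "openness": 0.10,
    "international_integration": 0.08,
    "global_standing": 0.20,
    "academic_integrity": 0.10,
    "demographics_and_investment": 0.10,
}

def composite_score(pillar_scores):
    """Weighted sum of normalized (0-100) pillar scores."""
    assert abs(sum(PILLAR_WEIGHTS.values()) - 1.0) < 1e-9  # weights add to 100%
    return sum(w * pillar_scores[p] for p, w in PILLAR_WEIGHTS.items())

# Invented pillar scores for a hypothetical country, purely for illustration:
print(composite_score({
    "research": 82, "sustainability": 65, "openness": 74,
    "international_integration": 70, "global_standing": 78,
    "academic_integrity": 88, "demographics_and_investment": 60,
}))
```
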
What was that you were saying about “Research Gravitas”? One of the interesting features of this ranking – quite definitely meant to show off the company’s own analytic chops – is its use of some oddball metrics. So, for instance, its measure of research gravitas (18% of overall weight) is described thus: “Measures a nation’s capacity to lead academic discourse. This is calculated using a PageRank algorithm applied to subject-level citation networks, identifying structural influence within the academic community rather than mere citation volume.”  What this actually means – and how different it is from straight-up citation analysis – is a bit mysterious, but man, it sounds cool.
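For the sake of illustration, here is roughly what a PageRank-style influence measure looks like in practice. The toy graph below is one I made up (the real thing operates on subject-level citation networks whose construction Measures HE does not fully describe), built with the networkx library; the point is simply that PageRank rewards being cited by papers that are themselves influential, rather than raw citation counts.

```python
# Toy illustration of a PageRank-style "gravitas" score on a citation network.
# An edge A -> B means "A cites B"; all nodes and edges here are invented.
import networkx as nx

G = nx.DiGraph()
G.add_edges_from([
    ("paper_CA_1", "paper_CA_2"),
    ("paper_US_1", "paper_CA_2"),
    ("paper_DE_1", "paper_CA_2"),
    ("paper_CA_2", "paper_US_1"),
    ("paper_CN_1", "paper_US_1"),
])

# PageRank captures structural influence: being cited by influential papers
# counts for more than being cited by peripheral ones.
scores = nx.pagerank(G, alpha=0.85)

# One plausible (hypothetical) national roll-up: average a country's paper scores.
canada = [s for node, s in scores.items() if node.startswith("paper_CA")]
print(sum(canada) / len(canada))
```
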

In a similar vein, its measure of research quality (6% of overall weight) “assesses the baseline standard of a nation’s research while mitigating the skew of extreme outliers [and] is calculated as the outlier-trimmed arithmetic mean (Olympic mean) of the Field-Weighted Citation Impact (FWCI) for the country’s published works”, which I think means standard bibliometrics with a little bit of figure-skating scoring adjustment thrown in. There is also this, with respect to measuring citation diversity: “This is calculated using Shannon entropy to ensure the citations are genuinely widespread rather than concentrated within a few insular networks.” Sounds cool. Not sure it means anything, and it’s possibly just a showy method of brand differentiation, but it sounds cool.
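To make those two ideas concrete, here is a quick sketch of how I read them: an “Olympic mean” that drops the extreme values before averaging (so one blockbuster paper can’t drag the FWCI average up), and Shannon entropy over the distribution of citing sources (high entropy means citations spread widely, low entropy means they are concentrated in a few insular networks). The numbers are made up, and this is my reading of the descriptions, not their actual code.

```python
# Sketches of an "Olympic mean" and a Shannon-entropy diversity measure,
# as I read the descriptions above. All numbers are invented.
import numpy as np

def olympic_mean(values):
    """Arithmetic mean after dropping the single highest and lowest values."""
    v = np.sort(np.asarray(values, dtype=float))
    return v[1:-1].mean() if v.size > 2 else v.mean()

fwci = [0.8, 1.1, 1.2, 0.9, 14.0]          # one blockbuster paper skews things
print(np.mean(fwci), olympic_mean(fwci))   # raw mean vs. trimmed mean

def citation_diversity(citing_counts):
    """Shannon entropy of the distribution of citations across sources."""
    p = np.asarray(citing_counts, dtype=float)
    p = p / p.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

print(citation_diversity([40, 35, 25]))    # citations spread widely
print(citation_diversity([98, 1, 1]))      # concentrated / insular
```
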

So, is it a good ranking or not? That’s an eye-of-the-beholder thing, really. Personally, I don’t think that a ranking which is 90% about research makes a whole lot of sense as a method of comparing national systems of education, because literally no one thinks research is the sole purpose of national systems. The methodological innovations listed above do make it a bit interesting. I like the idea that people are trying to get a bit beyond publications and citations; I’m just not sure that this kind of wonkery actually makes sense. However, one consequence of the methodological choices made is that in a ranking of national research systems, China somehow places 19th, directly behind Norway, Denmark, and Finland. This is of course simply bananapants, and it fails the only true measure of a good ranking, which is the “fall-down-laughing test.”

In short, Measures HE’s Country 100 ranking probably fails both the fitness of purpose and fitness for purpose tests, but it would hardly be the first ranking to do that. It might also represent something of a methodological advance, though I’d want someone with better math skills than me to weigh in on that.

Caveat Emptor, etc.
