HESA

Higher Education Strategy Associates

Category Archives: bibliometrics

April 25

The Leiden Rankings 2013

Though it was passed over in silence here in Canada, the new Leiden university research rankings made a bit of a splash elsewhere last week.  I gave a brief overview of the Leiden rankings last year.  Based on five years’ worth of Web of Science publication and citation data (2008-2012), they are by some distance the best way to compare institutions’ current research output and performance.  The Leiden rankings have always allowed comparisons along a number of dimensions of impact and collaboration; what’s new – and fabulous – this year is that the results can be disaggregated into five broad areas of study (biomedical sciences, life & earth sciences, math & computer science, natural sciences & engineering, and social sciences & humanities).

So how did Canadian universities do?

The big news is that the University of Toronto is #2 in the world (Harvard = #1) in terms of publications, thanks mainly to its gargantuan output in biomedical sciences.  But when one starts looking at impact, the story is not quite as good.  American universities come way out in front on impact in all five areas of study – natural, since they control the journals and read and cite each other’s work more often than they do that of foreigners.  The UK is second in all categories except math & computer science; third place in most fields belongs to the Dutch (seriously – their numbers are stunning), followed by the Germans and Chinese, and then (at a distance) by Canada and Australia.   Overall, if you look at each country’s half-dozen or so best universities, sixth or seventh is probably where we rank as a country, both in individual sub-fields and overall.

Also of interest is the data on collaboration, and specifically the percentage of publications which have an international co-author.  That Canada ranks low on this measure shouldn’t be a surprise: Europeans tend to dominate this measure because there are so many countries cheek by jowl.  But the more interesting finding is just how messy international collaboration is as a measure of anything.  Sure, there are some good schools with high levels of international collaboration (e.g. Caltech).  But any indicator where the top schools are St. Petersburg State and King Saud University probably isn’t a clear-cut measure of quality.

Among Canadian schools, there aren’t many big surprises.  Toronto, UBC, and McGill are the big three; Alberta does well in terms of volume of publications, but badly in terms of impact; and Victoria and Simon Fraser lead the way on international collaborations.

If you have even the slightest interest in bibliometrics, do go and play around with the customizable data on the Leiden site.  It’s fun, and you’ll probably learn something.

October 26

Research Rankings Reloaded

You’ll recall that a couple of months ago we released a set of research rankings; you may also remember that complaints were raised about a couple of issues in our methodology. Specifically, critics argued that by including all listed faculty we had drawn the net too wide, and that we should have excluded part-timers.

Well, we’ve now re-done the analysis, and are releasing the results today as an annex to our original publication for all to see. Two key things to highlight about the changes: (i) the effect of excluding part-time professors is more significant in SSHRC disciplines than in NSERC ones, and (ii) the use and function of part-time professors appears to differ systematically along linguistic lines. Part-time professors at francophone universities are both more numerous than those at anglophone universities and closer in profile to adjuncts at anglophone institutions (whereas at anglophone institutions, part-timers look reasonably similar to the full-time population).

At the top of the table, not a great deal changes – it’s still the same top six in both SSHRC and NSERC totals (though ordinal position does change slightly – McGill edges ahead of UBC into top spot in the SSHRC rankings because of much better performance on research income). Neither is there much change in the bottom quartile or so. In the middle ranks, though, where institutions are more tightly bunched together, small changes in scores can cause some pretty big changes in rankings: the University of Manitoba goes from 27th to 17th in the NSERC rankings; UQAM (which has by far the country’s largest contingent of part-time faculty) jumps from 43rd to 17th in the SSHRC ones. In fact, francophone institutions generally did a lot better with the revisions. What we had initially assumed was a “language effect” – poor results driven by the fact that publishing in French limits readership and hence citations – may in fact have been driven by employment patterns.

But for most institutions, contrary to the expectations of some of our critics, not much changed. Even where it did, gains in the one field were sometimes offset by losses in the other (Ottawa, for instance, rose five places in Arts but fell two in Science). Which makes sense: the only way excluding part-timers would change outcomes would be if your institution used part-timers in a significantly different way than others do.

Now, you may still dislike our rankings because you don’t like field-normalization, or the specific metrics used, or whatever. That’s cool: debating metrics is important and we’d love to engage constructively on that. But let’s bury the canard that somehow an improper frame significantly skewed our results. It didn’t. The results are the results. Deal with ‘em.

September 06

A Response to Critics

So, we’ve been hearing a number of criticisms – both directly and via the grapevine – of the research rankings we released last week. (Warning: if you’re not entranced by bibliometric methodology, you can safely skip today’s post).

The main point at issue is that at some schools, our staff counts appear to be on the high side. Based on this, some schools have inferred that we are judging them too harshly – that if we had fewer observations, the denominator would be smaller and their score would rise. But this is not quite correct.

There are two possible reasons why our staff counts are high. The first is that we do double-count people who are cross-posted across departments. But that’s a feature, not a bug. We aren’t taking one H-index score and dividing it across two observations – that would be silly. Instead, we calculate someone’s normalized H-index twice and then average the two.

Say Professor Smith, with an H-index of 4, is cross-posted between discipline A (where the average H-index is 5) and discipline B (average H-index = 3). This person’s normalized score would be .8 (4/5) in discipline A and 1.33 (4/3) in discipline B. When aggregating to the institutional level, both scores are included, but due to averaging this person “nets out” at (.8+1.33)/2 = 1.065. If they were only counted once, they would need to be assigned a score of either .8 or 1.33, which doesn’t seem very sensible. Thus, to the extent that high staff numbers are due to such double-counts, we’re confident our methodology is fair and doesn’t penalize anyone.
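For the bibliometrically inclined, here’s a minimal sketch (in Python, with made-up numbers – this is not our production code) of how that cross-posting arithmetic works:

```python
# Illustrative sketch only: how a cross-posted professor's normalized H-index
# is counted twice and then averaged, rather than split between departments.

def normalized_scores(h_index, discipline_averages):
    """Professor's H-index divided by each of their disciplines' average H-index."""
    return {d: h_index / avg for d, avg in discipline_averages.items()}

# Professor Smith: H-index of 4, cross-posted between disciplines A and B.
smith = normalized_scores(4, {"A": 5.0, "B": 3.0})
print(smith)  # {'A': 0.8, 'B': 1.333...}

# At the institutional level both observations are kept, but Smith's net
# contribution is the average of the two normalized scores.
net = sum(smith.values()) / len(smith)
print(round(net, 3))  # 1.067 (1.065 in the text, where 4/3 is rounded to 1.33)
```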

The second possibility is that errors were made in harvesting faculty names from 3500-odd departmental websites. Some mistakes are probably ours, but a major factor seems to be a widespread practice of schools not distinguishing between permanent and part-time faculty on their websites. To the extent that the misidentified staff are graduate students or post-docs, then miscounts will lower institutional scores. However, to the extent that the misidentified staff are adjuncts – especially ones who are recently retired faculty or distinguished practitioners from outside the academy – then our miscounts may actually be inflating institutional scores. Smaller denominators don’t necessarily mean higher scores.

With the help of one affected institution, we’re trying to work out what the issue is and whether the problem in fact affects scores significantly. Since we believe in the importance of transparency, accuracy and accountability (see the Berlin declaration on rankings, which I helped draft), we’ll extend the offer to all institutions who feel our methodology has portrayed them inaccurately. If we find a problem, we’ll correct it and publish results here.

You can’t be fairer than that.

September 04

Too Much Peer Review?

One way in which Canada stands out internationally in higher education is our ultra-reliance on individual peer review as a means of allocating research funding. While peer review is in many ways the “gold standard” of research assessment mechanisms, it has the drawback of being incredibly time-consuming, both for the applicant and for the assessors.

What’s the alternative, though? Well, as Paula Stephan points out in her quite excellent book How Economics Shapes Science, there are a number of approaches in use in other countries.

The most common is the block grant with assessment, which plays a role in funding many European higher education systems, as well as those of Australia and New Zealand. Under this system, money is awarded based on assessments not of individual but of departmental performance, usually through some mix of peer review and a heavy dose of bibliometric analysis.

This system doesn’t completely supplant individual peer review in those countries – they have granting councils like ours, too, though they tend to be smaller. Rather, these block grants cover some portion of materials costs, as well as supporting the portion of professors’ salaries which pays for research time. In Canada, we don’t even think of these monies as “research funding” because of the way they are built into the grants universities receive from provinces. By international standards, the Canadian equivalent of the European/Antipodean “block grant and assessment” system should really be called “block grant and no assessment.”

Another approach which has gained favour recently is the use of prizes, where institutions compete to achieve a major scientific task. Though the most famous of these (e.g., the Google Lunar X Prize, the Archon X Prize) usually involve philanthropic money, various agencies of the U.S. government have established over 50 such prizes to spur scientific efforts. Prizes are obviously only effective in limited circumstances, but they have their uses.

Perhaps none of these methods is as good as peer review, but they do all require professors to spend a lot less time writing grant applications. Given how big a concern that is among Canadian scientists these days, maybe we ought to consider adopting a few of these methods to simplify life. For instance, if a researcher is demonstrably excellent – say, one of the 12-13% or so whose H-index score is at least twice the average for their field (data suitably age-adjusted, etc.) – why not just give them $100,000 a year and eliminate the hassle of applications? In NSERC disciplines the likelihood is they’d get that much anyway.
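A back-of-the-envelope sketch of how such a rule could be operationalized, with entirely invented names and field averages (only the twice-the-average threshold comes from the argument above):

```python
# Hypothetical sketch of the "twice the field average" rule: flag researchers
# whose age-adjusted H-index is at least double their field's average.
# All names and numbers below are invented for illustration.

field_avg = {"physics": 16.0, "history": 4.0}  # toy national averages

researchers = [
    {"name": "A", "field": "physics", "adj_h": 40},
    {"name": "B", "field": "physics", "adj_h": 12},
    {"name": "C", "field": "history", "adj_h": 9},
    {"name": "D", "field": "history", "adj_h": 3},
]

auto_funded = [r["name"] for r in researchers
               if r["adj_h"] >= 2 * field_avg[r["field"]]]
print(auto_funded)  # ['A', 'C'] -- these two would skip the application process
```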

Bibliometrics aren’t just for nerds. They can save a lot of time and money, too, if we let them.

August 31

Who’s Not in the U-15 (But Could Be)

One of the interesting things about our new research rankings – which unlike previous attempts at such things are fully field-normalized – is that it shines a very different light on who the “leaders” are in terms of research.

Back in the day, the ten “leading” research institutions in the country (Laval, McGill, Montreal, Queen’s, Toronto, McMaster, Waterloo, Western, Alberta and UBC) created the “G-10.”  It was mostly a talking-shop: a forum where big universities could exchange data quietly amongst themselves.  Around the turn of the century, three more institutions (Ottawa, Calgary and Dalhousie) were added, and more recently Manitoba and Saskatchewan were included as well.

Waterloo apart, the U-15 is basically a list of the country’s universities with established medical schools.  But the idea that simply having a medical school makes you research-intensive is questionable.  If you were looking for “research leaders,” you’d probably start by looking at bibliometric measures, such as the H-index.  There are 16 schools with an average H-index score above one (i.e., where the average professor at that school has an H-index above the national average) in NSERC disciplines, and 22 with an average H-index above one in SSHRC disciplines.

So how does the U-15 membership fare on these measures?  UBC, Toronto, McGill and Montreal are in the top five in both SSHRC and NSERC disciplines, so they’re indisputably “tops.”  Queen’s, McMaster, Alberta, Waterloo and Manitoba all have above-average scores in both areas.  After that, it gets trickier: Ottawa has a well-above-average score in NSERC disciplines but a below-average one in SSHRC disciplines; Saskatchewan and Calgary are above average in SSHRC disciplines but not in NSERC ones.  Laval, Dalhousie and Western are below average in both.

But what about schools outside the U-15?  Well, Simon Fraser makes the top ten in both fields, a claim most of the U-15 can’t make.  York, Concordia and Trent (yeah, we did a double-take, too) all have above-average scores in both fields; from a purely bibliometric perspective, they are at least in the same class as Manitoba.  Trent and Concordia don’t look so good when funding measures are taken into account, but York and Simon Fraser do OK and seem at least the equal of a few of the U-15.  One could also make a decent case for Guelph, which is well above average in SSHRC disciplines, and only a shade below it in NSERC ones.

So why aren’t these schools in the U-15’s big research club, even though they clearly outperform some of the weaker U-15 members? Unfortunately, the answer is prestige. If York and Concordia were allowed into the club, Toronto and McGill would probably want to find another sandbox to play in. In academia, exclusivity matters.

August 30

Research Rankings: Burning Questions

We understand that some results from our research rankings are causing some head-scratching. We thought we’d give you some insight into some of the key puzzles.

Q: Why isn’t U of T first? U of T is always first.

The fact that we didn’t include medical research is a big reason; had we done so, the results might have been quite different. But part of it also is that Toronto’s best subjects tend to be ones with high research costs and high publication/citation rates. Once you control for that, UBC surpasses Toronto on all measures.

Q: Why does UBC appear to be so much better than everyone else in SSHRC-related disciplines?

A variety of reasons, but much of it is down to the fact that the Sauder School is really good.

Q: Looking at the data, which schools stand out as being under-rated?

Simon Fraser makes the top ten in both SSHRC and NSERC disciplines, which most of the U-15 can’t say. UQ Rimouski came seventh in science and engineering – they aren’t very big but their strength in marine sciences puts them close to the top overall. In SSHRC-related disciplines, the answer is Guelph, which does extremely well in this area, despite having a reputation which is more science-based. York and Trent over-perform in both science and arts. York might not be such a surprise – it’s a big school with lots of resources even if it isn’t super in any of the “money” disciplines. But Trent was a revelation – by far the best publication record of any small-ish school in the country across all disciplines.

Q: And over-rated?

Despite being U-15 members, Western, Dalhousie and Laval all had relatively modest performances. At these schools more than the others, a lot of their research prestige seems to hang on their medical faculties.

Q: Any anomalies?

Apart from l’Université de Montréal, none of the francophone schools do very well in the social sciences and humanities rankings, and the culprit is on the bibliometric side rather than the funding side. Publishing in French shrinks the potential audience, which reduces potential citations and hence H-index scores. In the sciences and engineering, where publication tends to happen in English, francophone schools actually punch above their weight.

Q: Any trends of note?

UBC aside, it’s the Ontario institutions that really steal the show. Sure, they’re funded abysmally, but they perform substantially better on publication measures than anyone else in the country. We can’t say why, for sure, but maybe those high salaries really work. They’re tough on undergrad class sizes, though…

August 29

Research Rankings

Today, we at HESA are releasing our brand new Canadian Research Rankings. We’re pretty proud of what we’ve accomplished here, so let me tell you a bit about them.

Unlike previous Canadian research rankings conducted by Research InfoSource, these aren’t simply about raw money and publication totals. As we’ve already seen, those measures tend to privilege strength in some disciplines (the high-citation, high-cost ones) more than others. Institutions which are good in low-citation, low-cost disciplines simply never get recognized in these schemes.

Our rankings get around this problem by field-normalizing all results by discipline. We measure institutions’ current research strength through granting council award data, and we measure the depth of their academic capital (“deposits of erudition,” if you will) through the H-index (which, if you’ll recall, we used back in the spring to look at top academic disciplines). In both cases, we determine the national average of grants and H-indexes in every discipline, and then express each individual researcher’s and department’s scores as a function of that average.

(Well, not quite all disciplines. We don’t do medicine because it’s sometimes awfully hard to tell who is staff and who is not, given the blurry lines between universities and hospitals.)
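For the curious, here’s a minimal sketch of what that normalization looks like in practice. This is ordinary Python with invented records – it is not our actual pipeline or data, just an illustration of the two-step logic described above:

```python
# Minimal sketch of field-normalization, with invented records (not HESA's data).
from collections import defaultdict

# One record per researcher: institution, discipline, H-index, council funding.
records = [
    {"inst": "U1", "disc": "physics", "h": 20, "grant": 120_000},
    {"inst": "U1", "disc": "history", "h": 6,  "grant": 15_000},
    {"inst": "U2", "disc": "physics", "h": 12, "grant": 80_000},
    {"inst": "U2", "disc": "history", "h": 3,  "grant": 5_000},
]

# Step 1: national average H-index and grant income in each discipline.
sums = defaultdict(lambda: {"h": 0.0, "grant": 0.0, "n": 0})
for r in records:
    s = sums[r["disc"]]
    s["h"] += r["h"]; s["grant"] += r["grant"]; s["n"] += 1
nat_avg = {d: {"h": s["h"] / s["n"], "grant": s["grant"] / s["n"]}
           for d, s in sums.items()}

# Step 2: express each researcher's scores relative to their discipline's
# average, then average within the institution (1.0 = national average).
inst_scores = defaultdict(list)
for r in records:
    avg = nat_avg[r["disc"]]
    inst_scores[r["inst"]].append((r["h"] / avg["h"], r["grant"] / avg["grant"]))

for inst, scores in sorted(inst_scores.items()):
    h_norm = sum(h for h, _ in scores) / len(scores)
    g_norm = sum(g for _, g in scores) / len(scores)
    print(inst, "normalized H-index:", round(h_norm, 2),
          "normalized funding:", round(g_norm, 2))
```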

Our methods help to correct some of the field biases of normal research rankings. But to make things even less biased, we separate out performance in SSHRC-funded disciplines and NSERC-funded disciplines, so as to better examine strengths and weaknesses in each of these areas. But, it turns out, strength in one is substantially correlated with strength in the other. In fact, the top university in both areas is the same: the University of British Columbia (a round of applause, if you please).

I hope you’ll read the full report, but just to give you a taste, here’s our top ten for SSHRC and NSERC disciplines.

Eyebrows furrowed because of Rimouski? Get over the preconception that research strength is purely a function of size. It usually is, but small institutions with high average faculty productivity can occasionally look pretty good as well.

More tomorrow.

August 28

Research Grants by Discipline

So, tomorrow, HESA will be releasing its inaugural set of Canadian research rankings. We think they’re pretty cool: not only are they the first attempt in Canada to employ field-normalization techniques on bibliometric data; as far as we’re aware, they are also the first rankings anywhere in the world to apply field-normalization to research income.

Why does this matter? Well, not all research was created alike. Each discipline has a different publication culture, for starters. The average H-index score for an academic in astrophysics is about four times that of an environmental scientist and ten or eleven times that of a historian. Without field-normalization, any mediocre bunch of physicists trounces the best history department in the world. Yet, remarkably, most rankings and ratings systems choose to compare universities without normalizing for differences in publication culture.

It’s the same with research funding. Not only are researchers in some disciplines likelier to receive money than others, but the size of the average grant also differs because it’s inherently more expensive to run experiments in some disciplines than others. The gap between disciplines would be even greater if NSERC actually funded projects fully, but that’s another story.

A few months ago, we showed you some of the differences in disciplinary H-index averages, so you should already have a sense of how those differences play out. But we haven’t shown you the differences in funding by discipline. And so, herewith, the average amount of granting council funds distributed per professor in 2010-11, for selected disciplines.

Average Granting Council Awards per Faculty Member, by Discipline

There aren’t a lot of real surprises here: on average, the amount of funding per faculty member in engineering is about sixteen times what it is in the humanities and seven times what it is in the social sciences. It’s also about 40% more than it is in the sciences. This, of course, is the reason one should field-normalize the data; without it, universities with large engineering faculties will tend to look good regardless of how good their scholars are in the rest of the institution.
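To see why this matters, here’s a toy illustration. The schools and dollar figures are invented; only the rough engineering-to-humanities ratio echoes the numbers above:

```python
# Toy example of the bias: without field-normalization, an engineering-heavy
# school "wins" on raw dollars per professor even when every one of its
# researchers is exactly average for their discipline.
avg_grant = {"engineering": 80_000, "humanities": 5_000}  # invented averages

school_a = {"engineering": 100, "humanities": 20}   # engineering-heavy
school_b = {"engineering": 20, "humanities": 100}   # humanities-heavy

for name, mix in [("A", school_a), ("B", school_b)]:
    profs = sum(mix.values())
    raw = sum(n * avg_grant[d] for d, n in mix.items()) / profs
    # Field-normalized, every professor here sits at exactly 1.0x their
    # discipline's average, so both schools score 1.0.
    print(f"School {name}: raw $/prof = {raw:,.0f}, field-normalized score = 1.0")
```

On raw dollars, School A looks nearly four times as strong as School B even though both employ only dead-average researchers; normalization removes that illusion.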

Anyways, all of this and more tomorrow in our all new rankings! (You know you love them, you naughty people.)

June 15

Bibliometrics Finale: Age and Size

Today, we use our H-index Benchmarking of Academic Research (HiBAR) to look at the relationship between institutional characteristics and H-index scores.

We’ve talked a lot this week about the positive correlation between a researcher’s age and his or her H-index score. But there’s another correlation to watch for: normalized institutional average H-index scores and institutional age. Check it out:

Normalized Institutional Average H-Index Score as a Function of Institutional Age

The result isn’t wholly clear cut: there are a lot of institutions created in the 50s and 60s which have surprisingly good normalized H-index scores, and a clutch of small liberal arts schools which have very low averages despite being quite old. Overall, though, the pattern holds: the older the institution, the higher its normalized H-index score tends to be.

This is interesting because it helps to demonstrate the degree to which institutional prestige – which is generally correlated with institutional age – has to do with where talented academics try to locate. Academic salaries aren’t that different across the country; if academics only looked at money, one would expect a much weaker relationship between institutional age and average normalized H-index values. Basically, top researchers want to be where other top researchers are; and older, more prestigious institutions are always going to have a head start as far as concentrations of talent are concerned. It’s a virtuous circle – and one that’s very hard for new institutions to crack.

Another relationship we’re looking at is institutional size and average normalized H-index score. To wit:

Normalized Institutional Average H-Index Score as a Function of Institutional Size

This is a much more straightforward result: big institutions have a lot more faculty members with long publication records. That may seem obvious, but it wouldn’t be the relationship that would hold in, say, the United States, where a lot of top institutions (e.g., Harvard, Yale) are quite small. In Canada, where the ability to pay for big-time research is dependent upon having a lot of undergraduates generating income that can be skimmed, it’s a much more direct relationship.

Stunningly, not a single university with fewer than 20,000 students has an average normalized H-index score above one (i.e., above the national average for all academics). This has some pretty significant implications for schools like Victoria and Saskatchewan, which have genuine research strengths but can’t generate sufficient revenue from undergraduate enrolment to make a really big push into the top league.

Now, the keenest-eyed among you may be looking at those two charts and wondering about those y-axis values. Are those really full-institution H-index values, normalized across all disciplines? Couldn’t somebody do a really interesting and unusually reliable research ranking with those?

The answers are yes and yes. But patience, grasshoppers: we’re not ready to roll it out just yet. You can spend the summer looking forward to it as a back-to-school treat.

June 14

Bibliometrics: Measuring Zero-Impact

Bibliometrics aren’t just useful for analyzing who’s being cited; they’re also pretty good at telling you who’s not being cited.

Today, we’ll look at professors whose H-index (see here for a reminder of how it is calculated) is zero – that is, professors who have either never been published or (more likely) never been cited.
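As a quick reminder, a scholar’s H-index is the largest number h such that h of their papers have at least h citations each. It’s easy to compute; here’s a small sketch in ordinary Python (nothing HESA-specific about it):

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each.
    `citations` is a list of citation counts, one per paper."""
    h = 0
    for i, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= i:
            h = i
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4
print(h_index([0, 0, 0]))         # 0 -- published, but never cited
print(h_index([]))                # 0 -- never published at all
```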

There are three reasons why a scholar might have an H-index of zero. The first is age: younger scholars are less likely to have published, and their publications have had less time in which to be cited. The second is prevailing disciplinary norms: there are some disciplines – English/Literature would be a good example – where scholarly communication simply doesn’t involve a lot of citations of other scholars’ work. The third is simply that a particular scholar might not be publishing anything of particular importance, or indeed publishing anything at all.

Let’s take each of these in turn. We can examine the first two questions pretty easily just by looking at the proportion of scholars with zero H-indexes by rank and field of study (our database includes rank for a little over three-quarters of the academic staff in it – about 47,000 people in total, which is a pretty good sample).

Proportion of Academic Staff without a Cited Publication, by Rank and Field of Study

Surprised? So were we. Not because of the differences across ranks (H-index scores are necessarily positively correlated with length of career) or across fields of study (we covered this one already). What really blew us away was the number of full professors who have never had a paper cited, especially in the sciences. Who knew that was even possible?

So, what about that third reason? It is obviously difficult to generalize, but one should note that even within disciplines, there are some enormous gaps in publication/citation rates. In economics as a whole, 15.6% have an H-index of zero, but the proportion of economists in any individual economics department with an H-index of zero varies between 0% and 63%. In biology (disciplinary average: 7.7%), individual departments range between 0% and 60%; in history (disciplinary average: 13.4%), the range is between 5% and 50%. It is vanishingly unlikely that these differences are solely the result of different departmental age profiles; more likely, they reflect genuine differences of scholarly strength.
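The within-discipline comparison is simple to reproduce if you have scholar-level data. A sketch of the shape of that calculation, with entirely invented records, looks like this:

```python
# Sketch of the within-discipline comparison: share of zero-H-index scholars
# in each department, and the spread of that share across a discipline.
# The records below are invented; only the structure mirrors the analysis.
from collections import defaultdict

scholars = [  # (discipline, department, h_index)
    ("economics", "U1", 0), ("economics", "U1", 7), ("economics", "U2", 0),
    ("economics", "U2", 0), ("history", "U1", 3),  ("history", "U2", 0),
]

by_dept = defaultdict(list)
for disc, dept, h in scholars:
    by_dept[(disc, dept)].append(h)

zero_share = {k: sum(1 for h in v if h == 0) / len(v) for k, v in by_dept.items()}

for disc in sorted({d for d, _ in zero_share}):
    shares = [s for (d, _), s in zero_share.items() if d == disc]
    print(f"{disc}: departments range from {min(shares):.0%} to {max(shares):.0%} zero H-index")
```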

Now, there’s nothing saying all professors need to be publishing machines. But if that’s the case, maybe not all professors need to have 2/2 or 2/1 teaching loads to conduct all that impactful research, either. Running a university requires trade-offs between research and teaching: bibliometric analysis such as this is a way to make sure those trade-offs are well-informed.
