A Response to Critics

So, we’ve been hearing a number of criticisms – both directly and via the grapevine – of the research rankings we released last week. (Warning: if you’re not entranced by bibliometric methodology, you can safely skip today’s post).

The main point at issue is that at some schools, our staff counts appear to be on the high side. Based on this, some schools have inferred that we are judging them too harshly – that if we had fewer observations, the denominator would be smaller and their score would rise. But this is not quite correct.

There are two possible reasons why our staff counts are high. The first is that we do double-count people who are cross-posted across departments. But that's a feature, not a bug. We aren't taking one h-index score and dividing it across two observations – that would be silly. Instead, we calculate the person's normalized h-index separately in each discipline and then average the two scores.

Say Professor Smith, with an h-index of 4, is cross-posted between discipline A (where the average h-index is 5) and discipline B (where the average h-index is 3). This person's normalized score would be .8 (4/5) in discipline A and 1.33 (4/3) in discipline B. When aggregating to the institutional level, both scores are included, but because they are averaged this person "nets out" at (.8 + 1.33)/2 ≈ 1.067. If they were only counted once, they would need to be assigned a score of either .8 or 1.33, neither of which seems very sensible. Thus, to the extent that high staff numbers are due to such double-counts, we're confident our methodology is fair and doesn't penalize anyone.
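For those who like to see the arithmetic spelled out, here is a minimal sketch of that calculation in Python. The only thing taken from our methodology is the formula itself (individual h-index divided by the discipline-average h-index, averaged across a person's postings); the function name and Professor Smith's numbers are purely illustrative.

```python
# Minimal sketch of the cross-posting calculation. Only the formula
# (individual h-index / discipline-average h-index, averaged across
# postings) comes from the methodology described above; the function
# name and the numbers are illustrative.

def normalized_scores(h_index, discipline_avg_h_indexes):
    """One normalized score per discipline the person is posted in."""
    return [h_index / avg for avg in discipline_avg_h_indexes]

# Professor Smith: h-index of 4, cross-posted between discipline A
# (average h-index 5) and discipline B (average h-index 3).
smith = normalized_scores(4, [5, 3])   # [0.8, 1.333...]

# Both observations enter the institutional aggregate, but averaging
# means the person "nets out" at the mean of the two scores.
netted = sum(smith) / len(smith)
print(round(netted, 3))                # 1.067
```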

The second possibility is that errors were made in harvesting faculty names from 3,500-odd departmental websites. Some mistakes are probably ours, but a major factor seems to be a widespread practice of schools not distinguishing between permanent and part-time faculty on their websites. To the extent that the misidentified staff are graduate students or post-docs, the miscounts will lower institutional scores, since their low h-indexes drag the institutional average down. But to the extent that the misidentified staff are adjuncts, especially recently retired faculty or distinguished practitioners from outside the academy, our miscounts may actually be inflating institutional scores. Smaller denominators don't necessarily mean higher scores, as the sketch below shows.
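Here is a toy illustration of that point, with entirely hypothetical numbers, assuming (as above) that the institutional score is simply the mean of faculty members' normalized h-indexes:

```python
# Hypothetical numbers throughout: the institutional score is taken to
# be the mean of the faculty's normalized h-indexes.

roster = [1.2, 0.9, 1.1]                 # correctly identified faculty

# A misidentified grad student (low h-index) drags the average down;
# removing them shrinks the denominator and RAISES the score.
with_grad_student = sum(roster + [0.2]) / 4   # 0.85
corrected         = sum(roster) / 3           # ~1.067

# A misidentified but prolific adjunct props the average up; removing
# them shrinks the denominator and LOWERS the score to the same ~1.067.
with_adjunct = sum(roster + [2.0]) / 4        # 1.3

print(with_grad_student, corrected, with_adjunct)
```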

With the help of one affected institution, we're trying to work out exactly what the issue is and whether it in fact affects scores significantly. Since we believe in the importance of transparency, accuracy and accountability (see the Berlin declaration on rankings, which I helped draft), we'll extend the same offer to any institution that feels our methodology has portrayed it inaccurately. If we find a problem, we'll correct it and publish the results here.

You can’t be fairer than that.
