Bibliometrics, Part the First

The shock and horror generated by proposals of teaching-only universities makes it pretty clear that most of Canadian academia thinks that research is important. So important, indeed, that we want every professor to devote 40% of his or her time to it, under the 40-40-20 rule that splits faculty time between research, teaching, and service.

Now, that’s a pretty serious commitment. Even before you get to the costs of graduate students, research grants and research infrastructure, 40% of academic staff time works out to roughly $2 billion/year spent on research (that is, 40% of a total academic salary bill in the neighbourhood of $5 billion).

So why do we spend so little time measuring its impact?

Oh, we’re all big on measuring inputs, I know – it’s hard to go a week without some communications office telling you how successful its institution is at getting tri-council grants. But presumably the sign of a good university isn’t how much money you get, but what you do with it. In a word, outputs.

Some countries have extraordinary ways of measuring these. Take New Zealand’s Performance-Based Research Fund Assessment, for instance. This involves having researchers in every department put together portfolios of their work, which are then peer-assessed by committees made up of local and international experts. There’s a lot to recommend this approach in terms of its use of peer evaluators, but it’s really expensive, costing something like 15% of the annual research budget in the year it runs (the equivalent in Canada would be about $400 million, though presumably there would be some economies of scale that would bring the cost down a bit). As a result, they only do it every six years.

The alternative, of course, is bibliometrics; that is, using data on publications and citations to measure research output. It’s in use across much of Europe and Asia, but is comparatively underused here in Canada.

Of course, as with any system of measurement, there are good and bad ways of using bibliometric data. Not all of the ways in which bibliometrics have been used make sense – in fact, some are downright goofy. Over the next couple of weeks, I’ll be demystifying the use of bibliometrics, to help you distinguish the good from the bad and the useful from the not-so-useful, and to provide some food for thought about how they can be used in institutional management.

But there is one use of research metrics that I can tell you about right now. In next week’s Globe and Mail, HESA presents a list of Canada’s most distinguished academics, by field of study, based on a joint measure of publications and citations known as the “H-Index.” Take a look at Tuesday’s Report on Business and let us know what you think.
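For the curious, the idea behind the H-Index is simple: a researcher has an h-index of h if h of his or her papers have each been cited at least h times. Here is a minimal sketch of the calculation in Python (the function name and the example citation counts are purely illustrative, not drawn from the Globe and Mail rankings):

```python
def h_index(citations):
    """Return the largest h such that h papers have at least h citations each."""
    # Rank papers from most-cited to least-cited.
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # the paper at this rank still clears the bar
        else:
            break
    return h

# Hypothetical example: five papers cited 10, 8, 5, 4 and 3 times.
# Four papers have at least 4 citations each, but there aren't five
# papers with 5+ citations, so the h-index is 4.
print(h_index([10, 8, 5, 4, 3]))  # -> 4
```

The appeal of the measure is that it rewards neither sheer volume (many uncited papers) nor a single blockbuster (one heavily cited paper), but sustained, cited output.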
