HESA

Higher Education Strategy Associates

HiBAR – Bibliometric Analysis

HiBAR, or Hirsch-Index Benchmarking of Academic Research, is a tool that measures the aggregate productivity and impact of groups of academic researchers and faculty using advanced bibliometric analysis. HiBAR is designed to support review processes, complementing expert peer review and other data to help departments, faculties and administrations make better-informed decisions.

Why measure publication and citation counts?

Publication counts, citation counts and other bibliometric measures have been widely used to evaluate the productivity and impact of academic researchers. The increasing importance of publication records to hiring processes, quality assurance and accountability mechanisms has brought these measures into broad public use over the past decade. At the same time, the practical challenges of this type of analysis have diminished as large citation databases have reduced the cost of collecting source data. As a result, bibliometric analysis has become a well-established, affordable, and powerful window into the productivity and impact patterns of academic research.

The Hirsch Index

The Hirsch index, commonly abbreviated as the H-index, was introduced in 2005 as a compound metric based on a single author’s publications and the citations each publication has received. Stated simply, a researcher’s H-index is the largest number n for which n of the researcher’s publications have each been cited at least n times. For example, a researcher who had published three papers cited 4, 2, and 0 times, respectively, would have an H-index of two. Hirsch argued that by combining the number of publications with their impact, the H-index reflects the evaluation of the corresponding scientific community (Hirsch 2005; see also Hirsch 2007). This index has a number of advantages over other bibliometric measures that make it well suited to comparisons between groups of researchers:

  • It considers both productivity and impact. Two researchers with similar H-indexes are comparable in terms of their overall academic impact.
  • It is not unduly influenced by a small number of highly cited articles, which may not be representative of a researcher’s career output. The H-index similarly discounts papers that have had little influence.
  • Because the data required to calculate an accurate H-index is publicly available, large numbers of researchers can be assessed at a relatively low cost.
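As a concrete illustration of the definition above, the following is a minimal Python sketch of the H-index calculation. The function name and the sample citation counts are ours, chosen to match the example in the text; this is not HiBAR code.

    def h_index(citations):
        """Return the largest n such that n publications have at least n citations each."""
        # Rank publications from most to least cited.
        ranked = sorted(citations, reverse=True)
        h = 0
        for rank, count in enumerate(ranked, start=1):
            # The rank-th most-cited paper must have at least `rank` citations.
            if count >= rank:
                h = rank
            else:
                break
        return h

    # Three papers cited 4, 2, and 0 times give an H-index of 2.
    print(h_index([4, 2, 0]))  # prints 2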

How can HiBAR be used to benchmark departments and institutions?

By comparing only departments with similar publication cultures, and by limiting comparison groups to a small set of target institutions, HiBAR creates relative comparisons that account for the differences between disciplines and institutions. The dramatically different publication cultures among disciplines (bioscience researchers are typically far more productive than philosophy researchers, for example) can thereby be taken into consideration. Similarly, limiting the comparison group to a select set of institutions makes it possible to compare an institution only against its peers and competitors; by contrast, there is limited utility in comparing a large research-focussed institution to a small college in any discipline.

The benchmarks used in HiBAR are specifically adapted to these issues, and offer relevant, case-specific comparisons that overcome the challenges faced by simpler bibliometric measures.

Metrics and benchmarks used in HiBAR

HiBAR uses three metrics to benchmark groups of researchers: the mean H-index, the mean H-index among the lowest-performing 50% of included faculty, and the mean H-index among the top-performing 20% of included faculty. A complete picture of productivity and impact requires all three measures, as the differences between them characterize the distribution of performance at an institution, offering insight into the structure and pattern of academic production.

It is not uncommon, for example, for a department to have a large number of low-performing faculty and a small number of “superstar” researchers, while other departments show strong performance even among their least productive faculty. Similarly, the academic culture in some disciplines expects even newly hired junior faculty to have published in peer-reviewed journals, whereas in other disciplines, such as philosophy, many new faculty are hired before they have published.

By looking at the highest-performing and lowest-performing faculty separately, it is possible to observe these patterns and to compare groups of researchers on more than a single average. HiBAR captures the distribution of performance across a group of researchers, adding a level of detail that a simple mean cannot provide.
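For illustration, the sketch below shows how these three benchmarks might be computed in Python from a list of individual H-indexes. The function and the way the 50% and 20% cut-offs are rounded are our assumptions; they do not describe HiBAR’s internal implementation.

    from statistics import mean

    def hibar_benchmarks(h_indexes):
        """Compute the overall mean H-index, the mean of the bottom 50%,
        and the mean of the top 20% of a group of researchers."""
        ranked = sorted(h_indexes)                  # lowest to highest
        n = len(ranked)
        bottom_half = ranked[: max(1, n // 2)]      # lowest-performing 50%
        top_fifth = ranked[-max(1, n // 5):]        # top-performing 20%
        return {
            "mean": mean(ranked),
            "bottom_50_mean": mean(bottom_half),
            "top_20_mean": mean(top_fifth),
        }

    # Hypothetical department of ten faculty members.
    print(hibar_benchmarks([0, 1, 2, 2, 3, 5, 6, 8, 12, 21]))
    # e.g. {'mean': 6, 'bottom_50_mean': 1.6, 'top_20_mean': 16.5}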

Sample HiBAR Report

[Sample HiBAR benchmarking report figure]

For more information about HiBAR, or for a price list, contact info@higheredstrategy.com or (416) 848-0215.