Over the past 24 to 36 months, we’ve seen a real shift towards talking about universities in terms of community benefit and impact rather than in terms of their scientific output. No more valorization based on silly bibliometrics! Valorization based instead on… well, what exactly?
The thing about publish-or-perish is that it had created some reasonably fair and equitable standards. These standards varied from place to place, and in some places they were overly rigid about pure publication metrics, but basically people were given some reasonable idea of how much publishable work they had to produce for tenure and promotion.
But “impact”? How do you measure that?
There is a significant literature on the measurement of academic impact. The theoretical basis really comes from the work on Mode 1 vs. Mode 2 knowledge production from the 1990s, in which Mode 1 is universal and context-free, and Mode 2 is situational, embedded and experiential (see more here). In the UK, where institutional research funding envelopes are calculated and distributed on a separate basis from the funding to support teaching (this is common across Europe and Australasia but quite different from North America), the funding authorities began wondering why they were mostly basing their funding on Mode 1 research and not Mode 2. So, in the 2014 Research Excellence Framework (REF) exercise, institutions began being scored on their research impact.
To be clear here, “impact” is something short-term. In the UK, the exact definition of impact in research is that it be “of direct relevance to the needs of commerce, industry, and to the public and voluntary sectors; scholarship; the invention and generation of ideas, images, performances, artefacts including design”. Lots of scholars don’t do anything like that – which is totally fine; no one says any single researcher must cover all the bases – but many do, even in the humanities, where opposition to the use of “impact” as a metric was strongest.
In Canada, “impact” is something that occasionally gets trotted out in the assessment process for grants. But in many cases it is not much more than a creative writing exercise – something one talks about at the beginning of a project in order to get money but that plays little role thereafter; few agencies bother to check what the impact actually was (I’m told Genome Canada is a bit of an exception). It is not something that is actually evaluated or, God forbid, measured.
And whatever institutions say about valuing impact, very few have seriously reviewed their tenure and promotion rules to match the growing rhetorical emphasis on impact. That’s a problem. If institutions claim to value community impact in research while continuing to reward only straight-up traditional research publications, then professors will continue to chase publications and citations rather than impact, and I think institutions will get found out quickly.
From all the research on impact, and on what kinds of research tend to be most impactful, I would say there are two big takeaways in the UK experience from which Canadian institutions could profitably learn. The first is that in practice, “impact” is not that different from “knowledge mobilization”: it is really a question of who is going to pay attention to your research findings and use them to make changes in organizations, policy, communications, or whatever. As a rule of thumb, you’re going to get a lot more uptake of research results if at least some of the people the research is purported to benefit are brought into the project at the research design phase. That way, they have an interest in the results and are more likely to put them to use.
Sounds obvious, right? You’d be surprised how often it doesn’t happen, though. That’s because coming up with research designs is what scholars do, and bringing outsiders into that process slows things down and makes it less fun and spontaneous. That’s understandable, I think – and a valid reason not to do it if what you are looking for is high-output Mode 1 research. But here’s the thing: impact research is basically about relationships with people outside the university. These relationships are not something you can conjure into existence: they take time to develop and cultivate. They require give, take and patience. And if universities want their researchers to do more of this kind of research, then these relationships are going to have to be encouraged through institutional processes (including tenure and promotion) inside the academy.
Second: if you want to measure impact, then the data collection methods for eventual knowledge mobilization – that is, how the research is used and how it changes opinions, behaviour, institutional practices, etc. – need to be built into the actual research design and collected in the same way that “real” research data is. There is a tendency in Canada to engage in hand-waving about the benefits of higher education and research, perhaps never more so than when it comes to ways to “value” higher education. But for impact to be taken seriously – especially by governments, which are clearly skeptical about the value of higher education in the first place – the academy is going to have to get a whole lot better at this. And that means vastly improved data collection practices.
Remember: this kind of impact, and the measurement thereof, isn’t something you necessarily want to push on all scholars. But it is something every institution and indeed every faculty needs to be able to demonstrate is happening at least somewhere. Given the current fashion for talking about research this way, some systematic thinking about how to promote it and measure it is probably overdue.
Thank you once again for a stimulating take on a complex issue with implications extending beyond universities. My first thoughts after reading the piece were to recall Campbell’s and Goodhart’s Laws.
I, too, am reminded of Goodhart’s law, with which administrators seem strangely unfamiliar.
More importantly, I really don’t think that universities are even capable of placing an emphasis on impact, while respecting the fact that eighty or ninety percent of everything we do has all the instrumental utility of a new-born baby. Instead, we can expect every application for tenure, promotion or merit to demand a “statement of impact.” Of course, this will just become another creative writing exercise, but that’s hardly a justification. It’s more like a tax on the souls of everyone forced to spin B.S.
Rather than trying to measure impact, our representatives should set about convincing governments and the world at large that the good we produce cannot be linked to outcomes in any straightforward manner. They might talk about the many different sorts of science that went into the mRNA vaccine for Covid, and how many of these were pursued more or less because they were interesting, long before they became world-saving.
This is not an easy case to make. It is much easier to point to a sum of research dollars, or a particular applied doo-hickey that became lucratively patented, but it is vital to show how many breakthroughs depend on decades of work by unsung and often underfunded academics, who have to defend themselves against anti-intellectuals demanding that they show “impact.”
This blog makes me think about the work we have all been doing to rethink how we do and measure community-focused research, particularly with Indigenous communities. I think there are lessons to be learned here for industry- and economic-community-focused research.