Why do we pay for research from the public purse, exactly? As I wrote a few weeks ago, it wasn’t always the case. It was only after American scientists working in universities demonstrated how their knowledge and skills could contribute to national security that the idea really took off.
Fifteen years later, two American economists came to provide a dollars-and-cents rationale for public funding of research. In 1959, Richard Nelson argued that the private sector was likely to underinvest in “basic research” with wide applications, relative to “development” of specific products and applications, because companies were simply much less likely to be able to fully capture the benefits of the former. Kenneth Arrow then popped up a couple of years later, in an essay published in the volume The Rate and Direction of Inventive Activity, and argued that there were actually three reasons why companies might not invest appropriately in research: indivisibility, inappropriability and uncertainty.
When we hear university lobbyists talk about the need for more “investment” in research, they’re usually relying on Nelson’s arguments, which imply that public funding of basic research is a classic case of government intervention to solve a market failure. But maybe we should pay more attention to Arrow’s arguments: the private sector doesn’t shun basic research because it’s unprofitable – it shuns it because it’s risky.
Yet, as governments around the world have increased their investments in research, they have also been promoting ideas about ensuring that the public receives “value-for-money.” The problem is that doing this creates a lot of incentives for “safe” research – research that one knows in advance has a good chance of “success”, in the sense that it will yield a modest advance in human understanding (and, of course, publishable results).
The problem is that the more governments insist on “value for money,” the less useful public funding actually is. If government-funded science is just as risk-averse as private science, what’s the point? Obviously all that research money is useful for the care and feeding of twenty thousand scientists or so, but what’s the actual public benefit?
There’s an interesting thought experiment here: what if we slashed research budgets but also removed all the constraints around value-for-money? There’d be less money overall, but the crazy geniuses would be let loose to do things that are really innovative. Sure, there’d be higher rates of failure, but that’s what public funding is actually for.
What do you think the actual benefits to society would be? Would they increase or decrease? How big a research budget cut would it take before the loss of money outweighed the gain of losing the constraints? In other words, what’s the cost of “value-for-money”?
Very interesting thought.
It also makes me wonder what this would do to university and faculty incentives around the emphasis on research funding and publication.