I am a big fan of the economist Paul Romer, who is most famous for putting knowledge and the generation thereof at the centre of discussions on growth. Recently, on (roughly) the 25th anniversary of the publication of his paper on Endogenous Technological Change, he wrote a series of blog posts looking back on some of the issues related to this theory. The most interesting of these was one called “Human Capital and Knowledge”.
The post is long-ish, and I recommend you read it all, but the upshot is this: human capital (H) is something stored within our neurons, and it is perfectly excludable. Knowledge (A) – that is, human capital codified in some way, such as writing – is non-excludable. And people can use knowledge to generate more human capital (once I read a book or watch a video about how to use SQL, I too can use SQL). In Romer’s words:
Speech. Printing. Digital communications. There is a lot of human history tied up in our successful efforts at scaling up the H -> A -> H round trip.
And this is absolutely right. The way we turn patterns of thought in one person’s head into thoughts in many people’s heads is the single most important question in growth and innovation, which in turn is the single most important question in human development. It’s the whole ballgame.
It also happens to be what higher education is about. The teaching function of universities is partially about getting certain facts to go H -> A -> H (that is, subject matter mastery), and partially about getting certain modes of thought to go H -> A -> H (that is, ways of pattern-seeking, sense-making, meta-cognition, call it what you will). The entire fight about MOOCs, for instance, is a question of whether they are a more efficient method of making H -> A -> H happen than traditional lectures (to which I think the emerging answer is that they are competitive if the H you are talking about is “fact-based”, and not so much if you are looking at the meta-cognitive stuff). But generally, “getting better” at H -> A -> H in this way is about getting more efficient at the transfer of knowledge and skills, which means we can do more of it for the same price, which means that economy-wide we will have a more educated and productive society.
But with a slight amendment it’s also about the research function of universities. Imagine now that we are not talking about H -> A -> H, but rather H -> A -> H1. That is, I have a certain thought pattern, I put it into symbols of some sort (words, equations, musical notation, whatever), and when it is absorbed by others, it generates new ideas (H1). This is a little different from what we were talking about before: the first is about whether we can pass information or modes of thought quickly and efficiently; this one is about whether we can generate new ideas faster.
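For readers who want the formalism, this loop is roughly what the research sector in Romer’s 1990 paper captures: the existing stock of codified knowledge (A) and the human capital devoted to research combine to produce new knowledge. A sketch of that equation, paraphrased from memory rather than quoted from the paper:

```latex
% New knowledge grows in proportion to both the human capital devoted to
% research (H_A) and the existing stock of codified knowledge (A);
% \delta is a productivity parameter.
\dot{A} = \delta \, H_A \, A
```

The H -> A -> H1 round trip is the micro-level story behind something like that equation: the bigger the stock of A that researchers can absorb, the faster the new ideas arrive.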
I find it helpful to think of new ideas as waves: they emanate outwards from the source and lose intensity as they move further away. But the speed of a wave is not constant: it depends on the density of the medium through which it moves (sound travels faster through solids than through water, and faster through water than through air, for instance).
And this is the central truth of innovation policy: for H -> A -> H1 to work, there has to be a certain density of receptor capacity for the initial “A”. A welder who makes a big leap forward in marine welding will see her ideas spread more quickly if she is in Saint John or Esquimalt than if she is in Regina. To borrow Matt Ridley’s metaphor of innovation being about “ideas having sex”, ideas will multiply more if they have more potential mates.
This is how tech clusters work: they create denser media through which idea-waves can pass; hence they speed up the propagation of new ideas, and hence, under the right circumstances, they speed up the propagation of new products as well.
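To make the density point concrete, here is a toy sketch – entirely my own illustration, not anything drawn from Romer or from cluster data. One person starts with a new idea, and the probability that any given “knower” reaches any given “non-knower” in a period stands in for receptor density. The denser medium saturates in far fewer periods.

```python
import random

def steps_to_saturation(n_people, contact_prob, seed=0):
    """Toy contagion-style diffusion: one person starts with the idea; each
    period, every knower independently reaches each non-knower with
    probability contact_prob (a stand-in for receptor density).
    Returns the number of periods until 90% of people have the idea."""
    rng = random.Random(seed)
    knows = [False] * n_people
    knows[0] = True
    steps = 0
    while sum(knows) < 0.9 * n_people:
        steps += 1
        knowers = [i for i, k in enumerate(knows) if k]  # snapshot for this period
        for i, k in enumerate(knows):
            if not k and any(rng.random() < contact_prob for _ in knowers):
                knows[i] = True
    return steps

# Sparse vs. dense "receptor capacity": same population, same idea,
# very different speeds of propagation.
print("sparse medium:", steps_to_saturation(200, 0.002), "periods")
print("dense medium :", steps_to_saturation(200, 0.02), "periods")
```

The exact numbers are meaningless; the point is the shape of the result: density buys speed.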
This has major consequences for innovation policy and the funding of research in universities. I’ll explain that tomorrow.