I recently came across a fascinating, counterintuitive piece of trivia on Timothy Taylor’s Conversable Economist blog. When ATMs were introduced around 1980, there were half a million bank tellers in America. How many were there 30 years later, in 2010? Answer: roughly 600,000. Don’t believe me? See the data here.
Most people I’ve told this story to are confused by it. ATMs are one of the classic examples of how technology destroys “good middle class jobs”. And so the first instinct many people have when confronted with this information is to try to defend the standard narrative – usually with something like “ah, but population growth, so they still took away jobs that could have existed”. This is wrong, though. In manufacturing, we see absolute declines in jobs due to (among other things) automation. With ATMs, however, all we see is a change in the rate of growth.
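To see why this is a change in the growth rate rather than a decline, a quick back-of-the-envelope calculation helps. It uses only the round numbers cited above (half a million tellers in 1980, roughly 600,000 in 2010); the point is just how modest the implied annual growth is:

```python
# Figures cited in the post above (approximate, from Taylor's blog).
tellers_1980 = 500_000
tellers_2010 = 600_000
years = 30

# Compound annual growth rate: (end / start) ** (1 / years) - 1.
cagr = (tellers_2010 / tellers_1980) ** (1 / years) - 1
print(f"Annualized growth: {cagr:.2%}")  # about 0.6% per year
```

Slow growth, to be sure – but growth, not the absolute job losses the standard narrative would predict.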
The key thing to grasp here is that the machines did not put the tellers out of business; rather, they modified the nature of bank telling. To quote Taylor, “tellers evolved from being people who put checks in one drawer and handed out cash from another drawer to people who solved a variety of financial problems for customers”.
There’s an important truth here about the way skill use evolves in the economy. When most people think about technological change and its impact on skills, they tend to presume “more machines → high tech → more tech skills needed → more STEM”. But this is, at best, half the story. Yes, new job categories are springing up in technical areas that require new forms of training. But the more important news is that older job categories evolve into new ones with different requirements and a different skill set. And in most cases, those new skills are, as in our bank teller example, about problem-solving.
Now, as a society, every time we see job requirements changing, our instinct is to keep kids in school longer. But: a) pretty soon cost constraints put a ceiling on that strategy; and, b) this approach is of limited usefulness if all you’re doing is teaching the same old things for longer.
At a generic level, it’s not hard to teach in a way that gives students the skills they will need to thrive in the future labour market. Most programs, at some level, teach problem-solving (identifying a problem, synthesizing data about it, generating possible solutions, evaluating them, and selecting one), although not all of them test for these skills explicitly, or explain to students how they are likely to be applied later on. More could be done to encourage teamwork and interpersonal skills, but these aren’t difficult to add (although having the will to add them is another matter).
The more difficult problem has to do with understanding where technology is likely to replace jobs and where it is likely to modify them. What do driverless cars mean for the delivery business? At a guess, an expanded market for the delivery of personalized services during commuting time. Improved automatic diagnostic technology or robot pharmacists? More demand for health professionals to dispense lifestyle and general health counselling. Increased automation in legal affairs? Less time on research means more time for, and emphasis on, negotiation.
I could go on, but I won’t. The point, as Tyler Cowen argues in Average is Over (a book whose implications for higher education have been criminally under-examined), is that the future in many fields belongs to people who can best blend human creativity with the power of computers. And so the relevant question for universities is: to what extent are you monitoring technology trends and thinking about how they will change what you teach, how you teach it, and how you evaluate it? Or, put differently: to what extent are your curricula “future-ready”?
In too many cases, the answers to these questions land somewhere between “not very much” and “not at all”. As a sector, we have some homework to do here.