
Higher Education Strategy Associates

Category Archives: Technology

March 04

A New Logo for Canadian Higher Education

Last week, the government of Canada announced to great fanfare (Hip Hip Hooray! Caloo Callay!) that Canada has a new international education brand.  They actually meant “logo” not “brand”, but whatever – long past due because the old logo was terrible.  To wit:

[Image: the old “Education in/au Canada” logo]

Ridiculous, right?  “Education in/au Canada”?  Most students who want to come study in Canada do so in order to improve their English, and Ottawa comes up with a logo that requires you to already be bilingual in order to understand it.  Mercy.

Now, here’s our new logo:

[Image: the new Canadian international education logo]

Um… OK.  That’s a little bit better, I guess.  But who in their right mind thinks the Canada word mark and the CMEC logo belong on this thing?  Are they worried that prospective students in Izmir, Lagos, or Dnepropetrovsk would think less of us as a study destination if those logos weren’t there?  That some eager would-be student from Togo would begin to get heart palpitations about the potential quality of higher education in Canada if the word mark wasn’t there?  That a potential Colombian graduate student would interpret the lack of a CMEC logo as evidence of a scam?

But if you really want to shake your head in despair, take a look at the Study in Canada website, which is probably even dumber than the old logo was – note that despite the big announcement, no one seems to have found the time to actually update the logo on the website.  Anyways, the website is a monstrosity.  Fifty per cent of it is blank space, and its overall web sensibility would have been considered primitive even back in the MySpace era.  Literally, the only thing you can say about it is that it meets official federal government web guidelines.

And this, in a very real sense, is the entire problem.  The logo, the website – pretty much everything about our international education effort – is centred around what makes sense for governments and their bureaucracies.  It is not centred around students.  Go ahead, take a look at the Study in UK website, the Study in New Zealand website, the Study in Australia website, or even the German DAAD website.  Do you see a lot of white space?  In the case of DAAD – an organization partially funded by the German states (provinces) – do you see any CMEC-equivalent logos cluttering up the visuals?

No?  Me neither.  Apparently, the awfulness of Canada’s efforts in this area is unique.  But as all those other efforts show, it doesn’t have to be this way.  We can do better.  It starts simply by asking: “are we doing this because it will make sense to students?  Or to governments?”

January 28

The Future of Work (and What it Means for Higher Education), Part 2

Yesterday we looked at a few of the hypotheses out there about how IT is destroying jobs (particularly: good jobs).  Today we look at how institutions should react to these changes.

If I were running an institution, here’s what I’d do:

First, I’d ask every faculty to come up with a “jobs of the future report”.  This isn’t the kind of analysis that makes sense to do at an institutional level: trends are going to differ from one part of the economy (and hence, one set of fields of study) to another.  More to the point, curriculum gets managed at the faculty level, so it’s best to align the analysis there.

In their reports, all faculties would need to spell out: i) who currently employs their grads, and in what kinds of occupations (an answer of “we don’t know” is unacceptable – go find out); ii) what the long-term economic outlook is for those industries and occupations; iii) how susceptible the tasks in those occupations are to computerization (there are various places to look for this information, but this from two scholars at the University of Oxford is a pretty useful guide); and iv) what senior people in those industries and occupations say, when asked, about how they see technology affecting employment in their field.

This last point is important: although universities and colleges keep in touch with labour market trends through various types of advisory boards, the questions that tend to get asked are “how are our grads doing now?  What improvements could we make so that our next set of grads is better than the current one?”  The emphasis is clearly on the very short term; rarely if ever are questions posed about medium-range changes in the economy and what those might bring.  (Not that this is always front and centre in employers’ minds either – you might be doing them a favour by asking the question.)

The point of this exercise is not to “predict” jobs of the future.  If you could do that you probably wouldn’t be working in a university or college.  The point, rather, is to try to highlight certain trends with respect to how information technology is re-aligning work in different fields over the long-term.  It would be useful for each faculty to present their findings to others in the institution for critical feedback – what has been left out?  What other trends might be considered? Etc.

Then the real work begins: how should curriculum change in order to help graduates prepare for these shifts?  The answer in most fields of study would likely be “not much” in terms of mastery of content – a history program is going to be a history program, no matter what.  But what probably should change are the kinds of knowledge gathering and knowledge presentation activities that occur, and perhaps also the methods of assessment.

For instance, if you believe (as economist Tyler Cowen suggests in his book Average is Over) that employment advantage is going to come to those who can most effectively mix human creativity with IT, then in a statistics course you might put more emphasis on imaginative presentation of data, rather than on the data itself.  If health records are going to be electronic, shouldn’t your nursing faculty be developing a lot of new coursework involving the manipulation of information on databases?  If more and more work is being done in teams, shouldn’t every course have at least one group-based component?  If more work is going to happen across multi-national teams, wouldn’t it be advantageous to increase language requirements in many different majors?
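To make the statistics example a little more concrete, here is a minimal, purely illustrative sketch (the groups and scores are invented for the purpose) of the kind of assignment shift being described: the computation stays trivial, and the marks attach to how intelligibly the result is communicated.

```python
# Illustrative only: two tutorial groups with similar averages but very
# different spreads. The summary statistics alone make them look alike;
# the assignment is to present the data so the difference is obvious.
import statistics
import matplotlib.pyplot as plt

scores = {
    "Group A": [62, 71, 68, 80, 74, 66],   # invented scores
    "Group B": [55, 90, 88, 49, 93, 52],
}

# The "data itself": similar means, wildly different standard deviations
for group, s in scores.items():
    print(group, "mean =", round(statistics.mean(s), 1),
          "stdev =", round(statistics.stdev(s), 1))

# The "imaginative presentation": one chart makes the contrast visible at a glance
fig, ax = plt.subplots()
ax.boxplot(list(scores.values()))
ax.set_xticklabels(scores.keys())
ax.set_ylabel("Score")
ax.set_title("Similar averages, very different spread")
plt.show()
```

None of this is advanced statistics; the point is simply that the deliverable being graded shifts from the number to the communication of the number.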

There are no “right” answers here.  In fact, some of the conclusions people will come to will almost certainly be dead wrong.  That’s fine.  Don’t sweat it.  Because if we don’t look forward at all, if we don’t change, then we’ll definitely be wrong.  And that won’t serve students at all.

January 27

The Future of Work (and What it Means for Higher Education), Part 1

Back in the 1990s when we were in a recession, Jeremy Rifkin wrote a book called The End of Work, which argued that unemployment would remain high forever because of robots, information technology, yadda yadda, whatever.  Cue the longest peacetime economic expansion of the century.

Now, we have a seemingly endless parade of books prattling on about how work is going to disappear: Brynjolfsson and McAfee’s The Second Machine Age, Martin Ford’s Rise of the Robots, Jerry Kaplan’s Humans Need Not Apply, Susskind and Susskind’s The Future of the Professions: How Technology will Transform the Work of Human Experts (which deals specifically with how info tech and robotics will affect occupations such as law, medicine, architecture, etc.), and, from the World Economic Forum, Klaus Schwab’s The Fourth Industrial Revolution.  Some of these are insightful (such as the Susskinds’ effort, though their style leaves a bit to be desired); others are hysterical (Ford); and others are simply dreadful (Schwab: seriously, if this is what rich people find insightful we are all in deep trouble).

So how should we evaluate claims about the imminent implosion of the labour market?  Well first, as Martin Wolf says in this quite sober little piece in Foreign Affairs, we shouldn’t buy into the hype that “everything is different this time”.  Technology has been changing the shape of the labour market for centuries, sometimes quite rapidly, and it will go on changing it.  The pace may accelerate a bit, but the idea that things are suddenly going to “go exponential” is simply wrong.  Just because we can imagine technology creating loads of radical disruption doesn’t mean it’s going to happen.  Remember the MOOC revolution, which was going to wipe out universities?  Exactly.

But just because the wilder versions of these stories are wrong doesn’t mean important things aren’t happening.  The key is to get past the hype.  And to my mind, the surest way to do that is to clear your mind of the idea that advances in robotics or information technology “replace jobs”.  This is simply wrong; what they replace are tasks.

We get a bit confused by this because we remember all the jobs that were lost to technology in manufacturing.  But what we forget is that the century-old technology of the assembly line had long turned jobs into tasks, with each individual performing a single task, repetitively.  So in manufacturing, replacing tasks looked like replacing jobs.  But the same is not true of the service sector (which covers everything from shop assistants to lawyers).  This work is not, for the most part, systematic and routinized, and so while IT can replace tasks, it cannot replace “jobs”  per se.  Jobs will change as certain tasks get automated, but they don’t necessarily get wiped out.  Recall, for instance, the story I told about ATMs a few months ago: that although ATMs had become ubiquitous over the previous forty years, the number of bank tellers not only hadn’t decreased, but had actually increased slightly.  It’s just that, mainly, they were now doing a different set of tasks.

Where I think there are some real reasons for concern is that a lot of the tasks that are being routinized are precisely the ones we used to give to new employees.  Take law, for instance, where automation is really taking over document analysis – that is, precisely the stuff they used to get articling students to do.  So now what do we do for an apprenticeship path?

Working conditions always change over time in every industry, of course, but it seems reasonable to argue that jobs in white-collar industries – that is, the ones for which a university education is effectively an entry requirement – are going to change substantially over the next couple of decades.  Again, it’s not job losses; rather, it is job change.  And the question is: how are universities thinking through what this will mean for the way students are taught?  Too often, the answer is some variation on “well, we’ll muddle through the way we always do”.  Which is a pretty crap answer, if you ask me.  A lot more thought needs to go into this.  Tomorrow, I’ll talk about how to do that.

December 08

Innovation Ecosystems: Promise and Opportunism

We sometimes think of innovation policy as being about generating better ideas through things like sponsored research.  And that’s certainly one part of it.  But if those ideas are generated in a vacuum, they go nowhere – making ideas spread faster is the second pillar of innovation policy (a third pillar – to the extent that innovation is about new product-generation – has to do with venture capital and regulatory environments, but we’ll leave those aside for now).

Yesterday, I discussed why the key to speeding up innovation was the density of the medium through which new ideas travel: basically, ideas about IT travel faster in Waterloo than in Tuktoyaktuk; ideas about marine biology travel faster in Halifax than in Prince Albert.  And the faster ideas travel and collide (or “have sex” in Matt Ridley’s phrase), the more innovation is produced, ceteris paribus.

Now, although they don’t quite use this terminology, the proponents of big universities and big cities alike find this logic pretty congenial.  You want density of knowledge industries?  Toronto/Montreal/Vancouver have that.  You want density of superstar researchers?  U of T, McGill, and UBC have that (especially if you throw in allied medical institutes).  That makes these places the natural spot to invest money for innovation, say the usual suspects.  All you need to do is invest in “urban innovation ecosystems” (whatever those are – I get the impression it’s largely a real estate play to bring scientists, entrepreneurs, and VCs into closer spatial proximity), and voila!  Innovation!

This is where sensible people need to get off the bus.

It’s absolutely true that innovation requires a certain ecosystem of researchers, and entrepreneurs, and money.  And on average productive ecosystems are likelier to occur in larger cities, and around more research-intensive universities.  But it’s not a slam dunk.  Silicon Valley was essentially an exurb of San Francisco when it started its journey to being a tech hub.  This is super-inconvenient to the “cool downtowns” argument made by the Richard Floridas of this world; as Joel Kotkin has repeatedly pointed out, innovative companies and hubs are as likely (or likelier) to be located in the ’burbs as in funky urban spaces, mainly because it’s usually cheaper to live and rent space there.  Heck, Canada’s Silicon Valley was born in the heart of Ontario Mennonite country.

We actually don’t have a particularly good theory of how innovation clusters start or improve.  Richard Florida, for instance, waxes eloquent about trendy co-working spaces in Miami as a reason for its sudden emergence as a tech hub. American observers tend to attribute success to the state’s low tax rate, and presumably there are a host of other possible catalysts.  Who’s right?  Dunno.  But I’m willing to bet it’s not Florida.

We have plenty of examples of smaller communities hitting tech take-off without having a lot of creative amenities or “urban innovation strategies”.  Somehow, despite the lack of population density, some small communities manage to get their ideas out in the world in ways that get smart investors’ attention.  No one has a freaking clue how this happens: research on “why some cities grow faster than others” is methodologically no more evolved than research on “why some universities become more research intensive than others”, which is to say it’s all pretty suspect.  Equally, some big cities never get particularly good at innovation (Montreal, for instance, is living proof that cheap rent, lots of universities, and bountiful cultural amenities aren’t a guarantee of start-up/innovation success).

Moreover, the nature of the ecosystem is likely to differ somewhat in different fields of endeavor.  The kinds of relationships required to make IT projects work are quite different from the kinds required to make (for example) biotech work.  The former are quick and transactional; the latter require considerably more patience, and hence are probably less apt to depend on chance meetings over triple espressos in a shared-work-environment incubator.  Raleigh-Durham and Geneva are both major biotech hubs that are neither large nor particularly hip (nor, in Raleigh’s case, particularly dense).

It’s good that governments are getting beyond the idea that one-dimensional policy instruments like “more money in granting councils” or “tax credits” can, on their own, kickstart innovation.  It’s good that we are starting to think in terms of complex inter-relations between actors (some, but not all, of which involve spatial proximity), and to use “ecosystem” metaphors.  Complexity matters.

But to jump from “we need to think in terms of ecosystems” to “an innovation agenda is a cities agenda” is simply policy opportunism.   The real solutions are more complex. We can and should be smarter than this.

December 07

H > A > H

I am a big fan of the economist Paul Romer, who is most famous for putting knowledge and the generation thereof at the centre of discussions on growth.  Recently, on (roughly) the 25th anniversary of the publication of his paper on Endogenous Technological Change, he wrote a series of blog posts looking back on some of the issues related to this theory.  The most interesting of these was one called “Human Capital and Knowledge”.

The post is long-ish, and I recommend you read it all, but the upshot is this: human capital (H) is something stored within our neurons, which is perfectly excludable.  Knowledge (A) – that is, human capital codified in some way, such as writing – is nonexcludable.  And people can use knowledge to generate more human capital (once I read a book or watch a video about how to use SQL, I too can use SQL).  In Romer’s words:

Speech. Printing. Digital communications. There is a lot of human history tied up in our successful efforts at scaling up the H -> A -> H round trip.

And this is absolutely right.  How we turn a pattern of thought in one person’s head into thoughts in many people’s heads is the single most important question in growth and innovation, which in turn is the single most important question in human development.  It’s the whole ballgame.
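To make the SQL example from a couple of paragraphs back concrete, here is a minimal, self-contained sketch (the table, columns, and rows are invented for illustration).  The query text is the codified artifact – the “A” – that anyone can read, run, and absorb back into their own human capital.

```python
# Toy illustration of codified knowledge (A): a short, readable SQL recipe
# that transfers a skill to anyone who reads it. Uses Python's built-in
# sqlite3 so the example is self-contained; the data are invented.
import sqlite3

conn = sqlite3.connect(":memory:")  # throwaway in-memory database
conn.execute("CREATE TABLE grads (program TEXT, employed INTEGER)")
conn.executemany(
    "INSERT INTO grads VALUES (?, ?)",
    [("History", 1), ("History", 0), ("Nursing", 1), ("Nursing", 1)],
)

# The codified "A": a recipe for turning raw rows into an answer.
query = """
    SELECT program, AVG(employed) AS employment_rate
    FROM grads
    GROUP BY program
"""
for program, rate in conn.execute(query):
    print(f"{program}: {rate:.0%} employed")
```

The point is Romer’s: once the recipe is written down it is non-excludable – my reading it does not stop you from reading it too – and scaling that round trip up from one reader to millions is exactly the H > A > H question.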

It also happens to be what higher education is about.  The teaching function of universities is partially about getting certain facts to go H > A > H (that is, subject matter mastery), and partially about getting certain modes of thought to go H > A > H (that is, ways of pattern-seeking, sense-making, meta-cognition, call it what you will).  The entire fight about MOOCs, for instance, is a question of whether they are a more efficient method of making H > A > H happen than traditional lectures (to which I think the emerging answer is that they are competitive if the H you are talking about is “fact-based”, and not so much if you are looking at the meta-cognitive stuff).  But generally, “getting better” at H > A > H in this way is about getting more efficient at the transfer of knowledge and skills, which means we can do more of it for the same price, which means that economy-wide we will have a more educated and productive society.

But with a slight amendment it’s also about the research function of universities.  Imagine now that we are not talking H > A > H, but rather H > A > H1.  That is, I have a certain thought pattern, I put it into symbols of some sort (words, equations, musical notation, whatever) and when it is absorbed by others, it generates new ideas (H1). This is a little bit different than what we were talking about before.  The first is about whether we can pass information or modes of thought quickly and efficiently; this one is about whether we can generate new ideas faster.

I find it helpful to think of new ideas as waves: they emanate outwards from the source and lose intensity as they move further from it.  But the speed of a wave is not constant: it depends on the density of the medium through which it moves (sound travels faster through solids than through water, and faster through water than through air, for instance).

And this is the central truth of innovation policy: for H > A > H1 to work, there has to be a certain density of receptor capacity for the initial “A”.  A welder who makes a big leap forward in marine welding will see her ideas spread more quickly if she is in Saint John or Esquimalt than if she is in Regina.  To borrow Matt Ridley’s metaphor of innovation being about “ideas having sex”, ideas will multiply more if they have more potential mates.

This is how tech clusters work: they create denser mediums through which idea-waves can pass; hence, they speed up the propagation of new ideas, and hence, under the right circumstances, they speed up the propagation of new products as well.

This has major consequences for innovation policy and the funding of research in universities.  I’ll explain that tomorrow.

May 26

Game-Changing Institutional Alliances

A couple of weeks ago, Arizona State University and EdX announced an institutional tie-up, which received a fair bit of publicity.  Basically, the deal was that EdX – a well-known MOOC platform, owned jointly by Harvard and MIT – would help ASU put an undisclosed (but judging by the rollout, somewhere between 15 and 20) number of its big first-year courses online.  There were two startling things about this announcement:

1) The MOOCs are not time-delimited courses requiring students to start and move ahead synchronously; they work much more as an on-demand learning system;

2) Arizona State is prepared to offer actual credit – up to one year’s worth – to students who complete the courses, provided they pay a fee to do so.

The value proposition here is simple: give higher education a try at no, or minimal, cost; if you do well, pay the fee, get the credit, and use the credit at ASU, or transfer it to anywhere in the world (ASU is promising not to indicate whether the credits were delivered in person or online, since “they are identical”).  ASU is calling this the Global Freshman Academy, with the implication being that people from around the world will try out higher education in this way.

Some MOOC skeptics, like Jonathan Rees, are going bananas, calling this the apocalypse, because now MOOCs are actually going to be for credit.  I’m not sold on this.  The price per credit on these courses isn’t cheap and isn’t covered by student aid, and it’s not entirely clear to me why you’d want to go after 30 credits this way when there’s no guarantee any other institution is going to accept them.  In short, I’m not sure the demand for this is actually there.  Similar projects – albeit with distinctly less-slick marketing – have already failed spectacularly at the University of California and the University of Illinois.

But there’s another project out there that, despite receiving less publicity, is probably more important, and that’s the tie-up between Harvard and Amherst.  Amherst is a liberal arts college, and like liberal arts colleges (and, for that matter, Arts faculties) everywhere, it sees the value in helping students get some business education at the same time.  But rather than develop its own business capabilities, it has decided to outsource the whole thing to Harvard (via, again, EdX).  The Cambridge institution supplies the content, but students who finish the courses receive Amherst credit.

This full-on outsourcing of the production of internal credits is fascinating for a couple of reasons, and not just because Amherst famously called a time-out on MOOCs two years ago.  It’s fascinating from a management viewpoint simply because it opens up possibilities for institutions to extend programming in certain fields, without necessarily incurring the permanent cost increases that would be entailed in hiring tenured staff.

It’s also fascinating from a branding/reputation point of view.  For a deal like this to work, you need to have an institution of lesser prestige decide that it has more to gain by outsourcing part of its work to a more prestigious institution.  And you have to have a more-prestigious institution that is prepared to gamble its own reputation by associating itself with a lesser-prestige institution.  Harvard is unlikely to do this kind of deal with Southwestern North Carolina State, for instance, but it could easily do it with any Tier 1 Liberal Arts School.

In Canada, you can imagine where this kind of thing might be headed.  UBC, McGill, and U of T (all of which are charter members of EdX) are all in a position to offer these kinds of deals to comprehensive universities (and some of the more selective undergraduate schools, like Mount Allison).  One can also see how this kind of association might be useful from the smaller institution’s perspective: does Acadia find it hard to hold on to students in the face of competition from Dalhousie, which can simply out-compete it on breadth of offerings?  Why not do a deal with McGill to increase its own breadth of courses?

More radically, this could be a way to continue offering classes in fields of study where enrolments at any single institution aren’t very large, and which are hence quite expensive to offer.  Why not let UBC offer zoology at institutions across the country?  What’s to stop Alberta and Toronto from becoming near-monopolistic providers of Slavic language courses to the whole country?

It’s not all going to happen tomorrow, of course: higher education is, after all, the single most conservative industry in the world.  But this kind of alliance has the potential to produce far-ranging effects, especially in the ways institutions choose to specialize and focus their own offerings.  Harvard says it has several more Amherst-like deals in the pipeline.  Watch this space.

April 23

The State is not Entrepreneurial

If you’re interested in innovation policy, and haven’t spent time under a rock for the last couple of years, you’ve probably heard of Mariana Mazzucato.  She’s the professor of economics at the University of Sussex who wrote The Entrepreneurial State, which is rapidly becoming the source of an enormous number of errors as far as science and economic policy are concerned.

Mazzucato’s work got a fair bit of publicity when it was released for pointing out that a lot of private sector tech is an outgrowth of public sector-sponsored research.  She has a nice chapter, for instance, outlining how various components of the iPhone – the touchscreen, the GPS, the clickwheels, the batteries… hell, the internet itself – are based on research done by the US government.  This is absolutely bleeding obvious if you’re in science policy, but apparently people out there need to be reminded once in a while, so Mazzucato found an audience.

Where Mazzucato goes wrong, however, is when she begins to draw inferences; for instance, she suggests that because the state funds “risky” research (i.e. research that no one else would fund), its role in R&D is that of a “risk-taking” entity.  She also argues that since the state takes a leading position in the scientific development of some industries (e.g. biotech), it is therefore an “entrepreneurial” entity.  From this, Mazzucato concludes that the state deserves a share of whatever profits private companies make when they use technology developed with public science.

There are two problems here.  The first is that Mazzucato is rather foolishly conflating risk and uncertainty (risk is tangible and calculable, uncertainty is not).  Governments are not risk-takers in any meaningful sense: they are not in any danger of folding if investments come to naught, because they can use taxing power (or in extremis, the ability to print money) to stay afloat.  What they do via funding of basic research is to reduce uncertainty: to shed light on areas that were previously unknowable.  Individual companies do very little of this, not just because it’s difficult and expensive (if a company is big enough, that’s not a problem – see Bell Labs or indeed some of the quite amazing stuff Google is doing these days), but because the spillover from such research might allow competitors to reap much of its value (a point Kenneth Arrow made over fifty years ago).

The second issue is that nearly all of the examples Mazzucato offers of public research leading to technological innovation and profit are American, and a fairly high percentage of these examples were funded by the Defense Advanced Research Projects Agency (DARPA).  To put it mildly, these examples are sui generis.  It’s not at all clear that what works in terms of government investment in the US, with its massive defense infrastructure, huge pools of venture capital, and deep wells of entrepreneurial talent, holds very many lessons for countries like Canada, which are not similarly endowed.  Yet Mazzucato more or less acts as if her recommendations are universal.

The book’s recommendations amount to this: governments should take equity stakes in young innovative companies in return for the use of publicly-funded knowledge.  But this is pretty tricky: first, there are very few cases where you can draw a straight line from a specific piece of publicly-funded IP to a specific product, and even where you can, there’s no guarantee that the piece of IP was publicly funded by your local government (Canadian start-ups benefit from knowledge that has been created through public subsidies in many different countries, not just Canada).  And while there’s a case for greater government investment in emerging companies (economist Dani Rodrik makes it here for instance), the case is not in any way predicated on government investments in R&D.  In Canada, the CPP could adopt such a policy right now if it wanted – there’s no reason why it needs to be linked to anything Industry Canada is doing in science funding.  To the contrary, as Stian Westlake points out, countries that have been most successful in converting public science investments into private hi-tech businesses eschew the idea of equity in return for scientific subsidies.

Worst of all – though this is not entirely Mazzucato’s fault – her argument is being picked up and distorted by the usual suspects on the left.  These distortions are usually variations on: “Someone said the state is entrepreneurial?  That means the state must know how to run businesses!  Let’s get the state more involved in the direction of the economy/shaping how technology is used!”  This way disaster lies.

So, Mazzucato did everyone a service by forcefully reminding people about the importance of publicly-funded R&D to any innovation system.  But her policy prescriptions are much less impressive.  Treat with care.

April 08

ATMs and the Future of Education

I recently came across a fascinating, counterintuitive piece of trivia on Timothy Taylor’s Conversable Economist blog.  In 1980, as ATMs were starting to be rolled out across America, there were half a million bank tellers.  How many were there 30 years later, in 2010?  Answer: roughly 600,000.  Don’t believe me?  See the data here.

Most people to whom I’ve told this story tend to get confused by this.  ATMs are one of the classic examples about how technology destroys “good middle class jobs”.  And so the first instinct many people have when confronted with this information is to try and defend the standard narrative – usually with something like “ah, but population growth, so they still took away jobs that could have existed”.  This is wrong, though.  When we look at manufacturing, we see absolute declines in jobs due to (among other things) automation.  With ATMs, however, all we see is a change in the rate of growth.
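A quick back-of-the-envelope check of those figures (using only the two teller counts quoted above) shows just how modest the change really is:

```python
# Rough arithmetic on the teller numbers cited above: 500,000 in 1980 and
# roughly 600,000 in 2010. The point is slow growth, not absolute decline.
tellers_1980 = 500_000
tellers_2010 = 600_000
years = 30

total_growth = tellers_2010 / tellers_1980 - 1                    # 0.20, i.e. 20% over 30 years
annual_growth = (tellers_2010 / tellers_1980) ** (1 / years) - 1  # ~0.006, i.e. ~0.6% per year

print(f"Total growth over {years} years: {total_growth:.0%}")
print(f"Implied compound annual growth: {annual_growth:.2%}")
```

Roughly half a percent a year: a change in the rate of growth, not the absolute declines seen in manufacturing.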

The key thing to grasp here is that the machines did not put the tellers out of business; rather, they modified the nature of bank telling.  To quote Taylor, “tellers evolved from being people who put checks in one drawer and handed out cash from another drawer to people who solved a variety of financial problems for customers”.

There’s an important truth here about the way skill-use evolves in the economy.  When most people think about technological change and its impacts on skills, they initially tend to presume “more machines → high tech → more tech skills needed → more STEM”.  But actually this is, at best, half the story.  Yes, new job categories are springing up in technical areas that require new forms of training.  But the more important news is that older job categories evolve into new ones with different requirements and a different skill set.  And in most cases, those new skills are – as in our bank teller example – about problem-solving.

Now, as a society, every time we see job requirements changing, our instinct is to keep kids in school longer.  But: a) pretty soon cost constraints put a ceiling on that strategy; and, b) this approach is of limited usefulness if all you’re doing is teaching the same old things for longer.

At a generic level, it’s not hard to teach in such a way that you’re giving students the skills necessary to thrive in the future labour market.  Most programs, at some level, teach problem-solving (identifying a problem, synthesizing data about it, generating possible solutions, evaluating them, and settling on one), although not all of them test for these skills explicitly, or explain to students how they are likely to be applied later on.  More could be done with respect to encouraging teamwork and interpersonal skills, but these aren’t difficult to add (although having the will to add them is something different).

The more difficult problem has to do with understanding where technology is likely to replace jobs and where it is likely to modify them.  What do driverless cars mean for the delivery business?  At a guess, it means an expanded market for the delivery of personalized services during commuting time.  Improved automatic diagnostic technology or robot pharmacists?  More demand for health professionals to dispense lifestyle and general health counselling.  Increased automation in legal affairs?  Less time on research means more time for, and emphasis on, negotiation.

I could go on, but I won’t.  The point, as Tyler Cowen makes in Average is Over (a book whose implications for higher education have been criminally under-examined) is that the future in many fields belongs to people who can best blend human creativity with the power of computers.  And so the relevant question for universities is: to what extent are you monitoring technology trends and thinking about how they will change what you teach, how you teach it, and how you evaluate it?  Or, put differently: to what extent are your curricula “future-ready”?

In too many cases, the answers to these questions land somewhere between “not very much” and “not at all”.  As a sector, there is some homework to be done here.