As discussed yesterday, Kevin Carey’s The End of College locates higher education’s key ill in its inability (or unwillingness) to provide students with any real signal about the quality of their work. This serves students badly in two ways: first, it makes finding job matches harder; second, it lets institutions mis-sell themselves by investing in the accoutrements of excellence (ivy, quads, expensive residences) without the substance.
Essentially, Carey believes that technology will solve these problems. He’s not a blind MOOC-hypester; in fact, his chapter on Coursera is reasonably astute about why the current generation of MOOCs has yet to set the world alight. But he is utterly certain that the forces of technology will eventually provide high-quality, low-price alternatives that overwhelm the current model. The ability to learn without physical classrooms or libraries, to get tutorial and peer assistance online, and to test and certify at a distance will largely do away with the need for today’s (expensive) physical universities and usher in the age of “The University of Everywhere”. Cue the usual stuff about “disruption”.
Carey provides readers with a useful overview of some of the ed-tech companies whose products are trying to provide the basis of this revolution, with a particular emphasis on technologies that capture and measure learning progress, using that information both to improve student performance immediately and to give instructors and institutions feedback for improving courses. He also spends a chapter on the issue of credentials. He correctly recognizes that the main reason universities have been able to maintain their position for so long is the strength of the Bachelor’s degree, a credential over which they hold a near-monopoly. And yet, he notes, credentials don’t actually tell you much about a graduate’s capabilities. So he spends an entire chapter on alternatives to Bachelor’s degrees, such as digital “badges”: open-sourced, machine-readable, competency-based credentials which, in theory at least, are better at communicating actual skills to potential employers.
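For readers who haven’t encountered badges, here is a rough sketch of what one looks like as data. The field names loosely follow the shape of Mozilla’s Open Badges assertion format, but the specific URLs, identifiers, and values below are invented for illustration:

```python
# A hypothetical badge assertion, loosely modelled on Mozilla's Open Badges
# format. All URLs, IDs, and values here are invented for illustration.
badge_assertion = {
    "@context": "https://w3id.org/openbadges/v2",  # Open Badges vocabulary
    "type": "Assertion",
    "recipient": {
        "type": "email",
        "hashed": False,
        "identity": "student@example.org",
    },
    # Points to a public description of the competency and the criteria
    # the student met to earn it, hosted by the issuing organization.
    "badge": "https://example.org/badges/data-analysis-101",
    "issuedOn": "2015-03-25T00:00:00Z",
    "verification": {"type": "hosted"},  # an employer can verify it at the issuer
}
```

The point is that the credential is a small, structured record a machine can read and an employer can verify, rather than a line on a transcript.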
The problem is that this argument somewhat misses the mark. To measure learning in the way the techno-optimists wish, the “learning” has to be machine-readable. That is to say, student capabilities at a point in time have to be captured via clicks or keystrokes, and those keystrokes have to be interpretable as capabilities. The first is trivially easy (although implementing it in a classroom setting in a disciplined way may end up being a form of torture); the second varies from easy to unimaginably difficult depending on the discipline.
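To make the “trivially easy” half concrete, here is a minimal sketch of the kind of event such systems record; the field names are my own invention, not any particular vendor’s schema:

```python
import json
import time

def record_event(student_id, item_id, response, correct):
    """Log one interaction as a machine-readable event.

    Capturing this is the easy part; the hard part is deciding what,
    if anything, `correct` says about underlying capability.
    """
    event = {
        "student": student_id,
        "item": item_id,
        "response": response,
        "correct": correct,
        "timestamp": time.time(),
    }
    print(json.dumps(event))  # in practice: append to a log or analytics store

record_event("s-1842", "quiz-3-q7", "1215", correct=True)
```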
A lot of the promise people see in machine-measured learning is based on things like Sebastian Thrun’s early MOOCs, which were in some ways quite intriguing. But these were in computer science, where answering a question correctly or incorrectly is a pretty good indication of mastery of the underlying concepts, which in turn is probably a reasonable measure of “competence” in the field. Extrapolating from computer science is less helpful, though: most disciplines, and indeed all of business and the social sciences, are not susceptible to capture this way. The fact that a history student doesn’t know the “correct” answer to a question (e.g. “in what year was the Magna Carta signed?”) tells you nothing about how well that student has mastered skills like interpreting sources. In the humanities and social sciences (here including Law, Education, and Business), you can capture information, but it tells you very little about underlying skills.
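A hypothetical sketch of why this works in computer science specifically: a grader can run a student’s code against known test cases and get an unambiguous right/wrong signal. The function and test cases below are invented for illustration; there is no equivalent oracle for an essay on Magna Carta.

```python
def grade_submission(student_fn, test_cases):
    """Autograde a student's function against known input/output pairs.

    In computer science, passing these tests is a defensible proxy for
    mastery of the underlying concept; nothing comparable exists for
    "interprets historical sources well".
    """
    passed = sum(1 for args, expected in test_cases
                 if student_fn(*args) == expected)
    return passed / len(test_cases)

# A student's (hypothetical) submission for "write a factorial function".
def student_factorial(n):
    return 1 if n <= 1 else n * student_factorial(n - 1)

score = grade_submission(student_factorial,
                         [((0,), 1), ((1,), 1), ((5,), 120)])
print(f"score: {score:.0%}")  # -> score: 100%
```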
With badges, the problem is roughly the same. Provided you are in a field of study where discrete skills are what matters, badges make sense. But by and large, those fields of study aren’t where the problem is in higher education. What problems do badges solve in humanities and social sciences? If the skills you want to signal to employers are integrative thinking or teamwork (i.e. skills the majority of employers say they most desperately need), how do badges solve any of the problems associated with the current Bachelor’s degree?
Two final points. First, I think Carey is too optimistic about learners, and insufficiently mindful that universities have roles beyond teaching. One justified criticism of much of the “disruption” crowd is that their vision implies a high degree of autodidacticism among learners: put all these resources online, and people will take advantage of them on their own. In fact, that’s likely true only for a minority of learners. A University of Everywhere will, in the early years at least and quite possibly much longer, impose significant penalties on learners who need a bit more assistance; they need a level of human contact and interaction higher than what can be provided over the internet.
Finally, one of the main reasons people go to university is the social aspect. They meet people who will remain friends and associates for the rest of their lives. They learn skills from each other through extra-curricular activities. Basically, they learn to become adults, and that’s a hugely important function. Sure, most universities do a half-assed job (at best) of communicating and executing this function, but Carey’s alternative is no improvement on it. This is why I’m fairly sure that even if most students could go to the University of Everywhere, they would still choose not to: even if it were practical, I’m not sure it passes the market test.
So if Carey’s diagnosis about universities’ weaknesses are accurate but his predictions incorrect, what are the real alternatives? I’ll tackle that tomorrow.