I have been noodling for a while on the question of how the use of Artificial Intelligence is likely to change the cost structure of institutions, so I thought it was worth a blog. Particularly since most of the theories I hear about in this area are almost certainly wrong.
The one thing I think we can confidently rule out about AI and teaching is that AI will “replace professors” (or in more extreme versions, “replace universities”). This is a story that we’ve heard many times before. MOOCs were going to replace universities, make them irrelevant. So was the internet, and before that television, radio, and the printing press. Yes, there are dystopians with suspect motives who might want this to be true, but it is not going to happen. Teaching matters. Period.
There are, I think, two areas where we will see significant changes in institutional activity, but not changes in the underlying economics of education. These are:
Changing the range of programs being taught. That is, we will teach more in new fields related to AI, and we will teach less in fields that AI renders irrelevant. We already do this: Ontario universities alone have started 25 master's programs in AI (which I suspect is too many, but whatever; the market will shake out). This kind of migration of programs happens all the time. I doubt, however, that it will affect institutional costs very much. At worst, from a cost perspective, it is a continuation of a two-decade trend of replacing cheap humanities/social science programs with slightly more expensive STEM programs, and even then, AI courses sit at the cheaper end of per-student costs within STEM, so probably not a big deal.
Changing program learning outcomes across a wide range of programs. As AI starts to change the way knowledge workers interact with computers, the kinds of learning outcomes we expect from educational programs are going to change; simply put, programs will need to familiarize students with these technologies in ways that make graduates more valuable to the firms hiring them. This isn't really that expensive a proposition either. Institutions already revise course outcomes (if perhaps not as frequently as they might) to reflect economy-wide changes in the use of technology.
There is another area where I think the cost implications are somewhat unknowable, and that is the declining cost of creating useful educational simulations via Large Language Models (LLMs). There have been a lot of fascinating uses of AI/LLMs to create new pedagogical materials. For instance, with a little bit of training, it is possible to create an LLM that imitates a historical figure based on their writings (an ancient Chinese philosopher, for example), with which students can interact, or an LLM that simulates a historical event (a settler-indigenous encounter, for instance) that students can navigate independently. This is exciting stuff! It probably requires more investment in educational resources: either training instructors to create AI modules like these, or hiring more instructional developers who can build them in conjunction with instructors (I suspect the latter is more efficient, but YMMV). But, and this is crucial, while the cost of creating new materials is falling, those costs are incremental to the existing cost base. That is, even though the materials are cheaper to make, they still require increased expenditure overall. I don't have a sense of how this will play out in the near term. I do know that it is one of those areas where Canadian universities could almost certainly benefit from working together to create common AI-powered pedagogical tools (I am pretty sure they won't, because co-operation isn't in their nature, but hope springs eternal).
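To make the idea concrete, here is a minimal sketch of what the simplest version of such a persona module might look like. It assumes the OpenAI Python SDK purely for illustration; the model name and the persona prompt are my own placeholders, and a real tool built by instructional developers would add retrieval over the figure's actual writings, guardrails, and a student-facing interface.

```python
# A minimal, illustrative persona chatbot: a historical figure students can
# question. Assumes the OpenAI Python SDK (pip install openai) and an
# OPENAI_API_KEY in the environment; model name and prompt are placeholders.
from openai import OpenAI

client = OpenAI()

# In a real course module, this prompt would be grounded in the figure's
# actual writings (e.g., via retrieval), not just a role instruction.
PERSONA = (
    "You are Zhuangzi, the ancient Chinese philosopher. Answer students' "
    "questions in character, reasoning only from ideas found in your "
    "writings, and say so plainly when a question falls outside them."
)

def ask(question: str) -> str:
    """Send one student question to the persona and return its reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any chat-capable model works
        messages=[
            {"role": "system", "content": PERSONA},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(ask("What would you say to a student who fears failing an exam?"))
```

The point is not that this toy is classroom-ready; it is that the marginal cost of a first draft is now an afternoon of work, which is exactly why the remaining costs are mostly people (instructors and developers), not technology.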
This brings me to the last big area of change, which I think is also the most difficult and controversial: how, in some fields (primarily the humanities and social sciences, including law and business), AI will change the way we teach and assess students. This is the change causing moral panic right now (see, for example, this piece in New York Magazine about cheating in college, which for some reason focuses on a student at Wilfrid Laurier, or pieces like this one in the Chronicle). Basically, students are all using AI for their essays, which makes essay-based assessment a farce, and oh no, what are we supposed to assess now?
These articles aren't wrong, exactly. If you assume that the only way to assess mastery of a topic is through assignments and essays that a student completes alone, at home, away from the supervisory eyes of a professor, then yes, AI is a complete disaster. You can't stop students from using it. You can't reliably detect it. And it absolutely rots the brain, in the sense that in many cases students are outsourcing not just the scribbling but the actual thinking, which is the whole point of the essay as a form of academic work. They are therefore missing out on the actual mastery of subjects that may be required in the labour market and that is certainly the point of academic disciplines as currently structured.
In principle, this is not all that difficult a problem to overcome (and indeed it has in some ways been overcome before, in math-heavy disciplines back when electronic calculators were first introduced; then, too, calls for bans were rife). Just change the method of assessment. The essay is not the only way to measure critical thinking and judgment. In the early days of the university, this was done through the practice of disputation. It is still done in universities through things like oral examinations and written, invigilated testing. We can bring all that back, or at least weight these elements more heavily for assessment purposes. Professors can also spend more time working out what kinds of questions AI has difficulty answering (this piece from British academic Alice Evans is very good on that). All of this is possible.
The challenge is that the essay, whatever its other pedagogical merits, is an extremely efficient assessment technology. A student spends 10 hours on a paper, which a professor can read and mark in 10 minutes (all times approximate, don't @ me). For decades, we have worked on the assumption that only with this kind of student-work-to-assessment-time ratio can we offer courses resulting in actual mastery of a corpus of work in the social sciences and humanities at a mass level. The alternatives to essays listed above, whatever their pedagogical merits, are more time-intensive: nearly all of them demand more professor time relative to the student work being assessed. There is a real question about whether we can ask professors to adopt them without reducing other parts of their workload or reducing class sizes. In the humanities, where student-teacher ratios are already falling rapidly, that is not necessarily a big challenge; but in the social sciences, and especially in business, where ratios are already sky-high, it is a much bigger one, and one we have yet to really face up to.
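A back-of-envelope sketch makes the scaling problem visible. Only the 10-minute essay-marking figure comes from the paragraph above; the per-student times for invigilated exams and orals, and the class sizes, are assumptions I have made purely for illustration.

```python
# Rough sketch: how professor hours per round of assessment scale with class
# size under different methods. Only the 10-minute essay figure comes from
# the post; the other per-student times and the class sizes are assumptions.

MINUTES_PER_STUDENT = {
    "take-home essay": 10,    # from the post: ~10 minutes of marking
    "invigilated exam": 20,   # assumed: longer scripts plus supervision time
    "oral examination": 30,   # assumed: the professor sits through each exam
}

def prof_hours(students: int, minutes_per_student: float) -> float:
    """Total professor hours for one assessment round of a class."""
    return students * minutes_per_student / 60

for n in (30, 120, 400):  # assumed: seminar, mid-size lecture, business course
    summary = ", ".join(
        f"{method}: {prof_hours(n, mins):5.1f} h"
        for method, mins in MINUTES_PER_STUDENT.items()
    )
    print(f"{n:>3} students -> {summary}")
```

Under these (again, assumed) numbers, a 400-student business course goes from roughly 67 hours of essay marking to roughly 200 hours of oral examining per round, which is precisely the production-function problem described below.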
AI and LLMs are a threat to education and to students' intellectual development only if academics don't update their assessment methods. But in many disciplines, updating them almost certainly means a change in the teaching production function of academia, one that institutions will have real difficulty accommodating so long as we remain in the current era of Nobody Wants to Pay for Education. And if universities and colleges don't accommodate it, the education they provide really will become sub-standard and unlikely to produce the graduates society needs, an outcome that can only reduce the esteem in which these institutions are held. That really would be disastrous.
In sum, in the age of AI, the most critical thing you can be doing is focusing on how students are assessed. But equally, if you’re focusing on assessment without thinking about the economics of teaching, you’re not taking the issue seriously. This is a really hard problem, and it deserves a lot more attention than it currently receives.