Some Thoughts on the Use of AI in Teaching

I spent part of last week in Tempe at Arizona State University’s conference on Agentic AI and the Student Experience, which was a pretty interesting event. It made me think for a while about AI in higher education, and I thought I’d share those thoughts with y’all.

My POV on this basically comes down to six things:

  1. Large Language Models (LLMs) remain, for the moment at least, vastly overhyped in terms of their economic potential, and are definitely not heading towards anything like Artificial General Intelligence. Companies are not yet finding a lot of business cases that justify massive expenditure on them, which limits their likely development. 
  2. Some of the companies hyping LLMs – in particular OpenAI, but not limited to it – are grifters; higher education is one of the targets of their grifts, and the grifting will only get more desperate as these companies run out of money and go bust over the next 24 months or so (contracts from public or quasi-public agencies are godsends to companies that can’t make money off freemium models).
  3. In the long run, it is far more likely that AI turns out to be a “normal technology” – less earth-shattering than the combustion engine – than that it is something akin to the discovery of fire (those who believe otherwise, in the main, have either fallen for the grift or are in on it).
  4. BUT AI technology is not just LLMs/Generative AI. Remember the term “Machine Learning”? That stuff, provided you’re working on good, clean, unbiased datasets, is all still pretty spectacular (I particularly like this example of a 17-year-old using AI to make some pretty significant breakthroughs in astronomy). And while there is a tremendous amount of AI slop out there, it isn’t all necessarily slop, and there is little question LLMs and Generative AI are improving all the time, even if they remain a considerable distance from where the hype-meisters say they are.
  5. That said, it is all enough of a big deal to think about how students should be prepared for a world of AI, and if/how it can be used as an educational tool (it won’t always be a useful tool but saying it can never be a useful tool is just silly). This is the main reason we at HESA are interested in the technology.
  6. Things get a lot clearer if you think about AI in higher education as a subset of “problems higher education faces in dealing with technological change” rather than a subset of “let’s make everything run on AI”.

That last one is the most important. I’ll expand on it. 

Higher education, being as near to eternal as any institution can be, constantly lives with technological change. In the past 50 years, we have had to work out how to adapt to a world with first microcomputers (to use a very early 80s term) and then the internet. Over the course of decades, we found ways to integrate these technologies into teaching in hundreds of different ways. New disciplines grew up to take advantage of them (not just data science, but also things like the digital humanities). What’s going on with AI is not substantially different from these earlier changes; the difference is mainly that the hype ecosystem is a bit different and the speed of change is somewhat faster.

Now, one thing that higher education did not do when computers and the internet came along was to mandate their academic staff to use these tools in class in specific ways. That probably would not have ended well, not just because no one likes to be ordered to do things but also because it is never the case that a new technology can be blindly applied in a uniform way across an institution. Different fields of study are going to absorb tech in different ways. At most, what institutions tried to do was create conditions in which professors would be encouraged to use the new technologies (does anyone remember how Acadia was briefly the techiest school in Canada because they threw a laptop in everyone’s frosh kit? 1996 was a great year). 

At the best universities, that’s exactly what’s going on today. Institutions look for AI “champions” – people who are curious about technologies, or “early adopters” – and try to help them experiment. That might involve i) giving profs release time and some funds to create something by themselves. Or it might mean ii) providing intensive professional development so they can train themselves up to do the same, or even iii) making trained instructional designers available to professors to turn their existing course material into something interesting. 

(At the Tempe conference I attended a very interesting session by some folks from Florida State University, an institution that went with option iii) in a big way. The result in a technical sense wasn’t very good: training up a good AI is sufficiently difficult that most professors who participated in the scheme got frustrated quickly. But in a broader sense, the result was more interesting – they built a significant community of people interested in pedagogical uses of technology. The next time they try something like this, when the technology develops a little, they’ll be ready to move that much more quickly. All told, that’s a decent result.)

Anyways, long story short: getting good at applying any new technology to pedagogy, AI or no, really comes down to time, resources, a willingness to experiment and a determination to share experiences and promote good practice at scale. The problem most universities have with technology adoption processes isn’t that institutions and/or faculty are anti-technology; it’s that modern universities make very little space available to talk pedagogy or to properly resource experimentation in pedagogy.  

And that’s something we need to fix regardless of how one feels about AI.

One Response

  1. You mention the hype ecosystem being “different” this time around. Does that just mean more intense? It definitely is more rapid, which can make it seem more intense, but is there something else there too in your understanding?

    Universities need space to foster connections around pedagogy, and there’s generally nothing wrong with jumping on the most recently hyped technology as a starting point (and PR-able nugget) for bringing together pedagogically engaged people and letting them spend some time collaborating to see what emerges.

    The rub is that most such people also care about respecting and imparting longstanding academic norms around academic integrity and plagiarism avoidance to their pedagogical subjects, and AI is complicating this significantly. In my take, that is part and parcel of its overdone hype, and very much designed to make higher ed writ large feel extraordinarily threatened — which is part of the con (going beyond just hype). It’s a psychological and rhetorical strategy to get higher ed institutions to cede moral and reputational ground, and other levers of power and influence as well (cough certain education “compacts” cough).

    They shouldn’t blink, or else that “very little space” for pedagogy talk could very well shrink to zero, when it ought to be expanding.

    This too shall pass.
