You may have noticed—via the odd banner ad on this blog over the past six months—that HESA held a fantastic event in Calgary last week on the use of Artificial Intelligence in Higher Education. We had a great time. There were 25 sessions, roughly 100 presenters, and 400 delegates from 80 institutions. It really was a great, pan-Canadian exchange, and the first event in Canadian higher education devoted to AI. A really huge thank you to all of our partners and sponsors, and a massive shout-out to our organizing team (Peter Smythe, Barry Burciul, Meg Patterson, Janet Balfour, Sam Pufek, and Sandrine Desforges), who put together a fantastic two-day gathering.
There were a few things that made the event quite special. Most prominent for some was the complete absence of anyone talking about IRCC and student visas, making this the first such gathering in our sector in at least 18 months. More important, though, was the fact that the delegates—400 strong—were so diverse in their backgrounds and opinions; the number of active faculty present was gratifying, and a testament to the interest this subject generates. The meeting was not designed to be a scholarly one, but it managed to be very serious and challenging all the same.
Some of the most interesting sessions were, quite simply, the most Canadian (and yes, folks, #elbowsup was a theme). There was a lot of concern about equity and accessibility, particularly when it came to languages other than English (the standing-room-only crowd of 400 for the session on Indigenous perspectives on AI was amazing). And of course, the sessions on the ethics of AI, particularly with respect to its environmental impact, were also well attended.
But at the heart of the conference were, I think, three significant questions.
The first was how institutions can become leaner, more efficient, and all-around better at providing services to faculty and students. There were lots of ideas here, but none drew quite as big a gasp from the audience as the suggestion that an AI-powered system of Institutional Review Boards could cut project approval time by 80% or more. Personally, I think this is probably more a reflection of the fact that humans on IRBs invent a lot of nonsensical objections to feel important or justify their existence than of any superior ability of AI to suss out real problems, but that's the great thing about process re-imagination: it doesn't matter so much why the re-engineering works, as long as the relentless re-thinking and experimentation continues. I'm happy to give credit to AI if that's what it takes. Some folks seem to think it's difficult to drive AI adoption in the midst of the kind of budget crises many institutions are currently facing; in fact, AI is potentially the solution to a lot of budget woes as well.
The second had to do with experimentation in the use of AI in the classroom. I was very glad about this, because the focus on AI and plagiarism is getting tired; the real action is happening where instructors are looking to unlock student creativity and enhance student learning with this new technology. This is obviously going to take a while, because the technology is moving quickly, the work involved in re-imagining courses is considerable, and profs don't have unlimited time. But the sheer number of fabulous examples of teachers experimenting with AI to develop more creative teaching and learning experiences in disciplines like modern languages, philosophy, and history was striking (yes folks, AI might have a more significant positive effect in the humanities than elsewhere). In this respect, agentic AI is a significantly bigger deal than generative AI. The common point people were making was that the room to experiment and improve here is effectively infinite. A hundred flowers are already blooming, and the only question now is how fast the good experiments spread.
(Huge thanks, by the way, to Lev Gonick, Arizona State University's Chief Information Officer, for joining us, explaining how his hugely ambitious institution is empowering its staff to experiment with AI, and sharing some of the most exciting examples. It woke a lot of the audience up to just how quickly the bar is being raised in AI, and how important a culture of experimentation will be in meeting that challenge.)
A final question was less about teaching and more about the point of teaching. At its simplest, the question posed was: if AI can do pretty much anything, what remains worth teaching at universities and colleges? Not many delegates tended toward the most pessimistic answers, but that didn't settle all fears about what institutions are meant to be teaching with respect to skills in using AI. As James Bessen argued in his excellent book Learning by Doing: The Real Connection Between Innovation, Wages and Wealth, it is hard to use education as a vector for improving skills during technological revolutions. Put simply, education depends on codification (someone's got to have written a textbook, right?), and when technological change is happening too quickly, there is no time for codification. What that highlights is the need to integrate time spent in the labour force with time in classes, so that students can experience how AI skills are being used in real life. Basically, MITACS, but for AI. Smart universities and colleges should be trying to work out how to create community AI councils, bringing together the employers furthest ahead in AI adoption and working out how to generate more work-integrated learning (WIL) experiences in those businesses. Fast-moving companies might find the burden of running WIL programs a challenge, but governments in particular could usefully remind them that this is the fastest way to expand the pipeline of talented AI users (it was noted during a couple of the international sessions that Canada seems to be lagging places like Singapore in general AI literacy and skill uptake; initiatives like this might help close the gap).
It was during the final presidents' panel (thanks to Bill Flanagan, Joy Johnson, Annette Trimbee, and Misheck Mwaba for joining Simon Bates on that one) that the most important lesson of the two days finally clicked for me. The things institutions need in order to meet the AI challenge are the same things they need to overcome their longstanding financial challenges. Institutions need to experiment widely to see what works, and that means changing our cultures to accept, acknowledge, and learn from the occasional failure along the way. We need to share experiences and learn from others, and that means changing our cultures to understand that not everything needs to be "invented here." And above all, it calls for ambition. Just as no institution is going to cut its way to greatness, no institution is going to reach new heights by avoiding the implications of radical technological change.
In short, it's not the details of specific initiatives that matter so much as the ability to shift institutional cultures towards greater ambition, experimentation, and the sharing of, and learning from, that experimentation. Institutions that can manage this will leap to the forefront of higher education, not just for the next decade but possibly for the rest of the century.
Which is fortunate for us here at HESA Towers, because promoting ambition, experimentation, and learning in higher education is why HESA exists, and we're excited that so many institutions want to seize the moment and move in that direction. Bank on it: you'll be hearing from us very soon about more events (some AI-focused, some not) designed to help institutions pursue these values. And of course, if you're interested in working with us to get more done in the AI space, please get in touch. We'd love to help.