Good morning. As you probably know, we at HESA have been paying a lot of attention to Generative AI over the past few months. We launched our AI Observatory during the summer, have been organizing monthly AI Roundtables with the help of a super group of volunteers from across Canada, and began putting out our Friday AI-focused emails. Today, as we approach the first anniversary of the release of ChatGPT, we wanted to share some observations about the state of play in GenAI on Canadian campuses.
In late 2022 and early 2023, many institutions reacted to ChatGPT by releasing a generic institutional statement, either prohibiting GenAI or approaching it with caution. For the most part, in these early days, the issue was framed solely in terms of academic integrity. As time passed, HEIs began embracing – or at least accepting – GenAI, and many changed their statements to something along the lines of “since we can’t put the toothpaste back in the tube, let’s learn how to live with it – and potentially even learn to use it for ‘the good’”. Some, like Seneca Polytechnic, even affirm that they want to enable all students to critically engage with GenAI to adequately prepare them for an AI-supported workforce.
More and more HEIs are now starting to release resources and guidance for members of their communities. Most, like Wilfrid Laurier University and Carleton University, are sharing advice on how to (re)design curricula and assessments to integrate GenAI and/or prevent academic misconduct, as well as language to incorporate into syllabi. Durham College developed a framework to assess the use of GenAI for teaching and learning, and Kwantlen Polytechnic University and the University of Regina, to name only a few, have also developed guidelines for the potential use of GenAI within their institutions. Many institutions, like Université de Montréal, are reviewing or amending their academic integrity policies to incorporate GenAI, and others, like McMaster University and the University of Northern British Columbia, are starting to take a position on the use of AI-detection tools.
One key bottleneck for all these policies seems to be instructor knowledge and training. For many of these strategies to work, institutions need large numbers of professors who are engaged and informed on issues of GenAI. Two things are at issue: first, teaching and learning centres on campus need more resources to be able to assist instructors on these issues; and second, many instructors need to become more willing to engage with the pedagogical implications of AI. We haven’t seen many Canadian institutions with the requisite institutional investment or enthusiasm; as a result, we expect it may take some time for institutions to reach the level of engagement needed for these policies to really take root at an institutional level.
While more and more institutions are developing policy initiatives on teaching and learning, we are aware of very few institutions that have yet managed to develop general policies on GenAI – policies that go beyond teaching and learning to also cover considerations such as the procurement of GenAI and its use in business operations. On the governance front, Western University recently announced the appointment of a chief AI officer, whose mandate will be to develop and implement a university-wide AI strategy, and Laurentian University’s senate just voted to create an ad hoc committee dedicated to creating policies around the use of AI.
The lack of more generalized policy development seems to come down to two main factors. First, HEIs’ bureaucracy is the enemy of innovation and adaptation: the process of developing and amending policies takes too much time for an issue evolving as rapidly as GenAI. Second, HEIs still haven’t decided what they’d like to see in such policies. Clearly, a few areas need to be addressed: academic integrity, the responsibilities of instructors and students, protections for data privacy and security, etc. But many gray areas remain, notably with respect to equity, research, and business operations. Do institutions want to allow instructors to use GenAI to build their courses, or even to grade assessments? What about allowing GenAI to automate certain tasks in the admissions process? By and large, Canadian institutions have yet to grapple with these issues, with many saying they are “waiting to see what the big institutions do” so they can follow their lead.
The Canadian experience is not entirely out of line with what we see in other countries, but there are some exceptions. One of the few comprehensive policies we’ve seen so far is at the University of Technology Sydney, which clearly states that AI may be used to support a number of tasks, such as making operations more efficient, providing feedback to students, identifying at-risk and high-achieving students, recommending optimal allocation of campus facilities, and supporting marketing activities. Other universities, like the University of Hong Kong and Lingnan University, have bought licenses to provide their faculty and students with access to ChatGPT. Some scientific journals are also starting to release their own policies and authorship guidelines for AI-produced content.
We must also consider the broader regulatory and legal context for GenAI. China was the first country to formally adopt a law on GenAI. In addition, China’s new Degree Law would specifically affect higher education, and could lead to students having their degrees revoked if they used ‘illegal means’ to complete academic work, which could include GenAI tools. Canada, for now, has only released a voluntary code of conduct for the use of GenAI systems. The fact that education falls under provincial and territorial jurisdiction while AI governance and data privacy are mostly regulated at the national level adds further complexity to the discussion.
While there is no panacea (yet) for how to respond to GenAI in higher education, we can offer a few pointers for institutions that currently feel a bit lost. First, assemble a committee or appoint a dedicated role focused on GenAI, tasked with staying informed of the latest developments in GenAI applications, and make sure this committee or role has the AI literacy needed to make sound decisions for the institution. Second, bring the right people around the table – VPs Operations, Deans of Graduate Studies, Registrars, IT, accessibility/disability offices, and students – to ensure all perspectives are considered, beyond solely teaching and learning. And finally, communicate the institution’s position to the whole campus community, so that people know what is going on and where the institution stands on various issues. One thing that is definitely apparent from our discussions across the country is that even where institutions are being proactive, their efforts are not well promoted or well understood even within their own communities.
Anyway, that’s your digest of what we’ve been seeing and hearing on campuses across the country. Did anything catch your attention? Keep getting the latest on our AI work by subscribing to our AI-focused newsletter. See you there.