HESA’s AI Observatory: What’s new in higher education (Dec. 14th, 2023)

Spotlight

Good afternoon all, 

2023 is soon coming to an end, and this is already the last AI-focused email of the year. 

Since HESA launched its AI Observatory in August, we have noticed that the pace at which institutions release policies, guidelines and statements around the use of GenAI in higher education seems to have slowed. Has the feeling of urgency passed? Have other issues emerged that forced institutions to focus their energy elsewhere? Or is the current silence hiding the fact that many institutions are doing the work behind the scenes, convening their advisory councils, engaging with their communities, and trying to figure out how best to respond to AI? Our bet is that it’s a mix of the three. Some institutions might also think that the responsibility for addressing AI lies with each faculty member, who decides whether to allow or prohibit AI in their respective courses. If that’s your institution’s approach, you might be ignoring many of the more complex issues, such as how GenAI impacts research, how business operations might be redesigned using AI tools, or how AI can act as an assistive technology for students with disabilities, to name only a few.

What’s clear, though, is that if your institution hasn’t yet taken the AI bull by the horns, you are behind. Every day, more students, faculty and staff are adding GenAI tools to their toolbox, exploring their possibilities, and incorporating them into their daily activities. If your institution is still struggling with how to properly respond to GenAI in higher education, send us an email. We might be able to help you out.

If you have developed policies or guidelines around GenAI in higher education that are not yet featured on our AI Observatory, please send them our way so our team can add them to our repository. 

The AI newsletter will be back on January 12th, 2024. Until then, we wish you a bit of rest amidst the planning of the next semester!

Policies & Guidelines Developed by Higher Education Institutions

Tags: Guidelines, Academic integrity, Pedagogy, Research, Governance, North America

Chapman University’s (United States) Artificial Intelligence Hub provides access to a series of guidelines. Guidelines Relating to Data Privacy and Security When Using Generative Artificial Intelligence Tools state what type of data may be input into GenAI tools, and which should not be entered into any GenAI tool at this time (data considered “high risk”). The University has a data risk classification matrix to classify data. Chapman Considerations for Syllabus Policies on the Use of AI Generative Tools covers approaches to syllabus statements concerning GenAI, and Artificial Intelligence in the Classroom provides ideas on how to incorporate GenAI into teaching and learning. Guidelines for the use of Artificial Intelligence in Research / Scholarship / Creative Activities covers, among other things, AI and grant proposal writing, and AI and data analysis. 

Tags: Guidelines, Research, Western Europe

The University of Greenwich’s (UK) Guidance on the Use of Artificial Intelligence includes the university’s position on the matter: “The university believes that AI can be a very useful tool to aid learning, and its effective, responsible use is likely to be a desired trait for employers. However, its use must be guided by principles of academic integrity and with awareness of the risks it poses, when not used with care.” The use of AI to aid learning is encouraged, but students must not copy and paste directly from an AI tool. Unacceptable uses of AI include “submitting work produced in whole or part by AI without proper referencing, copying or paraphrasing AI-generated content without proper referencing, or using AI to undertake analysis, evaluation or calculations without acknowledgement via the declaration of AI use form”. In addition, personal or sensitive information should not be input into AI tools. The guidance indicates how to reference the use of AI, and lists a series of risks of using AI.

News & Research

1EdTech.

1EdTech’s Emerging Digital Pedagogies Innovation Leadership Network created an AI Preparedness Checklist to “provide institutions with guiding prompts for establishing protocols, policies, and best practices for using AI in teaching and learning”. The checklist offers organizational guiding prompts, policy guiding prompts, pedagogical guiding prompts, and literacy guiding prompts. 

Ioku, T., Kondo, S. and Watanabe, Y. University World News. December 9th, 2023.

The authors of this article reviewed the home pages of 100 highly ranked universities to identify how they responded to GenAI in higher education. They identified 68 documents, which were then categorized into four levels, “indicating the degree of support or opposition toward adopting GenAI in higher education and the extent of perceived risks associated with using the new technology”. The authors also used data from the QS World University Rankings to determine each institution’s ratio of international students and academic reputation. “Approximately 35% of the universities demonstrated a negative stance (strongly against or against) toward using GenAI in higher education. About 30% took a neutral stance, and 35% were supportive. […] Universities with a more diverse ratio of international and domestic students were more likely to be pessimistic about accepting the new technology and also foregrounded the risks involved in using GenAI”.

EAB.

In this infographic, EAB shares 12 ways to unlock AI’s potential in higher education in the following spheres: academic and career support for students, enrollment support for prospective students, operational efficiency gains, personalized content generation for enrollment marketing and donor relations, and faculty efficiency gains in teaching and research.

Cyr, M. Inside Higher Ed. December 13th, 2023.

Following the UPCEA MEMS conference, the author of this article provides six recommendations for those who are navigating using AI in higher education: 1) think first of AI as an efficiency creator; 2) experiment with the tools; 3) use AI strategically, as you would with any other technology; 4) use AI tools to facilitate more personalized website experiences; 5) get started on the sticky stuff (organizational policies, procedures, privacy approaches, etc.); and 6) use AI to facilitate human interactions.

McMurtrie, B. and Supiano, B. The Chronicle of Higher Education. December 11th, 2023. 

Institutions’ responses to GenAI are all over the map: some encourage it, some allow it under specific circumstances, others prohibit it. The Chronicle asked its readers to describe what was happening with respect to AI in the classroom during the fall semester, and nearly 100 faculty members answered the call. Fewer than 10 respondents said they kept their assignments and policies the same. Many instructors added language to their syllabus to outline what an appropriate use of AI looks like, or at least had a conversation about it with their students. Many respondents altered or even eliminated certain types of assignments or assessments, and many said they allowed AI in some assignments but not in others. Some faculty members have also begun using AI tools themselves to help design their courses. “By and large, responses to The Chronicle’s questions suggest that professors’ worries about the scope and severity of students cheating with GenAI have dissipated. Asked what has surprised them, they were more likely to point to how little students use GenAI than to how many of them use it to cheat.”

Sharma, Y. University World News. December 6th 2023. 

Most of the data that large language models (LLMs) train on is in English. However, some researchers are currently trying to reach accuracy comparable to that of large LLMs, but with less data and smaller models. This would be especially useful for non-English-speaking countries that might not have access to vast resources. For example, work is underway to develop language models for transcription and translation in under-resourced languages. Smaller language models are also easier to control and correct, if needed.

Keyhani, M., Hemmati, H. and Salgado, L. The Conversation. November 21st, 2023. 

In this article, the authors argue that collaboration is required to be able to discover the full potential of GenAI and how it might help tackle today’s global challenges. “Including GenAI in the curriculum cannot be treated as top-down teaching. Given the rapid development and newness of the technology, many students are already ahead of the professors in their GenAI knowledge and skills. We must recognize this as an era of collective discovery, where we are all learning from each other.”

Mitchell-Yellin, B. Inside Higher Ed. December 12th, 2023. 

The author of this article argues that the way AI tools increase efficiency is by ‘cutting out humans’, and that these gains in efficiency ‘come at the cost of alienation’ by distancing individuals from one another, eliminating their engagement in the process and ‘undermining their ability to control and benefit from their own labor’. He also argues that it isn’t in the students’ interests to prep them to properly use AI in the workforce because “the more normalized the use of GenAI, the more bosses will demand of workers. And the more efficiently we produce things, the more potential there is for others to profit from our labor”.

More Information

Want more? Consult HESA’s Observatory on AI Policies in Canadian Post-Secondary Education.

Was this email forwarded to you by a colleague? Make sure to subscribe to our AI-focused newsletter so you don’t miss the next ones.
