HESA’s AI Observatory: What’s new in higher education (Sept. 1st, 2023)

Spotlight

Good afternoon all,

Thanks so much to the 185 of you who joined us last Monday for our AI Roundtable meeting on Governance and policy. We’re encouraged by the continued strong interest in this issue and we’re pleased to continue hosting these sessions and populating our Observatory on AI Policies in Canadian Post-Secondary Education.

It was striking in Monday’s session to hear the inter-institutional disparities with respect to the pace of policymaking. Some institutions are already developing comprehensive formal policies, while others have yet to create even an institutional taskforce to address the issue. One thing that appeared to be true across the board, regardless of the state of policy development: institutions seem much more focused on adapting teaching and learning than they are on the use of AI in either research or institutional operations.

To help institutions keep track of developments on the use of AI in institutional operations, we have created a dedicated tag for it on our AI Observatory. You’ll find it both under Policies and Guidelines, and News and Research. We hope that this will be of interest to all of you and help you navigate this complex issue.

Please continue sharing your institutional policies and guidelines with us, so we can showcase them on the AI Observatory! You can either send them our way in response to this email, or submit them via our online form. We also welcome any comments or suggestions moving forward. Finally, please be sure to opt in to the AI-focused emails by clicking here or on the button below if you want to keep receiving these weekly AI-focused emails. After September comes to an end, we’ll only send these AI-focused emails to those who subscribed to that specific list.

Next Roundtable Meeting

Date: Tuesday, September 26th, 2023
Time: 10:00–11:00 AM ET

Join us on September 26th, from 10:00 to 11:00 AM ET, for our next Roundtable meeting focused on Pedagogy and curriculum. This meeting will be facilitated by Grant Potter, Instructional Designer at UNBC’s Centre for Teaching, Learning, and Technology. We’ll also welcome representatives from JISC’s National Centre for AI as guest speakers. Register now for free to save your spot!

We’ll soon upload the recording of our last Roundtable meeting on Governance and policy. Until then, you can also catch up on what happened in our Roundtable meeting that focused on Academic integrity here.

Policies & Guidelines Developed by Higher Education Institutions

Tags: Guidelines, Academic integrity, Pedagogy, Governance, Inclusion, Canada

McMaster University’s (Canada) Provisional Guidelines on the Use of Generative Artificial Intelligence (AI) in Teaching and Learning state that, as per existing policies, the use of GenAI tools without formal allowance by instructors is considered academic misconduct. Instructors are expected to clearly communicate with students if, and to what extent, the use of GenAI is accepted in the course outline. They can decide individually whether to incorporate GenAI tools into teaching and learning; if they do so, they should clearly explain, in the course outline, the extent to which it is being used. Where possible, courses that rely on GenAI tools should rely on free versions of such tools. Under certain circumstances, students may opt out of assessments that require the use of GenAI. The use of GenAI detection software is currently discouraged. Finally, GenAI tools can be used to provide formative feedback on student work; however, they cannot be used to provide summative evaluation.

More resources, such as citation guidelines, sample syllabus statements, sample rubrics and honour pledges, can be found here.

Tags: Policy, Academic integrity, Pedagogy, Governance, Inclusion, Canada

Carleton University’s (Canada) Recommendations and Guidelines on GenAI mention that the university “encourages teaching innovation and supports instructors who wish to try and/or adopt new pedagogical approaches and educational technologies”. As per Carleton’s policy, using GenAI tools without the instructor’s consent and proper citation is considered a violation of academic integrity. Carleton provides guidance for instructors who suspect that an assignment has been completed with unauthorized use of GenAI tools. Instructors are discouraged from relying on AI detection tools. The guidelines also provide example statements for allowing (or not) the use of GenAI tools, as well as syllabus language regarding the use of AI. Finally, the guidelines list a series of ethical and privacy considerations, such as data privacy, copyright, commercialization of student text, inequitable access, and inherent bias and discrimination.

Carleton has also put together guidance for Using Generative AI in the Classroom (e.g., as a search engine, as a language learning tool, to provide feedback on essays, to generate ideas on certain topics, as a study tool, etc.), and for Teaching Without the Use of AI.

Tags: Statement, Pedagogy, Canada

Seneca Polytechnic’s (Canada) Artificial Intelligence Emerging Technologies Institutional Academic Statement recognizes Seneca’s “responsibility to provide opportunities for [its] students to critically engage with these emerging technologies” in order to get students career-ready. Hence, “all Seneca students will have the opportunity to critically engage with AI emerging technologies to prepare for their careers and life as engaged citizens. Students must learn to recognize and take advantage of the benefits and potentialities AI emerging technologies have to offer while understanding the limitations and possibilities of misuses and abuses”. Seneca’s institutional academic statement was developed following a consultation with the Seneca community and the creation of a committee composed of faculty members, chairs and managers from various academic areas, representatives from ITS, Seneca Innovation, Academic Quality, Teaching & Learning, the Registrar’s Office, and Library Services. 

Tags: Guidelines, Academic integrity, Pedagogy, Europe

University College London (UCL)’s (England) guidance on Designing assessments for an AI-enabled world supports instructors in adapting assessments to better support learning. It includes a list of changes that instructors can make for the 2023–2024 academic year, such as discussing AI and academic integrity with students, increasing formative assessments, revising exam and essay questions, converting generic questions to scenario-based questions, and upgrading multiple-choice questions. Short videos accompany each of these recommendations. UCL also developed resources for Using AI tools in assessments (with examples of categories of assessments, and explanations regarding whether or not AI could be integrated into these assessments), as well as a note on Students’ perspectives on GenAI, which includes a list of suggestions coming from students.

Tags: Guidelines, Academic integrity, Pedagogy, East Asia

The Chinese University of Hong Kong (Hong Kong) has developed a Guide for students on the use of artificial intelligence tools in teaching, learning and assessments, which mentions different approaches to adopting AI tools in teaching and learning. The details regarding which approach to follow (prohibit all use of AI tools, use only with prior permission, use only with explicit acknowledgement, or use freely with no acknowledgement) must be stated in the course outline and/or in the instructions of the assignments. The guide also includes a list of considerations for students if the use of AI tools is permitted, such as responsible and ethical use, the direct correlation between the quality of input and the quality of output (prompt engineering), the need to fact-check all AI-produced outputs, and the need to properly acknowledge the use of AI tools.

News & Research

Sharples, M. University College London. August 7th, 2023.

This article presents video highlights of the 2023 UCL Education Conference’s keynote address from Mike Sharples, Emeritus Professor of Educational Technology at the Open University. His address covers topics such as “What is GPT-4?”, how instructors can interact with AI (e.g., possibility engine, collaboration coach, Socratic opponent, guide on the side), using AI to give feedback to students, accessibility and biases, as well as responsibility and creativity.

Higgins, A. Inside Higher Ed. August 25th, 2023.

In this article, the author calls for the need to improve evidence-gathering on students’ use of GenAI prior to making significant institutional changes: “faculty have an ethical obligation to know that students are using AI and to know how and why they are using it before they make dramatic changes to their curricula. In addition to the ethical obligation, it’s just plain foolish to make major changes to our curriculum – let alone redesign the structure of the university […] – without any concrete data about how students are engaging with AI”, especially considering the wide range of student responses to AI.

Young, J. EdSurge. August 24th, 2023.

Earlier in August, the online conference AI x Education, organized entirely by students, gathered more than 2,000 participants. The conference aimed to develop AI literacy among instructors, and encouraged them to incorporate AI in their courses to better prepare students to make ethical use of AI.

Equity issues in accessing AI tools and proper training on how to use them were among the biggest concerns discussed at the event. The full report of the conference can be accessed here.

Draisey, B. Iowa Capital Dispatch. August 25th, 2023.

An Iowa State University Associate English Professor, Abram Anders, developed a course on AI as a way to co-learn with his students about the challenges and opportunities of GenAI. He hopes this will not only help the institution be better positioned to properly teach students about AI, but also provide an institutional response to the rise of these tools. 

Some professors worry less about academic integrity than about intellectual property and privacy issues, as well as the biases of GenAI tools. Hence, they believe that teaching students how to properly use ChatGPT and other such tools might help alleviate these risks.

C. J. OpenAI.

OpenAI has developed a new feature, called ‘shared links’, that could help users of ChatGPT to properly cite their use of the GenAI tool. Shared Links “allow users to generate a unique URL for a ChatGPT conversation, which can then be shared with friends, colleagues, and collaborators. Shared links offer a new way for users to share their ChatGPT conversations. […] With shared links, users can let others see – and continue – […] exchanges with ChatGPT”.
