HESA’s AI Observatory: What’s new in higher education (Aug. 25th, 2023)

Spotlight

Good afternoon,

With back to school around the corner (or already behind some of you), higher education institutions are facing increasing pressure to develop proper policies or guidance on the use of generative AI.

HESA’s Observatory on AI Policies in Canadian Post-Secondary Education is here to help institutions develop their institutional response to this complex issue.

We have already begun to receive policies and guidelines from institutions across the country. Thanks so much, and keep them coming! If you would like your institution’s policy or guidelines to be showcased on the AI Observatory, please reach out. You can also submit it via our online form. We also welcome any comments or suggestions moving forward.

Make sure to opt in by clicking here or on the button below if you want to keep receiving these weekly AI-focused emails.

Next Roundtable Meeting

Date: Monday, August 28th, 2023
Time: 12:00–1:30 PM ET

Join us next Monday, August 28th, from 12:00 PM to 1:30 PM ET, for our next Roundtable meeting, focused on governance and policy. This meeting will be facilitated by Simon Bates, Vice-Provost and Associate Vice-President, Teaching and Learning at the University of British Columbia. We’ll also welcome Leeann Waddington, Associate Vice-President, Teaching and Learning at Kwantlen Polytechnic University, as a guest speaker. Register now for free to save your spot!

Until then, you can catch up on what happened in our last Roundtable meeting, which focused on academic integrity, here.

Policies & Guidelines Developed by Higher Education Institutions

Tags: Guidelines, Academic integrity, Pedagogy, Governance, Canada

University of Toronto’s (Canada) guidance on ChatGPT and Generative AI in the Classroom includes “sample syllabus statements for instructors to include in course syllabi and course assignments”, as well as Guidance on the Appropriate Use of Generative Artificial Intelligence in Graduate Theses. The FAQ covers topics such as: ethical considerations regarding the use of generative AI tools; using AI for pedagogical purposes in the classroom; permitted use of AI for the completion of assessments; the use of AI by instructors to assess student work; and citing generative AI tools. U of T’s Centre for Teaching Support & Innovation has also published a resource page on the use of Generative Artificial Intelligence in the Classroom, which includes guidance for course and assessment design.

Tags: Policy, Academic integrity, North America

Stanford University’s (US) Policy guidance regarding generative AI in the context of coursework states that, absent a clear statement from a course instructor, use of or consultation with generative AI shall be treated analogously to assistance from another person. Individual course instructors remain free to set their own policies regulating the use of generative AI tools in their courses, including allowing or disallowing some or all uses of such tools.

Tags: Policy, Guidelines, Prohibition, Academic integrity, North America

Vanderbilt University Center for Teaching’s (US) Guide for Teaching in the Age of AI notes that if faculty do not expressly tell students that technologies like ChatGPT are allowed, then the use of such technology represents a violation of Vanderbilt’s Honor Code, which already has a provision against giving and/or receiving unauthorized aid that covers any unauthorized technology.

Tags: Policy, Prohibition, Academic integrity, MENA

The American University of Armenia (Armenia) amended the clause on cheating in its Student Code of Ethics to include the use of AI. The clause reads as follows: “6.4.2. Cheating includes but is not limited to: 6.4.2.1. using or referring to notes, books, devices or other sources of information, including advanced Artificial Intelligence (AI) tools, such as ChatGPT, in completing an Academic Evaluation or Assignment, when such use has not been expressly allowed by the faculty member who is conducting the examination”.

Tags: Guidelines, Academic integrity, Governance, Oceania

Deakin University’s (Australia) Student guide to using generative AI provides guidelines for students using generative AI in their study and assessments. AI-generated text can be used as a learning tool for inspiration or guidance, but the final submitted assessment must be the student’s own work, creation and analysis. The guidelines also require students to appropriately acknowledge where generative AI tools have been used, and to what extent; appropriate referencing is detailed here. Finally, the guide lists a series of limitations of generative AI tools (such as currency, accuracy, obscured authorship, biases and quality of output) as well as risks (including infringements of intellectual property, privacy and security).

News & Research

Douglas Heaven, W. MIT Technology Review. April 6th, 2023.

Generative AI tools are here to stay, so trying to ban them is futile. Instead, post-secondary institutions need to prepare learners for a world where AI will become ever more prevalent. The author of this article argues that, used well, ChatGPT could even help improve education (e.g., by generating personalized lesson plans and teaching materials to meet individual learning preferences). However, teachers need to adapt their teaching styles to build media literacy among students.

Dixon, J. University World News. August 14th, 2023.

This article revisits some of the discussions that took place during the Universitas 21 Educational Innovation Symposium, held at McMaster University. The Universitas 21 network believes that values such as compassion and emotional intelligence are at the core of ensuring that machine learning technology serves educators better. It reiterates the obligations of higher education institutions, “both to their students and to the wider society they are positioned in”. This includes reassessing teaching and learning methods, as well as collaborating with students to co-create innovative responses to AI.

D’Agostino, S. Inside Higher Ed. August 22nd, 2023.

This article touches on the complexities of intellectual property and copyright with respect to the use of generative AI tools. Who owns AI outputs? The debate remains unresolved: the authors whose copyrighted works are used to train AI systems? Those who use the generative AI tools? The companies that created them? “To examine legal and policy issues related to generative AI, the U.S. Copyright Office launched an initiative earlier this year.”

Brams, S. Count on 2. August 18th, 2023.

The College of Charleston is launching its very own AI chatbot to help students connect with support services. Named “Clyde the Chatbot”, this AI-powered text messaging service will answer students’ questions and “provide students with 24-hour access to campus resources”. In addition, the chatbot will reach out to students several times throughout the semester to check on how they are doing and connect them with relevant resources.

Benning-Williams, L. The evoLLLution. August 22nd, 2023.

In an interview with the evoLLLution team, Lauren Benning-Williams, Director of Marketing and Communications at George Washington University, explains how AI can be used to support higher education institutions’ marketing efforts. This includes personalizing marketing solutions to increase engagement, and improving access to real-time data.

One response to “HESA’s AI Observatory: What’s new in higher education (Aug. 25th, 2023)”

  1. It would be great if you could also discuss false accusations of academic fraud based on the use of AI to identify the use of AI. The number of accusations of ChatGPT use has grown; perhaps some institutions could provide figures on how many accusations are being made. Teaching Assistants who are doing marking, and who do not understand how ChatGPT works from a technical perspective, are experimenting with it to try to reproduce students’ answers. They don’t understand that if they feed ChatGPT any part of a student’s answer while trying to see whether the answer could have been influenced by ChatGPT, the tool remembers what was entered earlier in the conversation and then produces text that uses similar wording and examples to the student’s answer, possibly, but not always, combined with other content.

     The TAs then take this overlap as proof that the student used ChatGPT, when in fact ChatGPT copied content from the student’s answer it was given. They and the professors treat this false overlap as evidence of cheating and send it straight to the academic discipline process without talking to the students at all to discern whether they might be innocent. From there, the students are treated as guilty, with all of the mental health damage and interruptions to their ability to enroll in required courses as the discipline process drags out for months, and they are not given a chance to discuss the case or demonstrate their knowledge and prove their innocence in a way that is heard outside the official process, which is designed for people who are guilty.

     The persons responsible for deciding whether the allegations of fraud have merit also do not understand how ChatGPT works, or that the way it was used is not a credible method for determining ChatGPT use. They only look at the similarities in examples and wording between the student’s answer and the one ChatGPT produced after being shown the student’s answer, and take this as proof that the student copied ChatGPT when in fact the opposite occurred: ChatGPT copied the student. They still rely on this falsely manufactured evidence even when the university’s own policies say that students’ work should not be put into AI tools like ChatGPT. They are not taking the time to consult experts to see whether the methods used were valid, and the TAs and faculty are not being educated in university policy or the appropriate use of AI tools.

     In the case of academic fraud accusations, the damage to the student is so high that special care must be taken to keep the rate of false positive accusations extremely low. But institutions are now so paranoid about AI that they are going the other way, falsely accusing many students through their own lack of understanding, their reliance on TAs for marking, and their use of invented methods to try to detect the use of ChatGPT.
