HESA’s AI Observatory: What’s new in higher education (Aug. 18th, 2023)

Spotlight

Good morning,

It’s been very encouraging to see the positive response to the launch of HESA’s Observatory on AI Policies in Canadian Post-Secondary Education over the last week. Thank you to those of you who have already shared with us the policies or guidelines that were developed by your institution.

We will be populating the Observatory every week with new AI policies and guidelines developed by higher education institutions across the country and around the world. Recent additions to the Observatory will be showcased in weekly newsletters sent every Friday. Make sure to opt in to the AI-focused emails by clicking here or on the button below if you want to keep receiving them.

If you would like your institution’s policy or guidelines to be showcased on the AI Observatory, please reach out! You can also submit it via our online form. We also welcome any comments or suggestions moving forward.

Next Roundtable Meeting

Date: Monday, August 28th, 2023
Time: 12:00-1:30 PM ET

Join us on August 28th, from 12:00PM to 1:30PM ET, for our next Roundtable meeting focused on Governance and policy. This meeting will be facilitated by Simon Bates, Vice-Provost and Associate Vice-President, Teaching and Learning at the University of British Columbia. We’ll also welcome Leeann Waddington, Associate Vice-President, Teaching and Learning at Kwantlen Polytechnic University as a guest speaker. Register now for free to save your spot!

Until then, you can also catch up on our last Roundtable meeting, which focused on Academic integrity, here.

Policies & Guidelines Developed by Higher Education Institutions

Tags: Guidelines, Academic integrity, Pedagogy, Canada

Wilfrid Laurier University (Canada) “supports faculty members in critically considering the adoption of generative AI in their courses”. WLU developed guidelines on Generative AI in Teaching and Learning, which cover implications related to academic integrity and properly citing the use of generative AI tools; how instructors should inform students if they are permitted to use generative AI in their courses (including sample course outline statements); how to (re)design course assessments to either incorporate generative AI or limit its use; and how to incorporate generative AI into learning activities. The guidelines also discourage instructors from using tools that detect the use of generative AI, notably to protect students’ intellectual property.

Tags: Policy, Academic integrity, Governance, Inclusion, Operations, Oceania

The University of Technology Sydney (Australia) developed an Artificial Intelligence Operations Policy that “guides the use, procurement, development and management of AI at UTS for the purposes of teaching, learning and operations”. The policy recognizes that AI “may be used for administrative and operational functions, teaching and learning activities and to improve user experiences as part of the delivery of the university’s object and functions”. This includes making operations more efficient, improving competitiveness, providing feedback to students, identifying at-risk and high-achieving students to improve learning outcomes, recommending study pathways towards identified career goals, recommending optimal allocation of campus facilities, marketing activities, and more. The policy also covers ethical principles for the use of AI, AI risks and opportunities, AI endorsement and oversight, and privacy and records management.

Tags: Policy, Prohibition, Canada

Université de Montréal (Canada) updated its disciplinary regulations on plagiarism and fraud to include article 1.2 o), which formally bans the use, complete or partial, literal or disguised, of a text, table, image, presentation, recording or any other creation generated by a generative AI tool (including ChatGPT), with the exception of specific authorization in the context of an evaluation.

Tags: Guidelines, Academic integrity, Governance, North America

Harvard University’s (US) Initial guidelines on the use and procurement of generative artificial intelligence (AI) tools, such as OpenAI’s ChatGPT and Google Bard, cover implications related to the following: 1) protection of confidential data (individuals shall not enter confidential data, e.g. non-public research data, in generative AI tools); 2) responsibility for the content produced using AI-generated material (which falls on the individuals using the tools); 3) academic integrity (the importance of following directions regarding permitted use, if any); 4) awareness and protection against phishing; and 5) the procurement of generative AI tools to ensure appropriate privacy and security protections.

Tags: Guidelines, Academic integrity, Pedagogy, Europe

University of Edinburgh’s (Scotland) Guidance for students on the use of Generative AI (such as ChatGPT) stipulates that faculty should 1) emphasize that assignments should be students’ own original work; 2) highlight the limitations of generative AI; and 3) emphasize the need to acknowledge the use of generative AI where it is (permitted to be) used. The University’s instructions on the matter are to cite AI-generated content as “personal communication”, since it is based on giving a prompt and receiving an answer. The use of generative AI tools (e.g., to generate ideas) should be cited even if no AI-generated content is included in the final work. The guidance also highlights the risk of copyright infringement when using generative AI.

News & Research

Humphries, M. Generative History. July 28th, 2023.

The author elaborates on the challenges facing higher education institutions that attempt to craft policies for generative AI. He argues that the range of available generative AI tools makes it impossible to craft a simple, one-size-fits-all policy. Institutions need to consider each tool available, such as chatbots (like ChatGPT), application programming interfaces, and in-house large language models, which all have different implications with respect to data privacy and security, research ethics, and copyright.

Gannon, K. The Chronicle of Higher Education. July 31st, 2023.

This article highlights what instructors should do to properly include AI policies in their syllabi. When determining what to factor into their AI policies, instructors should revisit their institutional policies on academic integrity, remember that the landscape is quickly evolving, be detailed about the dos and don’ts, consider reviewing assignments, and workshop syllabi with colleagues. The author also shares a list of questions developed by educator Derek Bruff to help with redesigning assignments.

Cohn, J. University World News. August 10th, 2023.

This article summarizes discussions that took place during a keynote panel about AI at the 2023 Student Experience in the Research University Consortium Symposium, held at the University of California, Berkeley. Discussions ranged from considerations on how generative AI might alter the student experience, to the impact on higher education jobs and operations. Panelists also discussed the institutions’ obligations to respond to AI.

Schroeder, R. Inside Higher Ed. August 2nd, 2023.

The article makes the case for the use of generative AI tools by institutions to increase performance levels: “[Institutions] who are not optimally using these awesome tools are losing competitive advantage, creativity and efficiencies that others are realizing.” The author provides examples of how generative AI could provide assistance, such as comparing departments’ performances; reviewing policies; recruiting students; budgeting; analyzing data; personalizing learning experiences; enhancing student learning outcomes; assisting with administrative tasks; and more.

Mowreader, A. Inside Higher Ed. August 9th, 2023.

Career services professionals can use generative AI to better support students as they prepare to enter the workforce. Tools like ChatGPT can help students develop, review and adapt résumés and cover letters; explore different career paths; job hunt; and even prepare for interviews.

Coffey, L. Inside Higher Ed. July 31st, 2023.

While some institutions are banning the use of generative AI tools, others are developing courses to build AI literacy and help students better use these tools. Both Arizona State University’s School for the Future of Innovation in Society and Vanderbilt University’s Initiative of the Future of Learning and Generative AI have developed courses focused on crafting better prompts to achieve better results (Basic Prompt Engineering with ChatGPT and Prompt Engineering for ChatGPT, respectively).

Mitchell, K. Inside Higher Ed. August 16th, 2023.

In this blog post, Dr. Mitchell of the University of Manitoba discusses an experiment she led in her classroom: students were allowed to use ChatGPT in exchange for submitting, with their assignment, a reflection on how they used it. Over half of the class reported using the generative AI tool in different capacities. Some used it to help with their preparation, while others used it as a writing support. Interestingly, many students reported that using ChatGPT was overall more unhelpful than helpful.

Liang, W., et al. Patterns, 4(7). July 10th, 2023.

“GPT detectors frequently misclassify non-native English writing as AI generated, raising concerns about fairness and robustness. Addressing the biases in these detectors is crucial to prevent the marginalization of non-native English speakers in evaluative and educational settings and to create a more equitable digital landscape.”

Studiosity. 2023 Canadian Student Wellbeing Survey.

Results from a survey conducted by Studiosity and Angus Reid Forum in March 2023 show that “the growth of AI software, including ChatGPT, appears to be having a major impact on the way students are studying, particularly those in business, engineering or law programs”. Key findings include: more than 40% of students have encountered instances of AI-linked cheating in the past year; 26% of students admitted they were considering cheating due to AI’s prevalence; and 65% of graduate students reported witnessing cheating.

Times Higher Education. Campus. August 7th, 2023.

In this podcast episode, chief scientist at Georgia Tech’s Center for 21st Century Universities, Ashok Goel, discusses what generative AI tools mean for teaching and learning, and makes some predictions about where things are headed. He argues that the role of teachers and higher education staff will evolve as a result of generative AI: as more and more tasks get automated, there will be an increased demand for components that require humanity, such as creativity, curiosity, and critical thinking.
