HESA’s AI Observatory: What’s new in higher education (Nov. 17th, 2023)

Spotlight

Good afternoon all, 

Today, we share a series of recommendations on how institutions should respond to AI, along with missteps they should avoid. You will also find a publication in which journal editors share recommendations on the responsible use of GenAI in scholarly journal publishing.

We hope these resources are helpful! 

Next Roundtable Meeting

Date: Tuesday, November 21st, 2023
Time: 12:00–1:00 PM ET

Join us next Tuesday, November 21st, from 12:00 to 1:00 PM ET, for our next AI Roundtable meeting, focused on Inclusion. During this session, guest speakers will offer critical perspectives on GenAI tools, examining both how these tools can perpetuate inequities and how they can be used to support accessibility and inclusion for individuals living with disabilities in the higher education context. The session will be facilitated by Lan Keenan, President of the Schulich Disability Alliance at Dalhousie University’s Law School. Register now for free to save your spot!

If you missed our last AI Roundtable on Student Perspectives, you can watch the recording here.

Policies & Guidelines Developed by Higher Education Institutions

Tags: Guidelines, Academic integrity, Pedagogy, Canada

University of Victoria’s (Canada) website on the Scholarly use of AI tools includes guidance for students and educators. It recommends that instructors discuss academic integrity and the importance of individual expression of knowledge with their students, and it lists tips for promoting academic integrity. The guidance also highlights how ChatGPT can be used as a learning opportunity. The University of Victoria emphasizes that ChatGPT and other AI tools have not been vetted for privacy and security by the institution: these tools can therefore be used as a learning opportunity, but not to assess student learning.

Tags: Guidelines, Research, North America

Arizona State University’s (United States) Guidelines for Use of Artificial Intelligence in Research strongly recommend discussing the appropriateness of using AI tools with co-investigators, collaborators, and field experts. Researchers using GenAI tools must keep the following in mind: many federal agencies have tools to detect AI-generated content; content generated by AI often paraphrases other sources, which could raise plagiarism and intellectual property concerns; AI-generated content may be inaccurate or biased; one should not rely solely on GenAI for decision-making purposes; one should not place federal, state, or ASU data into an externally sourced GenAI tool, because doing so makes the data publicly available and open source; and when working with vendors or subcontractors, one should inquire about their practices of using AI.

Tags: Guidelines, Governance, North America

University of Iowa’s (United States) Guidelines for the secure and ethical use of Artificial Intelligence state that, because the institution does not have a contract or agreement covering most AI tools, standard UI security, privacy, and compliance provisions are not in place. AI tools should therefore only be used with institutional data classified as public (low sensitivity), and AI-generated code should not be used in institutional IT systems unless it has been reviewed by a human.

News & Research

Complete College America. November 2023. 

“This publication serves as a comprehensive playbook for higher education institutions, offering a curation of over 200 potential applications of AI designed to bolster student success, with a particular emphasis on college completion”. This playbook covers elements such as leadership and culture; strategic planning; information technology; institutional effectiveness and institutional research; strategic finance; pedagogy, curriculum and instruction; assessment and evaluation; student engagement and digital learning infrastructure; faculty support; and student experience.

Tasneem, A. EAB. June 8th, 2023. 

In conversations with presidents, provosts, chief strategy officers, chief business officers, and chief information officers, the author identified seven missteps university leaders are making in their approach to AI: 1) dismissing AI as just another hype cycle; 2) assuming we can distinguish between AI-generated and human-generated work; 3) not preparing students for an AI-fluent workforce; 4) waiting for vendors before making an institutional plan for incorporating AI into operations; 5) adopting a piecemeal approach to AI; 6) assuming leaders need a formalized strategy before discussing AI with the campus community; and 7) failing to raise campus awareness about AI risks, especially those of public tools like ChatGPT.

Darby, F. The Chronicle of Higher Education. November 13th, 2023. 

The author of this article describes three ‘camps’, based on types of response to AI: the AI enthusiasts, the AI realists, and the AI resistors. “No doubt that mix of reactions is as much here to stay as AI”. While the author understands the resistance, the article argues that not incorporating GenAI into the classroom will end up doing students a disservice. There are many concerns about embracing GenAI: the human-labor and environmental costs of AI, the reinforcement of existing online biases, the already limited capacity of faculty, privacy issues, and more. Still, the author believes that banning GenAI will never work and advocates for what the article calls ‘the middle ground’: teaching students how to think about and use AI tools. The article recommends five approaches: 1) being explicit about the use of AI tools in your class; 2) teaching students how to use AI tools appropriately; 3) demonstrating in class how scholars in your discipline might use AI; 4) analyzing AI results in class; and 5) teaching about AI.

Kaebnick, G. E. et al. Hastings Center Report. 53(5). October 1st, 2023. 

This paper is written by a collective of editors of bioethics and humanities journals who have been contemplating the implications of GenAI for scholarly publishing. They believe that GenAI might both pose a threat to research and be valuable to it, and they have developed a preliminary set of recommendations for GenAI use in scholarly publishing. The recommendations are as follows: 1) LLMs or other GenAI tools should not be listed as authors on papers; 2) authors should be transparent about their use of GenAI, and editors should have access to tools and strategies for ensuring authors’ transparency; 3) editors and reviewers should not rely solely on GenAI to review submitted papers; 4) editors retain final responsibility for selecting reviewers and should exercise active oversight of that task; and 5) final responsibility for the editing of a paper lies with human authors and editors.

Coffey, L. Inside Higher Ed. November 14th, 2023. 

Some institutions have started using AI clones to simulate personalized interactions between institutional leaders and students. While these clones have the potential to improve student engagement and students’ sense of belonging, they also raise the risk of phishing and scams. Institutions that start using AI clones need to disclose it, and they need to properly teach students about the risks of AI.

More Information

Want more? Consult HESA’s Observatory on AI Policies in Canadian Post-Secondary Education.

Was this email forwarded to you by a colleague? Make sure to subscribe to our AI-focused newsletter so you don’t miss the next ones.
