HESA’s AI Observatory: What’s new in higher education (Sept. 15th, 2023)

Spotlight

Good afternoon all,

As we plan our series of AI Roundtable meetings, we at HESA Towers are hoping to host a session on student perspectives in November.

We’d appreciate your help in connecting us with student leaders from your institutions who would be open to sharing their views on how AI is impacting higher education. We’re looking for diverse perspectives, including voices from graduate students.

Let us know if you think of anyone by responding to this email! We’re also open to any comments or suggestions on topics or guest speakers for future sessions.

As always, please continue sharing your institutional policies and guidelines with us (by email or via this online form) so we can showcase them on our AI Observatory.

Finally, if you want to keep receiving these weekly AI-focused emails, make sure to opt in by clicking here or on the button below. After September, we’ll send these emails only to those who have subscribed to that specific list.

Next Roundtable Meeting

Date: Tuesday, September 26th, 2023
Time: 10:00-11:00 AM ET

Join us on September 26th, from 10:00AM to 11:00AM ET, for our next Roundtable meeting focused on Pedagogy and curriculum. This meeting will be facilitated by Grant Potter, Instructional Designer at UNBC’s Centre for Teaching, Learning, and Technology. We’ll also welcome representatives from JISC’s National Centre for AI as guest speakers. Register now for free to save your spot!

We’re also happy to share that we’ve uploaded the recording of our last session on Governance and policy on our website. If you missed the session, you can access it here. We have also uploaded a summary of the key insights that emerged from the breakout group activity. Main takeaways revolved around disparities between institutions, lack of clear and visible leadership, limited scope of work (focusing solely on Teaching & Learning), limited involvement of students and staff, and the various roadblocks faced by institutions in their AI-related work. 

Policies & Guidelines Developed by Higher Education Institutions

Tags: Guidelines, Academic integrity, Pedagogy, Canada

University of Regina’s (Canada) Generative AI Guidelines for faculty and instructors present ways in which GenAI tools can be used to enhance T&L practices, such as assisting in the development of course syllabi, assignments and rubrics; personalizing learning; generating accessible course materials; providing adaptive learning assistance via AI tutoring; and supporting formative assessment techniques through automated, on-demand feedback. The decision of whether to allow the use of GenAI rests with each instructor. The guidelines also include ways in which instructors can help prevent the misuse of GenAI tools in academic work, such as developing AI literacy, clearly laying out rules and boundaries for the use of GenAI tools, identifying innovative ways to incorporate GenAI in the curriculum, requiring students to disclose when they are using GenAI in coursework, and having open discussions about the ethical considerations of GenAI. UofR discourages relying solely upon AI detection tools to ensure academic integrity: the report produced by Turnitin may be presented to the investigating dean alongside other potential evidence, but it cannot be considered conclusive proof of academic misconduct by itself.

Tags: Guidelines, Academic integrity, Canada

Red River College Polytechnic’s (Canada) Guidance for faculty highlights several cautions around the use of GenAI tools, such as privacy risks, legal risks (notably regarding intellectual property), ethical considerations and the quality of AI-generated content. It then notes that “the use of ChatGPT and GenAI tools does not automatically equate to cheating”, and that instructors need to consider appropriate and inappropriate uses of GenAI tools in their courses and assignments. The guidance also indicates what instructors should do if they suspect a breach of academic integrity. Notably, it highlights that the use of AI detection tools is problematic for various reasons, including false positives, over-monitoring and privacy, and copyright and intellectual property rights. Finally, the guidance provides a checklist for instructors that includes having open discussions with students about GenAI tools and academic integrity expectations, and adapting course assessments.

Tags: Guidelines, Academic integrity, Pedagogy, Canada

Concordia University’s (Canada) guidance on AI in the Classroom and ChatGPT highlights the three pillars of AI in the classroom, which are 1) to actively mitigate misuse, 2) to integrate student knowledge and expertise, and 3) to discuss and share findings. Recommendations are listed for each pillar, such as the need to discuss academic dishonesty and to tailor assignment directions. A list of ways to use GenAI in T&L is also provided, such as editing or critiquing output, fact-checking the output, critiquing the perspective, and more.

Tags: Guidelines, Academic integrity, Pedagogy, North America

University of South Carolina’s (US) Q&A on ChatGPT for Teaching & Learning covers topics such as best practices for using ChatGPT in education (incorporating ChatGPT as part of a larger instructional strategy, emphasizing the importance of academic integrity, monitoring student use of ChatGPT, and monitoring and evaluating ChatGPT’s performance); how to redesign assessments to discourage the use of ChatGPT (through authentic assessments like scientific experiments and real-world case studies; higher-order thinking like critical thinking, problem-solving and creativity; collaborative learning; and feedback); and how instructors can use ChatGPT as a learning tool (as a research tool, as writing assistance, for language practice, for interactive learning, and for customized content).

Tags: Guidelines, Academic integrity, East Asia

Tokyo University of Foreign Studies’ (Japan) Guidelines for Instructors Regarding AI in University Education state that “the instructor should be allowed to determine whether to prohibit, restrict or actively utilize [GenAI], depending on the characteristics of the course”. If GenAI is used, the following points should be considered: instructors’ understanding of AI, shared understanding with students, clarity and fairness of rules, limitations of AI detection services, and protection of personal information. The guidelines also highlight the limitations of GenAI (such as hallucinations, biases and lack of sources).

News & Research

D’Agostino, S. Inside Higher Ed. September 13th, 2023.

Views around AI in higher education are as divided as reactions to the shift to online learning during the pandemic were. On one end, individuals who choose to embrace it believe those who don’t are “dinosaurs”. On the other, those who are opposed to it “think that anybody who is choosing to do AI literacy in their classes or help their students to understand how to use AI as a tool is part of the problem” and doesn’t take academic dishonesty seriously. The author of this article argues that to move the conversation forward, there need to be spaces for both sides to express their concerns and feel heard, and to experiment and make mistakes.

Schroeder, R. Inside Higher Ed. September 15th, 2023.

“AI will not take your job; a person with AI skills will replace you”. This common adage calls for adequate AI literacy training at the post-secondary level. The author of this article lists a series of steps that institutions should take to ensure the career readiness of their graduates, which include partnering with employers to identify what AI skills will be required and expected in specific fields and positions, and integrating these skills into relevant learning modules.

McInnes, R. Times Higher Education. September 11th, 2023.

In this article, Richard McInnes, learning designer at the University of Adelaide (Australia), talks about how instructors can incorporate GenAI in their work to “enhance the learning experience for our students and create efficiencies in our own work”. This includes, for example, using GenAI to rewrite information for different audiences, copy-edit written content, write course emails and announcements, draft and reword course learning objectives, develop first drafts of course content, generate summaries of readings, generate case studies based on existing content, and more. However, he emphasizes that GenAI can’t replace human intelligence, and that the deep understanding of students and their context possessed by instructors is paramount.

Naidoo, R. University World News. August 16th, 2023.

This article is an edited version of Professor Rajani Naidoo’s keynote presentation to the CIHE Biennial Conference on International Higher Education. Professor Naidoo is the UNESCO Chair in Higher Education Management and co-director of the International Centre for Higher Education Management at the University of Bath (UK). The keynote elaborates on the disruptive potential of AI. It also argues that AI is, in itself, an imperial power that can perpetuate and reinforce political divisions and data-centric oppression. Finally, it elaborates on the role of universities in responding to AI.

McMurtrie, B. The Chronicle of Higher Education. September 8th, 2023.

In this article, the author presents five questions society should be asking this fall to assess the impact GenAI will have on teaching in the short term: 1) Will generative AI find acceptance in academe? 2) What AI guidance and training will colleges provide instructors? 3) Will the regulatory climate around GenAI heat up? 4) Will AI make some courses obsolete? and 5) Will the way courses are taught fundamentally change?
