HESA’s AI Observatory: What’s new in higher education (Oct. 6th, 2023)

Spotlight

Good afternoon all,

It’s great to see the 1.4k of you who opted in to receive these AI-focused emails. If you know someone who’d like to receive them, please send them this link. We love to see the numbers growing.

Quick update – We recently added a sub-section ‘Recommendations’ under ‘News & Research’ on our AI Observatory, where we gather recommendations on how to respond to GenAI from various types of organizations, including governments, coalitions of higher education institutions, interest-based networks, student unions, and more. We hope these give you additional ideas on how to shape your institutional response to GenAI. 

If you stumble across anything that might be relevant to add to our AI Observatory, please send it our way, either by email or via this online form.

Wishing you all a restful long weekend!

Next Roundtable Meeting

Date: Tuesday, October 24th, 2023
Time: 12:00-1:00 PM ET

Join us on October 24th, from 12:00 PM to 1:00 PM ET, for our next AI Roundtable meeting, focused on Student perspectives. During this session, a panel of students from across the country, representing a wide range of perspectives, will share their opinions on GenAI in higher education. The panel will be facilitated by our team at Higher Education Strategy Associates. We’ll share more information about our panelists in the upcoming weeks. Register now for free to save your spot!

If you missed our last AI Roundtable on Pedagogy and curriculum, you can watch the recording here.

Policies & Guidelines Developed by Higher Education Institutions

Tags: Guidelines, Academic integrity, Pedagogy, Canada

British Columbia Institute of Technology’s (Canada) Introduction to Generative AI Tools provides guidance on how GenAI tools can be used to “support the work of teachers, transform students’ learning experiences and create innovations in learning assessment”. BCIT provides numerous examples of good practices regarding GenAI, such as using it to produce lesson plans, provide feedback, generate quiz questions, generate summaries, or even generate assignments that students will then critique and improve. The guidance lists a series of considerations, such as ethics and privacy. It also provides a sample course policy statement and sample citations.

Tags: Statement, Academic integrity, Governance, Western Europe

London School of Economics’ (UK) Statement on Artificial Intelligence, Assessment and Academic Integrity states that “Developments in generative AI also create opportunities for both staff and students to explore the potential of these tools to enhance teaching, learning and assessment. We are confident that we can maintain the credibility of our qualifications while finding new ways to utilise generative AI tools in ethical and responsible ways to improve our students’ education and skills development.” LSE has created a working group on AI made up of academic staff, professional staff, and students, which will explore the challenges and potential of GenAI for assessment and academic integrity, as well as the broader implications of AI for higher education. The statement mentions that LSE needs to ensure “clear and consistent communication and guidelines on the authorised use of generative AI in teaching and assessment”.

Tags: Guidelines, Academic integrity, Western Europe

Cambridge University (UK) released guidance, The use of generative AI in coursework, applicable from November 2023. It states that using GenAI to carry out initial research into a topic, or quoting briefly from AI-generated text within an essay and engaging in critical discussion of the quotation, is acceptable for submission as coursework if clearly acknowledged in the work. It also provides guidance for teachers on how to guard against inappropriate use of AI by students.

News & Research

Ulrichsen, H. BayToday. September 29th, 2023.

Laurentian University’s senate recently voted to establish an ad hoc committee with the mandate to bring forward policy recommendations on the use of AI at Laurentian by December 2023. The committee includes representation from faculty, librarians, and Laurentian’s chief information officer.

Russell Group. July 4th, 2023.

The Russell Group principles on the use of GenAI in education are the following: 1) Universities will support students and staff to become AI-literate; 2) Staff should be equipped to support students to use GenAI tools effectively and appropriately in their learning experience; 3) Universities will adapt teaching and assessment to incorporate the ethical use of GenAI and support equal access; 4) Universities will ensure academic rigor and integrity is upheld; and 5) Universities will work collaboratively to share best practice as the technology and its application in education evolves. 

European Commission. 2022.

To plan for effective use of AI in education, the guidelines recommend reviewing current AI systems and data use, initiating policies and procedures, carrying out pilots of AI systems, collaborating with AI system providers, and monitoring the operation of AI systems while evaluating the risks. The policies and procedures could include measures for: ensuring public procurement of trustworthy and human-centric AI; implementing human oversight; ensuring that input data is relevant to the intended purpose of the AI system; providing appropriate staff training; monitoring the operation of the AI system and taking corrective actions; and complying with relevant GDPR obligations, including carrying out a data protection impact assessment.

Harker, J. Environmental Factor. March 2023. 

This article presents AI policies established by several scientific journal publishers, such as Springer Nature, the family of Science journals, and the JAMA Network of journals. These policies range from prohibiting the use of AI-produced text altogether to prohibiting AI co-authorship. All human authors are responsible for the integrity of the content generated and used, and the use of AI tools should be properly cited. David Resnik, associate editor at the journal Accountability in Research, recommends that policies include the following: 1) Disclose and describe the use of any NLP systems in writing the manuscript text or generating ideas for the manuscript; 2) Accept full responsibility for the text’s factual and citation accuracy; mathematical, logical, and commonsense reasoning; and originality; 3) Specify who used the system, the time and date of use, the prompt(s) used to generate the text, the section(s) containing the text, and/or ideas in the paper resulting from NLP use; and 4) Submit the text generated by NLP systems as supplementary material.

Alves, T. ScienceEditor. June 5th, 2023.

This article summarizes a talk between speakers Chirag Jay Patel (Cactus Communications), Emilie Gunn (American Society of Clinical Oncology) and Avi Staiman (Academic Language Experts). Gunn advised organizations that want to develop policies around the use of AI to keep the policies broad and general, since it isn’t possible to address every use of the technologies. She recommended thinking in terms of categories (uses, users, article types, etc.) and being clear about expectations. She also mentioned that in certain scenarios, there are good reasons to forbid the use of AI tools. Finally, she reminded the audience of the importance of properly disseminating the policies, and of planning how to respond if a violation of the policies is suspected.

Webb, M. Jisc National Centre for AI. September 18th, 2023.

In this article, the author reiterates that no AI detection software can conclusively identify AI-produced text, and that all AI detectors give false positives. For these reasons, Jisc doesn’t recommend any AI detection tools – Turnitin or others. They have seen none that are reliable enough, and they all include bias (such as against non-native English speakers). “Institutions therefore shouldn’t rely on AI detection, but instead, if they do opt to use it, make it part of a discussion with students if academic misconduct is suspected, and understand that the results might not be accurate.” For institutions that do choose to use AI detection tools, Turnitin appears to be the ‘less bad’ option.

Carter, T. Business Insider. September 22nd, 2023. 

Many higher education institutions, like Vanderbilt University, Northwestern University and the University of Texas, have stopped using AI detection tools, or discourage their use by instructors, due to accuracy concerns that could lead to false positives, where students could be falsely accused of cheating. ChatGPT’s creator, OpenAI, itself warns that AI detectors are not reliable, and confirmed that many detectors have a tendency to incorrectly identify work written by non-native English authors as AI-generated.

Fédération étudiante collégiale du Québec. 2023.

In this briefing note, the Fédération étudiante collégiale du Québec (FECQ) shares its recommendations regarding the use of predictive AI to estimate students’ potential rates of success or failure in certain courses, and to identify students who are at risk of dropping out. The FECQ is concerned that such tools might start to be used in admissions processes, which could perpetuate discrimination and, in turn, increase inequalities in accessing education. The FECQ also cautions against the Pygmalion effect: predictions that students are likely to fail might lead instructors to treat them accordingly, which might in turn lead the students to believe they will fail and, as a result, increase their likelihood of failing. For these reasons, the FECQ states that instructors shouldn’t have access to these predictions, and that learning support staff should receive training to ensure appropriate use of this information. Regarding AI chatbots, the FECQ recommends ensuring chatbots are available in multiple languages, notably to better meet the needs of Indigenous students and international students. Finally, the FECQ cautions against misuse of personal data, and recommends that the Quebec provincial government establish clear regulation regarding the ethical use of personal data by AI systems.

Adams, C. Wonkhe. September 26th, 2023. 

The author of this article argues that the demand for AI literacy among the workforce will only keep on increasing, and that “students should be empowered to harness AI through understanding its limitations and how best to leverage it for their future success”. The author also argues that AI can “enable more equitable and more human centred careers provision in universities”: “Imagine a ‘career co-pilot’ with unlimited capacity to comprehend your unique skills, interests, and goals, possesses encyclopedic knowledge of all occupations and can help chart customized suggested career paths, by analyzing millions of student trajectories. Acting as a personal assistant, this co-pilot can prompt students about relevant on-campus events that may be useful and connect them with careers professionals, alumni, peers or employers in their community”.

Hodson, J. The Conversation. October 3rd, 2023. 

This article presents three different skills that instructors should be teaching students so that they become more resistant to AI-based misinformation: 1) lateral reading of texts; 2) research literacy; and 3) technological literacy.
