HESA’s AI Observatory: What’s new in higher education (May 24th, 2024)

Spotlight

Good afternoon all, 

Today’s newsletter includes various articles about the impacts of AI on research and science. 

Also, here’s your last chance to register for our AI Roundtable on decolonization, which will take place next Tuesday. See details below.

Next Roundtable Meeting

Date: Tuesday, May 28th, 2024
Time: 12:00 PM to 1:00 PM ET

Join us next Tuesday, May 28th, from 12:00 PM to 1:00 PM ET, for our next AI Roundtable, which will focus on decolonization. In this session, we will have the pleasure of hearing from three guest speakers: Dr. Gurnam Singh, Honorary Associate Professor of Sociology at the University of Warwick, will discuss his chapter “Can AI Be Anti-Racist?” from Chris Rowell’s book “AI Conversations: Critical discussions about AI, art, and education”; Dr. Oscar Mwaanga, Programme Director of the PGCE Certificate in International Sport Management and Fellow at the Centre for Online and Distance Education, will discuss decolonizing the curriculum; and Dr. Eric Atwell, Professor of Artificial Intelligence for Language at Leeds University and LITE Fellow, will discuss AI for decolonizing reading lists. The audience will be able to ask questions following each 10-minute presentation, and the session will conclude with an open discussion. The session will be facilitated by Sandrine Desforges, Research Associate at Higher Education Strategy Associates and lead of HESA’s AI Observatory. Register now (it’s free!) to save your spot!

If you missed last month’s AI Roundtable on Pedagogy and Curriculum, you can watch the recording here.

Policies & Guidelines Developed by Higher Education Institutions

Tags: Canada, Guidelines, Academic integrity, Operations, Inclusion, Governance

The University of British Columbia (Canada) published its Principles for the Use of Generative AI Tools earlier this month. The principles span four main themes: Generative AI Usage; Content Ownership; Security and Risk Management; and Social and Environmental Impact. The full list of principles includes: Understanding and Remaining Current with Generative AI; Appropriate and Responsible Use; Accountability for Results; Ownership of Original, Generated Content; Copyright Infringement Risk Mitigation; Plagiarism Risk Mitigation; Confidentiality of UBC Data; Privacy Risk Mitigation; Information Security Risk Mitigation; Mitigating the Risk of Reproducing Cultural Bias and Systemic Inequities; Mitigating the Risk of Incorrectness; and Mitigating Ecological Impacts.

News & Research

Blumenstyk, G. The Chronicle of Higher Education. May 22nd, 2024

In this edition of The Edge, Goldie Blumenstyk highlights five ways colleges are building their AI expertise. Metropolitan State University of Denver runs monthly workshops on AI Empowerment in Higher Education and is building an AI for All website. Randolph College’s Writing Board has been soliciting faculty members’ opinions on and strategies for GenAI. Hudson County Community College created a GenAI Professional Learning Community: “Over the past year, the community has organized a series of individual and group exercises using the tools to develop lesson plans, prepare meeting agendas, write grant proposals, and the like.” Marshall University created a Presidential Task Force on AI, which has developed template language for syllabi and practical guidance for teaching with AI. Finally, at Camden County College, a librarian has developed a guide to topics related to AI and higher ed.

Wang, A. University World News. May 16th, 2024

“Following recent Chinese government measures to restrict the use of generative AI in scientific research, a growing number of Chinese universities have released their own updated measures to curb AI-assisted academic writing. University World News has identified at least five universities that have issued their first notices regarding AI-generated content (AIGC) in graduation theses over the past month. They include the Beijing University of Technology, Southeast University, Tianjin University of Science and Technology, Fuzhou University and Hubei University.” Some of these institutions are beginning to use detection systems to measure the proportion of AI-generated content in theses, and are specifying actions to be taken if a significant proportion of the text is identified as AI-written.

Koplin, J. University World News. May 18th, 2024

AI is increasingly being used for academic writing. However, “many people are worried by the use of AI in academic papers. Indeed, the practice has been described as ‘contaminating’ scholarly literature.” “Unlike (most) humans, AI systems are fundamentally unconcerned with the truth of what they say. If used carelessly, their hallucinations could corrupt the scholarly record.” Still, the author believes that outright banning the use of GenAI is not the solution: first, because current AI detection tools have proven unreliable and can be circumvented, and second, because “banning generative AI outright prevents us from realizing these technologies’ benefits”. He believes that “the problem is poor quality control, not AI”. “The most serious problem with AI is the risk of introducing unnoticed errors, leading to sloppy scholarship. Instead of banning AI, we should try to ensure that mistaken, implausible or biased claims cannot make it onto the academic record.”

Grove, J. Times Higher Education. May 23rd, 2024

New research by Oxford University Press, “which surveyed more than 2,300 researchers, found that 76 per cent use some form of AI tool in their research, with machine translations and chatbots cited as the most popular tools, followed by AI-powered search engines or research tools. AI is most used for discovering, editing and summarizing existing research, the report found. However, only 8 per cent of researchers trust that AI companies will not use their research data without permission while just 6 per cent believe AI companies will meet data privacy and security needs”. “Almost half (46 per cent) of researchers [said] that the institution they work at has no AI policy”.

Erduran, S. University World News. May 16th, 2024

The author of this article reflects on the importance of incorporating AI into science education in universities. “The dizzying pace of developments of AI use in science is raising questions about how science education in universities is incorporating these recent developments. Some important questions are emerging for science education in universities: Is science changing because of AI use? How can science teaching and learning in universities reflect the recent developments in science?” “To prepare future professionals whose jobs will rely on science and AI, the science curriculum in universities will need to incorporate the various dimensions of science”. “At a time when AI use in science as well as in higher education is exploding at an exponential rate, what gets reflected in university science education is likely to be sporadic and fragmented. Some students may have experiences in independent research studies in the tech industry or with their advisors who are already using AI tools in their research. Others’ experiences may not be as extensive, being restricted to some lectures.”

Van Belle, J.-P. University World News. May 10th, 2024

“This article aims to explain what it means for an LLM to be ‘open-source’; why and which LLMs are being made open; the benefits and issues of open LLMs; some recent trends; and finally what opportunities open-source LLMs present to researchers in the developing world.”

Schroeder, R. Inside Higher Ed. May 22nd, 2024

In this article, the author provides examples of how AI can facilitate the work of instructional designers, researchers, administrators and other nonteaching professionals in colleges and universities. For example, the author explains how Small Language Models, which are trained on domain-specific datasets, can help preserve privacy by “maintaining all prompts, data, processing and results in an environment that is not connected to the cloud. In this secure environment, administrators can ask apps to perform personnel, budget, and organizational comparisons, analyses and recommendations without exposing the data to possible discovery by others.”

Chen, J. Times Higher Education. April 18th, 2024

Julia Chen, Director of Educational Development at the Hong Kong Polytechnic University, poses three fundamental questions for higher education in this GenAI era: 1) What is learning?; 2) What is the value of formal institutional education?; and 3) How are HEIs addressing pressure points and roadblocks to stay relevant? “As we venture into uncharted territory, those who dare to take bold steps will have the opportunity to reform, redefine and elevate higher education”.

More Information

Want more? Consult HESA’s Observatory on AI Policies in Canadian Post-Secondary Education.

Was this email forwarded to you by a colleague? Make sure to subscribe to our AI-focused newsletter so you don’t miss future editions.
