HESA’s AI Observatory: What’s new in higher education (Feb. 9th, 2024)

Spotlight

Good afternoon all, 

 

In today’s newsletter, we share multiple resources and articles focusing on GenAI in research. If this topic is of interest to you, make sure to attend our upcoming AI Roundtable on research integrity (see below for more details). 

 

Also, ICYMI – HESA recently launched its AI Advisory Services. If your institution is struggling to develop its response to this new technology, or is simply in search of an extra hand, we might be able to help. Reach out to our team to discuss ways in which we could support your efforts.

Next Roundtable Meeting

Date: Tuesday, February 20th, 2024
Time: 12h00-1h00PM ET

Join us on Tuesday, February 20th, from 12:00PM to 1:00PM ET, for our next AI Roundtable, which will focus on the opportunities, risks and considerations surrounding the use of GenAI tools in research, through a research integrity lens. We will have the pleasure of welcoming Dr. Sarah Elaine Eaton, Associate Professor at the Werklund School of Education at the University of Calgary, Editor-in-Chief of the International Journal for Educational Integrity and Council member of the Committee on Publication Ethics (COPE), who will share her preliminary reflections on the issue before we open up the discussion to participants. Participants will be asked to fill out a short survey upon registration to help tailor the discussion. The session will be facilitated by Amanda McKenzie, Director of the Office of Academic Integrity at the University of Waterloo. Register now (it’s free!) to save your spot!

If you missed last month’s AI Roundtable, you can watch the recording here.

Are you a faculty member or an instructional designer who has been using GenAI tools in teaching and learning in innovative and practical ways? We invite you to present during the “community poster” session at our Pedagogy and Curriculum roundtable in April (date TBD). Mention your interest here.

Policies & Guidelines Developed by Higher Education Institutions

Tags: Guidelines, Research, North America

Cornell University’s (United States) Generative AI in Academic Research: Perspectives and Cultural Norms provides a framework for using GenAI in research, and elaborates on the use of GenAI across the various research stages (including but not limited to literature review, data collection, dissemination, and research funding). It also offers a summary of existing community publication policies regarding the use of GenAI in research from funders, journals, professional societies, and peers.

Advertisement

News & Research

Parrilla, J. M. Nature. October 13th, 2023 

In this article, the author criticizes what he calls cumbersome and flawed grant-application and review processes. He notes that ChatGPT did a better job than he did at writing sections of some grant applications, cutting the workload from three days to three hours. A survey conducted by Nature in 2023 also showed that 15% of researchers use the technology to help them write grant proposals. The author then asks: “What is the point of asking scientists to write documents that can be easily created with AI? What value are we adding?” He ends by calling on funding bodies to rethink their application processes.

Australian Government’s Tertiary Education Quality and Standards Agency. February 2nd, 2024

Australia’s Tertiary Education Quality and Standards Agency (TEQSA) updated its guidance note on academic and research integrity to account for the rise of artificial intelligence. Section 5.3, Monitoring, Review and Improvement, now states the following: “comprehensive reviews of courses take place to ensure learning outcomes and teaching methods consider emerging trends and developments in the field of education and associated risks, such as developments in artificial intelligence” and “policies and procedures can adapt to emerging trends that impact on the delivery of education, such as artificial intelligence”. It also states that TEQSA expects each provider to demonstrate that it “ensures its training, policies and procedures evolve to respond to developments in technology, such as the rising prevalence of artificial intelligence”. To foster an environment that protects academic integrity, it recommends embedding academic integrity and artificial intelligence literacy in the curriculum.

Sharma, Y. University World News. February 3rd, 2024  

In this article, the author discusses how the emergence of GenAI is leading researchers to explore new areas and new ways of working. As quoted in the article: “The emergence of ChatGPT and other generative AI tools means that there are new problems to identify and resolve. This stimulates AI research.” “Generative AI needs us to define new process of how we do research.” “With generative AI, a careful evaluation process in research is much more important because the results from generative AI are not always reproducible, so how can we convince ourselves that we are getting true research results rather than ‘hallucinations’?”

McMaster University

A research team at McMaster University is currently conducting a study on GenAI and student assessment in higher education. Their survey is open to higher education instructors across Canada and they will continue taking submissions until the end of February. As part of this project, they are collecting sample assessments from instructors across Canada for analysis, and some of these have been made available online as open educational resources. Sample assessments cover disciplines such as anthropology, history, literature, and marketing.

EAB

This insight paper gives actionable advice to enrollment leaders on four key priorities: 1) AI-enabling staff; 2) Nailing down policies on applicant use of AI; 3) Activating AI themes in recruitment outreach; and 4) Preparing to promptly adopt future AI innovations. The report also includes results from a national survey of enrollment leaders that explores their attitudes toward AI, the degree to which they have already incorporated new AI tools into their work, their future plans with regard to the adoption of AI tools, and particular uses they are making of AI tools.

MacGregor, K. University World News. February 2nd, 2024

The Higher Education Policy Institute (HEPI), Freeman and Kortext just released the policy note “Provide or Punish? Students’ views on generative AI in higher education”, based on a national survey of 1,250 UK undergraduate students. Amongst other findings, the report highlights a growing digital divide between students: “For every student who uses generative AI every day, there is another who has never opened ChatGPT or Google Bard, which gives some students a huge advantage”. Male and Asian students, as well as those from more privileged backgrounds, are reportedly more likely to use AI than their peers. “The divide will only grow larger as generative AI tools become more powerful. Rather than merely adopting a punitive approach, institutions should educate students in the effective use of generative AI – and be prepared to provide AI tools where they can aid learning”. “There is also a digital divide between institutions, with some embracing and others sidelining AI.” The policy note recommends that universities: 1) develop clear policies on the acceptable use of generative AI in learning and assessment, and communicate these policies to students from the beginning of their course; 2) teach all students how to use beneficial AI tools appropriately and effectively; and 3) equalize access to GenAI tools to help learning.

Stansbury, J., Kelly Jr., D., Wynne, K. and Zahadat, N. Ithaka S+R. February 6th, 2024

This blog post discusses the findings from a survey of faculty, staff and students conducted at the University of Baltimore in the Fall of 2023. Findings notably reveal a discrepancy in familiarity with AI tools between faculty and students, with faculty demonstrating higher familiarity with more advanced tools compared to students. However, students displayed more openness to leveraging AI tools to enhance their learning experiences compared to faculty, who were more cautious. Students perceived that AI tools could boost their competency and autonomy. However, both faculty and students expressed concern that overreliance on AI could have a negative impact on critical thinking skills. 

Nielsen, S. Fox 10 Phoenix. February 6th, 2024 

This article talks about how an English professor at Arizona State University, Dr. Kyle Jensen, has integrated GenAI in his classroom. “It’s not really a choice as to whether or not it will be adopted, but the more you know about it, the more effectively and ethically you can adopt it just in your day-to-day life, or as a professional”, says a student. 

More Information

Want more? Consult HESA’s Observatory on AI Policies in Canadian Post-Secondary Education.

Was this email forwarded to you by a colleague? Make sure to subscribe to our AI-focused newsletter so you don’t miss the next one.
