HESA’s AI Observatory: What’s new in higher education (May 5th, 2024)

Spotlight

Good morning all, 

In today’s newsletter, you will find a couple of resources focusing on the integration of GenAI in graduate studies and research. We also share some articles that cover the use of GenAI in operations, as well as considerations for integrating GenAI into post-secondary institutions’ governance. 

We hope these are useful reads!

Policies & Guidelines Developed by Higher Education Institutions

Tags: Guidelines, Research, Academic integrity, Canada

University of Alberta’s Faculty of Graduate & Postdoctoral Studies’ Guidelines on Responsible and Ethical Use of Generative AI in Graduate Thesis, Research, and Writing state the following: “Graduate students who intend to use generative AI tools in their thesis research and writing must seek prior permission and approval from their supervisors and supervisory committee members. They must transparently disclose the use of generative AI tools and technologies in the preface to their thesis and in any publications resulting from their research to ensure academic integrity.” The guidelines then answer a series of questions on topics such as using GenAI tools to write and edit candidacy proposals, qualifying exams, comprehensive exams, theses, and scholarly publications; to research and write graduate course papers, project reports, and assignments; in research activities such as literature reviews, data analysis, or simulation; to prepare manuscripts for submission to scholarly publications; and in award and scholarship applications.


News & Research

Coffey, L. Inside Higher Ed. May 1st, 2024

The Association of Research Libraries recently released its Research Libraries Guiding Principles for Artificial Intelligence to help librarians deal with the increase in inquiries they receive related to GenAI. These seven principles are: 1) Foster digital literacy; 2) Understand and raise awareness of AI bias; 3) Advocate for openness and transparency; 4) Understand there’s no AI without humans; 5) Security and privacy are key; 6) Continuation of copyright law enforcement; and 7) Equity in digital information.

Western Canadian Deans of Graduate Studies. November 24th, 2023

The Western Canadian Deans of Graduate Studies released a set of recommendations for the use of GenAI in graduate and postdoctoral research and supervision. These recommendations are divided between: 1) university-wide responsibilities and considerations; 2) academic writing: theses, dissertations, course assignments, candidacy and research papers; 3) research (idea generation, literature reviews, data analysis); 4) citations and references; 5) graduate and postdoc application proposals; and 6) scholarship and award application proposals. This report also suggests ways in which graduate supervisors and students can develop AI literacy.

Coffey, L. Inside Higher Ed. May 2nd, 2024

A Yale University freshman, Nicolas Gertler, created an AI chatbot based on his professor Luciano Floridi’s research into AI ethics. The chatbot, called LuFlot Bot, focuses on the ethics, philosophies, and uses of AI. Floridi is the founding director of Yale’s Digital Ethics Center and has written dozens of research papers and books on the ethics of AI. You can ask the chatbot questions such as “Is AI environmentally harmful?” and “What are the regulations on AI?”. Try it out for yourself here.

University of Limerick

This initiative from the University of Limerick, in Ireland, gives students a space to share innovative ways in which GenAI tools can enhance their educational experience, drawing on a range of personal and disciplinary perspectives. The podcast touches on themes like neurodiversity, Universal Design for Learning, authentic assessment, and day-to-day student pressures. Episodes will be released every second Thursday.

Bowen, J. A. and Watson, C. E. Inside Higher Ed. April 23rd, 2024 

The authors of this article, who also wrote Teaching with AI: A Practical Guide to a New Era of Human Learning, believe that AI might “be the technology that gives faculty more time and more assistance for the most important educational and relational tasks”. They argue that AI can relieve faculty of tedious tasks at scale and can improve relationships. The authors provide a series of prompts for instructors to use in course design, content and pedagogy, grading, assessment and accreditation, research, grant writing, job searches, student support, and administration.

Wang, L. and Wang, T. University World News. April 24th, 2024

This article presents practical ways to start integrating AI into academic research, such as conducting literature reviews, uncovering research gaps, analyzing and visualizing data, and enhancing academic writing. The article then dives into the challenges of using AI in research. Finally, it argues that post-secondary institutions have a responsibility to adequately prepare researchers for the AI era. “In addition to raising awareness, higher education institutions need to encourage researchers to adopt new ways of thinking; AI should be seen not just as a tool but as a collaborative partner in research endeavours.”

Coffey, L. Inside Higher Ed. April 29th, 2024

The School of the Art Institute of Chicago recently developed a machine learning model that helps gauge which students are most likely to accept its admission offers. “Stacks of data from applicants with offers to attend SAIC are entered into the model, which parses more than 100 factors including the number of SAIC events the applicants attended, the types of programs they are interested in, and where they went to high school. It then spits back two outcomes: the likelihood a student would accept the admissions offer – say, 50 percent chance they would say ‘yes’ to accepting an admissions offer – and then a further ‘yes’ or ‘no’ if a student would actually end up attending the university”. Representatives from the institution affirmed that the technology is not being used to dictate which students should and should not be accepted.

Clark, P. Wonkhe. April 23rd, 2024

In this article, the author makes the case for the importance of post-secondary institutions focusing on data stewardship when thinking about using AI tools. “University leaders who truly want to leverage the benefits of data and AI should focus their efforts on good data governance. This starts with the stewards.”

Fox, M. CNBC. April 30th, 2024 

Online education companies Chegg and Coursera appear to have been hit hard by the rapid rise of artificial intelligence. Chegg’s stock dropped by 20% last Tuesday, bringing its year-to-date decline to almost 50%, while Coursera’s stock dropped by 11% the same day, bringing its overall decline for 2024 to 45%.

Sawahel, W. University World News. April 25th, 2024

At the beginning of the next academic year, Tunisia’s first public AI institute will start its work at the University of Tunis. “The new institution will have sufficient resources to contribute positively to the dissemination of AI in society … and to address the challenges and opportunities associated with the use of AI, in particular in contexts such as those in the Global South where we need to build inclusive systems and this needs engineers and other IT professionals, but also domestic champions aware of local contexts and needs related to the building and deployment of AI systems”.

Bailey, J. AEI. April 30th, 2024

This article shares highlights from the 2024 Artificial Intelligence Index report, recently released by the Stanford Institute for Human-Centered Artificial Intelligence. These insights are: 1) AI beats humans on some tasks, but not on all; 2) the US is leading in AI model development; 3) training costs are increasing exponentially; 4) AI increases productivity, speed, and quality across various sectors; 5) AI is introducing major breakthroughs in science and medicine; and 6) there has been a steady increase in US regulations.

Cucchi, M. University Affairs. April 26th, 2024

This article focuses on Yoshua Bengio, the founder and scientific director of Mila and professor at Université de Montréal, who, last year, became an AI whistleblower by sounding the alarm about AI’s “many potential risks: national security, deepfakes, disinformation, fraud, surveillance, societal destabilization, systemic discrimination, loss of control of AI systems, and more”. Along with other colleagues, he called for an AI moratorium in 2023. “For AI, survival means either controlling humans or getting rid of them. […] If this entity wants to maximize rewards, its best course of action is to take control of its environment. And that includes us.”

More Information

Want more? Consult HESA’s Observatory on AI Policies in Canadian Post-Secondary Education.

Was this email forwarded to you by a colleague? Make sure to subscribe to our AI-focused newsletter so you don’t miss the next one.
