HESA’s AI Observatory: What’s new in higher education (Oct. 20th, 2023)

Spotlight

Good afternoon,

Next Tuesday, we’ll have the pleasure of speaking with five students from across the country to hear their perspectives on GenAI in higher education:

  • Beatriz Moya, PhD candidate, Werklund School of Education, University of Calgary
  • Justin Saint, Chair of Computing and Academic Studies, BCIT Student Association
  • Ken Hilton, Vice-President Academic, University of Toronto Engineering Society
  • Laurence Mallette-Léonard, President, Fédération étudiante collégiale du Québec
  • Tala Abu Hayyaneh, Vice-President Academic, Students’ Association of Mount Royal University

Join us on October 24th, from 12:00 PM to 1:00 PM ET, by registering here.

If you missed our last AI Roundtable on Pedagogy and Curriculum, you can watch the recording here.

If you haven’t yet answered our short anonymous survey on our AI Observatory, we’d appreciate it if you could take a couple of minutes to do so. We’d love to hear from you to gauge whether it’s been helpful and how we could better support your institution moving forward.

Policies & Guidelines Developed by Higher Education Institutions

Tags: Guidelines, Academic integrity, Operations, Canada

Western University’s (Canada) Generative AI Guidance states that the institution trusts its community to innovate and experiment with GenAI responsibly and ethically, and that instructors have autonomy over how AI is integrated into their courses. Under “How should I use AI?”, the guidance recommends experimenting and sharing: “the more time you spend using this technology, and experimenting with it, the greater the payoff to your productivity”. The principles for using AI at Western are transparency, accountability, integrity, privacy, and inclusion. Western also provides role-specific guidance for instructors, students, researchers, and employees.

Tags: Guidelines, Policy, Academic integrity, Pedagogy, Governance, Western Europe

King’s College London’s (UK) Guidance on generative AI for teaching, assessment and feedback “aims to support the adoption and integration of GenAI at different institutional levels – macro (university), meso (department, programs, modules), and micro (individual lecturers, especially those with assessment roles). At the macro level, the guidance covers fundamental areas including AI terminology, sector and horizon scanning, values, governance & policies, academic integrity, equity of access, evaluation mechanisms, and training responsibilities. This is tailored to set the broad framework within which AI can be implemented with due diligence to ethical considerations and sustainability.” The meso-level guidance covers impacts on assessments and how to (re)design them, recommendations for heads of departments and program leads, a definition of appropriate uses of GenAI, and how to address inappropriate use. The micro-level guidance includes a list of Dos and Don’ts for instructors with assessment roles. King’s also developed student guidance, as well as a two-week course on GenAI in higher education.

Tags: Statement, Guidelines, Academic integrity, Pedagogy, Research, North America

Brown University’s (US) Office of the Provost released a statement on the potential impact of AI on the university’s academic mission. The statement refers to guidance developed by the Sheridan Center for Teaching and Learning on how to integrate GenAI into the classroom, and notes that any unapproved use of AI to complete assignments is covered by existing academic codes. Guidelines have also been developed on how to cite AI-generated content, on the protection of information, and on using GenAI as a research tool. Finally, Brown developed guidance for researchers that addresses the intersection of GenAI and intellectual property.

News & Research

Coffey, L. Inside Higher Ed. October 19th, 2023. 

Mark Daley, newly appointed Chief AI Officer at Western University, answers questions about his role. Daley shares that his first priority will be consulting with the community, including students, staff, and faculty, to get a better sense of their aspirations as well as their concerns and fears. He mentions that many universities have reached out to him saying they are considering creating similar roles. Any institution that wants to implement such a role, he says, first needs to identify its own objectives, as there is no one-size-fits-all solution to AI leadership: “The most important piece of advice – is what does the community want? What are the aspirations, what are the concerns and how do we address those?”

Eaton, L. and Waddell, S. Educause. October 3rd, 2023. 

In this article, the authors share ten suggestions for leaders who wish to dive into the GenAI discussion in higher education: 1) offer short primers on GenAI; 2) explain how to get started; 3) suggest best practices for engaging with GenAI; 4) give recommendations for different groups; 5) recommend tools; 6) explain the closed vs. open-source divide; 7) avoid pitfalls; 8) conduct workshops and events; 9) spot the fake; and 10) provide proper guidance on the limitations of AI detectors.

Conroy, G. Nature. October 10th, 2023. 

Many researchers expect that GenAI tools will become regular assistants for writing manuscripts, peer-review reports, and grant applications. Researchers surveyed by Nature also say these tools could improve equity in science by helping researchers whose first language is not English. Some predict that GenAI could fundamentally transform the nature of the scientific paper: “It’s never really the goal of anybody to write papers – it’s to do science”. However, publishers worry that wider use of GenAI tools might lead to greater numbers of poor-quality or error-strewn manuscripts and compromise research integrity. There is also concern that using GenAI for peer review could undermine confidentiality.

Ascione, L. eCampus News. October 13th, 2023.

A survey of 399 education professionals conducted by Intelligent.com found that half of admissions departments already use AI, mostly for efficiency reasons. AI is most commonly used to review transcripts and letters of recommendation and to communicate with applicants. Many respondents indicated that, at their institution, AI makes the final decision about whether to admit an applicant. Some institutions hope that using AI in admissions will help reduce bias; others fear it will replicate bias by merely admitting students like the ones they’ve been admitting for decades. Two-thirds of respondents were concerned about the ethical implications of AI.

Kramm, N. and McKenna, S. Taylor & Francis. October 11th, 2023.

In this paper, the authors argue that trying to identify AI usage in students’ work is a waste of time and neglects instructors’ educational responsibilities. “A police-catch-punish approach to AI, as with the use of this process in relation to plagiarism, ignores the broader purposes of higher education. If higher education is understood as being a space for nurturing transformative relationships with knowledge, AI can be harnessed to enhance learning experiences. Such an approach would also enable a critical understanding of the limitations and ethical deliberations around AI usage”. They highlight that the question should shift from “How will we catch students using AI so that we can punish them?” to “How does this assessment build students’ relationship to knowledge?” and “How might AI enable or constrain this?”.

More Information

Want more? Consult HESA’s Observatory on AI Policies in Canadian Post-Secondary Education.

Was this email forwarded to you by a colleague? Make sure to subscribe to our AI-focused newsletter so you don’t miss the next one.
