HESA’s AI Observatory: What’s new in higher education (October 4, 2024)

Spotlight

Good afternoon all, 

About a month ago, the U15 released its guidance on the use of AI in academic teaching and learning (finally, dare I say): Navigating AI in Teaching and Learning: Values, Principles and Leading Practices. While nothing in there is truly groundbreaking – which might make one (me, for starters) wonder why it took them so long – two elements caught my eye.

1) The U15 emphasizes the importance of Building trust, saying “As we learn and gain experience through responsible use, we can and must build trust across campus communities, Indigenous and other partner organizations.”

Yes, yes, and yes. If there is one thing I’ve noticed as we’ve worked on various AI projects with multiple institutions over the past 15 months, it’s that the people most involved in developing institutional responses to AI are almost always on the ‘AI enthusiast’ side of the spectrum. AI skeptics, for their part, are rarely around the table for those conversations. Whether that’s because they don’t want to be involved or because institutions don’t tend to bring them in, I leave to the reader – and I’d argue that when putting together an advisory committee or working group with limited spots, it’s perfectly understandable to invite those who have previously shown interest in, or demonstrated experience with, the topic.

But where does that leave the rest? 

A recent report from the College Innovation Network (shared in Inside Higher Ed) shines light on a key problem that might explain challenges with trust building: “87 percent of faculty members said their administrative team makes decisions on ed-tech implementation and usage. Fewer than 20 percent reported that their institutions sought their feedback on ed tech once a year or more frequently, and about the same percentage said their institutions involve students in the process.” The takeaway: “Faculty don’t feel they’re involved in the decision-making process… They don’t think their input is valued.”

Institutions will certainly not be able to build trust with AI detractors if they don’t engage them in the discussions. Of course, not everyone can be around the table for policy building – but that’s why it’s crucial that institutions consult and engage with the broader community. A year ago, we were already urging postsecondary institutions to engage with faculty, staff and students. This holds true a year later – and I would even say that if your institution has not already done so, it might take even more effort to pick up the slack.

Another standout in the U15’s recommendation on building trust is the specific mention of Indigenous communities. Most institutions I’ve interacted with over the past year on the topic of AI are not really aware of the particular concerns that Indigenous communities might have about the integration of AI in postsecondary education. I am myself not well placed to recommend what should or shouldn’t be done on that front, other than, of course, paying particular attention to engaging with your Indigenous students, staff, and faculty on the matter.

As we continue collectively reflecting on Orange Shirt Day and the National Day for Truth and Reconciliation, I wanted to share resources related to AI that have been put together by Indigenous scholars around the world. You will find them under Additional Resources below.

2) The second thing that stood out in the U15 document was their guiding principle Educated: “Users of generative AI have an obligation to learn about the strengths, limitations and responsible use of the technology, both to maximize the effectiveness of their use but also to identify limitations or issues with the output. Institutions should ensure that support to fulfil this obligation is provided in a manner that is flexible and appropriate for different academic contexts.” 

The most important part of that quote, for me, is the last sentence. We see the divide between institutions that invest resources in building AI literacy and enabling faculty, staff and students to experiment with AI, and those that don’t, grow wider and wider by the day.

However, we all know that one of the best ways to learn is by doing. Ethan Mollick, on his famous One Useful Thing blog, shared a few months back the importance of playing with AI:

“Playing with AI is ultimately serious – it is a good way to get to see what the AI can and can’t do, where it is “imaginative” and where it is cliched… ‘Always invite AI to the table’ is the principle in my book that people tell me had the biggest impact on them. You won’t know what AI can (and can’t) do for you until you try to use it for everything you do.”

Postsecondary institutions need to do a better job of encouraging all faculty and staff, and not only those who are already curious and excited about the powers of AI, to play around with these tools. Because the truth is that even if some faculty and staff would prefer not to touch AI with a ten-foot pole, most students certainly will. And this also holds true for senior leadership! How can institutions and instructors know how to better regulate the use of AI within their walls if they do not know what AI is capable of doing?

We’ll soon be heading into a long weekend, so my recommendation to you is to carve out some time to play with AI. No need to try out the fanciest tools on your first go – baby steps. To start, consider doing a bit of role-playing. Put yourself in the shoes of a student, an instructor, or even your communications and marketing person – and think about what you would ask ChatGPT, Copilot, or other programs if you were them and were trying to save some time while completing a basic task.

I’ve heard it time and time again over the past year that many (if not most) instructors feel overwhelmed by the rapid pace of the GenAI (r)evolution and simply do not know where to start. Being already overworked, they also do not have time to spare to try to learn about the tools by themselves. Postsecondary institutions need to dedicate resources – funds, GenAI licenses, workload adjustments, etc. – to support AI upskilling. 

But where do institutions themselves find the funds to do so, in an era of cuts to the sector’s funding? Well, that’s where we hope to see governments step up their game. 

More to come in future AI blogs. See you in two weeks!

Have a great weekend, all.

– Sandrine Desforges, Research Associate
sdesforges@higheredstrategy.com 

Mark Your Calendars

Date: March 6th-7th, 2025

AI-CADEMY: Canada Summit for Post-Secondary Education

Registration is now open for HESA’s AI-CADEMY, and early bird tickets are selling fast! Make sure to get yours now.

AI-CADEMY is Canada’s premier event dedicated to exploring the transformative potential of artificial intelligence in post-secondary education.

Don’t miss out on:

  • Thought-provoking keynote speakers;
  • Multiple panels – Presidential Panel, Canada’s Policy Response to AI, Global Perspectives on AI and Education, Student Panel and Industry Panel;
  • Presentations selected from our open Call for Proposals on topics at the intersection of AI and Teaching and Learning, Research, Academic Operations and Services, and more;
  • Tech Demos and AI Marketplace;
  • Interactive Workshops; and
  • Networking opportunities!

Interested in becoming a sponsor or exhibitor? Learn more here

AI-CADEMY is organized in partnership with Bow Valley College and SAIT. 

Additional Resources

Abundant Intelligences

Building culturally-grounded AI systems that support Indigenous sovereignty and ways of knowing

Indigenous data stewardship stands against extractivist AI
Gaertner, D. UBC News. June 18th, 2024.

Experts discuss major risks and opportunities that AI poses to Indigenous cultures, and encourage supporting sustainable AI-powered content generation practices that avoid perpetuating the extractivist logic characteristic of settler colonialism.

Indigenous Data Sovereignty: A Catalyst for Ethical AI in Business
Rana, V. International Association for Business and Society. August 22nd, 2024. 

This commentary delves into the concept of Indigenous data sovereignty as a powerful framework for resisting digital colonialism and promoting ethical AI development.

What we can learn from an Indigenous approach to AI
Kimberly Adams and Amanda Peacher. Marketplace Tech. December 19th, 2022. 

In this podcast episode, McGill University professor Noelani Arista explains how some Indigenous communities are thinking about their relation to technology and the data that feeds artificial intelligence.

New report and guidelines for Indigenous data sovereignty in artificial intelligence developments
UNESCO. December 11th, 2023. 

The report, Indigenous People-Centered Artificial Intelligence: Perspectives from Latin America and the Caribbean, urges the participatory inclusion of local and Indigenous communities and data practices that respect their autonomy, proposes public policies to integrate Indigenous peoples’ perspectives in all phases of AI development, and explores best practices, including five from Mexico.

Indigenous perspectives in AI
CIFAR.

Free, self-paced independent learning courses offered by CIFAR, co-authored and co-constructed by a team of Indigenous and non-Indigenous educators. 

More Information

Want more? Consult HESA’s Observatory on AI Policies in Canadian Post-Secondary Education.

Was this email forwarded to you by a colleague? Make sure to subscribe to our AI-focused newsletter so you don’t miss the next one.

