Spotlight
Good afternoon all,
You might have seen the latest federal government announcement on AI. Too little information is available at the moment to assess how it will impact the higher education sector. We will keep an eye out and keep you posted!
In the meantime, in this week’s newsletter, you will find an example of institutional guidelines for the operational use of GenAI, from McMaster University. We also share articles on how universities have started to develop their own GPTs, and a recent study that links the use of chatbots to increased loneliness among students. We hope these readings are useful!
Also, ICYMI – HESA recently launched its AI Advisory Services. If your institution is struggling to develop its response to this new technology, or is simply in search of an extra hand, we might be able to help. Reach out to our team to discuss ways in which we could support your efforts.
Wishing you all a great weekend ahead!
Next Roundtable Meeting
Date: Tuesday, April 23rd, 2024
Time: 12:00-1:00PM ET
Join us Tuesday, April 23rd, from 12:00PM to 1:00PM ET, for our next AI Roundtable, which will focus on Pedagogy and Curriculum. In this “community poster” session, we will welcome guest speakers from institutions that have been using AI in innovative ways to support teaching and learning. Each guest will give a short presentation on their practical use of AI tools, with audience questions after each presentation, and we will conclude the session with an open discussion. We will have the pleasure of welcoming the following guests:
Dr. Erin Aspenlieder, Special Advisor to the Provost on Generative AI, and Ben Lee Taylor, Program Coordinator, Academic Skills and Writing, at McMaster University, on Partnering with AI in Assessment;
Warren Apel, Director of Technology at The American School in Japan, on Transforming Feedback in the Classroom; and
Lucas Wright, Senior Educational Consultant at the Centre for Teaching, Learning, and Technology at the University of British Columbia, on Critical GenAI Literacy and Digital Skills Development for Faculty, Students, and Staff.
The session will be facilitated by Cheryl Kinzel, Dean of Technology and Innovation at Bow Valley College, and Grant Potter, Acting Director at the University of Northern British Columbia’s Centre for Teaching, Learning, and Technology. Register now (it’s free!) to save your spot!
If you missed last month’s AI Roundtable on Governance and Policy, you can watch the recording here.
Policies & Guidelines Developed by Higher Education Institutions
Tags: Guidelines, Operations, Canada
McMaster University recently released its Provisional Guidelines on the Use of Generative AI in Operational Excellence, making it one of the first institutions to release guidelines on the operational use of GenAI in higher education. The guidelines notably state: “Employees may be able to use generative AI in their work. The use of generative AI should involve a conversation between supervisors and employees and the completion of Appendix A: Employee Generative AI Considerations Checklist, regardless of how its use is initiated, by whom or when.” They add: “Supervisors should ensure employees understand the use of GenAI as required by their role within work hours; supervisors should ensure employee privacy training and information security training is current”. To support this process, McMaster provides an Employee Generative AI Considerations Checklist, a Supervisor Generative AI Conversation Guide, and Generative AI Tool Risk Assessment Processes.
News & Research
Coffey, L. Inside Higher Ed. March 21st, 2024
At the beginning of this academic year, the University of Michigan launched U-M GPT, “a homebuilt generative AI tool that now boasts between 14,000 to 16,000 daily users”. “U-M GPT is all free; we wanted to even the playing field.” Other institutions have adopted a similar approach, creating their own versions of ChatGPT for students and faculty to use; these include Harvard University, Washington University, the University of California, Irvine and UC San Diego. “The effort goes beyond jumping on the AI bandwagon – for the universities, it’s a way to overcome concerns about equity, privacy and intellectual property rights”.
Touro University. Inside Higher Ed.
Touro University has appointed its first Associate Provost for Artificial Intelligence, to lead the Touro Initiative on Artificial Intelligence. The newly appointed Associate Provost is Dr. Shlomo Engelson Argamon, a computer scientist and researcher who specializes in AI. “Touro believes that every student must be AI literate — that is, they must understand how to use AI effectively, to know when not to use AI, and to be appropriately critical of AI-produced content. The university is now developing a new course to teach students about fundamental AI tools and concepts and how to become lifelong learners who can adapt to the future evolution of AI.”
Waterloo News. April 10th, 2024
The Ontario Chamber of Commerce just announced the launch of its new AI Hub. “The AI Hub is designed to serve as a unique industry-academic partnership that will drive AI adoption among Ontario businesses while providing support for evidence-based policy making from the government.” The University of Waterloo will give small and medium-sized businesses access to WatSPEED’s AI upskilling programs, helping them build AI skills and better leverage AI’s productivity and efficiency gains. WatSPEED already offers several AI-related courses, such as “AI and Business Strategy” and “ChatGPT and the Large Language Model Revolution”.
Kakuchi, S. University World News. April 6th, 2024
This article focuses on Japan’s latest initiatives around AI development. In February, Japan’s Cabinet Office earmarked the equivalent of US$79 million for AI development in 2024 “to strengthen Japan’s AI research capabilities in universities”. The article also discusses the Hiroshima AI Process, launched at a G7 meeting last June, which aims to “foster an open and enabling environment where safe, secure, and trustworthy AI systems are designed, developed, deployed, and used to maximise the benefits of the technology while mitigating its risks, for the common good worldwide.” Multiple researchers are also leading projects that aim to use AI for the public good by ensuring AI tools align with human values and principles.
MacGregor, K. University World News. April 6th, 2024
This article covers Preparing National Research Ecosystems for AI: Strategies and progress in 2024, a study conducted by the International Science Council (ISC) and published at the beginning of the month. “Overall, the study seeks to gather knowledge and information about AI and research issues and current efforts; help countries to develop roadmaps for the uptake of AI in science systems; create regional and global networks of people involved in implementing AI for science; and help to shape a critical AI discussion among scientific and policy communities.” Among the findings: “The ISC study found that very little is known about how governments plan to accelerate the uptake of AI by research institutions, despite AI’s huge implications for national research and development systems.” The article adds: “Only some of the AI related issues that the study found were important for research are also drivers of country plans for the uptake of AI in science. Rather, current plans are guided by a country’s overall approach to AI and try to support national economic, governance, digital and other ambitions attached to AI more generally.”
Schroeder, R. Inside Higher Ed. April 10th, 2024
Many studies predict that AI will soon replace millions of jobs and automate many tasks. While many fear being replaced by AI, the technology also represents an opportunity for increased efficiency and productivity. What does that mean for higher education? The author of this article considers which post-secondary jobs might be first in line for automation through GenAI, including marketing and campus relations positions, admissions and enrollment, finance and administration, administrative assistants, student advising and tutoring, and even faculty for introductory classes. He concludes that these changes are imminent, and that institutions should therefore start preparing now. “Well-prepared institutions will have considered the challenges and opportunities by the end of this academic year. They will have conducted copious public forums and discussions at the department, college, division and institution-wide levels. A host of counseling services and opportunities for upskilling, reskilling and career-change support will be offered to faculty and staff.”
Williams, T. Times Higher Education. March 27th, 2024
A recent study links student usage of ChatGPT to loneliness and a reduced sense of belonging. For the study, Australian researchers surveyed 387 university students from across the globe. They found “evidence that while AI chatbots designed for information provision may be associated with student performance, when social support, psychological well-being, loneliness and sense of belonging are considered it has a net negative effect on achievement”. Universities are already increasingly adopting chatbots for a range of functions, such as admissions and student support. One of the study’s authors recommends that “universities should find ways to promote peer networks, social opportunities for students and other ways of building social connections as a way of insulating them from some of the more negative effects of AI use.”
Baule, S. Inside Higher Ed. April 11th, 2024
The author reflects on whether instructors should disclose their use of AI in their teaching. He asked his students this question as part of an end-of-course survey in a graduate class on using educational technology to transform instruction; only 25% responded that they felt instructors should disclose the use of GenAI in developing instructional materials, and some showed relative indifference towards instructors’ use of AI. The author also notes a potential double standard between instructors and students, since students are almost always required to disclose their use of AI. “It can create a potential double standard in educational settings and raises questions about fairness and the ethical use of AI. While instructors might use AI to enhance the learning experience, students are often cautioned against relying on AI for their academic work.” He concludes that disclosing which tools were used, and how, might also be instructional for students.
More Information
Want more? Consult HESA’s Observatory on AI Policies in Canadian Post-Secondary Education.
Was this email forwarded to you by a colleague? Make sure to subscribe to our AI-focused newsletter so you don’t miss future editions.