Spotlight
Good morning,
You might be wondering why you’re hearing from us in August. No, summer is not over yet… but we wanted to try something new.
As you may know, HESA has now hosted two Roundtable meetings on Artificial Intelligence (AI) policies in higher education. The success of these meetings (177 people joined us for the first, and that number climbed above 200 for the second) confirmed the need for pan-Canadian, inter-institutional collaboration in developing comprehensive institutional policies. This is why HESA is now launching its Observatory on AI Policies in Canadian Post-Secondary Education. The Observatory, hosted on HESA’s website, will act as a Canadian clearinghouse for post-secondary institutions’ AI policies and guidelines. To keep you posted on what’s new in the Observatory, we’re also reinstating our Friday blogs – with a special focus on AI policies in higher education. In these weekly AI-focused newsletters, we’ll keep you informed of the latest developments in AI policies and guidelines across the higher education sector, around the world.
If you would like your institution’s policy or guidelines to be showcased, please reach out! You can also submit it via our online form. We also welcome any comments or suggestions moving forward.
We hope these resources will be valuable to you. Until the end of September, these emails will be sent to our whole audience. To continue receiving them afterwards, you’ll need to opt in. If you want to keep receiving these emails every Friday, please click the button below.
Next Roundtable Meeting
Date: Monday, August 28th, 2023
Time: 12:00–1:30 PM ET
In addition to the Observatory, HESA continues to host AI Roundtable meetings, in collaboration with a working group of volunteers from post-secondary institutions across the country. These meetings focus on the four previously identified areas of interest related to AI in higher education: Governance and policy; Academic integrity; Pedagogy and curriculum; and Inclusion.
We are thrilled to invite you to our next AI Roundtable meeting, focused on Governance and policy, which will take place on August 28th, from 12:00PM to 1:30PM ET. This meeting will be facilitated by Simon Bates, Vice-Provost and Associate Vice-President, Teaching and Learning at the University of British Columbia. We’ll also welcome Leeann Waddington, Associate Vice-President, Teaching and Learning at Kwantlen Polytechnic University as a guest speaker. Register now for free to save your spot!
Until then, you can also catch up on what happened in our last AI Roundtable meeting, which focused on Academic integrity, here.
Policies & Guidelines Developed by Higher Education Institutions
Kwantlen Polytechnic University's guidelines
Tags: Policy, Guidelines, Academic integrity, Pedagogy, Governance, Canada
Kwantlen Polytechnic University (Canada) released its official position on the use of generative AI tools: “the decision to use or permit student use of generative AI tools in their course lies within the faculty member, provided the use falls within the parameters provided in these guidelines”. The guidelines are listed in Generative AI: An Overview for Teaching and Learning and include building digital literacy, and developing various skills such as coding, debating, math, writing and language, and synthesis and analysis.
UC Berkeley Law School's policy
Tags: Policy, Academic integrity, North America
The UC Berkeley School of Law (US) released its Policy on AI in April (see Reuters’ article), which “allows students to use AI technology to conduct research or correct grammar. But it may not be used on exams or to compose any submitted assignments. And it can’t be employed in any way that constitutes plagiarism, which Berkeley defines as repackaging the ideas of others.”
University of Helsinki's guidelines
Tags: Guidelines, Academic integrity, Pedagogy, Governance, Europe
The University of Helsinki (Finland) released its Guidelines for the use of AI in teaching, which encourage the use of AI as a support for teaching and writing. Teachers are responsible for informing students about the principles, benefits, and disadvantages of using generative AI tools. They can restrict its use in situations where it would not promote learning, in which case they must justify the restriction in writing. Any use must be reported in writing, indicating which model was used and in what way. If AI is used in a course where its use has been prohibited in advance, this will be treated as cheating. Individuals should never be required to use a language model that is not available for free.
Hong Kong Polytechnic University's guidelines
Tags: Guidelines, Academic integrity, Governance, East Asia
The Hong Kong Polytechnic University (Hong Kong) developed Guidelines for Students on the Use of Generative Artificial Intelligence, which stipulate that subject and assessment documents should mention if students may use generative AI tools. Students need to declare their use of these tools and how they have been used. AI-generated materials must be properly referenced. When the use of AI-generated materials is not permitted, submitting AI-generated materials as one’s own work constitutes an act of academic dishonesty, leading to disciplinary actions. The guidelines also give examples of how to use generative AI wisely, such as using it for brainstorming, checking factual accuracy of AI-generated content, and using it in conjunction with other sources to ensure the work is reliable and well-informed.
Monash University's policy and guidance
Tags: Guidelines, Academic integrity, Pedagogy, Oceania
Monash University (Australia) developed Policy and practice guidance around acceptable and responsible use of AI technologies, which specifies what staff need to do to align with the University’s position on generative AI. It includes: specifying the conditions for the use of generative AI in Moodle at the start of the teaching period (template statements are provided); explaining to students how they should acknowledge the use of generative AI in their assessments (notably, by including a declaration of use that explains what technologies, if any, have been used; includes explicit descriptions of how the information was generated; identifies the prompts used; and explains how the output was used in the work); and managing suspected breaches of academic integrity.
News & Research
Mowreader, Ashley. Inside Higher Ed. July 26th, 2023.
Pamela Bourjaily, Associate Professor of Business Communication at the University of Iowa, added ChatGPT to her syllabus and instructed students on how best to use the software for their own writing, following a formula for achieving better results from ChatGPT.
Chan, C.K.Y. Int J Educ Technol High Educ. 20, 38. 2023.
Drawing on data collected from 457 students and 180 teachers and staff across various disciplines in Hong Kong universities, the researcher proposes an “AI Ecological Education Policy Framework to address the multifaceted implications of AI integration in university teaching and learning”. The framework has three dimensions: pedagogical (using AI to improve teaching and learning outcomes), governance (issues related to privacy, security and accountability), and operational (infrastructure and training).
Foltynek, T. et al. International Journal for Educational Integrity. 19, 12. 2023.
The European Network for Academic Integrity (ENAI) presents recommendations to support institutions on the ethical use of AI tools. They recommend that institutional policies should 1) “define default rules on when and how the students, teachers, researchers and other educational stakeholders are allowed to use different kinds of AI tools”, and 2) “guide the users on how to correctly and transparently acknowledge the use of AI tools”.
Warner, J. Inside Higher Ed. July 19th, 2023.
Triggered by a piece published by Harvard student Maya Bodnick, the author of this article shares his thoughts about how faculty should react to generative AI. He argues that since it is impossible to GPT-proof an assignment, the primary focus of faculty should be to reevaluate how they assess and respond to student writing – to ‘stop being polite and start getting real’.
Tasneem, A. and Panthagani, A. WONKHE. July 3rd, 2023.
This article provides six potential ways to leverage AI (e.g., providing students with AI tutors, supporting educators with AI teaching aids…) and six missteps in responding to the rise of AI (e.g., thinking we can distinguish between AI-generated and human-generated work, not preparing students for an AI-fluent workforce…).
Nerantzi, C. et al. (Eds.) Curated by #creativeHE. 2023.
Collection of 101 ideas, submitted by contributors across 19 countries (Australia, Canada, China, Egypt, Germany, Greece, India, Israel, Italy, Ireland, Jordan, Liberia, Mexico, South Africa, Spain, Thailand, Turkey, United Kingdom and the US), on potential alternative uses and applications of AI in education, both for students and educators. Creative uses include, for example, developing variety in scenario-based assessments, using it for peer review purposes, using it as a debate partner, generating book summaries, and more.
Mollick, E. R. and Mollick, L. SSRN. Wharton School of the University of Pennsylvania & Wharton Interactive. June 11th, 2023.
The authors of this paper propose and detail “seven approaches for utilizing AI in classrooms: AI-tutor, AI-coach, AI-mentor, AI-teammate, AI-tool, AI-simulator, and AI-student, each with distinct pedagogical benefits and risks”. The paper aims to act as a “guide for educators navigating the integration of AI-assisted learning in classrooms”.
MacGregor, K. University World News. July 22nd, 2023.
In a speech given to the UN Higher Education Sustainability Initiative Global Forum earlier in July, Professor Tshilidzi Marwala, rector of the United Nations University, presented five ways of rethinking the role of universities in the digital era: recognizing the transformative potential of AI in shaping the future of education and preparing students for an AI-driven world; building student abilities to discern fake information; personalizing education to cater to diverse learning styles and address specific needs; developing robust ethical guidelines and regulations governing the use of AI; and ensuring equitable access to technological resources, infrastructure and digital skills training around the world.
Hess, F. American Enterprise Institute. July 25th, 2023.
Frederick Hess and Harvard University’s Jal Mehta discuss the impacts of AI on education. “The problem, then, is not the technology, but the kinds of tasks we ask students to do in school. As generations of research have shown, much of what we give to students asks only for fairly low-level comprehension and rote application. And that’s exactly what artificial intelligence can do well. ChatGPT is really good at the five-paragraph essay. And what that tells us is not that we should ban the technology but that we need to change the task.”
Coffey, L. Inside Higher Ed. August 4th, 2023.
While some law schools have banned the use of generative AI tools in applications, others, such as the Arizona State University Law School, decided to allow future applicants to use them in their applications (for example, in personal statements). This decision was notably supported by the fact that generative AI tools are increasingly used in the practice of law; hence, students should be taught how to properly make use of them.
More Information
Want more? Consult HESA’s Observatory on AI Policies in Canadian Post-Secondary Education.