HESA’s AI Observatory: What’s new in higher education (Sept. 8th, 2023)

Spotlight

Good afternoon all,

I hope that the first week back on campus was a good one. 

Earlier in August, we at HESA Towers sent a survey to provosts and VPAs to get a better sense of where institutions stand with respect to building policies and guidelines on AI. (We're still collecting answers, so if it's been sitting forgotten in your inbox, there's still time to respond.)

From the early answers we've received, the topic least prominent in current campus discussions seems to be how GenAI tools can be used to promote equity. To give a boost to efforts in this area, we have created an Inclusion tag on our Observatory on AI Policies in Canadian Post-Secondary Education, specifically targeted at equity and inclusion considerations. You'll find it both under Policies and Guidelines, and News and Research. We hope to keep populating it as more institutions respond to the issue.

Another interesting takeaway from our early results is that not all AI-focused institutional committees have involved students in their deliberations. Now that students are back on campus, this might be a good time to bring them into the room. Who knows: their perspective might even help identify ways in which GenAI tools could be used to promote equity!

Until our next Roundtable meeting at the end of the month, please continue sharing your institutional policies and guidelines with us, so we can showcase them on the AI Observatory. You can either send them our way in response to this email, or submit them via our online form. We also welcome any comments or suggestions moving forward. Finally, please be sure to opt in by clicking here or on the button below if you want to keep receiving these weekly AI-focused emails. After September comes to an end, we'll send them only to those who subscribed to that specific list.

Next Roundtable Meeting

Date: Tuesday, September 26th, 2023
Time: 10:00-11:00 AM ET

Join us on September 26th, from 10:00AM to 11:00AM ET, for our next Roundtable meeting focused on Pedagogy and curriculum. This meeting will be facilitated by Grant Potter, Instructional Designer at UNBC’s Centre for Teaching, Learning, and Technology. We’ll also welcome representatives from JISC’s National Centre for AI as guest speakers. Register now for free to save your spot!

We’ll soon upload the recording of our last Roundtable meeting on Governance and policy. Until then, you can catch up on our Roundtable meeting on Academic integrity here.

Policies & Guidelines Developed by Higher Education Institutions

Tags: Statement, Guidelines, Academic integrity, Pedagogy, Canada

Durham College’s (Canada) Centre for Teaching and Learning recently launched its Generative AI resources website, which includes the college’s interim statement on using GenAI in T&L, ways in which GenAI can be integrated into T&L, how to develop GenAI literacy, a framework for integrating GenAI, ethical and data privacy considerations, sample course outline statements, and more. Durham’s interim statement mentions that “DC supports faculty in exploring GenAI, and evaluating the benefits of adopting it where it will enhance student learning and align with industry expectations. […] GenAI can create the opportunity to develop critical and creative thinkers when leveraged by faculty in purposeful and meaningful ways. Directives for the permitted or prohibited use of GenAI in course work and assessments must be communicated with students clearly, in writing, ahead of any learning activity or assessment. […] It is the responsibility of the student to adhere to GenAI directives on a course-by-course basis and seek clarification in the case of uncertainty. Principles of academic integrity, as outlined in [existing academic integrity policy], apply to all instances of GenAI use.”

Tags: Statement, Academic integrity, Canada

Humber College’s (Canada) Statement on artificial intelligence states that Humber: 1) embraces the integration of AI generative tools in ethical, equitable, and constructive ways in support of T&L; 2) commits to supporting students, faculty, and staff to develop the digital fluency skills to participate effectively, responsibly, and ethically in AI-enhanced workplaces; 3) recognizes that integration of AI will vary across disciplines and will require context-responsive approaches; 4) acknowledges that professors have discretion to decide how AI can be applied in a particular course in ways that enhance student learning – this involves the provision of explicit guidance for students in assessment and assignment instructions on how AI tools are to be used and cited; 5) contends that un-cited and/or other unauthorized use of AI in assessments and assignments constitutes academic misconduct as defined in Humber’s Academic Regulations; 6) commits to college-wide consultation to develop supports for students and professors in the use of AI that is grounded in research/evidence-based best practices; and 7) will continue to adapt and innovate in response to the rapid changes we will face as artificial intelligence continues to evolve. 

Tags: Guidelines, Academic integrity, Pedagogy, Inclusion, Canada

OCAD University’s (Canada) Recommendations for Navigating Generative Artificial Intelligence (GAI) in Your Teaching aims to develop critical AI literacy amongst its community; that is, enabling its community to leverage the opportunities provided by GenAI tools while recognizing their risks and limitations. Its recommendations include: 1) engaging with GenAI through commitments to equity and ethics; 2) creating classroom spaces where students can begin to develop critical AI literacy; 3) clarifying expectations in both course outlines and assignment instructions about when and how GenAI applications may be used; 4) having students submit attribution statements whenever they use GenAI applications; and 5) considering issues of data privacy and how these affect students. The guidelines also provide examples of statements for course outlines and assignment instructions.

Tags: Guidelines, Academic integrity, Pedagogy, Canada

Université de Montréal (Canada) recently published Guidelines for the use of GenAI in Teaching and Learning (in French). These guidelines notably mention that each instructor has the responsibility to decide whether the use of GenAI is permitted in the context of their course. This decision must be made explicit (e.g., in syllabi). If it is not explicitly stated that GenAI tools are allowed, their use is considered prohibited. When allowed, the use of GenAI tools must be clearly cited. Instructors must also make explicit their own use of GenAI tools, for example if they have been used to develop pedagogical materials. Instructors cannot upload materials produced by students into GenAI tools without the students’ prior consent. UdeM is gradually implementing Turnitin, which includes an AI detection tool that instructors can use. UdeM’s Centre de pédagogie universitaire has developed many resources to support instructors in making creative use of GenAI tools.

Tags: Guidelines, Academic integrity, Pedagogy, Canada

The University of British Columbia (Canada) provides a flowchart on the use of generative AI tools in classes (adapted from UNESCO) to help students in their decision-making when considering the use of generative AI tools to complete academic work. UBC’s Q&A on generative AI tools also clarifies situations where unauthorized use of AI tools might be considered academic misconduct. While GenAI tools like ChatGPT are not named in the academic misconduct regulation (Academic Misconduct by UBC Students, Vancouver and Okanagan), their use could be considered unauthorized means to complete an assignment or assessment, the accessing of a website that is not permitted, or other, depending on the specific case. UBC defines academic misconduct as any conduct by which a student gains or attempts to gain an unfair academic advantage or benefit, thereby compromising the integrity of the academic process.

News & Research

Sharma, Y. University World News. August 29th, 2023.

China is the first country to have developed a GenAI law, which came into effect mid-August. “Universities and research organizations will need to ensure they train LLMs on datasets that do not contain information deemed sensitive by the authorities”. All R&D organizations must adhere to “core socialist values” and not generate any content that “incites the subversion of state power and the overthrow of the socialist system”. More comprehensive regulations are expected at the end of the year.

Leung, M. University World News. September 1st, 2023.

China’s new draft Degree Law, which is currently before China’s National People’s Congress, “proposes that degrees should be revoked for a raft of infractions related to misconduct, cheating and fraud”. This includes using GenAI to ‘ghostwrite’ academic work. “The penalties will also apply to those who have already obtained bachelor, masters or doctoral-level degrees if they are later found to have used ‘illegal means’, including the use of AI tools, to falsify data or use ghostwritten essays.”

Leung, M. and Sharma, J. University World News. August 23rd, 2023. 

After initially banning student use of ChatGPT last February, the University of Hong Kong recently announced that it would allow GenAI to be incorporated into university teaching, with some restrictions (e.g., a limit of 20 prompts per user), starting this September. Hong Kong’s Lingnan University, for its part, has bought a license for ChatGPT 3.5 and will train its students and staff on how to use the tool. “Under Lingnan’s new AI policy, lecturers can specify whether ChatGPT should be used for assignments and students could be required to submit a list of their prompts to ChatGPT along with their work”.

Schroeder, R. Inside Higher Ed. August 30th, 2023. 

The author cites a prediction that, by 2025, 1.5 billion jobs will have changed significantly due to GenAI. This is a cause for concern for many faculty members, who notably fear they won’t be able to keep up with the new technology. The author elaborates on this fear and recommends: 1) implementing a university-wide study to determine how GenAI will impact faculty and staff positions; 2) providing training to help deans, directors, chairs and health staff discuss these issues with employees; and 3) developing workshops and support materials to examine career futures and provide opportunities for upskilling and reskilling.

Peters, D. University Affairs. August 29th, 2023.

This article touches on the challenges of GenAI in the realm of research and academic publishing. Whilst various publishers have already issued policies about GenAI, post-secondary institutions are still reflecting on how to adequately respond to this issue. Many elements must be considered: the value of AI-produced research, potential biases, risks of loss of ownership, automation taking work and training opportunities from grad students, etc. Some scholars discourage the adoption of blanket policies, which would “fail at covering the wide range of attitudes around academic writing”. 

Heleta, S. University World News. August 23rd, 2023.

“Without strict rules and oversight, generative AI platforms can enable the spread of disinformation that ‘promotes and amplifies incitement to hatred, discrimination or violence on the basis of race, sex, gender and other characteristics’. […] Researchers at the University of Copenhagen have found that ChatGPT ‘is heavily biased when it comes to cultural values’. It promotes American norms and values and often presents them as ‘universal’ when asked to provide responses about concepts and other countries and cultures. This way, the researchers argue, ChatGPT acts as a ‘cultural imperialist tool that the United States, through its companies, uses to promote its values’.”

Bobrow, E. The Guardian. August 27th, 2023.

Some argue that GenAI tools could help “democratize admissions by giving students who don’t have tutors or counselors a leg up”. Allowing AI in admission essays might “help improve the essays of those who can’t afford outside assistance”. When some students get their personal statements sculpted by handsomely paid English PhDs, it seems unfair to accuse those who use AI of simply “outsourcing” the hard work. However, not all students have access to proper training on prompt engineering and how to use GenAI to their advantage. Here again, the question is “whether AI widens access to real help or simply reinforces the privileges of the lucky few”.

Baron, N. S. Inside Higher Ed. September 6th, 2023.

The author provides five touch points that faculty can discuss with students to help them better decide when, and how, to use GenAI tools: 1) Do you trust what AI writes? 2) How much effort are you willing to expend when you write? 3) Does AI improve or weaken writing skills, in the short and in the long run? 4) Does AI compromise your own writing voice? 5) How much do you care about the product? What are the stakes of having your name on an AI-produced piece of text?

Aikins, R. and Kuo, A. Inside Higher Ed. September 7th, 2023. 

Inside Higher Ed interviewed dozens of students across disciplines who used ChatGPT for academic work in order to get a better sense of 1) what students were doing with AI tools, 2) whether faculty were endorsing it or not, and 3) how students felt ethically about it. Key takeaways include the fact that more and more students will turn to AI for academic work; the lack of acknowledgement of AI by many faculty members, which leaves students confused about what they can and cannot do; and how this can make group dynamics particularly challenging when group members have different views on whether or not they can use AI.

Yu, H. and Guo, Y. Frontiers in Education. June 1st, 2023.

This article “conducts an in-depth analysis of the current application of GenAI in the field of education, and identifies problems in four aspects: opacity and unexplainability, data privacy and security, personalization and fairness, and effectiveness and reliability. Corresponding solutions are proposed, such as developing explainable and fair algorithms, upgrading encryption technology, and formulating relevant laws and regulations to protect data, as well as improving the quality and quantity of datasets. The article also looks ahead to the future development trends of generative artificial intelligence in education from four perspectives: personalized education, intelligent teaching, collaborative education, and virtual teaching.”
