Imagine a class with a student who doesn’t speak unless spoken to. They do not raise their hand to participate willingly but have to be prodded for responses. Several weeks go by as you test your hypothesis that this student only takes input and responds mechanically. You prompt them with questions that have direct answers. You observe the encyclopaedic quality, wondering what the boundaries or limits are of their ability to reference information. You test these limits by asking them, “What is the capital of _?”, to which their reply is quick and accurate. Finally, you ask, “What is the capital of the Middle East?” to which they respond that there is no such capital, for the Middle East is composed of several countries. This answer astonishes you because you thought that this student could only respond to direct-answer questions that have been front-loaded.
Today, technology has taken a leap in capability that resembles the way our imagined student answered the trick question. Earlier forms of artificial intelligence (AI) that we encountered would have redirected us to a Google search, or perhaps suggested alternative questions they could answer, like those we assumed our student was limited to.
The technological breakthrough that we are experiencing today is of such magnitude that the above analogy appears inadequate. No educator has experienced a student who is suddenly and effortlessly able to produce entire stories, or write entire working computer programs, given only basic instructions. Nor have we seen students parse a series of inputs to create digital illustrations. There is now a widely available service that does all of this in an instant, using probabilistic algorithms that mimic reasoning skills, address assumptions, and produce works exhibiting creativity, complexity, and contextual understanding.
This unprecedented capability presents both immense opportunities and real challenges for educators. Likewise, it raises questions about the role that technology in general, and AI specifically, plays in pedagogy and our practices. We value academic integrity, but students may submit work with an increasing degree of input from artificial intelligence. We value equity, but students who use AI-enabled tools to complete assignments may have a demonstrable advantage over those who cannot. Or it may be that overreliance on AI is actually a disadvantage, as dependency could result in a net negative for learning.
In this context, we aim to offer the following guidelines and practical tips on how a school’s Academic Integrity policy can be adapted, expanded or restructured to accommodate this new technical marvel. This assumes, in line with the IB, that failing to update our practices and policies for generative AI technologies would be “an inadequate response to innovation”. [link]
Emphasising Teachers as Facilitators
A core issue raised by the advent of generative AI is the commonly held expectation that students’ work is their own. Especially for formally assessed tasks, schools need to provide a supportive network that helps to ensure students’ work is a true representation of their learning. Part of that network are teachers who are intimately familiar with their students’ voices, with how they process information and make creative decisions, and with how their development is progressing.
While providing ongoing feedback ensures that students are not solely relying on AI-generated content or other external sources without proper attribution, it also reinforces the teacher’s role as a facilitator. By working with students on the creative process itself, academic work can be affirmed as a dynamic and iterative process — rather than a static product produced in a few seconds with minimal effort.
By reviewing their school’s Academic Integrity policy to expand upon, or more explicitly require, a drafting process with check-ins or milestones, educators can create a supportive learning environment that values originality, integrity, and intellectual growth while preparing students to use advanced technologies like generative AI responsibly.
Generative AI as a Derivative Source
Given that, as educators, we strive to instill a sense of lifelong learning, embracing innovations and guiding our students in the appropriate use of technologies with staying power is a priority. Moreover, since students are expected to engage in increasingly extensive projects, generative AI can streamline research, idea generation and the creation of preliminary drafts. Just as other derivative technologies, such as Wikipedia and web searches, have been integrated into the academic process, so will generative AI, which builds upon existing information to generate its output.
Reframing an Academic Policy to emphasise nurturing an intellectual connection with all available resources appears to be essential. By fostering a deeper understanding and effective utilisation of various tools, including generative AI technologies, students can enhance their learning experiences and develop a comprehensive skill set. Encouraging such intellectual engagement with diverse resources helps students navigate the ever-evolving educational landscape and prepares them to thrive in a rapidly changing world.
Establishing an Academic Integrity policy that characterises the use of generative AI as a supportive resource requiring attribution where necessary not only enriches the learning experience but also encourages a growth mindset. By establishing guidelines for responsible and ethical use, students can better understand their role as collaborators while gaining a deeper appreciation for the role derivative works play in attaining originality.
Establishing Conventions for Citations
An Academic Integrity policy in an educational context that is proactive about the use of artificial intelligence tools can address the challenges they introduce by including guidelines that acknowledge their unique interactive nature. Whereas well-understood guidelines establish citation norms for tools we are deeply familiar with, generative AI’s recent arrival in our productivity workflows introduces additional challenges that an Academic Integrity policy can address.
Consider a student developing an understanding of the use of authority who wishes to directly quote a response from a language model; this case is straightforward and well defined, and both the student and the guiding teacher will already have a mental model of how to cite it. Questions will inevitably arise, however, when artificial intelligence has been used to correct grammar in writing for a language acquisition class, or when it has been used to build a list of sources. To what extent is citation required, and is that approach even acceptable?
Furthermore, an existing Academic Integrity policy will already require the student to provide the name and version of the software used, the date and time the content was generated, and any other relevant information that would allow the reader to locate the source. However, with technology that uses probabilistic algorithms, the actual input the student provided to the AI could also be considered essential, since the same prompt may yield different output each time. Including guidance on how this can be made transparent will lead students to acknowledge the nature of the interaction.
Finally, the policy may focus on encouraging students to use AI tools ethically and effectively, by stressing that they are consultative resources, and not a substitute for their own critical thinking and analysis. An Academic Integrity policy could specify conventions that establish routines and measures to ensure that there is transparency about the process that students follow in composing their assessments. Establishing a framework that addresses how students can cite generative AI as a source of authority, and when to cite it as a consultative tool, will lead students to use it resourcefully and ethically.
School leaders may wish to review these guidelines with a view to re-evaluating or updating the current Academic Integrity policy. In doing so, they may well arrive at the conclusion that, in fact, no revision is necessary if the current version already addresses the above complexities. Nonetheless, a policy is made stronger by having been put through these paces, as the exercise provides an opportunity to examine assumptions and reaffirm practices.
Irrespective of any particular takeaway, we hope this article and others like it foster a deeper understanding of the role that generative AI can play in the learning process, as well as the necessity for thoughtful discussions surrounding its ethical and responsible use.
In conclusion, as generative AI continues to reshape our educational landscape, it is vital that schools proactively adapt their Academic Integrity policies and practices to ensure they remain relevant, effective and comprehensive.
Responsible Use Policy for Artificial Intelligence in International Schools
APA: How to Cite ChatGPT with Text
Common Sense Media: Guide to ChatGPT for Parents and Caregivers
ChatGPT Citations | Formats & Examples
How do I cite generative AI in MLA style?
“What are the most common headers in works that discuss academic integrity” prompt. ChatGPT, 27 Mar. 3.5, OpenAI, 27 Mar. 2023, chat.openai.com/chat.
“Rephrase this portion” prompt. ChatGPT, 30 Mar. 3.5, OpenAI, 30 Mar. 2023, chat.openai.com/chat.
“Reword to remove personification” prompt. ChatGPT, 1 Apr. 3.5, OpenAI, 1 Apr. 2023, chat.openai.com/chat.
“Summarize this article” prompt. ChatGPT, 30 Mar. 4.0, OpenAI, 4 Apr. 2023, chat.openai.com/chat.