We recognize that many school districts and higher education institutions do not currently account for generative AI in their policies on academic honesty. We also understand that some students may have used these tools for assignments without disclosing their use of AI. In addition to potentially violating school honor codes, such use may run afoul of our terms of use: users must be at least 13 years old, and users between the ages of 13 and 18 must have parental or guardian permission to use the platform.
We will continue to provide resources and insights in this area, understanding that ultimately each institution will decide how to address these issues in a way and on a timeline that makes sense for their educators and students.
In the past year, different school districts and universities have created new policies around AI-generated content. We encourage educators to do their own research on these different approaches to find what works best for them.
Do AI detectors work?
In short, no, not in our experience. Our research did not show detectors to be reliable enough for educators to base judgments on, given that those judgments could have lasting consequences for students. While other developers have released detection tools, we cannot comment on their utility.
Additionally, ChatGPT has no “knowledge” of what content could be AI-generated. It will sometimes make up responses to questions like “did you write this [essay]?” or “could this have been written by AI?” These responses are random and have no basis in fact.
Our research into the shortcomings of detectors produced one key finding: these tools sometimes flag human-written content as AI-generated.
When we at OpenAI tried to train an AI-generated content detector, we found that it labeled human-written text like Shakespeare and the Declaration of Independence as AI-generated.
There were also indications that detectors could disproportionately impact students who had learned or were learning English as a second language, as well as students whose writing was particularly formulaic or concise.
Even if these tools could accurately identify AI-generated content (and they cannot yet), students can make small edits to evade detection.
However, there are some approaches that others have found helpful:
One technique some teachers have found useful is encouraging students to share specific conversations from ChatGPT (instructions here). This can have many benefits:
Showing their work and formative assessment:
Educators can analyze student interactions with ChatGPT to observe critical thinking and problem-solving approaches.
Shared links can enable students to review each other's work, fostering a collaborative environment.
By keeping a record of their conversations with AI, students can reflect on their progress over time. They can see how their skills in asking questions, analyzing responses, and integrating information have developed. Teachers can also use these records to provide personalized feedback and support individual growth.
Information and AI literacy:
Students can demonstrate their ability to interact with AI and their understanding of the shortcomings of AI systems. Educators can assess the quality of the questions asked, the relevance of the information obtained, and how well the student knew to challenge, double-check, and consider potential biases in that information.
We anticipate a future where the use of AI tools like ChatGPT is commonplace. Encouraging responsible use helps students prepare for a future where they may be expected to leverage AI in different contexts.
Creating accountability: Sharing interactions with the model ensures that students are held accountable for the way they use AI in their work. Educators can verify that students are engaging with the tool responsibly and meaningfully, rather than simply copying answers.