In the video and blog below I describe what to do if you are falsely accused of using AI tools like ChatGPT to cheat. This information is important both for students (those who have been falsely accused, and those who want to avoid being accused) and for academics (so they can see the flaws in the current detection tools).
As I find more research and information on this topic I will be adding it to the page.
As I describe in the video, there are three important steps to take if you have been falsely accused of cheating with an AI tool. These will help you prepare for the disciplinary meeting or hearing that usually follows an accusation.
1. Contact your student advocacy service
Being accused of cheating and having to attend a disciplinary hearing is extremely stressful. Most institutions will have some form of student advocacy service that you can turn to for support. These might be university support people or staff from a student union depending on your country and what type of institution you attend. They will be able to discuss your case and potentially provide a support person to come with you (depending on your institution’s rules).
2. Collate false positive related evidence
Sadly, the numeracy (or, more accurately, the statistics knowledge) of some academics is lacking: they don't understand that these tools have false positive rates, which means there will be times when the detection software says a piece of writing was AI-generated when it was not.
Although Turnitin claims a false positive rate of less than 1%, the company highlights on its website that this rate is not zero, and that when Turnitin flags a piece of work you should not immediately assume the student has used AI.
In my own tests with the AICheatCheck tool, I found that some of my published writing which was written long before AI tools was flagged as 98.51% likely to have been written by an AI.
For a more technical examination of whether it is even possible to always correctly identify a piece of work written by a Large Language Model (LLM), the leading reference is currently Sadasivan et al. (2023), Can AI-Generated Text be Reliably Detected? (see Further articles and resources below).
3. Collect evidence that you actually did the work
The most important thing you can do is collect evidence that you actually did the work. If you have used Google Docs, you can provide the version history, which will show the progressive development of your document (rather than it all being generated by ChatGPT). Word has a similar facility if you are saving to OneDrive or SharePoint, but not if you have only been saving locally.
If that is your situation, try to compile other evidence of your working, such as notes, draft versions (if you saved them under different file names), collections of reference PDFs, data, code, and so on: basically anything you can use to demonstrate that you actually did the work.
This habit of keeping draft versions and a record of doing the work will become increasingly important for students as more and more institutions start using AI detection tools.
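If you want to build the drafts habit without relying on a cloud service, even a tiny script can snapshot your working file with a timestamped name. This is just one possible sketch; the file and folder names (`essay-draft.txt`, `drafts-archive/`) are hypothetical:

```python
import shutil
from datetime import datetime
from pathlib import Path

def snapshot_draft(draft_path: str, archive_dir: str = "drafts-archive") -> Path:
    """Copy the current draft into an archive folder with a timestamped name."""
    src = Path(draft_path)
    archive = Path(archive_dir)
    archive.mkdir(exist_ok=True)
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    dest = archive / f"{src.stem}-{stamp}{src.suffix}"
    shutil.copy2(src, dest)  # copy2 preserves the file's modification time
    return dest

# Example: create a dummy draft, then take a snapshot at the end of a session.
Path("essay-draft.txt").write_text("Introduction: first rough paragraph...")
saved = snapshot_draft("essay-draft.txt")
print(saved)  # e.g. drafts-archive/essay-draft-20240115-142301.txt
```

Running this at the end of each writing session leaves a dated trail of progressively longer drafts, which is exactly the kind of evidence described above.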
Further articles and resources
- Liang, W., Yuksekgonul, M., Mao, Y., Wu, E., & Zou, J. (2023). GPT detectors are biased against non-native English writers. arXiv preprint arXiv:2304.02819.
- Sadasivan, V. S., Kumar, A., Balasubramanian, S., Wang, W., & Feizi, S. (2023). Can AI-Generated Text be Reliably Detected? arXiv preprint arXiv:2303.1
- We pitted ChatGPT against tools for detecting AI-written text, and the results are troubling [The Conversation]
- We tested a new ChatGPT-detector for teachers. It flagged an innocent student [Washington Post]
- Professors are using ChatGPT detector tools to accuse students of cheating. But what if the software is wrong? [USA Today]