Several available tools claim to detect whether text was generated by ChatGPT, though none is fully reliable; all produce some false positives and false negatives. Longer text samples usually give better results, but some tools limit the amount of text they will test.
There are also red flags that can suggest ChatGPT as a source:
- Repetitive answers: ChatGPT can answer the same question in different ways, but the variations are often minor; closely similar answers from several different students may indicate AI-generated content (a similarity check is sketched after this list).
- Fluffy verbosity: It tends to pad answers with filler language.
- Overly biased toward neutrality: It provides factual (or at least factual-sounding) answers but refrains from offering opinions or judgments.
- About that "factual-sounding": It can write plausible-sounding answers that are, in fact, false.
- Fake citations: As an example of false facts, it can fabricate citations, such as pairing a real author's name and a real journal title with a made-up article title the author plausibly could have written but never actually did (a citation lookup is sketched after this list).
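
Where these red flags lend themselves to automated checks, rough sketches follow. First, a minimal Python sketch of the repetitive-answers check, comparing submissions pairwise with the standard library's difflib; the sample answers and the 0.8 cutoff are illustrative assumptions, not calibrated values.

```python
# Flag pairs of student answers whose wording is suspiciously similar.
from difflib import SequenceMatcher
from itertools import combinations

# Illustrative submissions, not real student work.
answers = {
    "student_a": "The French Revolution began in 1789 due to fiscal crisis and social inequality.",
    "student_b": "The French Revolution started in 1789 because of fiscal crisis and social inequality.",
    "student_c": "The storming of the Bastille marked a turning point in popular resistance.",
}

SIMILARITY_THRESHOLD = 0.8  # assumed cutoff; tune against real submissions

for (name1, text1), (name2, text2) in combinations(answers.items(), 2):
    # ratio() returns a 0..1 measure of how much the two strings overlap.
    score = SequenceMatcher(None, text1.lower(), text2.lower()).ratio()
    if score >= SIMILARITY_THRESHOLD:
        print(f"{name1} and {name2} are {score:.0%} similar -- worth a closer look")
```

A high pairwise score is only a prompt for human review, not proof of AI use: students can legitimately converge on similar wording for short factual questions.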
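Second, a minimal sketch of a fake-citation check: look the cited title up in the public CrossRef REST API (api.crossref.org). Absence from CrossRef does not prove fabrication, since its coverage is incomplete; it is only a cue to verify by hand. The cited title below is deliberately made up for illustration.

```python
# Look a suspect citation up against the CrossRef bibliographic index.
import json
import urllib.parse
import urllib.request

def crossref_title_matches(cited_title: str, rows: int = 5) -> list[str]:
    """Return the closest-matching article titles CrossRef knows about."""
    query = urllib.parse.urlencode({"query.bibliographic": cited_title, "rows": rows})
    url = f"https://api.crossref.org/works?{query}"
    with urllib.request.urlopen(url, timeout=10) as resp:
        items = json.load(resp)["message"]["items"]
    return [item["title"][0] for item in items if item.get("title")]

# Illustrative, intentionally fabricated citation title.
matches = crossref_title_matches("A Study of Imaginary Results in Journal X")
if matches:
    print("Closest real titles:", *matches, sep="\n  ")
else:
    print("No close match in CrossRef; verify the citation manually.")
```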
Source: Wesleyan University's ChatGPT LibGuide