Over the summer, I began to see the first suspicious cases of AI use in the introductory college writing courses I teach online. Since then, AI-generated essays have become a more common feature of these classes.
Fortunately, I've gotten much better at spotting AI writing quickly, thanks to some telltale signs in the styles of ChatGPT and other AI generators. Below, I discuss the techniques I've learned for detecting AI writing in my classes.
However, before addressing these strategies, it is important to remember that suspected AI use is not immediate grounds for disciplinary action. These cases should be used to start a conversation with students and even (pardon the cliché) as a teaching moment to explain the problems with submitting AI-generated work.
To that end, I have previously written about how I handle these suspected AI cases, about the worrying limitations and discriminatory tendencies of existing AI detectors, and about what happens when educators incorrectly accuse students of using AI.
With those caveats firmly established, here are the signs I look for to spot AI use in my students' work.
1. The submission is too long
When an assignment asks students for a paragraph and a student turns in more than a page, my spidey sense goes off.
Almost every class has a high-achieving student who will do this without AI, but that student usually sends 14 emails the first week, turns in every assignment early, and, most importantly, writes well even when their work runs long. A student who suddenly overproduces raises a red flag.
2. The answer misses the mark while also being too long
Length by itself is not enough to identify AI use, but overly long submissions often have additional odd features that make them suspicious.
For example, the submission may be four times the required length yet lack the required citations or cover page. Or it goes on and on about something related to the topic without getting into the details of the actual question asked.
3. AI writing is emotionless, even when describing emotions
If ChatGPT were a musician, it would be Kenny G or Muzak. As it stands, AI writing is the verbal equivalent of smooth jazz or gray noise. ChatGPT, for example, has an upbeat, positive vibe that somehow conveys no real emotion.
One assignment I give asks students to reflect on important memories or favorite hobbies. You immediately feel the emptiness of ChatGPT's response to this type of prompt. For example, I recently told ChatGPT that I loved skateboarding as a kid and asked for an essay describing it. Here is how ChatGPT began:
"When I was a kid, there was nothing more exhilarating than the feeling of riding a skateboard. The rhythmic sound of wheels hitting the pavement, the wind caressing my hair, and the freedom of exploring the world on four wheels: skateboarding wasn't just a hobby; it was a source of unbridled joy."
You get the idea. It's like an extended elevator-jazz sax solo, but with words.
4. Excessive use of lists and bullets
Here are some reasons why I suspect students are using AI when their papers contain a lot of lists or bullet points:
1. ChatGPT and other AI generators often present information in list form, even though human authors generally know that this is not an effective way to write an essay.
2. Most human writers don't naturally write this way, especially new writers, who often struggle to organize information.
3. While lists can be a good way to organize information, presenting more complex ideas this way can be…
4… annoying.
5. See what I mean?
6. (Yes, I know, it's ironic that I'm complaining about this here since this story is also a list.)
5. The writing is error-free
I've criticized ChatGPT's writing here, but to be fair, it produces very clean prose that is, on average, more error-free than what many of my students submit. Even experienced writers drop commas, write long, awkward sentences, and make small mistakes, which is why we have editors. ChatGPT's writing is not too “perfect,” but it is too clean.
6. The writing does not match the student's other work
Writing teachers know this instinctively and have long been on the lookout for changes in voice that can indicate a student is plagiarizing work.
AI writing doesn't really change that. When a student submits new work that is wildly different from their previous work, or when their discussion forum comments are riddled with errors not found in their formal assignments, it's time to take a closer look.
7. Something is just . . . off
The lines between these different AI tells can blur, and sometimes it's a combination of a few things that makes me suspicious of a piece of writing. Other times it's harder to say exactly what's wrong, and I'm simply left feeling that a human didn't do the work in front of me.
I have learned to trust these instincts to a certain extent. When faced with these more subtle cases, I often ask a fellow instructor or my department chair to take a quick look (removing student-identifying information when necessary). Getting a second opinion helps me make sure I don't slide into the paranoia of "my students are all robots and nothing I read is real." Once a colleague agrees that something is likely going on, I feel comfortable moving forward with my AI hypothesis, even though it is based solely on suspicion, in part because, as mentioned above, I use suspected AI cases to start conversations rather than to make accusations.
Again, it is difficult to prove that students are using AI, and accusing them of doing so is problematic. Even ChatGPT knows this. When I asked it why it is bad to accuse students of using AI to write essays, the chatbot responded: "Accusing students of using AI without proper evidence or understanding can be problematic for several reasons."
Then it launched into a list.