“Show your work” has taken on a new meaning (and importance) in the age of ChatGPT.
As teachers and professors look for ways to guard against the use of AI to cheat on assignments, many have begun asking students to share their document history to check for signs that a bot wrote the work. In some cases, that means asking students to grant access to a document's version history in a system like Google Docs; in others, it means turning to new web browser extensions created precisely for this purpose.
Many educators who use this approach, often called “process tracking,” do so as an alternative to running student work through artificial intelligence detectors, which are prone to falsely accusing students, especially those who do not speak English as their first language. Even companies that sell AI-detection software admit that the tools can misidentify student-written material as AI <a target="_blank" href="https://www.turnitin.com/blog/understanding-the-false-positive-rate-for-sentences-of-our-ai-writing-detection-capability" rel="noopener nofollow">about 4 percent of the time</a>. Because teachers grade so many papers and assignments, many educators view that as an unacceptable error rate. Some students have pushed back in viral posts on social media, and some have even <a target="_blank" href="https://arstechnica.com/tech-policy/2024/11/school-did-nothing-wrong-when-it-punished-student-for-using-ai-court-rules/" rel="noopener nofollow">sued their schools</a> over what they say are false accusations of AI cheating.
The idea is that a quick look at a document's version history can reveal whether a large chunk of writing was suddenly pasted in from ChatGPT or another chatbot, and proponents argue that the method may be more reliable than an AI detector.
But as process tracking has become more widely adopted, a growing number of writing teachers are raising objections, arguing that the practice amounts to surveillance and violates students' privacy.
“It instills suspicion in everything,” argues Leonardo Flores, professor and chair of the English department at Appalachian State University in North Carolina. He was one of several professors who voiced their objections to the practice in a <a target="_blank" href="https://aiandwriting.hcommons.org/2024/11/25/what-is-process-tracking-and-how-is-it-used-to-deter-ai-misuse/" rel="noopener nofollow">blog post last month</a> from a joint working group on artificial intelligence and writing organized by two prominent academic groups: the Modern Language Association and the Conference on College Composition and Communication.
Can process tracking become the answer to verifying the authenticity of student work?
Time lapse history
Anna Mills, an English professor at the College of Marin in Oakland, California, has used process tracking in her writing classes.
For some assignments, she asks students to install a web browser extension called Revision History and grant it access to their documents. With the tool, teachers see an information ribbon atop each document a student turns in, showing how much time was spent writing and other details of the process. The tool can even generate a time-lapse video replaying everything typed into the document, giving the teacher a rich behind-the-scenes look at how the essay came together.
Mills has also had students use a similar browser-based feature that Grammarly launched in October, called Authorship. Students can use the tool to generate a report on the creation of a given document, including how many times the author pasted in material from another website and whether that pasted material was likely generated by AI. It can also produce a time-lapse video of the document's creation.
Mills tells students that they can opt out of the tracking if they have concerns about the approach, in which case she finds an alternative way to verify the authenticity of their work. So far, though, no students have taken her up on the offer, and she wonders whether they worry that asking would make them look suspicious.
Most of her students seem open to the tracking, she says. In fact, some students have even called for stricter checks against AI cheating. “Students know that there is a lot of AI cheating going on and that, as a result, their work and their degrees risk being devalued,” she says. And while she believes the vast majority of her students do their own work, she says she has caught students submitting AI-generated writing as their own. “I think some accountability makes sense,” she says.
Other educators, however, maintain that having students expose the full history of their work will make them self-conscious. “If as a student I knew that I had to share my process or, worse yet, saw that it was being tracked and that the information was somehow in the purview of my professor, I would probably be too self-conscious and worried that my writing process was being judged along with my writing,” Kofi Adisa, an associate professor of English at Howard Community College in Maryland, wrote in the academic working group's blog post on AI in writing.
Of course, it's possible that students are entering a world where they will use these AI tools on the job and have to show employers which parts of the work they created themselves. But for Adisa, “as more and more students use artificial intelligence tools, I think some teachers may rely more on monitoring writing than on teaching it.”
Another concern raised about process tracking is that some students may do innocent things that look suspicious to a tracking tool, such as drafting a section in another program and then pasting it into a Google Doc.
For Appalachian State's Flores, the best way to combat AI plagiarism is to change how instructors design assignments so that they accept AI as a tool students can now use, rather than treating it as something forbidden. Otherwise, he says, there will simply be an “arms race” between new tools to detect AI and new ways students devise to circumvent those detection methods.
In theory, Mills doesn't necessarily disagree with that argument. But she sees a big gap between what experts suggest teachers do (totally revamp the way they teach) and the more pragmatic approaches educators are scrambling to take to make sure they do something to stem rampant cheating with AI.
“We're in a time where there are a lot of potential compromises to be made and a lot of conflicting forces that teachers don't have much control over,” Mills says. “The biggest factor is that the other things we recommend require a lot of institutional support or professional development, work and time” that most educators don't have.
Product arms race
Grammarly officials say they are seeing high demand for process tracking.
“It's one of the fastest-growing features in Grammarly's history,” says Jenny Maxwell, the company's director of education. She says customers have generated more than 8 million reports using the process tracking tool since its launch about two months ago.
Maxwell says the tool was inspired by the story of a college student who used Grammarly's spell-checking features on a paper and was then, she says, falsely accused by her professor of using an AI bot to write it. The student, who says she lost a scholarship over the cheating accusation, shared details of her case in a series of TikTok videos that went viral, and she eventually became a paid consultant for the company.
“Marley is kind of a north star for us,” Maxwell says. The idea behind Authorship is that students can run the tool while they write, and then, if they are ever falsely accused of using AI inappropriately, as Marley says she was, they can present the report to make their case to the teacher. “It's really like an insurance policy,” Maxwell says. “If some AI detection software flags you, you actually have proof of what you've done.”
As for student privacy, Maxwell emphasizes that the tool is designed to give students control over whether to use the feature, and that students can review the report before passing it on to an instructor. That contrasts with the model of teachers running student work through AI detectors; students rarely see the reports about which sections of their work were supposedly written by AI.
The company that makes one of the most popular AI detectors, Turnitin, is considering adding process tracking features as well, says Annie Chechitelli, Turnitin's chief product officer.
“We're looking at what elements make sense to show that a student did this on their own,” she says. The best solution might be a combination of AI detection and process tracking software, she adds.
She argues, though, that letting students choose whether to activate a process tracking tool may not do much to protect academic integrity. “In this situation, opting in makes no sense,” she says. “If I'm a cheater, why would I use this?”
Meanwhile, other companies are already selling tools that claim to help students defeat both AI detectors and process trackers.
Mills, of the College of Marin, says she recently heard about a new tool that lets students paste an AI-generated essay into a system that simulates writing it out, character by character, in a process tracking tool like Authorship, even inserting fake keystrokes to make the work look more authentic.
Chechitelli says her company is closely watching a growing number of tools that aim to “humanize” AI-generated writing so that students can submit it as their own work without detection.
She says she's surprised by the number of students posting videos on TikTok bragging about having found ways to subvert AI detectors.
“Does it help us? Are you kidding me? It's great,” says Chechitelli, who finds these kinds of social media posts the easiest way to learn about new evasion techniques and adjust her company's products accordingly. “We can see which ones are gaining traction.”