Since the sudden rise of ChatGPT and other AI chatbots, many teachers and professors have started using AI detectors to check their students' work. The idea is that the detectors can tell whether a student has had a robot do the work for them.
However, the approach is controversial, as these AI detectors have been shown to return false positives, claiming in some cases that text is AI-generated even when the student did all the work without the help of a chatbot. The false positives appear to happen more frequently with students who do not speak English as their first language.
So some instructors are trying a different approach to guard against AI cheating, one that borrows a page from criminal investigations.
It is called “linguistic fingerprinting,” in which linguistic techniques are used to determine whether a text was written by a specific person based on an analysis of their previous writings. The technique, sometimes called “authorship identification,” helped catch Ted Kaczynski, the terrorist known as the Unabomber for his deadly series of mail bombs, when his 35,000-word anti-technology manifesto was compared with his earlier writings to help identify him.
Mike Kentz is an early adopter of the idea of bringing this fingerprinting technique to the classroom, and he maintains that the approach “flips the script” on the usual way of checking for plagiarism or AI use. He teaches English at Benedictine Military School in Savannah, Georgia, and also writes a newsletter on the problems posed by AI in education.
Kentz shares his experience with this approach, and talks about its pros and cons, on this week's EdSurge podcast.
Hear the full story in this week's episode. Listen on Apple Podcasts, Spotify, or wherever you listen to podcasts, or use the player on this page. Or read a partial transcript below, lightly edited for clarity.
EdSurge: What is linguistic fingerprinting?
Mike Kentz: It's a lot like a regular fingerprint, except it has to do with the way we write. And it's the idea that each of us has a unique way of communicating that can be modeled, tracked, and identified. If you have a known document written by someone, you can model their written fingerprint.
How is it being used in education?
If you have a document known to have been written by a student, you can run a more recently submitted essay against the original fingerprint and see whether or not the linguistic style matches in syntax, word choice, and lexical density. …
And there are tools that produce a report. It's not about saying, 'Yes, this kid wrote this' or 'No, the student didn't write it.' It's on a spectrum, and there are tons of vectors within the system that swing like a pendulum. It gives you a percentage chance that the author of the first piece also wrote the second.
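As a rough illustration of the kind of comparison Kentz describes, here is a toy stylometric sketch in Python. The features and the similarity measure are simplified assumptions for illustration only, not the workings of any actual fingerprinting tool:

```python
import math
import re
from collections import Counter

def features(text):
    """Extract a few simple stylometric features from a text."""
    words = re.findall(r"[a-zA-Z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return {
        # Mean length of a word, in characters.
        "avg_word_len": sum(len(w) for w in words) / len(words),
        # Lexical density: share of distinct words among all words.
        "lexical_density": len(set(words)) / len(words),
        # Mean number of words per sentence.
        "avg_sentence_len": len(words) / len(sentences),
    }

def word_freq_cosine(text_a, text_b):
    """Cosine similarity between the word-frequency vectors of two texts.

    Returns a value between 0 (no shared vocabulary) and 1 (identical
    word distributions), loosely analogous to the percentage-style
    match score in the report described above.
    """
    freq_a = Counter(re.findall(r"[a-zA-Z']+", text_a.lower()))
    freq_b = Counter(re.findall(r"[a-zA-Z']+", text_b.lower()))
    dot = sum(freq_a[w] * freq_b[w] for w in freq_a)
    norm = math.sqrt(sum(v * v for v in freq_a.values())) * \
           math.sqrt(sum(v * v for v in freq_b.values()))
    return dot / norm if norm else 0.0
```

Real authorship-identification systems use far richer feature sets (function-word distributions, character n-grams, syntactic patterns) and calibrated statistical models to produce the kind of probabilistic report the tools mentioned here generate.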
I understand that there was a time recently at your school when this approach was helpful. Can you share that?
The first-year science teacher came up to me and said, 'Hey, we have a student who turned in a piece that doesn't really sound like him. Does he have other writings I can compare it to, so I can make sure I'm not accusing him of something he doesn't deserve?' And I said, 'Yeah, sure.'
And we ran it through a (linguistic fingerprinting tool), and it produced a report. The report confirmed what we suspected: the piece was unlikely to have been written by that student.
The biology teacher approached the mother (and didn't even have to use the report) and told her that it didn't look like the student had written it. And it turned out that his mother had written it for him, more or less. So in this case it wasn't AI, but the truth is that he didn't write it.
Some critics of the idea have pointed out that a student's writing should change as he or she learns, so a fingerprint based on a previous writing sample might no longer be accurate. Shouldn't student writing change?
If you've ever taught high school writing, like I have, you know that a student's writing doesn't change that much in eight months. Yes, it improves, hopefully. But we're talking about a very sophisticated algorithm, and while there are excellent writing teachers out there, not much is going to change in eight months. And you can always run a new assignment later in the semester to get a new “known document” of a student's writing.
Some people might be concerned that since this technique comes from law enforcement, it has a sort of criminal justice vibe.
If I have a situation next year where I think a kid may have used AI, I am not going to immediately go through the fingerprinting process. That won't be the first thing I do. I will have a conversation with them first. Hopefully there's enough trust there and we can figure it out. But I think this is just a good sort of backup, just in case.
We have a system of rewards and consequences in a school, and you need a way to enforce the rules and discipline kids if they step out of line. For example, (many schools) have cameras in the hallways. We do that to make sure we have documented evidence in case something goes wrong. We have all kinds of disciplinary measures that are backed by mechanisms to ensure they are actually enforced.
How optimistic are you that this and other approaches you're experimenting with can work?
I think we're in for a bumpy next five years, maybe even longer. I think the Department of Education or local governments should establish AI literacy as a core competency in schools.
And we need to change our assessment strategies and change what we care about students producing, and recognize that written work really isn't going to be that anymore. What's new is verbal communication. So when kids finish an essay, I do this a lot more now: everyone comes up without their paper and just talks about their argument for three to five minutes, and their job is to verbally communicate what they were trying to argue and how they demonstrated it. Because that's something AI can't do. So my optimism lies in rethinking assessment strategies.
My biggest fear is that there will be a loss of trust in the classroom.
I think schools are going to have a big problem next year, with a lot of conflicts between students and teachers where a student says, 'Yes, I used (AI), but it's still my work,' and the teacher says, 'Any use is too much.'
Or what is too much and what is too little?
Because any teacher can tell you it's a delicate balance. Classroom management is a delicate balance. You're always managing the kids' emotions, and where they are that day, and your own emotions as well. And you are trying to develop trust, maintain it, and foster it. We have to make sure this delicate, beautiful, important thing doesn't fall to the ground and break into a million pieces.
Listen to the full conversation on the EdSurge podcast.