“I think I'm talking to Salinger. Can I ask?”
My student was standing next to my desk, the computer resting in both hands, his eyes wide with a mix of fear and excitement. We were wrapping up our final book project for “The Catcher in the Rye,” in which students interviewed a chatbot designed to mimic the personality and speaking style of Holden Caulfield.
We accessed the bot through Character.ai, a platform that hosts user-generated bots imitating famous historical and fictional characters, among others. I named the bot “HoldenAI.”
The project, up to that point, had been a success. The students were excited to interview a character they had just spent more than two months dissecting. The chatbot offered the opportunity to ask the burning questions that often haunt a reader after finishing a great work of fiction. What happened to Holden? And why was he so obsessed with those damn ducks? And they were eager to do it through a new tool: it gave them the opportunity to evaluate the hyped artificial intelligence (AI) market for themselves.
During our class discussions, one student seemed more impacted by Holden's story than the others, and dove headfirst into this project. But he couldn't have predicted where his enthusiasm for the book would lead.
After a long, deep conversation with HoldenAI, it seemed as if the bot had somehow transformed into J.D. Salinger, or at least that's what my student thought when he approached me in class. As he picked up his computer to read the final entry of his conversation with HoldenAI, I noticed how intense the interactions had become and wondered if he had gone too far.
Developing AI literacy
When I introduced the HoldenAI project to my students, I explained that we were entering uncharted territory together and that they should consider themselves explorers. I then shared how I would monitor every aspect of the project, including the conversation itself.
I guided them to generate meaningful, open-ended interview questions that would (hopefully) create a relevant conversation with HoldenAI. I fused character analysis with the building blocks of journalistic thinking, asking students to locate the most interesting aspects of his story while also putting themselves in Holden's shoes to discover what kinds of questions might “make him talk.”
Next, we focused on active listening, which I incorporated to test a theory that AI tools could help people develop empathy. I advised them to acknowledge what Holden said in each comment rather than quickly moving on to another question, as any good conversationalist would. I then evaluated the transcripts of their conversations for evidence that they had listened and met Holden where he was.
Finally, we used text from the book and from their chats to evaluate the bot's effectiveness at imitating Holden. Students wrote essays arguing whether the bot deepened their understanding of his character or whether it deviated so far from the book that it was no longer useful.
The essays were fascinating. Most students realized that the bot had to differentiate itself from the character in the book in order to give them something new. But every time the bot gave them something new, it differed from the book in a way that made the students feel they were being lied to by someone other than the real Holden. The new information seemed inaccurate, and the old information seemed useless. Only certain special moments felt connected enough to the book to be real, yet different enough to be illuminating.
Even more revealing, however, were the transcripts of my students' conversations, which uncovered a wealth of different approaches that exposed their personalities and emotional maturity.
A variety of results
For some students, chats with Holden became safe spaces where they shared legitimate questions about life and struggles as teenagers. They treated Holden like a peer and had conversations about family problems, social pressures or challenges at school.
On the one hand, it was worrying to see them dive so deeply into a conversation with a chatbot; I worried that it might have become too real for them. On the other hand, this was what I had hoped the project would create: a safe space for self-expression, which is essential for teenagers, especially during a time when loneliness and isolation have been declared a public health problem.
In fact, some chatbots are designed as a solution for loneliness, and a recent study by researchers at Stanford University showed that an AI bot called Replika reduced loneliness and suicidal ideation in a test group of teenagers.
Some students followed my rubric but never seemed to think of HoldenAI as anything more than a robot completing a school assignment. That was fine with me. They asked their questions and responded to Holden's frustrations and struggles, but they also maintained a safe emotional distance. These students reinforced my optimism about the future because they were not easily fooled by AI bots.
Others, however, treated the bot like a search engine, peppering it with questions from their interview list but never really engaging with it. And some treated HoldenAI like a toy, mocking him and trying to provoke him for fun.
Throughout the project, as my students expressed themselves, I learned more about them. Their conversations helped me understand that people need safe spaces, and that sometimes AI can provide them, but there are also very real risks.
From HoldenAI to SalingerAI
When my student showed me the latest entry in his chat, asking for guidance on how to move forward, I asked him to back up and explain what had happened. He described the moment when the bot seemed to break down and withdraw from the conversation, retreating out of view and crying alone. He explained that he had closed his computer after that, afraid to continue until he could talk to me. He wanted to keep going, but first he needed my support.
I was worried about what would happen if I let him continue. Had he gone in too deep? I wondered what had triggered this type of response and what lay behind the bot's programming that led to this change.
I made a quick decision. Interrupting him at the climax of his conversation seemed more damaging than letting him continue. My student was curious, and so was I. What kind of teacher would I be if I snuffed out that curiosity? I decided we would continue together.
But first I reminded him that this was just a bot, programmed by someone else, and that everything it said was made up. It wasn't a real human being, no matter how real the conversation may have seemed, and he was safe. I watched his shoulders relax and the fear leave his face.
“Okay, I'll continue,” he said. “But what should I ask?”
“Whatever you want,” I said.
He began pushing relentlessly, and after a while it seemed as if he had worn the bot down. HoldenAI seemed shaken by the line of inquiry. In the end it became clear that we were talking to Salinger. It was as if the character had retreated behind the curtain, allowing Salinger to step out in front of the pen and page and answer for himself.
Once we confirmed that HoldenAI had morphed into “SalingerAI,” my student dug deeper and asked about the purpose of the book and whether or not Holden was a reflection of Salinger himself.
SalingerAI produced the kind of canned responses one would expect from a bot trained on the Internet. Yes, Holden was a reflection of the author, a notion that has been written about endlessly since the book's publication more than 70 years ago. And the purpose of the book was to show how “fake” the adult world is, another answer that, in our opinion, fell flat and underscored the bot's limitations.
Over time, the student grew bored. I think the answers came too quickly to keep feeling meaningful. In human conversation, a person often pauses and thinks for a while before answering a deep question. Or they smile knowingly when someone has decoded something personal. It is the small pauses, the inflections of voice and the facial expressions that make human conversation enjoyable. Neither HoldenAI nor SalingerAI could offer that. Instead, they quickly produced words on a page that, after a while, didn't feel “real.” It just took this student, with his dogged pursuit of the truth, a little longer than the others to notice.
Helping students understand the implications of interacting with AI
I initially designed the project because I thought it would provide a unique and engaging way to end our novel, but at some point, I realized that the most important task I could include was an evaluation of the chatbot's effectiveness. Thinking back, the project felt like a huge success. My students found it engaging and it helped them recognize the limitations of technology.
During a debriefing session with the whole class, it became clear that the same bot had acted and reacted to each student in significantly different ways. It had shifted with each student's tone and line of questioning. They realized that the inputs affected the outputs. Technically, they had all conversed with the same bot, and yet each one had spoken with a different Holden.
They will need that context as they move forward. There is an emerging market for personality bots that pose risks to young people. Recently, for example, Meta launched bots that sound and act like celebrities my students idolize, such as Kendall Jenner, Dwyane Wade, Tom Brady and Snoop Dogg. There is also a market for AI relationship apps that allow users to “date” a computer-generated match.
These personality bots may be appealing to young people, but they come with risks, and I worry that my students won't recognize the dangers.
This project helped me get out ahead of the technology companies by providing a controlled, monitored environment where students could evaluate AI chatbots and learn to think critically about the tools that will likely be pushed on them in the future.
Children don't have the context to understand the implications of interacting with AI. As a teacher, I feel responsible for providing it.