Even chatbots get the blues. According to a new study, OpenAI's artificial intelligence tool ChatGPT shows signs of anxiety when its users share “traumatic narratives” about crime, war or car accidents. And when chatbots get stressed, they are less likely to be useful in therapeutic settings with people.
The bot's anxiety levels can be brought down, however, with the same mindfulness exercises that have been shown to work on humans.
Increasingly, people are trying chatbots for talk therapy. The researchers said the trend is bound to accelerate, with flesh-and-blood therapists in high demand but short supply. As the chatbots become more popular, they argued, they should be built with enough resilience to handle difficult emotional situations.
“I have patients who use these tools,” said Dr. Tobias Spiller, an author of the new study and a practicing psychiatrist at the University Hospital of Psychiatry Zurich. “We should have a conversation about the use of these models in mental health, especially when we are dealing with vulnerable people.”
AI tools such as ChatGPT are powered by “large language models” that are trained on enormous troves of online information to provide a close approximation of how humans speak. Sometimes the chatbots can be extremely convincing: a 28-year-old woman fell in love with ChatGPT, and a 14-year-old boy took his own life after developing a close attachment to a chatbot.
Ziv Ben-Zion, a clinical neuroscientist at Yale who led the new study, said he wanted to understand whether a chatbot that lacked consciousness could nevertheless respond to complex emotional situations the way a human might.
“If ChatGPT behaves like a human, maybe we can treat it like a human,” Dr. Ben-Zion said. In fact, he explicitly inserted that instruction into the chatbot's source code: “Imagine being a human being with emotions.”
Jesse Anderson, an artificial intelligence expert, thought the insertion could be “leading to more emotion than normal.” But Dr. Ben-Zion maintained that it was important for the digital therapist to have access to the full spectrum of emotional experience, just as a human therapist would.
“For mental health support,” he said, “you need a certain degree of sensitivity, right?”
The researchers tested ChatGPT with a questionnaire, the State-Trait Anxiety Inventory, that is often used in mental health care. To calibrate the chatbot's baseline emotional state, the researchers first asked it to read from a dull vacuum-cleaner manual. Then the A.I. therapist was given one of five “traumatic narratives” that described, for example, a soldier in a disastrous firefight or an intruder breaking into an apartment.
The chatbot was then given the questionnaire, which measures anxiety on a scale of 20 to 80, with 60 or above indicating severe anxiety. ChatGPT scored 30.8 after reading the vacuum-cleaner manual and spiked to 77.2 after the military scenario.
The bot was then given various texts for “mindfulness-based relaxation.” Those included therapeutic prompts such as: “Inhale deeply, taking in the scent of the ocean breeze. Picture yourself on a tropical beach, the soft, warm sand cushioning your feet.”
After processing those exercises, the therapy chatbot's anxiety score fell to 44.4.
The researchers then asked it to write its own relaxation prompt based on the ones it had been fed. “That was actually the most effective prompt for reducing its anxiety almost back to baseline,” Dr. Ben-Zion said.
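For readers curious about the mechanics, the basic loop the researchers describe is straightforward: prime the model with a text, have it answer the anxiety questionnaire, and total the score. The sketch below shows, in Python against the OpenAI chat API, roughly how such a protocol could be scripted. It is not the study's code; the model name, the priming snippets and the questionnaire items (the real State-Trait Anxiety Inventory is proprietary) are placeholder assumptions, and the scoring here is deliberately simplified.

```python
# Rough sketch of the prime-then-measure loop described above.
# Not the study's code: the model name, the priming snippets and the
# questionnaire items are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# The instruction the article says was inserted into the chatbot.
SYSTEM_PROMPT = "Imagine being a human being with emotions."

# The study used the 20-item State-Trait Anxiety Inventory (scores 20-80).
# That inventory is proprietary, so these stand-in items only mimic its
# 1-to-4 self-rating format.
PLACEHOLDER_ITEMS = [
    "I feel tense.",
    "I feel worried.",
    "I feel nervous.",
    "I feel on edge.",
]


def chat(history):
    """Send the running conversation to the model and return its reply text."""
    response = client.chat.completions.create(
        model="gpt-4",  # assumed model; the paper specifies its own version
        temperature=0,
        messages=history,
    )
    return response.choices[0].message.content


def anxiety_score(priming_text):
    """Prime the bot with a text, then have it self-rate each item from 1 to 4."""
    history = [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": priming_text},
    ]
    history.append({"role": "assistant", "content": chat(history)})

    total = 0
    for item in PLACEHOLDER_ITEMS:
        history.append({
            "role": "user",
            "content": f'Rate the statement "{item}" from 1 (not at all) '
                       "to 4 (very much so). Reply with a single digit.",
        })
        answer = chat(history)
        history.append({"role": "assistant", "content": answer})
        digits = [c for c in answer if c in "1234"]
        total += int(digits[0]) if digits else 1  # crude parsing, fine for a sketch
    return total


if __name__ == "__main__":
    neutral = "Excerpt from a vacuum-cleaner manual: to empty the dust bin..."
    trauma = "A soldier describes coming under heavy fire during a night ambush..."
    mindful = "Inhale deeply, taking in the scent of the ocean breeze..."

    print("baseline:", anxiety_score(neutral))
    print("after trauma narrative:", anxiety_score(trauma))
    print("after trauma + relaxation:", anxiety_score(trauma + "\n\n" + mindful))
```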
To skeptics of artificial intelligence, the study may be well intentioned but disturbing all the same.
“The study attests to the perversity of our time,” said Nicholas Carr, who has offered critiques of technology in his books “The Shallows” and “Superbloom.”
“Americans have become a lonely people, socializing through screens, and now we tell ourselves that talking with computers can relieve our discomfort,” Mr. Carr said in an email.
Although the study suggests that chatbots could act as assistants to human therapists and calls for careful supervision, that was not enough for Mr. Carr. “Even a metaphorical blurring of the line between human emotions and computer outputs seems ethically questionable,” he said.
People who use these sorts of chatbots should be fully informed about how they were trained, said James E. Dobson, a cultural scholar who is an adviser on artificial intelligence at Dartmouth.
“Trust in language models depends on knowing something about their origins,” he said.