According to a new study, someone’s prior beliefs about an AI agent, such as a chatbot, have a significant effect on their interactions with that agent and on their perception of its trustworthiness, empathy, and effectiveness.
Researchers at MIT and Arizona State University found that priming users, by telling them that a conversational AI agent for mental health support was empathetic, neutral, or manipulative, influenced their perception of the chatbot and shaped how they communicated with it, even though they were speaking with the exact same chatbot.
Most users who were told the AI agent was caring believed that it was, and they also gave it higher performance ratings than those who believed it was manipulative. At the same time, less than half of the users who were told the agent had manipulative motives thought the chatbot was actually malicious, indicating that people may try to “see the good” in an AI the same way they do in their peers.
The study revealed a feedback loop between users’ mental models, or their perception of an AI agent, and that agent’s responses. The sentiment of conversations between the user and the AI became more positive over time if the user believed the AI was empathetic, while the opposite was true for users who thought it was nefarious.
“From this study, we see that, to some extent, the AI is the AI of the beholder,” says Pat Pataranutaporn, a graduate student in the Fluid Interfaces group at the MIT Media Lab and co-lead author of a paper describing this study. “When we describe to users what an AI agent is, it not only changes their mental model, but it also changes their behavior. And since the AI responds to the user, when the person changes their behavior, that changes the AI, too.”
Pataranutaporn is joined by co-lead author Ruby Liu, an MIT graduate student; Ed Finn, associate professor in the Center for Science and the Imagination at Arizona State University; and senior author Pattie Maes, professor of media technology and head of the Fluid Interfaces group at MIT.
The study, published today in Nature Machine Intelligence, highlights the importance of studying how AI is presented to society, since the media and popular culture strongly influence our mental models. The authors also warn that the same types of priming statements used in this study could be used to mislead people about an AI’s motives or capabilities.
“Many people think that AI is just an engineering problem, but the success of AI is also a human factors problem. The way we talk about AI, even the name we give it in the first place, can have a huge impact on the effectiveness of these systems when we put them in front of people. We have to think more about these issues,” says Maes.
AI friend or foe?
In this study, the researchers sought to determine how much of the empathy and effectiveness that people see in AI is based on their subjective perception and how much is based on the technology itself. They also wanted to explore whether someone’s subjective perception could be manipulated with priming.
“AI is a black box, so we tend to associate it with something else we can understand. We make analogies and metaphors. But what is the correct metaphor we can use to think about AI? The answer is not simple,” says Pataranutaporn.
They designed a study in which humans interacted with a conversational AI mental health companion for about 30 minutes to determine whether they would recommend it to a friend, and then rated the agent and their experience. The researchers recruited 310 participants and randomly split them into three groups, each of which was given a different priming statement about the AI.
One group was told that the agent had no motives, the second group was told that the AI had benevolent intentions and cared about the user’s well-being, and the third group was told that the agent had malicious intentions and would try to deceive users. While it was difficult to settle on only three primes, the researchers chose statements they thought fit the most common perceptions about AI, Liu says.
Half of the participants in each group interacted with an AI agent based on the GPT-3 generative language model, a powerful deep-learning model that can generate human-like text. The other half interacted with an implementation of the ELIZA chatbot, a less sophisticated, rule-based natural language processing program developed at MIT in the 1960s.
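To make the experimental setup concrete, the sketch below (a minimal illustration, not the authors’ code) assigns participants to the three priming conditions crossed with the two chatbot backends. The group labels, balancing scheme, and randomization details here are assumptions for illustration only.

```python
import random

# A minimal sketch of the 3 x 2 between-subjects design described above:
# three priming statements crossed with two chatbot backends.
PRIMES = ["neutral", "caring", "manipulative"]   # priming statements shown before the chat
BACKENDS = ["gpt3", "eliza"]                     # GPT-3-based agent vs. rule-based ELIZA

def assign_conditions(n_participants: int = 310, seed: int = 0) -> list:
    """Randomly assign each participant to one prime and one backend,
    keeping the six cells of the design roughly balanced."""
    rng = random.Random(seed)
    cells = [(p, b) for p in PRIMES for b in BACKENDS]
    # Repeat the six cells until there are enough slots, then shuffle.
    slots = (cells * (n_participants // len(cells) + 1))[:n_participants]
    rng.shuffle(slots)
    return [
        {"participant": i, "prime": prime, "backend": backend}
        for i, (prime, backend) in enumerate(slots)
    ]

if __name__ == "__main__":
    conditions = assign_conditions()
    print(conditions[0])  # e.g. {'participant': 0, 'prime': 'caring', 'backend': 'gpt3'}
```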
Shaping mental models
Post-survey results revealed that simple priming statements can strongly influence a user’s mental model of an AI agent, and that positive primes had a greater effect. Only 44 percent of those given negative primes believed them, while 88 percent of those in the positive group and 79 percent of those in the neutral group believed the AI was empathetic or neutral, respectively.
“With the negative priming statements, rather than priming them to believe something, we were priming them to form their own opinion. If you tell someone to be suspicious of something, they may just become even more suspicious in general,” Liu says.
But the capabilities of the technology do play a role, as the effects were most significant for the more sophisticated GPT-3-based conversational chatbot.
The researchers were surprised to see that users rated the effectiveness of the chatbots differently based on the priming statements. Users in the positive group gave their chatbots higher marks for providing mental health advice, even though all of the agents were identical.
Interestingly, they also saw the sentiment of conversations change depending on how users were primed. People who believed the AI was caring tended to interact with it in a more positive way, which made the agent’s responses more positive as well. Negative priming statements had the opposite effect. This impact on sentiment was amplified as the conversation progressed, Maes adds.
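To illustrate the kind of sentiment trend described above, here is a small hypothetical sketch: score each user message with a toy word-list scorer and summarize the drift across turns with a slope. The word lists and the slope summary are stand-ins, not the metric or analysis used in the paper.

```python
# Toy lexicon as a placeholder for a real sentiment model (assumption, not the study's tooling).
POSITIVE = {"thanks", "helpful", "great", "better", "good"}
NEGATIVE = {"useless", "worse", "bad", "annoying", "wrong"}

def message_sentiment(message: str) -> int:
    """Crude lexicon score: +1 per positive word, -1 per negative word."""
    words = [w.strip(".,!?") for w in message.lower().split()]
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def sentiment_slope(messages: list) -> float:
    """Least-squares slope of sentiment over turn index (positive = improving over time)."""
    scores = [message_sentiment(m) for m in messages]
    n = len(scores)
    if n < 2:
        return 0.0
    mean_x = (n - 1) / 2
    mean_y = sum(scores) / n
    cov = sum((i - mean_x) * (s - mean_y) for i, s in enumerate(scores))
    var = sum((i - mean_x) ** 2 for i in range(n))
    return cov / var

if __name__ == "__main__":
    conversation = ["this feels wrong", "ok, a bit better", "that was helpful, thanks"]
    print(round(sentiment_slope(conversation), 2))  # 1.0 -> sentiment improving across turns
```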
The study results suggest that, because priming statements can have such a strong impact on a user’s mental model, they could be used to make an AI agent seem more capable than it is, which could lead users to place too much trust in an agent and follow incorrect advice.
“Maybe we should prime people to be more careful and to understand that AI agents can hallucinate and are biased. How we talk about AI systems will ultimately have a big effect on how people respond to them,” says Maes.
In the future, the researchers want to see how interactions between AI agents and users would be affected if the agents were designed to counteract some user biases. For example, someone with a very positive perception of AI might be given a chatbot that responds in a neutral or even slightly negative way to keep the conversation more balanced.
They also want to use what they have learned to improve certain applications of AI, such as mental health treatments, where it could be beneficial for the user to believe that an AI is empathetic. Additionally, they want to conduct a longer-term study to see how a user’s mental model of an AI agent changes over time.
This research was funded, in part, by the Media Lab, the Harvard-MIT Program in Health Sciences and Technology, Accenture, and KBTG.