The rise of artificial intelligence (AI) in recent years is closely tied to the improvement of human life, thanks to AI's ability to get jobs done faster and with less effort. Today, there are hardly any fields that do not make use of AI: it is everywhere, from the AI agents in voice assistants such as Amazon Echo and Google Home to the machine learning algorithms that predict the structure of proteins. It therefore seems reasonable to believe that a human being working with an AI system will produce decisions superior to those of either one acting alone. But is that really the case?
Previous studies have shown that this is not always the case. AI systems do not always produce the correct answer, and they often need to be retrained to correct for bias or other issues. A related phenomenon that threatens the effectiveness of human-AI decision-making teams is overreliance on AI: people are influenced by the AI and often accept its wrong decisions without checking whether they are actually correct. This can be quite damaging in critical tasks such as identifying bank fraud or providing medical diagnoses. Researchers have also shown that explainable AI, in which the model explains why it made a certain decision at each step rather than just providing predictions, does not reduce this problem of overreliance. Some researchers have even claimed that cognitive biases or misguided confidence are the main cause of overreliance, attributing it to the inevitable nature of human cognition.
However, these findings do not fully explain why AI explanations fail to reduce overreliance. To explore this further, a team of researchers from Stanford University's Institute for Human-Centered Artificial Intelligence (HAI) argued that people strategically choose whether or not to engage with an AI explanation, and demonstrated that there are situations in which explanations can help people become less overly dependent on AI. According to their paper, people are less likely to blindly rely on AI predictions when the accompanying explanations are easier to understand than the task itself, and when there is a greater benefit to getting the task right (which may take the form of a financial reward). They also showed that overreliance on AI can be greatly reduced when the focus is on engaging people with the explanation rather than merely providing one.
To test their theory, the team formalized this strategic decision into a cost-benefit framework in which the costs and benefits of actively engaging with the task are weighed against the costs and benefits of relying on the AI. They asked online crowd workers to work with an AI to solve a maze challenge at three different levels of complexity. The AI model offered an answer along with either no explanation or one of several degrees of justification, ranging from a single instruction for the next step to step-by-step directions out of the entire maze. The trials showed that costs, such as task difficulty and explanation difficulty, and benefits, such as monetary compensation, substantially influenced overreliance. For complex mazes where the AI provided step-by-step directions, overreliance did not decrease at all, because deciphering the generated explanations was just as challenging as solving the maze alone. Likewise, most of the justifications had no impact on overreliance when the maze was easy to escape on one's own.
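The core intuition behind such a cost-benefit framework can be sketched as a simple utility comparison. The snippet below is a minimal illustration, not the authors' actual formalization: the function name should_engage, the linear utility form, and all numeric values are assumptions made here purely for illustration.

```python
# Minimal sketch of a cost-benefit view of (over)reliance on AI.
# Everything here is illustrative: the linear utility form and the numbers
# are assumptions, not the formalization or data from the Stanford paper.

def should_engage(task_cost: float,
                  explanation_cost: float,
                  reward_for_correct: float,
                  p_correct_self: float,
                  p_correct_ai: float) -> bool:
    """Return True if it pays off to engage (verify or solve) rather than rely."""
    # Engaging means paying the effort cost of either solving the task or
    # working through the explanation, whichever is cheaper.
    effort_cost = min(task_cost, explanation_cost)
    value_engage = reward_for_correct * p_correct_self - effort_cost
    # Relying means accepting the AI's answer at zero effort cost.
    value_rely = reward_for_correct * p_correct_ai
    return value_engage > value_rely


# Hard maze, easy explanation, meaningful reward -> engaging pays off.
print(should_engage(task_cost=5.0, explanation_cost=1.0, reward_for_correct=4.0,
                    p_correct_self=0.95, p_correct_ai=0.6))  # True (2.8 > 2.4)

# Hard maze, equally hard explanation, small reward -> relying wins.
print(should_engage(task_cost=5.0, explanation_cost=5.0, reward_for_correct=1.0,
                    p_correct_self=0.95, p_correct_ai=0.6))  # False (-4.05 < 0.6)
```

Under this toy model, easy-to-read explanations lower the effort cost of engaging, while higher rewards amplify the value of being correct, which mirrors the two levers the study manipulated.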
The team concluded that when the task at hand is challenging and the associated explanations are easy to follow, those explanations can help prevent overreliance. However, when the task and the explanations are both easy or both difficult, the explanations have little effect on overreliance. If the task is easy, explanations do not matter much, because people can perform the task themselves just as easily as they can work through an explanation. If the task is complex, people face two options: complete the task manually or examine the AI-generated explanations, which are often just as complicated. The underlying problem is that few of the explainability tools available to AI researchers produce explanations that require much less effort to verify than doing the task manually. It is therefore not surprising that people tend to trust the AI's judgment without questioning it or engaging with its explanation.
In a further experiment, the researchers added the monetary-benefit facet to the equation. They offered crowd workers the option of working through mazes of varying difficulty independently for a sum of money, or receiving less money in exchange for AI assistance, provided either without explanation or with complicated step-by-step instructions. The findings showed that workers value AI assistance more when the task is challenging, and that they prefer a simple explanation to a complex one. Furthermore, overreliance was found to decrease as the benefit of completing the task correctly (in this case, the financial reward) increased.
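Connecting this back to the hypothetical should_engage sketch above, raising the reward term alone is enough to tip the decision from relying on the AI to engaging with the task, all else being equal (the numbers remain purely illustrative):

```python
# Same hypothetical parameters as the "equally hard explanation" example above,
# but with the reward for a correct answer raised from 1.0 to 20.0.
print(should_engage(task_cost=5.0, explanation_cost=5.0, reward_for_correct=20.0,
                    p_correct_self=0.95, p_correct_ai=0.6))  # True (14.0 > 12.0)
```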
The Stanford researchers hope that their findings will offer some reassurance to academics who have been perplexed by the observation that explanations do not lessen overreliance. They also hope their work will inspire explainable AI researchers by providing a compelling argument for improving and simplifying AI explanations.
Check out the Paper and the Stanford article. All credit for this research goes to the researchers on this project.
Khushboo Gupta is a consulting intern at MarktechPost. She is currently pursuing her B.Tech at the Indian Institute of Technology (IIT), Goa. She is passionate about the fields of machine learning, natural language processing, and web development. She likes to learn more about the technical field by participating in various challenges.