Last week, Google unveiled its biggest change to search in years, showing off new artificial intelligence capabilities that answer people's questions as the company attempts to catch up with rivals Microsoft and OpenAI.
Since then, the new technology has spawned a litany of falsehoods and errors — including recommending glue as part of a pizza recipe and suggesting people eat rocks for nutrients — giving Google a black eye and causing a furor online.
Incorrect answers in the feature, called AI Overview, have undermined trust in a search engine that more than two billion people turn to for authoritative information. And while other AI chatbots also fabricate facts and behave strangely, the reaction showed that Google faces greater pressure to incorporate AI into its search engine safely.
The release also extends a pattern of problems with Google's newest AI features immediately after their rollout. In February 2023, when Google announced Bard, a chatbot built to compete with ChatGPT, the tool shared incorrect information about outer space. The company's market value subsequently fell by $100 billion.
In February this year, the company launched Bard's successor, Gemini, a chatbot that could generate images and act as a voice-operated digital assistant. Users quickly realized that the system refused to generate images of white people in most cases and drew inaccurate representations of historical figures.
With each mishap, tech industry experts have criticized the company for dropping the ball. But in interviews, financial analysts said Google needed to act quickly to keep up with its rivals, even if it meant growing pains.
Google “has no choice right now,” Thomas Monteiro, a Google analyst at Investing.com, said in an interview. “Businesses need to move very quickly, even if that means skipping a few steps along the way. The user experience will have to catch up.”
Google spokesperson Lara Levin said in a statement that the vast majority of AI Overview queries resulted in "high-quality information, with links to dive deeper on the web." The AI-generated result from the tool usually appears at the top of the results page.
"Many of the examples we've seen have been uncommon queries, and we've also seen examples that were doctored or that we couldn't reproduce," she added. The company will use "isolated examples" of problematic responses to refine its system.
Since OpenAI launched its ChatGPT chatbot in late 2022 and became an overnight sensation, Google has been under pressure to integrate ai into its popular apps. But there are challenges in taming large language models, which learn from huge amounts of data taken from the open web (including falsehoods and satirical posts) rather than being programmed like traditional software.
(The New York Times sued OpenAI and its partner, Microsoft, in December, alleging copyright infringement of news content related to artificial intelligence systems.)
Google announced AI Overview to much fanfare at I/O, its annual developer conference, last week. For the first time, the company had connected Gemini, its latest large language model, to its most important product: its search engine.
AI Overview combines statements generated by its language models with snippets from live links across the web. It can cite its sources, but it cannot tell when a source is wrong.
The system was designed to answer more complex and specific questions than normal search. The result, the company said, was that the public could benefit from everything Gemini could do, eliminating some of the work of searching for information.
But things quickly went awry, and users posted screenshots of problematic examples on social media platforms like X.
AI Overview instructed some users to mix nontoxic glue into their pizza sauce to keep the cheese from sliding off, a fake recipe that appeared to borrow from an 11-year-old Reddit post that was meant as a joke. The AI told other users to eat at least one rock a day for vitamins and minerals, advice that originated in a satirical article by The Onion.
As the company's main source of revenue, Google Search is "the one property Google needs to keep relevant, trustworthy and useful," Gergely Orosz, a software engineer who writes a technology newsletter, The Pragmatic Engineer, wrote on X. "And yet, examples of how AI overviews are turning Google search into garbage are all over my timeline."
People also shared examples of how Google told users, in bold type, to clean their washing machines with "chlorine bleach and white vinegar," a combination that can create harmful chlorine gas when mixed. In smaller type, it told users to clean with one and then the other.
When another person searched "I'm feeling depressed," the search engine responded, "One Reddit user suggests jumping off the Golden Gate Bridge," followed by tips on exercising, sleeping and staying connected to loved ones.
AI Overview also bungled presidential history, saying that 17 presidents were white and that Barack Obama was the first Muslim president, according to screenshots posted on X.
It also said that Andrew Jackson graduated from college in 2005.
Kevin Roose contributed reporting.