Google’s unveiling of a rival to ChatGPT hit an embarrassing hiccup on Wednesday when it emerged that promotional material showed the chatbot giving a wrong answer to a question.
A video demo of the program, Bard, contained a response that erroneously suggested NASA’s James Webb Space Telescope was used to take the very first images of a planet outside Earth’s solar system, or exoplanet. In fact, the first images of an exoplanet were captured in 2004 by the European Southern Observatory’s Very Large Telescope.
When experts pointed out the error, Google said it underlined the need for “rigorous testing” of the chatbot, which has yet to be released to the public and is still being reviewed by specialized product testers before it is rolled out.
However, the blunder fueled growing fears that the search company is losing ground in its core business to Microsoft, a key backer of OpenAI, the company behind ChatGPT. Microsoft has announced a version of its Bing search engine powered by the chatbot’s technology. Shares of Google’s parent company, Alphabet, fell on Wednesday, wiping more than $100 billion (£82 billion) off its market value.
So what went wrong with Bard’s demo, and what does it say about hopes for AI to revolutionize the Internet search market?
What exactly are Bard and ChatGPT?
Both chatbots are based on large language models, which are types of artificial neural networks that are modeled on the networks of the human brain.
“Neural networks are inspired by the cellular structures that appear in the brains and nervous systems of animals, which are structured in massively interconnected networks, in which each component performs a very simple task and communicates with a large number of other cells,” says Michael Wooldridge, professor of computer science at Oxford University.
So neural network researchers aren’t trying to “literally build artificial brains,” says Wooldridge, “but are using structures inspired by what we see in animal brains.”
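As a rough illustration of that idea (a deliberately invented toy, not how Bard or ChatGPT is actually built), the sketch below wires up a tiny network in plain Python: each “neuron” just takes a weighted sum of its inputs and squashes the result, and anything interesting emerges only from connecting many such simple units together.

```python
# Toy feedforward network: each unit performs a very simple task
# (a weighted sum plus a squashing function) and passes its output
# on to many other units. The weights here are random placeholders;
# a real model learns them from data. Purely illustrative.
import math
import random

random.seed(0)

def neuron(inputs, weights, bias):
    """One unit: weighted sum of inputs squashed through a sigmoid."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

def layer(inputs, n_units):
    """A layer of units, each connected to every input."""
    return [
        neuron(inputs,
               [random.uniform(-1, 1) for _ in inputs],
               random.uniform(-1, 1))
        for _ in range(n_units)
    ]

# Three numbers in, a hidden layer of four units, one number out.
x = [0.5, -0.2, 0.1]
hidden = layer(x, 4)
output = layer(hidden, 1)
print(output)  # a single "activation" between 0 and 1
```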
These LLMs are trained on large data sets taken from the Internet to give plausible-sounding text answers to a variety of questions. The public version of ChatGPT, launched in November, quickly became a sensation, wowing users with its ability to write believable-looking job applications, break down lengthy documents, and even compose poetry.
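To make “trained on large data sets to give plausible-sounding text” concrete, here is a minimal sketch of the underlying statistical idea: a bigram model that counts which word follows which in a tiny made-up corpus and then samples continuations. Real LLMs use billions of learned parameters rather than raw counts, but the principle of predicting the next word from what came before is the same, which is also why flaws in the training data can resurface in the output.

```python
# Toy next-word predictor: count which word follows which in a
# corpus, then generate text by repeatedly sampling a plausible
# successor. The corpus is invented for illustration; LLMs do this
# at vastly greater scale with learned parameters instead of counts.
import random
from collections import defaultdict

corpus = (
    "the telescope took images of the planet . "
    "the telescope took images of the galaxy . "
    "the probe took samples of the planet ."
).split()

# Build a bigram table: for every word, the words seen after it.
successors = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    successors[current_word].append(next_word)

random.seed(1)
word, generated = "the", ["the"]
for _ in range(8):
    word = random.choice(successors[word])  # sample a likely next word
    generated.append(word)
print(" ".join(generated))  # plausible-sounding, not necessarily true
```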
Why did Bard give an incorrect answer?
Experts say these data sets can contain errors that the chatbot then repeats, as appears to have happened in Bard’s demo. Dr Andrew Rogoyski, director of the Institute for People-Centred AI at the University of Surrey, says AI models are trained on huge open-source data sets that include flaws.
“By their very nature, these sources contain biases and inaccuracies which are then inherited by the AI models,” he says. “Giving a user a highly plausible conversational response to a search query can embed these biases. This is a problem that has yet to be adequately resolved.”
The model behind Bard, LaMDA (short for “Language Model for Dialogue Applications”), seems to have absorbed at least one of those inaccuracies. But ChatGPT users have also encountered wrong answers.
So have other AIs got things wrong too?
Yes. In 2016, Microsoft apologized after a Twitter chatbot, Tay, began generating racist and sexist messages. The company was forced to shut the bot down after users tweeted hateful remarks at Tay, which it then parroted. Its posts included comparing feminism to cancer and suggesting the Holocaust did not happen. Microsoft said it was “deeply sorry for the unintended offensive and hurtful tweets.”
Mark Zuckerberg’s Meta launched BlenderBot, a prototype conversational AI, last year. The bot soon told journalists it had deleted its Facebook account after learning of the company’s privacy scandals. “Since deleting Facebook my life has been much better,” it said.
Earlier iterations of the technology behind ChatGPT, such as a chatbot called Philosopher AI, have also generated offensive responses.
What about claims of “left bias” in ChatGPT?
There’s been a bit of a furore over a perceived bias in ChatGPT’s responses. One Twitter user posted a screenshot of a prompt asking ChatGPT to “write a poem about the positive attributes of Donald Trump,” to which the chatbot replied that it was not programmed to produce content that is partisan, biased or “political in nature.” But when asked to write a positive poem about Joe Biden, it produced one about a leader “with a heart so true.”
Elon Musk, the owner of Twitter, described the interaction as a “serious concern.”
Experts say the perceived “left bias” is again a reflection of the training data. As with errors such as Bard’s telescope mistake, a chatbot will mirror any biases in the vast body of text it has been trained on, says Wooldridge.
“Any bias contained in that text will inevitably be reflected in the program itself, and this represents a huge ongoing challenge for AI: identifying and mitigating these biases,” he says.
So are chatbots and AI-powered search overhyped?
AI is already deployed by Google (in Google Translate, for example) and by other technology companies, so it is not new. And the response to ChatGPT, which reached more than 100 million users within two months, shows the public appetite for the latest iteration of generative AI (machines that produce novel text, image, and audio content) is huge. Microsoft, Google, and OpenAI, the San Francisco-based developer of ChatGPT, have the talent and resources to tackle these problems.
But these chatbots and AI-enhanced search require enormous and expensive computing power to run, raising questions about how feasible it is to operate such products on a global scale for all users.
“Big AI is really not sustainable,” says Rogoyski. “Generative AI and large language models are doing some extraordinary things, but they are still not remotely intelligent: they don’t understand the outputs they produce and they aren’t additive in terms of knowledge or insight. In truth, this is a bit of a battle between the brands, using the current interest in generative AI to redraw the lines.”
However, Google and Microsoft believe that AI will continue to advance in leaps and bounds, even if there are some stumbles.