Microsoft released a new version of its Bing search engine last week that, unlike current search engines, includes a chatbot that can answer questions in clear, concise prose.
Since then, people have noted that some of what the Bing chatbot outputs is inaccurate, misleading, and downright weird, raising fears that it has become sentient, or aware of the world around it.
But that is not the case. And to understand why, it’s important to know how chatbots work.
Is the chatbot alive?
No. Let’s be clear: no!
In June, a Google engineer, Blake Lemoine, claimed that similar chatbot technology being tested inside Google was sentient. That is false. Chatbots are not conscious and not intelligent… at least not in the way that humans are intelligent.
So why does it seem alive?
Let’s back up a bit. The Bing chatbot is powered by a type of artificial intelligence called a neural network. That might sound like a computerized brain, but the term is misleading.
A neural network is just a mathematical system that learns skills by analyzing vast amounts of digital data. As a neural network examines, say, thousands of photos of cats, it can learn to recognize a cat.
Most people use neural networks every day. It is the technology that identifies people, pets and other objects in images published on Internet services such as Google Photos. It lets Siri and Alexa, the chatty voice assistants from Apple and Amazon, recognize the words you say. In addition, it is what translates between English and Spanish in services such as Google Translate.
Neural networks are very good at mimicking the way humans use language, and that can confuse us into thinking the technology is more powerful than it actually is.
How do neural networks mimic human language, exactly?
About five years ago, researchers at companies like Google and OpenAI, a San Francisco start-up that recently released the popular ChatGPT chatbot, began building neural networks that learned from vast amounts of digital text, including books, news articles, Wikipedia articles, chat logs and all sorts of other material posted on the internet.
These neural networks are known as large language models. They’re able to use those reams of data to put together what you might call a mathematical map of human language. Using this map, neural networks can do many things, including writing their own tweets, writing speeches, generating computer programs, and, yes, holding a conversation.
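For readers who want a peek under the hood, here is a toy sketch in Python of that idea of a “map” of language. It is not how Bing or ChatGPT actually work (they rely on neural networks with billions of parameters), but it illustrates the same basic move: learn from training text which words tend to follow which, then use those patterns to generate new text. The tiny training text and the generate function are invented purely for illustration.

```python
# Toy sketch: a "map" of which words follow which, learned from a tiny corpus.
# Real large language models learn far richer patterns with neural networks,
# but the principle of predicting the next word from training text is similar.
import random
from collections import defaultdict

training_text = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Build the "map": for every word, record the words that followed it in training.
next_words = defaultdict(list)
for current, following in zip(training_text, training_text[1:]):
    next_words[current].append(following)

def generate(start_word, length=8):
    """Generate text by repeatedly picking a plausible next word from the map."""
    word, output = start_word, [start_word]
    for _ in range(length):
        candidates = next_words.get(word)
        if not candidates:
            break
        word = random.choice(candidates)
        output.append(word)
    return " ".join(output)

print(generate("the"))  # e.g. "the cat sat on the rug . the dog"
```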
These large language models have proved useful. Microsoft offers a tool, Copilot, which is built on a large language model and can suggest the next line of code as computer programmers build software applications, much the way autocomplete tools suggest the next word as you write text messages or emails.
Other companies offer similar technology that can generate marketing materials, emails, and other text. This type of technology is also known as generative artificial intelligence.
Now companies are releasing versions of this that you can chat with?
Exactly. In November, OpenAI released ChatGPT, the first time the general public got to try this technology out. People marveled… and with good reason.
These chatbots don’t exactly chat like a human, but they often seem to. They can also write term papers and poetry and riff on almost any topic thrown their way.
Why do they get things wrong?
Because they learn from the internet. Think about how much misinformation and other rubbish there is on the web.
These systems also do not repeat what is on the internet word for word. Based on what they have learned, they produce new text on their own, something artificial intelligence researchers call a “hallucination.”
This is why chatbots can give you different answers if you ask the same question twice. They will say anything, whether it is grounded in reality or not.
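As a rough illustration, assuming the common approach in which a chatbot samples its next word from a probability distribution rather than always taking the single most likely one, here is a toy Python sketch of why the same question can produce different answers. The words and probabilities below are made up for the example.

```python
# Toy sketch: sampling the next word at random (weighted by probability)
# means repeated runs of the same prompt can produce different text.
import random

# Invented example: possible next words and their (made-up) probabilities.
next_word_probs = {"Paris": 0.6, "France": 0.25, "Lyon": 0.15}

def pick_next_word():
    words = list(next_word_probs)
    weights = list(next_word_probs.values())
    return random.choices(words, weights=weights, k=1)[0]

# "Asking the same question twice" can yield two different continuations.
print(pick_next_word())  # might print "Paris"
print(pick_next_word())  # might print "France"
```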
If chatbots ‘hallucinate’, doesn’t that mean they are sentient?
Artificial intelligence researchers love to use terms that make these systems sound human. However, hallucinating is just a catchy term for “they make things up”.
That sounds creepy and dangerous, but it does not mean the technology is in any way alive or aware of its surroundings. It is just generating text using patterns it found on the internet. In many cases, it mixes and matches those patterns in surprising and disturbing ways. But it is not aware of what it is doing. It cannot reason the way humans do.
Can’t companies stop chatbots from acting weird?
They are trying.
With ChatGPT, OpenAI tried to control the technology’s behavior. As a small group of people privately tested the system, OpenAI asked them to rate its responses. Were they useful? Were they truthful? OpenAI then used those ratings to hone the system and more carefully define what it would and would not do.
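To make that idea concrete, here is a highly simplified Python sketch of using testers’ ratings to steer a system. OpenAI’s actual method, known as reinforcement learning from human feedback, is far more involved; the data, fields, and threshold below are invented for illustration only.

```python
# Toy sketch: keep only the responses human testers rated as helpful and
# truthful, and reuse them as examples of how the system should behave.
# (A simplified stand-in for the much richer process OpenAI actually uses.)
rated_conversations = [
    {"prompt": "What is 2 + 2?", "response": "4", "helpful": 5, "truthful": 5},
    {"prompt": "What is 2 + 2?", "response": "22", "helpful": 1, "truthful": 1},
    {"prompt": "Say hi", "response": "Hello! How can I help?", "helpful": 5, "truthful": 5},
]

# Invented threshold: a response counts as "approved" if both scores are 4+.
approved = [
    ex for ex in rated_conversations
    if ex["helpful"] >= 4 and ex["truthful"] >= 4
]

for ex in approved:
    print(ex["prompt"], "->", ex["response"])
```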
But such techniques are not perfect. Scientists today do not know how to build systems that are completely truthful. They can limit the inaccuracies and the weirdness, but they can’t stop them. One way to rein in the odd behavior is to keep the chats short.
Still, chatbots will continue to say things that are not true. And as other companies begin to deploy these kinds of bots, not all of them will be good at controlling what their bots can and cannot do.
The bottom line: don’t believe everything a chatbot tells you.
Cade Metz is a technology reporter and author of the book Genius Makers: The Mavericks Who Brought AI to Google, Facebook, and the World. He covers artificial intelligence, autonomous cars, robotics, virtual reality, and other emerging areas. @cademetz