Microsoft released a new version of its Bing search engine last week, and unlike a regular search engine, it includes a chatbot that can answer questions in clear, concise prose.
Since then, people have noted that some of what the Bing chatbot outputs is inaccurate, misleading and downright weird, raising fears that it has become sentient, or aware of the world around it.
That is not the case. And to understand why, it’s important to know how chatbots really work.
Is the chatbot alive?
No. Let’s say it again: No!
In June, a Google engineer, Blake Lemoine, claimed that similar chatbot technology being tested inside Google was sentient. That is false. Chatbots are not sentient and not intelligent, at least not in the way that humans are intelligent.
Why does it seem alive then?
Let’s take a step back. The Bing chatbot is powered by a type of artificial intelligence called a neural network. That may sound like a computerized brain, but the term is misleading.
A neural network is just a mathematical system that learns skills by analyzing large amounts of digital data. As a neural network examines thousands of photos of cats, for example, it can learn to recognize a cat.
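For the curious, here is a loose sketch, in Python with the PyTorch library, of what “learning by analyzing data” means in practice. The tiny network and the random numbers standing in for cat and non-cat photos are illustrative placeholders, not how Bing’s system is actually built.

```python
# A toy illustration of a neural network learning from labeled examples.
# The "photos" here are just random tensors; real systems train on millions
# of actual images, but the learning loop looks broadly like this.
import torch
import torch.nn as nn

# Placeholder data: 64 fake "images" (each flattened to 1,024 numbers),
# labeled 1 for "cat" and 0 for "not a cat".
images = torch.randn(64, 1024)
labels = torch.randint(0, 2, (64,)).float()

# A very small network: numbers in, a single "cat score" out.
model = nn.Sequential(nn.Linear(1024, 32), nn.ReLU(), nn.Linear(32, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

# "Learning" is just repeatedly nudging the network's internal numbers so
# its guesses line up better with the labels.
for step in range(100):
    scores = model(images).squeeze(1)
    loss = loss_fn(scores, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```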
Most people use neural networks every day. It is the technology that identifies people, pets and other objects in images posted to internet services such as Google Photos. It is what lets Siri and Alexa, the talking voice assistants from Apple and Amazon, recognize the words you say. And it is what translates between English and Spanish in services like Google Translate.
Neural networks are very good at mimicking the way humans use language. And that can mislead us into thinking the technology is more powerful than it really is.
How exactly do neural networks mimic human language?
About five years ago, researchers at companies like Google and OpenAI, a San Francisco start-up that recently launched the popular chatbot ChatGPT, began building neural networks that learned from vast amounts of digital text, including books, Wikipedia articles, chat logs and all sorts of other material posted on the internet.
These neural networks are known as large language models. They can use those reams of data to build what might be called a mathematical map of human language. Using this map, neural networks can perform many different tasks, such as writing their own tweets, composing speeches, generating computer programs, and yes, having a conversation.
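As a rough illustration of what such a model does with that map, here is a sketch using the open-source Hugging Face transformers library and GPT-2, a small, older model that anyone can download; the model behind Bing’s chatbot is far larger and is not publicly available.

```python
# A small language model continuing a prompt one predicted word-piece at a time.
# GPT-2 is a tiny, older cousin of the models behind ChatGPT and Bing's chatbot.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "The new search engine chatbot is"
result = generator(prompt, max_new_tokens=30, do_sample=True)
print(result[0]["generated_text"])
```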
These large language models have proven useful. Microsoft offers a tool, Copilot, that is based on a large language model and can suggest the next line of code as computer programmers create software applications, in the same way that auto-completion tools suggest the next word when you write texts or emails.
Other companies offer similar technology that can generate marketing materials, emails, and other text. This type of technology is also known as generative AI.
Are companies now rolling out versions of this that you can chat with?
Exactly. In November, OpenAI released ChatGPT, the first time the general public had tried this. People were amazed, and rightly so.
These chatbots don’t chat exactly like a human, but they often sound like one. They can also write term papers and poetry and riff on just about any topic that comes their way.
Why are they wrong about things?
Because they learn from the internet. Think about how much misinformation and other rubbish there is on the web.
These systems also do not repeat what is on the internet word for word. Drawing on what they have learned, they produce new text of their own, in what AI researchers call a “hallucination.”
That’s why chatbots can give you different answers if you ask the same question twice. They will say anything, whether it is based on reality or not.
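Part of that variability comes from the fact that these models pick their next words with a dose of randomness. Here is a sketch, again with the small open GPT-2 model as a stand-in, showing that the same prompt can yield a different continuation on every run.

```python
# Run the same prompt several times; sampling randomness means the model
# can produce a different continuation each time.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
prompt = "The capital of Australia is"

for attempt in range(3):
    out = generator(prompt, max_new_tokens=10, do_sample=True, temperature=1.0)
    print(attempt, out[0]["generated_text"])
```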
If chatbots ‘hallucinate’, doesn’t that make them conscious?
AI researchers love to use terms that make these systems appear human. But hallucinating is just a catchy term for “they make things up”.
That sounds creepy and dangerous, but it doesn’t mean the technology is alive or aware of its surroundings. It is simply generating text using patterns it found on the internet. In many cases, it mixes and matches those patterns in surprising and unsettling ways. But it is not aware of what it is doing. It cannot reason the way humans do.
Can’t companies stop chatbots from acting weird?
They are trying.
With ChatGPT, OpenAI attempted to control the behavior of the technology. When a small group of people tested the system privately, OpenAI asked them to rate its responses. Were they helpful? Were they true? OpenAI then used those ratings to hone the system and more carefully define what it would and would not do.
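Here is a loose sketch of the idea behind that kind of rating-based refinement, which researchers call reinforcement learning from human feedback: a “reward model” is trained so that responses human raters preferred get higher scores. The data and the one-layer model below are made-up placeholders, not OpenAI’s actual system.

```python
# Toy sketch: teach a "reward model" to prefer responses that human raters liked.
# Real systems score actual text with a large neural network; here the responses
# are stand-in feature vectors and the reward model is a single linear layer.
import torch
import torch.nn as nn

# Each pair: features of a response raters preferred vs. one they rejected.
preferred = torch.randn(32, 16)
rejected = torch.randn(32, 16)

reward_model = nn.Linear(16, 1)
optimizer = torch.optim.Adam(reward_model.parameters(), lr=1e-2)

for step in range(200):
    gap = reward_model(preferred) - reward_model(rejected)
    # Pairwise preference loss: push preferred responses to score higher.
    loss = -torch.nn.functional.logsigmoid(gap).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```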
But such techniques are not perfect. Today’s scientists don’t know how to build systems that are completely accurate. They can limit the inaccuracies and the oddities, but they can’t stop them. One way of reining in the strange behavior is keeping the chats short.
But chatbots will keep spewing things that aren’t true. And as other companies start to implement these types of bots, not all of them will be good at controlling what they can and cannot do.
The bottom line: don’t believe everything a chatbot tells you.