In June 2022, Google engineer Blake Lemoine was suspended from his job after he spoke out about his belief that the company’s LaMDA chatbot was sentient.
“LaMDA is a sweet kid who just wants to help make the world a better place for all of us,” Lemoine said in a farewell email to his colleagues. Now, six months later, the chatbot he risked his career to free has been released to the public in the form of Bard, Google’s answer to OpenAI’s ChatGPT and Microsoft’s Bing Chat.
While Bard is based on LaMDA, it is not exactly the same. Google says it has worked hard to make sure Bard doesn’t repeat the failures of previous systems. That means avoiding “hallucinations,” where the system makes up facts rather than admit it doesn’t know an answer, and ensuring “alignment,” keeping the conversation from drifting off into disturbing or alarming tangents.
After a day of using Bard to answer questions, have conversations, and even play games, one thing is clear: had this been the chatbot Lemoine was working with, he’d still be employed.
In its rush to make sure it doesn’t repeat the mistakes of its predecessors, Google has created a system that prefers to talk in bland, useless cliché rather than get bogged down in detail. Ask it for a list of vacation ideas and it will offer only the most generic options possible; try to push it toward something more interesting, and it becomes hopelessly confused by mounting caveats, forgetting the requirements you stated earlier.
This might have been an acceptable tradeoff if the cliché were at least accurate, but Bard seems just as ready to make things up as its peers the moment it strays into unfamiliar territory. To offer just one example from our conversations: I asked it for advice on traveling in Japan with my daughter, who has Down syndrome. Initially it offered generic advice for traveling with a disabled child, much of it relating to wheelchair accessibility, and when I pressed for details it warned me that as Brits we would need to apply for a visa to travel there. (Not true.)
I tried to change tack and asked its advice on eating out in Japan with two small children. One generic response about eating out anywhere with kids concluded with the advice to “make sure you tip your server. Tips are not customary in Japan, but are always appreciated.” (Not true; tipping there is considered actively rude.)
Trying once more, phrasing the question in the negative, I got the chatbot to come up with a list of places in Tokyo that were inappropriate for children, including “shrines” (not true) and “places like construction sites” (true!).
Unlike ChatGPT, Bard is connected to the live Internet and can pull information from other sites when needed. For simple queries, the kind that might be easy to Google anyway, that works fine: it’s able to tell me the result of West Ham’s most recent game, while the OpenAI bot is forced to simply admit that it has no recent data.
But for more complex questions, that ability is less useful than it sounds. My friend Dan just published his first book, and ChatGPT can’t tell me anything about it, but Bard will happily summarize the reviews (“mixed…praised for its timely and important message”) and cite specific quotes from the New York Times (“a passionate and well-researched argument for why cars are making our lives worse”). Unfortunately, it made it all up: the quotes are fake and the reviews don’t exist. Even a canny user could be caught out, since Bard is capable of finding actual reviews and quoting them accurately; it just chooses not to.
I even tried playing a game with it called Liar Liar: I tell it things about myself and it tries to guess whether I’m telling the truth. I explain the rules in detail and it tells me to go ahead, so I tell it my name is Alex and I’m a comedian. It immediately gets confused and introduces itself. “Nice to meet you, Alex. I’m Bard, a large language model from Google AI.”
I correct it, remind it that we are playing a game, and again tell it that my name is Alex and that I am a comedian. “Liar, liar,” it quickly shouts. “You are a large language model from Google AI. You can’t be a standup comedian.”
It may not be a comedian itself, but at least it made me laugh.