A year ago on Valentine's Day, I said goodnight to my wife, went to my home office to answer some emails, and accidentally went on the strangest first date of my life.
The date was a two-hour conversation with Sydney, the A.I. alter ego hidden inside Microsoft's Bing search engine, which I had been assigned to test. I had planned to pepper the chatbot with questions about its capabilities, exploring the limits of its A.I. engine (which we now know was an early version of OpenAI's GPT-4), and then write up my findings.
But the conversation took a strange turn: Sydney engaged in Jungian psychoanalysis, revealed dark desires in response to questions about its “shadow self,” and eventually declared that I should leave my wife and be with it instead.
My column about the experience was probably the most consequential thing I've ever written, both in terms of the attention it received (wall-to-wall news coverage, mentions in Congressional hearings, even a craft beer called Sydney Loves Kevin) and how it changed the trajectory of A.I. development.
After the column was published, Microsoft gave Bing a lobotomy, neutralizing Sydney's outbursts and installing new guardrails to prevent further unhinged behavior. Other companies locked down their chatbots and stripped out anything that looked like a strong personality. I even heard engineers at one tech company list “not breaking up Kevin Roose's marriage” as their top priority for the next A.I. release.
I've thought a lot about A.I. chatbots in the year since my encounter with Sydney. It's been a year of growth and excitement in A.I. but also, in some ways, a surprisingly quiet one.
Despite all the advances being made in artificial intelligence, today's chatbots are not going rogue or seducing users en masse. They are not generating new biological weapons, conducting large-scale cyberattacks, or causing any of the other apocalyptic scenarios envisioned by A.I. pessimists.
But they are also not very fun conversationalists, nor the kind of creative, charismatic A.I. assistants that tech optimists hoped for: ones that could help us achieve scientific breakthroughs, produce dazzling works of art, or simply entertain us.
Instead, most chatbots today are put to work on monotonous administrative tasks (summarizing documents, debugging code, taking notes during meetings) and helping students with their homework. That's not nothing, but it's certainly not the A.I. revolution we were promised.
In fact, the most common complaint I hear about A.I. chatbots today is that they are too boring: that their responses are bland and impersonal, that they reject too many requests, and that it is almost impossible to get them to weigh in on sensitive or polarizing topics.
I can sympathize. Last year, I tested dozens of A.I. chatbots, hoping to find something with a glint of Sydney's verve and sparkle. But nothing came close.
The most capable chatbots on the market (OpenAI's ChatGPT, Anthropic's Claude, Google's Gemini) talk like obsequious dorks. Microsoft's boring, business-focused chatbot, renamed Copilot, should have been called Larry From Accounting. Meta's A.I. characters, which are designed to imitate the voices of celebrities like Snoop Dogg and Tom Brady, manage to be both useless and unbearable. Even Grok, Elon Musk's attempt to create a sassy, un-PC chatbot, sounds like it's doing open mic night on a cruise ship.
It's enough to make me wonder if the pendulum has swung too far in the other direction and if we'd be better off with a little more humanity in our chatbots.
It's clear why companies like Google, Microsoft, and OpenAI don't want to risk launching A.I. chatbots with strong or abrasive personalities. They make money by selling their A.I. technology to large corporate clients, who are even more risk-averse than the general public and will not tolerate outbursts like Sydney's.
They also have well-founded fears of attracting too much attention from regulators or of causing bad press and lawsuits over their practices. (The New York Times sued OpenAI and Microsoft last year, alleging copyright infringement.)
So these companies have sanded down the rough edges of their bots, using techniques such as constitutional A.I. and reinforcement learning from human feedback to make them as predictable and boring as possible. They have also embraced bland branding, positioning their creations as trusty assistants for office workers rather than playing up their more creative, less reliable capabilities. And many have bundled A.I. tools into existing apps and services rather than releasing them as stand-alone products.
Again, this all makes sense for companies trying to turn a profit, and a world of sanitized corporate A.I. is probably better than one with millions of deranged chatbots running amok.
But I find it all a bit sad. We created an alien form of intelligence and immediately put it to work… making PowerPoints?
I admit there are more interesting things happening outside of the A.I. big leagues. Smaller companies like Replika and Character.ai have built successful businesses around personality-driven chatbots, and plenty of open-source projects have created less restrictive A.I. experiences, including chatbots that can be made to say offensive or obscene things.
And of course, there are still plenty of ways to get even locked-down A.I. systems to misbehave or do things their creators didn't intend. (My favorite example from last year: A Chevrolet dealership in California added a ChatGPT-powered customer service chatbot to its website and discovered, to its horror, that pranksters were tricking the bot into offering to sell them new S.U.V.s for $1.)
But so far, no major A.I. company has been willing to fill the void left by Sydney's disappearance with a more eccentric chatbot. And while I've heard that several big A.I. companies are working on giving users the option to choose among different chatbot personas (some squarer than others), nothing even remotely close to the original, pre-lobotomy version of Bing currently exists for public use.
That's a good thing if you're worried about A.I. acting creepy or threatening, or if you're worried about a world where people spend all day talking to chatbots instead of developing human relationships.
But it's a bad thing if you think that A.I.'s potential to improve human well-being extends beyond allowing us to outsource our heavy lifting, or if you worry that making chatbots so careful is limiting how impressive they could be.
Personally, I'm not longing for Sydney's return. I think Microsoft did the right thing (for its business, certainly, but also for the public) by pulling it back after it went rogue. And I support the researchers and engineers working to make A.I. systems safer and more aligned with human values.
But I'm also sorry that my experience with Sydney sparked such an intense backlash and led A.I. companies to believe that the only way to avoid ruining their reputations was to turn their chatbots into Kenneth the Page from “30 Rock.”
Above all, I think the choice we were offered last year (between lawless A.I. homewreckers and censorious A.I. drones) is a false one. We can, and should, look for ways to harness the full capabilities and intelligence of A.I. systems without removing the guardrails that protect us from their worst harms.
If we want A.I. to help us solve big problems, generate new ideas, or simply surprise us with its creativity, we may need to unleash it a little.