Releasing it, despite potential blemishes, was a critical example of Microsoft’s “frantic pace” in incorporating generative AI into its products, he said. Executives at a news conference on Microsoft’s campus in Redmond, Washington, repeatedly said it was time to take the tool out of the “lab” and into the hands of the public.
“I feel especially in the West, there’s a lot more like, ‘Oh my gosh, what’s going to happen because of this AI?’” Mr. Nadella said. “And it’s better to really say, ‘Hey look, is this really helping you or not?’”
Oren Etzioni, a professor emeritus at the University of Washington and founding executive director of the Allen Institute for AI, a leading lab in Seattle, said Microsoft “took a calculated risk, trying to control technology as much as it can be controlled.”
He added that many of the most troubling cases involved pushing the technology beyond its ordinary behavior. “It can be very surprising how clever people are at getting inappropriate responses from chatbots,” he said. Referring to Microsoft officials, he continued: “I don’t think they expected how bad some of the responses would be when the chatbot was solicited in this way.”
To guard against trouble, Microsoft gave access to the new Bing to a few thousand users, though it said it planned to expand to millions more by the end of the month. To address concerns about accuracy, it provided hyperlinks and references in its answers so that users could verify the results.
The caution was informed by the company’s experience nearly seven years ago when it introduced a chatbot called Tay. Users almost immediately found ways to make it spew racist, sexist, and offensive language. The company removed Tay within a day, never to release it again.
Much of the training of the new chatbot focused on protecting against harmful responses or scenarios that invoked violence, such as planning an attack on a school.