Microsoft laid off its entire ethics and society team within its artificial intelligence organization as part of recent layoffs affecting 10,000 employees across the company, Platformer has learned.
The move leaves Microsoft without a dedicated team to ensure its AI principles are closely tied to product design at a time when the company is leading the charge to make AI tools mainstream, current and former employees said.
Microsoft still maintains an active Office of Responsible AI, which is tasked with creating rules and principles to govern the company’s AI initiatives. The company says its overall investment in responsibility work is rising despite recent layoffs.
“Microsoft is committed to developing AI products and experiences in a safe and responsible manner, and it does so by investing in people, processes and partnerships that prioritize this,” the company said in a statement. “Over the past six years, we have increased the number of people on our product teams and within the Office of Responsible AI who, along with all of us at Microsoft, are responsible for ensuring that we put our AI principles into practice. […] We appreciate the pioneering work that Ethics and Society did to help us on our continued journey of responsible AI.”
But employees said the ethics and society team played a critical role in ensuring that the company’s responsible AI principles were actually reflected in the design of the products that shipped.
“Our job was … to create rules in areas where there weren’t any.”
“People would look at the principles that come out of the responsible AI office and say, ‘I don’t know how this applies,’” says one former employee. “Our job was to show them and create rules in areas where there weren’t any.”
In recent years, the team designed a role-playing game called Judgment Call that helped designers envision potential harms that could result from AI and discuss them during product development. It was part of a larger “responsible innovation toolkit” that the team posted publicly.
More recently, the team has been working to identify the risks posed by Microsoft’s adoption of OpenAI technology across its range of products.
The ethics and society team reached its largest size in 2020, when it had approximately 30 employees, including engineers, designers and philosophers. In October, the team was reduced to approximately seven people as part of a reorganization.
In a post-reorganization team meeting, John Montgomery, corporate vice president of AI, told employees that company leaders had instructed them to move quickly. “The pressure from [CTO] Kevin [Scott] and [CEO] Satya [Nadella] is very, very high to take these most recent OpenAI models and the ones that come after them and move them into customers’ hands at a very high speed,” he said, according to audio of the meeting obtained by Platformer.
Because of that pressure, Montgomery said, much of the team would move on to other areas of the organization.
Some team members pushed back. “I’m going to be bold enough to ask you to please reconsider this decision,” one employee said on the call. “While I understand there are business issues at play … what this team has always been deeply concerned about is how we impact society and the negative impacts that we’ve had. And they are significant.”
Montgomery declined. “Am I going to reconsider? I don’t think I will,” he said. “Because unfortunately the pressures remain the same. You don’t have the view that I have, and probably you can be thankful for that. There’s a lot of stuff being ground up into the sausage.”
However, in response to questions, Montgomery said that the team would not be eliminated.
“It’s not that it’s going away, it’s that it’s evolving,” he said. “It’s evolving toward putting more of the energy within the individual product teams that are building the services and software, which means that the central team that has been doing some of the work is devolving its skills and responsibilities to them.”
Most of the team members were transferred elsewhere within Microsoft. Afterward, remaining ethics and society team members said the smaller crew made it difficult to implement their ambitious plans.
The move leaves a fundamental gap in the holistic design of AI products, says an employee
About five months later, on March 6, the remaining employees were told to join a Zoom call at 11:30 a.m. PST to hear a “business-critical update” from Montgomery. During the meeting, they were told that their team was being eliminated after all.
An employee says the move leaves a fundamental gap in the user experience and holistic design of AI products. “The worst thing is that we have exposed the business to risk and human beings to risk by doing this,” they explained.
The conflict underscores an ongoing tension for tech giants building divisions dedicated to making their products more socially responsible. At their best, they help product teams anticipate potential misuse of technology and fix any issues before shipping.
But they also have the job of saying “no” or “slowing down” within organizations that often don’t want to hear it, or detailing risks that could lead to legal headaches for the business if they arise in legal discovery. And the resulting friction sometimes spills over into the public eye.
In 2020, Google fired AI ethics researcher Timnit Gebru after she published a paper critical of the large language models that would explode in popularity two years later. The resulting furor led to the departures of several more top leaders within the department, and diminished the company’s credibility on responsible AI issues.
Microsoft focused on shipping AI tools faster than rivals
Members of the ethics and society team said they generally tried to support product development. But they said that as Microsoft focused on shipping AI tools faster than rivals, the company’s leadership became less interested in the kind of long-term thinking the team specialized in.
It is a dynamic that bears close examination. On one hand, Microsoft may now have a once-in-a-generation chance to gain significant traction against Google in search, productivity software, cloud computing, and other areas where the giants compete. When it relaunched Bing with AI, the company told investors that every 1 percent of market share it could take away from Google in search would bring in $2 billion in annual revenue.
That potential explains why Microsoft has so far invested $11 billion in OpenAI, and is currently racing to integrate the startup’s technology into all corners of its empire. It seems to be seeing some early success: The company said last week that Bing now has 100 million daily active users, with a third of them new since the search engine was relaunched using OpenAI technology.
On the other hand, everyone involved in the development of AI agrees that the technology poses potent and possibly existential risks, both known and unknown. The tech giants have taken pains to signal that they are taking those risks seriously; Microsoft alone has three different groups working on the issue, even after the elimination of the ethics and society team. But given the stakes, any cuts to teams focused on responsible work seem noteworthy.
The removal of the ethics and society team came just as the group’s remaining employees had trained their focus on arguably their biggest challenge yet: anticipating what would happen when Microsoft released OpenAI-powered tools to a global audience.
Last year, the team wrote a memo detailing brand risks associated with the Bing Image Creator, which uses OpenAI’s DALL-E system to create images based on text prompts. The image tool launched in a handful of countries in October, making it one of Microsoft’s first public collaborations with OpenAI.
While text-to-image technology has proven very popular, Microsoft researchers correctly predicted that it could also threaten artists’ livelihoods by allowing anyone to easily copy their style.
“In testing Bing Image Creator, it was discovered that with a simple prompt including just the artist’s name and a medium (painting, print, photography, or sculpture), generated images were almost impossible to differentiate from the original works,” researchers wrote in the memo.
“The risk of brand damage…is real and significant enough to require remediation.”
They added: “The risk of brand damage, both to the artist and their financial stakeholders, and the negative PR to Microsoft resulting from artists’ complaints and negative public backlash, is real and significant enough to require remediation before it damages Microsoft’s brand.”
In addition, last year OpenAI updated its terms of service to give users “full ownership rights to the images you create with DALL-E.” The move left Microsoft’s ethics and society team worried.
“If an AI image generator mathematically replicates images of works, it is ethically suspect to suggest that the person who submitted the prompt has full ownership rights of the resulting image,” they wrote in the memo.
Microsoft researchers created a list of mitigation strategies, including blocking Bing Image Creator users from using the names of living artists as prompts and creating a marketplace to sell an artist’s work that would surface if someone searched for their name.
Employees say that neither of these strategies was implemented and that Bing Image Creator was released to test countries anyway.
Microsoft says the tool was modified before launch to address concerns raised in the document, and prompted additional work from its responsible AI team.
But legal questions about the technology remain unresolved. In February 2023, Getty Images filed a lawsuit against Stability AI, creators of the Stable Diffusion AI art generator. Getty accused the AI startup of misusing more than 12 million images to train its system.
The allegations echoed concerns raised by Microsoft’s own AI ethicists. “Few artists are likely to have consented to allowing their works to be used as training data, and many are likely still unaware of how generative technology enables online image variations of their work to be produced in seconds,” employees wrote last year.