Today we are sharing publicly Microsoft's Responsible AI Standard, a framework to guide how we build AI systems. It's an important step in our journey to develop better, more trustworthy AI. We're releasing our latest Responsible AI Standard to share what we have learned, invite feedback from others, and contribute to the discussion about building better norms and practices around AI.
Guiding product development towards more responsible outcomes
AI systems are the product of many different decisions made by those who develop and implement them. From system purpose to how people interact with AI systems, we must proactively guide these decisions toward more beneficial and equitable outcomes. That means keeping people and their goals at the center of system design decisions, and honoring enduring values like fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability.
The Responsible AI Standard sets out our best thinking on how we will build AI systems to uphold these values and earn society's trust. It provides specific, actionable guidance for our teams that goes beyond the high-level principles that have dominated the AI landscape to date.
The Standard details concrete goals or outcomes that teams developing AI systems must strive to achieve. These goals help break down a broad principle like 'accountability' into its key enablers, such as impact assessments, data governance, and human oversight. Each goal is in turn composed of a set of requirements: steps that teams must take to ensure AI systems meet the goals throughout the system lifecycle. Finally, the Standard maps available tools and practices to specific requirements so that Microsoft's teams implementing it have resources to help them succeed.
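To make that hierarchy concrete, here is a minimal sketch of how the Standard's structure might be modeled in code: one principle broken into goals, each goal carrying requirements, and each requirement mapped to supporting tools. The schema and names below are illustrative assumptions, not Microsoft's internal tooling.

```python
# Hypothetical sketch of the hierarchy the Standard describes:
# principle -> goals -> requirements -> mapped tools and practices.
from dataclasses import dataclass, field

@dataclass
class Requirement:
    description: str                                  # a step teams must take
    tools: list[str] = field(default_factory=list)    # resources mapped to it

@dataclass
class Goal:
    name: str                                         # a key enabler of the principle
    requirements: list[Requirement] = field(default_factory=list)

@dataclass
class Principle:
    name: str
    goals: list[Goal] = field(default_factory=list)

# 'Accountability' broken into the enablers named above (illustrative only).
accountability = Principle(
    name="Accountability",
    goals=[
        Goal(
            name="Impact assessment",
            requirements=[
                Requirement(
                    description="Complete an impact assessment in the early design stages",
                    tools=["Impact Assessment template", "Impact Assessment guide"],
                )
            ],
        ),
        Goal(name="Data governance"),
        Goal(name="Human oversight"),
    ],
)
```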
The need for this type of practical guidance is increasing. AI is becoming more and more a part of our lives, and yet our laws are falling behind. They have not caught up with the unique risks of AI or the needs of society. While we see signs that government action on AI is expanding, we also recognize our responsibility to act. We believe that we must work to ensure that AI systems are responsible by design.
Refining our policy and learning from our product experiences
Over the course of a year, a multidisciplinary group of researchers, engineers, and policy experts produced the second version of our Responsible AI Standard. It builds on our previous responsible AI efforts, including the first version of the Standard, released internally in the fall of 2019, as well as the latest research and important lessons learned from our own product experiences.
Equity in speech-to-text technology
The potential for AI systems to exacerbate societal biases and inequities is one of the most widely recognized harms associated with these systems. In March 2020, an academic study revealed that speech-to-text technology across the tech industry produced error rates for members of some Black and African American communities that were nearly double those for white users. We took a step back, considered the study's findings, and learned that our pre-release testing had not satisfactorily accounted for the rich diversity of speech across people with different backgrounds and from different regions. After the study was published, we engaged an expert sociolinguist to help us better understand this diversity, and we are expanding our data collection efforts to narrow the performance gap in our speech-to-text technology. In the process, we found that we needed to grapple with challenging questions about how best to collect data from communities in ways that engage them appropriately and respectfully. We also learned the value of bringing experts into the process early, including to better understand the factors that might account for variations in system performance.
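The study's core comparison is a disaggregated metric: word error rate (WER) computed per speaker group rather than as one aggregate number. Below is a minimal, self-contained sketch of that kind of measurement; the group labels and helper names are illustrative assumptions, not the study's or Microsoft's actual evaluation code.

```python
# Hypothetical sketch: compute word error rate separately for each group.
from collections import defaultdict

def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level edit distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Classic dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

def wer_by_group(samples):
    """samples: iterable of (group, reference transcript, system output)."""
    per_group = defaultdict(list)
    for group, ref, hyp in samples:
        per_group[group].append(word_error_rate(ref, hyp))
    return {g: sum(v) / len(v) for g, v in per_group.items()}

# Illustrative usage: a large gap between groups signals a fairness harm.
print(wer_by_group([
    ("group_a", "turn the lights on", "turn the light on"),
    ("group_b", "turn the lights on", "turn the lights on"),
]))
```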
The Responsible AI Standard records the pattern we followed to improve our speech-to-text technology. As we continue to roll out the Standard across the company, we expect the Fairness Goals and requirements it identifies to help us get ahead of potential fairness harms.
Appropriate use controls for Custom Neural Voice and facial recognition
Azure AI Custom Neural Voice is another innovative Microsoft speech technology that enables the creation of a synthetic voice that sounds nearly identical to the original source. AT&T has brought this technology to life with an award-winning in-store Bugs Bunny experience, and Progressive has brought Flo's voice to online customer interactions, among uses by many other customers. This technology has exciting potential in education, accessibility, and entertainment, and yet it's also easy to imagine how it could be used to inappropriately impersonate speakers and deceive listeners.
Our review of this technology through our Responsible AI program, including the sensitive uses review process required by the Responsible AI Standard, led us to adopt a layered control framework: we restricted customer access to the service, ensured acceptable use cases were proactively defined and communicated through a Transparency Note and Code of Conduct, and established technical guardrails to help ensure the active participation of the speaker when creating a synthetic voice. Through these and other controls, we helped protect against misuse while maintaining the technology's beneficial uses.
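As a rough illustration of what a layered gate like this could look like in code, here is a hypothetical sketch in which a synthesis request must pass every layer: approved customer, pre-defined use case, and verified speaker consent. The function and constant names are invented for illustration and are not the actual Azure service logic.

```python
# Hypothetical sketch of a layered control check: any single failing layer
# blocks the request, so access, use case, and consent must all pass.
APPROVED_USE_CASES = {"accessibility", "education", "entertainment"}

def may_synthesize_voice(customer_is_approved: bool,
                         use_case: str,
                         speaker_consent_verified: bool) -> bool:
    if not customer_is_approved:              # layer 1: restricted access
        return False
    if use_case not in APPROVED_USE_CASES:    # layer 2: pre-defined use cases
        return False
    if not speaker_consent_verified:          # layer 3: technical guardrail
        return False
    return True

# Illustrative usage.
assert may_synthesize_voice(True, "accessibility", True)
assert not may_synthesize_voice(True, "impersonation", True)
```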
Building on what we learned from Custom Neural Voice, we will apply similar controls to our facial recognition services. After a transition period for existing customers, we are limiting access to these services to managed customers and partners, narrowing use cases to pre-defined acceptable ones, and leveraging technical controls built into the services.
Fit for Purpose and Azure Face capabilities
Finally, we recognize that for AI systems to be trustworthy, they need to be appropriate solutions to the problems they are designed to solve. As part of our work to align our Azure Face service with the requirements of the Responsible AI Standard, we are also retiring capabilities that infer emotional states and identity attributes such as gender, age, smile, facial hair, hair, and makeup.
Taking emotional states as an example, we have decided that we will not provide open-ended API access to technology that can scan people's faces and purport to infer their emotional states based on their facial expressions or movements. Experts inside and outside the company have highlighted the lack of scientific consensus on the definition of "emotions," the challenges in how inferences generalize across use cases, regions, and demographics, and the heightened privacy concerns around this type of capability. We also decided that we need to carefully analyze all AI systems that purport to infer people's emotional states, whether the systems use facial analysis or any other AI technology. The Fit for Purpose Goal and requirements in the Responsible AI Standard now help us make system-specific validity assessments upfront, and our sensitive uses process helps us provide nuanced guidance for high-impact use cases, grounded in science.
These real-world challenges informed the development of Microsoft’s Responsible AI Standard and demonstrate its impact on the way we design, build, and deploy AI systems.
For those who would like to dig into our approach further, we have also made available some key resources that support the Responsible AI Standard: our Impact Assessment template and guide, and a collection of Transparency Notes. Impact assessments have proven valuable at Microsoft for ensuring that teams explore the impact of their AI system, including its stakeholders, intended benefits, and potential harms, in depth in the early design stages. Transparency Notes are a new form of documentation in which we disclose to our customers the capabilities and limitations of our core building block technologies, so they have the knowledge necessary to make responsible deployment choices.
A multidisciplinary and iterative journey
Our updated Responsible AI Standard reflects hundreds of inputs across Microsoft technologies, professions, and geographies. It's a significant step forward for our practice of responsible AI because it is much more actionable and concrete: it sets out practical approaches for identifying, measuring, and mitigating harms ahead of time, and requires teams to adopt controls to secure beneficial uses and guard against misuse. You can learn more about the development of the Standard in this
While our Standard is an important step on Microsoft’s responsible AI journey, it is only one step. As we progress through implementation, we expect to encounter challenges that require us to stop, reflect, and adapt. Our Standard will continue to be a living document, evolving to address new research, technology, law, and learning inside and outside the company.
There is a rich and active global dialogue on how to create standards based on principles and practices to ensure that organizations develop and deploy AI responsibly. We have benefited from this discussion and will continue to contribute to it. We believe that industry, academia, civil society, and government must collaborate to advance the state of the art and learn from each other. Together, we must answer open research questions, close measurement gaps, and design new practices, patterns, resources, and tools.
Better, more equitable futures will require new guardrails for AI. Microsoft's Responsible AI Standard is one contribution toward this goal, and we are engaging in the hard and necessary implementation work across the company. We're committed to being open, honest, and transparent in our efforts to make meaningful progress.