Globalized technology has the potential to create large-scale societal impact, and having a research approach grounded in existing international civil and human rights standards is a critical component of ensuring responsible and ethical AI development and deployment. The Impact Lab team, part of Google’s Responsible AI team, employs a variety of interdisciplinary methodologies to ensure a rich and critical analysis of the potential implications of technological development. The team’s mission is to examine the human rights and socioeconomic impacts of AI, publish critical research, and incubate novel mitigations that enable machine learning (ML) practitioners to advance global equity. We study and develop scalable, rigorous, and evidence-based solutions using data analytics, human rights, and participatory frameworks.
What makes the Impact Lab unique is its multidisciplinary approach and its diversity of expertise, spanning both applied and academic research. Our goal is to expand the epistemic lens of responsible AI to center the voices of historically marginalized communities and to overcome the practice of unsubstantiated impact analysis by offering a research-based approach to understanding how different perspectives and experiences should affect the development of technology.
What we do
In response to the increasing complexity of ML and the increased coupling between large-scale ML and people, our team critically examines traditional assumptions about how technology affects society in order to deepen our understanding of this interaction. We collaborate with academics in the social sciences and the philosophy of technology, and we publish foundational research on how ML can be helpful and useful. We also provide research support for some of our organization’s most challenging efforts, including the 1,000 Languages Initiative and ongoing work on the testing and evaluation of language and generative models. Our work gives weight to Google’s AI Principles.
To that end, we:
- Conduct fundamental and exploratory research with the goal of creating scalable sociotechnical solutions.
- Create research-based data sets and frameworks to evaluate ML systems.
- Define, identify, and assess the negative social impacts of AI.
- Create responsible solutions for data collection used to build large models.
- Develop novel methodologies and approaches that support the responsible implementation of ML models and systems to ensure safety, fairness, robustness, and user accountability.
- Translate feedback from the external community and experts into empirical insights to better understand user needs and impacts.
- Seek equitable collaboration and strive for mutually beneficial partnerships.
We strive not only to reimagine existing frameworks for assessing the adverse impacts of AI in order to answer ambitious research questions, but also to promote the importance of this work.
Current research efforts
Understanding social problems
Our motivation for providing rigorous analytical tools and approaches is to ensure that social and technical impact and equity are well understood relative to cultural and historical nuances. This is important, as it helps build the incentive and capacity to better understand the communities experiencing the greatest burden, and demonstrates the value of rigorous and focused analysis. Our goals are to proactively partner with external thought leaders in this problem space, reframe our existing mental models when assessing potential harms and impacts, and avoid relying on unfounded assumptions and stereotypes in ML technologies. We collaborate with researchers from Stanford, the University of California, Berkeley, the University of Edinburgh, the Mozilla Foundation, the University of Michigan, the Naval Postgraduate School, Data & Society, EPFL, Australian National University, and McGill University.
We examine systemic social issues and generate useful artifacts for responsible AI development.
Center underrepresented voices
We also developed the Equitable AI Research Roundtable (EARR), an innovative community-based research coalition created to build ongoing partnerships with external nonprofit and research organization leaders who are equity experts in the fields of education, law, social justice, AI ethics, and economic development. These partnerships offer the opportunity to engage with multidisciplinary experts on complex research questions related to how we center and understand equity using lessons from other domains. Our partners include PolicyLink; The Education Trust – West; Notley; Partnership on AI; the Othering & Belonging Institute at the University of California, Berkeley; The Michelson Institute for Intellectual Property, HBCU IP Futures Collaborative at Emory University; the Center for Information Technology Research in the Interest of Society (CITRIS) at the Banatao Institute; and the Charles A. Dana Center at the University of Texas, Austin. The goals of the EARR program are: (1) to center knowledge on the experiences of historically marginalized or underrepresented groups, (2) to qualitatively understand and identify potential approaches for studying social harms and their analogies within the context of technology, and (3) to broaden the lens of experience and expertise relevant to our work on responsible and safe approaches to AI development.
Through workshops and semi-structured discussions, EARR has provided critical perspectives and feedback on how to conceptualize equity and vulnerability in relation to AI technology. We have partnered with EARR contributors on a range of topics, from generative AI and algorithmic decision-making to transparency and explainability, with outputs ranging from adversarial queries to frameworks and case studies. The process of translating insights from cross-disciplinary research into technical solutions is not always easy, but this research has been a rewarding partnership. We present our initial evaluation of this engagement in this paper.
EARR: Components of the ML development lifecycle in which multidisciplinary knowledge is key to mitigating human biases.
Foundation in civil and human rights values
In association with our Civil and Human Rights Program, our research and analysis process is grounded in internationally recognized human rights frameworks and standards, including the Universal Declaration of Human Rights and the UN Guiding Principles on Business and Human Rights. Using civil and human rights frameworks as a starting point allows for a context-specific approach to research that takes into account how a technology will be deployed and its impact on the community. Most importantly, a rights-based approach to research allows us to prioritize conceptual and applied methods that emphasize the importance of understanding the most vulnerable users and the most salient harms in order to better inform day-to-day decision-making, product design, and long-term strategies.
Work in progress
Social context to aid in the development and evaluation of data sets
We seek to employ an equity-based approach to dataset curation, model development, and evaluation that avoids expeditious but potentially risky approaches, such as using incomplete data or failing to consider the cultural, historical, and social factors associated with a dataset. Responsible data collection and analysis requires an additional level of careful consideration of the context in which the data are created. For example, one may see disparities in outcomes across demographic variables that will be used to build models, and should question the structural and system-level factors at play, as some variables may ultimately be a reflection of historical, social, and political factors. When using proxy data, such as race or ethnicity, gender, or zip code, we systematically merge together the lived experiences of an entire group of diverse people and use that data to train models that can recreate and maintain harmful and inaccurate character profiles of entire populations. Critical data analysis also requires a careful understanding that correlations between variables do not imply causation; the associations we often observe are frequently driven by additional confounding variables.
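As a minimal sketch of that last point (an illustration only, not part of the team's methodology), the following Python/NumPy snippet simulates two variables that are both driven by an unobserved confounder: they appear strongly correlated even though neither causes the other, and the association largely disappears once the confounder is accounted for. The variable names and coefficients are illustrative assumptions.

```python
# Illustrative sketch: correlation without causation, driven by a confounder.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

z = rng.normal(size=n)                          # unobserved confounder (e.g., a structural/historical factor)
x = 0.8 * z + rng.normal(scale=0.5, size=n)     # proxy variable influenced by z
y = 0.8 * z + rng.normal(scale=0.5, size=n)     # outcome also influenced by z

print("corr(x, y):", np.corrcoef(x, y)[0, 1])   # strong correlation (~0.7) despite no causal link

# Regressing out z from both variables removes most of the association,
# showing the x-y relationship was driven by the shared factor, not by x itself.
x_resid = x - np.polyval(np.polyfit(z, x, 1), z)
y_resid = y - np.polyval(np.polyfit(z, y, 1), z)
print("corr(x, y | z):", np.corrcoef(x_resid, y_resid)[0, 1])  # close to 0
```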
Relationship between social context and model results
Building on this expanded and nuanced social understanding of data and of dataset construction, we also address the problem of anticipating or mitigating the impact of ML models once they have been deployed for real-world use. There are myriad ways in which the use of ML in various contexts, from education to healthcare, has exacerbated existing inequity because developers and the decision-making users of these systems lacked the relevant social understanding and historical context, and did not involve relevant stakeholders. This is a research challenge for the ML field in general, and one that is critical to our team.
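One simple, widely used way to make such gaps visible before deployment is to disaggregate evaluation metrics across demographic or contextual slices rather than reporting a single aggregate number. The sketch below is a hypothetical, minimal example (the column names "group", "label", and "prediction" and the toy data are assumptions, not the team's actual tooling) showing how an overall error rate can hide a slice where the model performs much worse.

```python
# Illustrative sketch: disaggregated evaluation of a model's error rate by group.
import pandas as pd

eval_df = pd.DataFrame({
    "group":      ["A", "A", "A", "B", "B", "B", "B", "C", "C"],
    "label":      [1,    0,   1,   1,   1,   0,   0,   1,   0],
    "prediction": [1,    0,   0,   0,   0,   0,   1,   1,   0],
})

# Per-group error rate; aggregate accuracy alone can hide large gaps between slices.
per_group = (
    eval_df.assign(error=lambda d: (d["label"] != d["prediction"]).astype(int))
           .groupby("group")["error"]
           .mean()
)
print(per_group)                                                  # error rate for each slice
print("overall:", (eval_df["label"] != eval_df["prediction"]).mean())
```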
Globally responsible AI centered on community experts
Our team also recognizes the importance of understanding the global sociotechnical context. In keeping with Google’s mission to “organize the world’s information and make it universally accessible and useful,” our team engages in research partnerships globally. For example, we are collaborating with the Natural Language Processing team and the Human-Centered team at the Makerere Artificial Intelligence Lab in Uganda to research cultural and linguistic nuances related to language model development.
Conclusion
We continue to address the impacts of ML models implemented in the real world by conducting further sociotechnical research and engaging outside experts who are also part of historically and globally disenfranchised communities. The Impact Lab is pleased to offer an approach that contributes to the development of solutions to applied problems by utilizing epistemologies from the social sciences, evaluation, and human rights.
Thanks
We would like to thank each member of the Impact Lab team (Jamila Smith-Loud, Andrew Smart, Jalon Hall, Darlene Neal, Amber Ebinama, and Qazi Mamunur Rashid) for all the hard work they do to ensure that ML is more accountable to its users and society across communities and around the world.