To give academics and others focused on AI their well-deserved (and long-awaited) time in the spotlight, TechCrunch is launching an interview series focused on notable women who have contributed to the AI revolution. We will publish several articles throughout the year as the rise of AI continues, highlighting key work that often goes unnoticed. Read more profiles here.
Miranda Bogen is the founding director of the AI Governance Lab at the Center for Democracy and Technology, where she works to help create solutions that can effectively regulate and govern AI systems. She helped guide responsible AI strategies at Meta and previously worked as a senior policy analyst at the organization Upturn, which seeks to use technology to promote equity and justice.
Briefly, how did you get started in AI? What attracted you to the field?
I was drawn to working in machine learning and artificial intelligence by seeing the way these technologies collided with fundamental conversations about society: values, rights, and which communities are left behind. My early work exploring the intersection of AI and civil rights reinforced for me that AI systems are much more than technical artifacts; they are systems that shape, and are shaped by, their interaction with people, bureaucracies, and policies. I've always excelled at translating between technical and non-technical contexts, and I was excited by the opportunity to help break through the appearance of technical complexity so that communities with different types of expertise could shape the way AI is built from the ground up.
What work are you most proud of (in the field of AI)?
When I started working in this space, many people still needed to be convinced that AI systems could have a discriminatory impact on marginalized populations, let alone that anything needed to be done to remedy those harms. While there is still too wide a gap between the status quo and a future in which bias and other harms are systematically addressed, I am pleased that the research my collaborators and I conducted on discrimination in personalized online advertising, along with my work within the industry on algorithmic fairness, helped lead to significant changes to Meta's ad delivery system and progress toward reducing disparities in access to important economic opportunities.
How do you address the challenges of the male-dominated tech industry and, by extension, the male-dominated AI industry?
I've been fortunate to work with phenomenal colleagues and teams who have been generous with opportunities and sincere support, and we tried to bring that energy to whatever space we found ourselves in. In my most recent career transition, I was delighted that almost all of my options involved working on teams or within organizations led by phenomenal women, and I hope the field continues to lift up the voices of those who have not traditionally been centered in conversations about technology.
What advice would you give to women looking to enter the field of AI?
The same advice I give to anyone who asks: Find managers, advisors, and teams who support you, who energize and inspire you, who value your opinion and perspective, and who take risks to defend you and your work.
What are some of the most pressing issues facing AI as it evolves?
The impacts and harms that AI systems are already causing to people are well documented at this point, and one of the biggest pressing challenges is to go beyond describing the problem and develop robust approaches to systematically addressing those harms and incentivizing their adoption. We launched the AI Governance Lab at CDT to drive progress in both directions.
What are some of the issues that AI users should consider?
For the most part, AI systems are still missing their seat belts, airbags, and traffic signals, so proceed with caution before relying on them for important tasks.
What's the best way to build AI responsibly?
The best way to build AI responsibly is with humility. Consider how success has been defined for the AI system you are working on, who that definition serves, and what context it may be missing. Think about for whom the system might fail and what will happen when it does. And build systems not only with the people who will use them but with the communities that will be subject to them.
How can investors better drive responsible AI?
Investors must create space for technology creators to act more deliberately before bringing half-baked technologies to market. The intense competitive pressure to release the newest, biggest, and shiniest AI models is driving a worrying lack of investment in responsible practices. While uninhibited innovation sings a tempting siren song, it is a mirage that will leave everyone worse off.
AI is not magic; it is just a mirror held up to society. If we want it to reflect something different, we have work to do.