To give AI-focused academics and others their well-deserved (and overdue) time in the spotlight, TechCrunch is launching an interview series focused on remarkable women who have contributed to the AI revolution. We'll publish several pieces throughout the year as the AI boom continues, highlighting key work that often goes unrecognized. Read more profiles here.
Irene Solaiman began her career in AI as a researcher and public policy manager at OpenAI, where she led a new approach to the release of GPT-2, a predecessor to ChatGPT. After nearly a year as an AI policy manager at Zillow, she joined Hugging Face as Head of Global Policy. Her responsibilities there range from building and leading the company's global AI policy to conducting sociotechnical research.
Solaiman also advises the Institute of Electrical and Electronics Engineers (IEEE), the professional association for electronics engineering, on AI issues, and is a recognized AI expert at the intergovernmental Organisation for Economic Co-operation and Development (OECD).
Irene Solaiman, Head of Global Policy at Hugging Face
Briefly, how did you get started in AI? What attracted you to the field?
A completely non-linear career path is common in AI. My budding interest began the way it does for many teenagers with awkward social skills: through science fiction media. I originally studied human rights policy and then took computer science courses, since I saw AI as a means to work on human rights and build a better future. Being able to do technical research and lead policy in a field with so many unanswered questions and untrodden paths keeps my work exciting.
What work are you most proud of (in the field of AI)?
I am most proud when my expertise resonates with people across the AI field, especially my writing on release considerations in the complex landscape of AI system releases and openness. Seeing my paper on an AI release gradient framework spark debates among scientists and be used in government reports is affirming, and a good sign that I'm working in the right direction! Personally, some of the work that motivates me most is on cultural value alignment, which is dedicated to ensuring that systems work best for the cultures in which they're deployed. With my amazing co-author and now dear friend Christy Dennison, working on a Process for Adapting Language Models to Society was a wholehearted project (and many debugging hours) that has shaped today's safety and alignment work.
How do you navigate the challenges of the male-dominated tech industry and, by extension, the male-dominated AI industry?
I have found, and continue to find, my people: from working with incredible company leaders who care deeply about the same issues I prioritize, to great research co-authors with whom I can start every working session with a mini therapy session. Affinity groups are hugely helpful for building community and sharing advice. Intersectionality is important to highlight here; my communities of Muslim and BIPOC researchers are continually inspiring.
What advice would you give to women looking to enter the AI field?
Have a support group whose success is your success. In Gen Z terms, this is being a "girl's girl." The same women and allies I entered this field with are my favorite coffee dates and late-night panicked calls before a deadline. One of the best pieces of career advice I've ever read was from Arvind Narayanan on the platform formerly known as Twitter: the "Liam Neeson principle" of not being the smartest person in the room, but having a particular set of skills.
What are some of the most pressing issues facing AI as it evolves?
The most pressing issues themselves evolve, so the meta-answer is: international coordination for safer systems for all peoples. People who use and are affected by systems, even within the same country, have different preferences and ideas about what is safest for them. And the issues that arise will depend not only on how AI evolves, but also on the environment in which it's deployed; safety priorities and our definitions of capability differ regionally, such as a higher threat of cyberattacks on critical infrastructure in more digitized economies.
What are some of the issues AI users should be aware of?
Technical solutions rarely, if ever, address risks and harms holistically. While there are steps users can take to increase their AI literacy, it's important to invest in a multitude of safeguards for risks as they evolve. For example, I'm excited about more research into watermarking as a technical tool, and we also need coordinated guidance from policymakers on the distribution of generated content, especially on social media platforms.
What's the best way to build AI responsibly?
With the people affected, and by constantly reevaluating our methods for assessing and deploying safety techniques. Both beneficial applications and potential harms are constantly evolving and require iterative feedback. The means by which we improve AI safety should be examined collectively as a field. The most popular model evaluations in 2024 are much more robust than those I was running in 2019. Today, I'm much more bullish on technical evaluations than on red-teaming. I find human evaluations extremely useful, but as more evidence emerges of the mental burden and disparate costs of human feedback, I'm increasingly bullish on standardizing evaluations.
How can investors better push for responsible AI?
They already are! I'm glad to see many investors and venture capital firms actively engaging in safety and policy conversations, including through open letters and congressional testimony. I'm eager to hear more investor expertise on what's energizing startups across sectors, especially as we see growing use of AI in fields beyond the core tech industries.