To give AI-focused academics and others their well-deserved (and long-awaited) time in the spotlight, TechCrunch is launching a series of interviews focusing on remarkable women who have contributed to the AI revolution. We'll publish several pieces throughout the year as the AI boom continues, highlighting key work that often goes unrecognized. Read more profiles here.
Francine Bennett is a founding board member of the Ada Lovelace Institute and currently serves as the organization's interim director. Prior to this, she worked in biotechnology, using AI to find medical treatments for rare diseases. She also co-founded a data science consultancy and is a founding trustee of DataKind UK, which helps British charities with data science support.
Briefly, how did you get started in AI? What attracted you to the field?
I started in pure mathematics and wasn't that interested in anything applied; I enjoyed playing with computers but thought applied mathematics was just calculus and not very intellectually interesting. I came to AI and machine learning later, when it started to become obvious to me and everyone else that, because data was becoming much more abundant in many contexts, it opened up exciting possibilities for solving all kinds of problems in new ways using AI and machine learning, and those turned out to be much more interesting than I had thought.
What work are you most proud of (in the field of AI)?
I'm very proud of work that isn't the most technically elaborate but unlocks real improvements for people; for example, using ML to find previously unnoticed patterns in patient safety incident reports at a hospital to help medical professionals improve patient outcomes going forward. And I'm proud of representing the importance of putting people and society, rather than technology, at the center at events like this year's AI Safety Summit in the UK. I think it's only possible to do that with authority because I have experience both working with the technology and getting excited about it, and digging into how it actually affects people's lives in practice.
How do you navigate the challenges of the male-dominated tech industry and, by extension, the male-dominated AI industry?
Mainly by choosing to work in places and with people who are interested in the person and their abilities rather than their gender, and by trying to use whatever influence I have to make that the norm. I also work in diverse teams whenever I can – being part of a balanced team rather than an exceptional “minority” creates a really different atmosphere and makes it much more possible for everyone to reach their potential. More broadly, because AI is so multifaceted and likely to affect so many walks of life, especially those in marginalized communities, it's obvious that people from all walks of life need to be involved in building and shaping it if it's going to work well.
What advice would you give to women looking to enter the field of AI?
Enjoy it! This is a really interesting, intellectually challenging and constantly changing field: you'll always find something useful and stretching to do, and there are plenty of important applications that nobody has even thought of yet. Also, don't worry too much about needing to know every technical aspect (literally nobody knows every technical aspect); just start with something that intrigues you and work from there.
What are some of the most pressing issues facing AI as it evolves?
Right now, I think there's a lack of a shared vision of what we want AI to do for us, and what it can and can't do for us as a society. There's a lot of technical advancement happening at the moment, likely with very high environmental, financial and social impacts, and a lot of enthusiasm for rolling out those new technologies without a well-informed understanding of the potential risks or unintended consequences. Most of the people building the technology and talking about the risks and consequences come from a fairly narrow demographic. We have a window of opportunity now to decide what we want to see from AI and to work to make that happen. We can look back at other kinds of technology and how we handled their evolution, or what we wish we had done better: what are our equivalents for AI products of crash-testing new cars; holding liable a restaurant that accidentally gives you food poisoning; consulting affected people during planning permission; appealing an AI decision as you could a human bureaucracy?
What are some of the issues that AI users should consider?
I'd like people who use AI technologies to be confident about what the tools are and what they can do, and to be able to talk about what they want from AI. It's easy to see AI as something unknowable and uncontrollable, but really it's just a set of tools, and I want humans to feel able to take charge of what they do with those tools. But it shouldn't just be the responsibility of the people using the technology: government and industry should be creating the conditions under which people who use AI can be confident.
What's the best way to build AI responsibly?
We ask this question a lot at the Ada Lovelace Institute, which aims to make data and AI work for people and society. It's a hard one, and there are hundreds of angles you could take, but there are two really big ones from my perspective.
The first is to be willing sometimes not to build, or to stop. All the time, we see AI systems with huge momentum, where the builders try to add “guardrails” afterwards to mitigate problems and harms, but don't put themselves in a situation where stopping is a possibility.
The second is to really engage with and try to understand how all kinds of people will experience what you're building. If you can really immerse yourself in their experiences, then you have a much better chance of building responsible, positive AI: something that actually solves a problem for people, based on a shared vision of what good would look like, and that avoids the negative, not accidentally making someone's life worse because their day-to-day existence is just very different from yours.
For example, the Ada Lovelace Institute partnered with the NHS to develop an algorithmic impact assessment that developers would need to undertake as a condition of access to healthcare data. It requires developers to assess the potential societal impacts of their AI system before deployment and to incorporate the lived experiences of the people and communities who could be affected.
How can investors better push for responsible AI?
By asking questions about their investments and their possible futures: for this AI system, what does it mean to work brilliantly and to be responsible? Where could things go off the rails? What are the possible knock-on effects for people and society? How would we know if we need to stop building or change things significantly, and what would we do then? There's no one-size-fits-all recipe, but just by asking the questions and signaling that being responsible matters, investors can change where their companies put attention and effort.