To give academics and others focused on AI their well-deserved (and long-overdue) time in the spotlight, TechCrunch is launching an interview series focusing on remarkable women who have contributed to the AI revolution. We will publish several pieces throughout the year as the AI boom continues, highlighting key work that often goes unrecognized. Read more profiles here.
Heidy Khlaaf is director of engineering at cybersecurity company Trail of Bits. She specializes in evaluating software and artificial intelligence implementations within “safety-critical” systems, such as nuclear power plants and autonomous vehicles.
Khlaaf received her PhD in computer science from University College London and her bachelor's degree in computer science and philosophy from Florida State University. She has led safety and security audits, provided consultations and assurance case reviews, and contributed to the creation of standards and guidelines for safety-related applications and their development.
Q&A
Briefly, how did you get started in AI? What attracted you to the field?
I was drawn to robotics from a young age and started programming at 15, as I was fascinated by the prospect of using robotics and AI (they are inextricably linked) to automate workloads where they are needed most. Like in manufacturing, I saw robotics being used to help the elderly and automate dangerous manual labor in our society. However, I received my PhD in a different subfield of computer science, because I believe that having a strong theoretical foundation in computer science allows you to make informed and scientific decisions about where AI may or may not be suitable, and where the pitfalls may lie.
What work are you most proud of (in the field of AI)?
Leveraging my strong background and expertise in safety engineering and safety-critical systems to provide context and criticism, where necessary, on the new field of AI "safety." Although the AI safety field has attempted to adapt and cite well-established safety and security techniques, various terminology has been misconstrued in its use and meaning. There is a lack of consistent or intentional definitions that compromises the integrity of the safety techniques the AI community is currently using. I'm particularly proud of "Toward Comprehensive Risk Assessments and Assurance of AI-Based Systems" and "A Hazard Analysis Framework for Code Synthesis Large Language Models," where I deconstruct false narratives around AI safety and evaluations, and provide concrete steps toward closing the safety gap within AI.
How do you navigate the challenges of the male-dominated tech industry and, by extension, the male-dominated AI industry?
Recognizing how little the status quo has changed is not something we discuss often, but I believe it's actually important for me and other technical women to understand our position within the industry and hold a realistic view of the changes required. Retention rates and the proportion of women in leadership roles have remained largely the same since I joined the field, and that was over a decade ago. And as TechCrunch has rightly pointed out, despite the tremendous breakthroughs and contributions of women within AI, we remain sidelined from conversations that we ourselves have defined. Recognizing this lack of progress helped me understand that building a strong personal community is much more valuable as a source of support than relying on DEI initiatives that unfortunately have not moved the needle, given that bias and skepticism towards technical women are still quite pervasive in tech.
What advice would you give to women looking to enter the field of AI?
Don't appeal to authority, and find a line of work that you truly believe in, even if it contradicts popular narratives. Given the power AI labs hold politically and economically right now, there is an instinct to take whatever AI "thought leaders" say as fact, when it is often the case that many AI claims are marketing speak that overstates AI's capabilities to benefit a bottom line. Yet I see significant hesitancy, especially among young women in the field, to voice skepticism against claims made by their male peers that cannot be substantiated. Impostor syndrome has a strong hold on women within tech, and it leads many to doubt their own scientific integrity. But it is more important than ever to challenge claims that exaggerate the capabilities of AI, especially those that are not falsifiable under the scientific method.
What are some of the most pressing issues facing AI as it evolves?
Regardless of the advancements we see in AI, it will never be the sole solution, technologically or socially, to our problems. There is currently a trend to shoehorn AI into every possible system, regardless of its effectiveness (or lack thereof) across numerous domains. AI should augment human capabilities rather than replace them, and we are witnessing a complete disregard for AI's pitfalls and failure modes that are leading to real, tangible harm. Just recently, an AI system led to an officer firing at a child.
What are some of the issues that AI users should consider?
How unreliable AI is. AI algorithms are notoriously flawed, with high error rates observed across applications that require precision, accuracy and safety. The way AI systems are trained embeds human bias and discrimination within their outputs, which become "de facto" and automated. And this is because the nature of AI systems is to provide outcomes based on statistical and probabilistic inferences and correlations from historical data, and not any type of reasoning, factual evidence or "causation."
What's the best way to build AI responsibly?
Ensuring that AI is developed in a way that protects people's rights and safety by constructing verifiable claims and holding AI developers accountable to them. These claims should also be scoped to a regulatory, safety, ethical or technical application, and must be falsifiable. Otherwise, there is a significant lack of scientific integrity to appropriately evaluate these systems. Independent regulators should also be assessing AI systems against these claims, as is currently required for many products and systems in other industries, for example, those evaluated by the FDA. AI systems should not be exempt from standard auditing processes that are well established to ensure the protection of the public and consumers.
How can investors better drive responsible AI?
Investors should collaborate with and fund organizations that are seeking to establish and advance auditing practices for AI. Currently, most funds are invested in the AI labs themselves, with the belief that their safety teams are sufficient for the advancement of AI evaluations. However, independent auditors and regulators are key to public trust. Independence allows the public to trust in the accuracy and integrity of assessments, and the integrity of regulatory outcomes.