To give academics and others focused on AI their well-deserved (and long-awaited) time in the spotlight, TechCrunch is launching an interview series focused on notable women who have contributed to the AI revolution. We will publish several articles throughout the year as the rise of AI continues, highlighting key work that often goes unnoticed. Read more profiles here.
Anna Korhonen is a professor of natural language processing (NLP) at the University of Cambridge. She is also a senior research fellow at Churchill College, a member of the Association for Computational Linguistics, and a member of the European Laboratory for Learning and Intelligent Systems.
Korhonen previously worked as a fellow of the Alan Turing Institute, and she has a PhD in computer science and a master's degree in computer science and linguistics. She researches NLP and how to develop, adapt and apply computational techniques to meet the needs of AI. She has a particular interest in responsible, "human-centered" NLP that, in her own words, "is based on the understanding of human cognitive, social and creative intelligence."
Q&A
Briefly, how did you get started in AI? What attracted you to the field?
I was always fascinated by the beauty and complexity of human intelligence, particularly in relation to human language. However, my interest in STEM subjects and their practical applications led me to study engineering and computer science. I chose to specialize in AI because it is a field that allows me to combine all of these interests.
What work are you most proud of in the field of AI?
While the science of building intelligent machines is fascinating and one can easily get lost in the world of language modeling, the fundamental reason we are building AI is its practical potential. I am very proud of the work where my fundamental research on natural language processing has led to the development of tools that can support social and global good — for example, tools that can help us better understand how diseases such as cancer or dementia develop and can be treated, or applications that can support education.
Much of my current research is driven by the mission to develop AI that can improve human life. AI has enormous positive potential for social and global good. A big part of my job as an educator is to encourage the next generation of AI scientists and leaders to focus on realizing that potential.
How do you address the challenges of the male-dominated tech industry and, by extension, the male-dominated AI industry?
I am lucky to work in an area of AI where we have a sizeable female population and established support networks. I have found these immensely useful in facing professional and personal challenges.
For me, the biggest issue is how the male-dominated industry sets the agenda for AI. The current arms race to develop ever larger AI models at any cost is a great example. This has an enormous impact on the priorities of both academia and industry, and it has wide-ranging socioeconomic and environmental implications. Do we need larger models, and what are their overall costs and benefits? I feel we would have asked these questions much earlier in the game if we had a better gender balance in the field.
What advice would you give to women looking to enter the field of AI?
AI desperately needs more women at all levels, but especially at the leadership level. The current leadership culture is not necessarily attractive to women, but active participation can change that culture — and, ultimately, the culture of AI. Women are, infamously, not always good at supporting each other. I would really like to see a change of attitude in this regard: we must actively network and help each other if we want to achieve a better gender balance in this field.
What are some of the most pressing issues facing AI as it evolves?
AI has developed incredibly quickly: it has evolved from an academic field into a global phenomenon in less than a decade. During this time, most of the effort has gone into scaling through big data and computing. Little effort has been devoted to thinking about how this technology should be developed so that it can best serve humanity. People have good reason to worry about the safety and reliability of AI and its impact on jobs, democracy, the environment and other areas. We urgently need to put human needs and safety at the center of AI development.
What are some of the issues that AI users should consider?
Today's AI, even when it appears very fluent, ultimately lacks humans' knowledge of the world and the ability to understand the complex contexts and social norms within which we operate. Even the best of today's technology makes mistakes, and our ability to prevent or predict those mistakes is limited. AI can be a very useful tool for many tasks, but I wouldn't trust it to educate my children or make important decisions for me. We humans should remain in charge.
What's the best way to build AI responsibly?
AI developers tend to treat ethics as an afterthought, considered only once the technology has already been built. The better approach is to think about it before any development begins. Questions like "Do I have a diverse enough team to develop a fair system?", "Is my data truly free to use and representative of all user populations?" and "Are my techniques sound?" should really be asked from the outset.
While we can address some of this problem through education, we can only enforce it through regulation. The recent development of national and global AI regulations is important, and it should continue so that future technologies are safer and more reliable.
How can investors better drive responsible AI?
AI regulations are emerging, and companies will ultimately have to comply. We can think of responsible AI as sustainable AI — and that is truly worth investing in.