To give exceptional women in artificial intelligence and other fields their deserved time in the spotlight, TechCrunch is launching a series of interviews focusing on extraordinary women contributing to the AI revolution. We will publish several pieces throughout the year as the AI boom continues, highlighting key work that often goes unrecognized. Read more profiles here.
Anna Korhonen is a Professor of Natural Language Processing (NLP) at the University of Cambridge. She is also a Senior Research Fellow at Churchill College, a Fellow of the Association for Computational Linguistics, and a Fellow of the European Laboratory for Learning and Intelligent Systems.
Korhonen previously served as a fellow at the Alan Turing Institute and holds a Ph.D. in Computer Science and a Master's degree in Computer Science and Linguistics. She researches NLP and how to develop, adapt, and apply computational techniques to meet the needs of AI. She has a special interest in responsible, "human-centered" NLP, which – in her words – "relies on the understanding of human cognitive, social, and creative intelligence."
Questions and Answers
In brief, how did you start in AI? What attracted you to the field?
I have always been captivated by the beauty and complexity of human intelligence, especially regarding human language. However, my interest in STEM subjects and practical applications led me to study engineering and computer science. I chose to specialize in AI because it is a field that allows me to combine all these areas of interest.
What work are you most proud of in the field of artificial intelligence?
While the science of building smart machines is fascinating, and it is easy to get lost in a world of language models, the ultimate reason we build AI is its practical potential. I am most proud of work where my basic research in natural language processing has led to the development of tools that can support social and global good. For example, tools that can help us better understand how diseases like cancer or dementia develop and can be treated, or applications that can support education.
A large part of my current research is driven by the mission to develop AI that can improve human lives. AI has enormous positive potential for social and global good. A significant part of my role as an educator is to encourage the next generation of AI scientists and leaders to focus on realizing this potential.
How do you navigate the challenges of the male-dominated technology industry, and by extension, the male-dominated AI industry?
I am fortunate to work in an area of AI where we have a large population of women and supportive networks. I have found these to be extremely helpful in navigating my career and personal challenges.
For me, the biggest challenge is how the male-dominated industry sets the agenda for AI. The current race to develop ever-larger AI models at all costs is a good example. This race has a tremendous influence on the priorities of academia and industry alike, as well as far-reaching socio-economic and environmental implications. Do we really need larger models, and what are their global costs and benefits? I feel we would have asked these questions much earlier in the game if we had better gender balance in the field.
What advice would you give to women looking to enter the field of artificial intelligence?
Artificial intelligence desperately needs more women at all levels, but especially in leadership positions. The current leadership culture may not be inherently appealing to women, but active involvement can change this culture – and ultimately the AI culture. Women are not always great at supporting each other. I would really like to see a change in approach from this perspective: we need to actively connect and help each other if we want to achieve a better gender balance in this field.
What are some of the most pressing issues facing AI as it develops?
Artificial intelligence has developed remarkably fast: it has evolved from an academic discipline into a global phenomenon in less than a decade. During this period, most effort has gone into scaling through massive data and intensive computation. Little effort has been devoted to thinking about how this technology should be developed to best serve humanity. People have good reason to be concerned about the safety and reliability of AI and its impact on jobs, democracy, the environment, and other areas. We need to urgently prioritize human needs and safety in AI development.
What are some of the issues AI users need to be aware of?
Current artificial intelligence, even when it appears highly sophisticated, ultimately lacks the world knowledge of human beings and the ability to understand the complex social contexts and norms we operate within. Even the best of today's technology makes mistakes, and our ability to prevent or predict these errors is limited. AI can be a very useful tool for many tasks, but I would not trust it to educate my children or make important decisions for me. We, as humans, need to remain in charge and responsible.
What is the best way to build artificial intelligence responsibly?
AI developers tend to treat ethics as an afterthought – something considered only after the technology has been built. The best time to think about it is before any development begins. Questions like "Do I have a diverse enough team to develop a fair system?", "Is my data actually free to use, and is it representative of all user populations?", and "Are my techniques robust?" should be asked at the outset.
Although we can address some of this issue through education, we can only enforce it through regulation. The recent development of national and global AI regulations is important and needs to continue to ensure that future technologies are safer and more reliable.
How can investors push for more responsible AI?
AI regulations are emerging, and companies will ultimately need to comply with them. We can think of responsible AI as sustainable AI – the kind worth investing in.