The impact of AI on (in)equality: interview with Ajuna Soerjadi

10.07.24 | Jorein Hendriksen

At the age of 23, Ajuna Soerjadi has already built an impressive career as a philosopher, with a special focus on data ethics. During her philosophy studies, she specialized in this field, leading to the establishment of the Expertise Center for Data Ethics in her second year. Here, she assists government agencies in getting a grip on the AI revolution, a task that is becoming increasingly important in our rapidly digitizing world.

That is not the only area Soerjadi is committed to. Her dedication to diversity and inclusion in the STEM (Science, Technology, Engineering and Mathematics) sector has made her a runner-up for the ECHO award, a Dutch prize that recognizes successful students with a non-Western migration background who distinguish themselves through their entrepreneurial spirit, active social involvement and constructive approach to challenges surrounding exclusion. It is therefore no surprise that Soerjadi's work has received international recognition. In 2024, she was voted one of the "100 Brilliant Women in AI Ethics" worldwide, a prestigious title that confirms her position as a leading voice in her field.

What inspired you to focus on the impact of artificial intelligence (AI) on equality and inclusion?

"My involvement with the impact of AI on equality and inclusion began with the Dutch Childcare Benefits scandal. The use of algorithms resulted in discrimination against people with dual nationality, with devastating consequences for numerous families. You would expect that lessons would be drawn from this, but recently it has come to light that DUO (Dienst Uitvoering Onderwijs) is also more likely to suspect students with a migration background of fraud. My background in philosophy motivates me to ask critical questions about this. How were the data collected, and is it representative? In short, I am concerned with how the inequality and injustice that exist in the world reflect in data."

What impact do you think AI has on equality and inclusiveness in society?

"AI has contributed to inclusion and equality in many ways. For example, a multilingual chatbot that assists non-Dutch speakers in formulating aid requests at Voedselbanken (food banks). AI can also help detect racism on social media by searching for specific keywords. Finally, you can think of self-driving cars, which increase the mobility of people with disabilities.

At the same time, there is a downside to these developments. Chatbots are trained on language that inevitably contains stereotypes, and people of color face a higher risk of being hit by self-driving cars or misidentified by facial recognition systems trained on predominantly white datasets. Content moderation to combat racism, for example, is done by people in countries like Kenya and Venezuela, where working conditions are often poor and workers are continuously exposed to traumatic content. In summary, AI can contribute to equality and inclusion, while simultaneously promoting inequality and exclusion."

What do you see as the biggest challenges in ensuring equality and inclusiveness in the development and application of AI?

"The biggest challenge is the lack of awareness. Without an understanding of the importance of inclusivity in AI, people will not invest in it. This is evident in, for example, facial recognition systems. Datasets mostly consist of white faces because there are simply more photos of them available and they cost less. As a result, systems have difficulty recognizing dark faces. In the US, this has led to multiple innocent black men being incarcerated because the facial recognition system mistook them for other black men."

To what extent do you think there is a 'trade-off' between innovation and ensuring equality and inclusivity in the development and application of AI?

"People often think that innovation is incompatible with equality and inclusivity, and to some extent that is true. If efficiency is paramount, it can come at the expense of inclusivity. But why are we developing AI in the first place? Isn't it to make our lives better and to ensure a meaningful existence for everyone? Inclusion is a prerequisite for innovation. If not everyone can benefit, you are not engaging in meaningful innovation."

Who do you think should be responsible for ensuring equality and inclusiveness in the use of AI?

"This reminds me of the 'social connection model' of philosopher Iris Marion Young. She suggests that, in a sense, everyone is responsible for, in our case, equality and inclusion in AI, but not to the same extent. Our responsibility depends on our power, privilege and resources. The greatest responsibility should therefore lie with CEOs of Big Tech companies, the government and the legal sector. In addition, responsibility can also be linked to your role in society. If you are a teacher, you can teach your students how to critically engage with AI. If you are a policy maker, you can establish guidelines for the responsible use of AI."

What changes do you think need to take place to motivate Big Tech to engage with the ethics of AI?

"It would be nice if the users of AI became more critical of what kind of AI they use and looked for ethical alternatives. Big Tech would notice such a change and adapt to it. Motivation comes when they start to feel it in their wallets. You can also see this in the field of sustainability: more and more people are aware of the conditions in which 'fast fashion' is produced and are looking for ethical alternatives. "Big brands, in turn, are responding to this.”

How do you see the future of AI in relation to equality and inclusiveness?

"The future of AI in relation to equality and inclusion requires bold political choices that differ from those currently being made. Currently, for example, a significant amount of money and effort is being invested into automated fraud detection, while the most vulnerable groups bear the brunt of this. It would be much better if AI were used to detect not just welfare fraud, but 'white-collar crime' such as money laundering and political corruption, or to identify people who are entitled to benefits but are afraid to ask for help. Also, I hope that in the future people will see that discriminatory AI is not a technical problem, but an ethical one. When you datafy something, it always impoverishes reality. The algorithm is optimized from a certain perspective; someone has decided what information is more important than the other and what can be omitted."

What steps do you think should be taken to better deal with AI?

"It would help to set up ethical committees to discuss issues surrounding the ethics of AI. It would be beneficial for higher education to focus on critical thinking about AI. Students are going to have to deal with AI, so they need to know how to use it responsibly. Finally, assessments can also help determine whether algorithms are responsible, such as the Impact Assessment Human Rights and Algorithms (IAMA). The most important thing, however, is and remains taking a step back and engaging in ethical dialogue about how AI can contribute to a society where everyone can thrive."
