Redesigning the Algorithm: Building Feminist AI for a More Inclusive Future
Professor Nagla Rizk unpacks the principles of feminist AI and the importance of inclusion in technology and data application.
Can data be sexist? Does artificial intelligence have the ability to discriminate?
As AI has developed rapidly over the past decade, researchers have documented the real-world harm caused by bias in data and the ways it disproportionately affects women and marginalized groups. Through the Access to Knowledge for Development Center (A2K4D) at the Onsi Sawiris School of Business, and its flagship initiative, the MENA Observatory on Responsible AI, Nagla Rizk ’83, ’87, professor of economics and founding director of A2K4D, is leading the Feminist AI Research Network’s MENA hub. The network aims to develop AI systems and algorithms in ways that are inclusive, creating new opportunities and innovative solutions to correct inequalities.
So what is feminist AI? Rizk explained, “Feminist AI refers to the act of deconstructing oppressive systems, dismantling historic biases and ingrained inequalities, then building inclusive AI structures that are based on principles of justice, transparency, agency, pluralism and more.” In short, it is the development and maintenance of artificial intelligence systems that ensure fairness across genders. AI has the potential to amplify existing biases and generate new ones. Feminist AI works to deconstruct those biases and to address the underlying inequalities from within the data and the algorithm design itself.
Feminist AI is closely linked to the principle of “intersectionality,” which refers to the “interconnected nature of social categorizations such as race, class and gender as they apply to a given individual or group, regarded as creating overlapping and interdependent systems of discrimination or disadvantage.” Rizk added, “It is, in short, when oppression is linked.”
AI: Friend or Foe?
Humans have implicit biases, and when we create algorithms and AI models that rely on big data, those biases can unintentionally be amplified. Rizk seeks to find the places where there may be data blur, data bias and data invisibility, and to address these issues at the root.
“Technology has the potential to advance development, inclusion and the achievement of the Sustainable Development Goals. At the same time, there is also a peril,” Rizk stated. “As humans build AI models, with data and algorithms at their core, every link of this chain holds a trigger for potential inequality. This could negatively impact women and marginalized groups. So it's important to think of inclusion when designing AI models.”
Data can be biased against women on both the micro and macro scales. For example, an image search for the word ‘doctor’ on Google returns results that are only 36% women, whereas a search for ‘domestic helper’ returns results that are 96% women. On a larger scale, Amazon’s AI hiring tool was more likely to prefer male candidates because it was trained on male-dominated data from the tech industry, and the algorithm behind Apple’s credit card, drawing on a biased data set, granted some men credit limits 10 to 20 times higher than their wives’. “These structural flaws in the data compound systemic issues that women already face, such as gender-based hiring, pay gaps and lack of financial security,” said Rizk.
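To see how this happens mechanically, consider a minimal sketch in Python with synthetic data. This is an illustration of the general pattern only, not Amazon’s or Apple’s actual system; every number and variable here is invented. A model trained on historical decisions that favored men learns to score equally qualified women lower.

```python
# Minimal, hypothetical sketch of how skewed training data leads a model
# to reproduce bias. Synthetic data only; not any company's real system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Synthetic "historical" hiring data: 80% of past applicants are men,
# and past human decisions favored men at equal skill levels.
is_woman = rng.random(n) < 0.2
skill = rng.normal(0, 1, n)
hired = (skill + rng.normal(0, 0.5, n) - 0.8 * is_woman) > 0.5  # biased labels

X = np.column_stack([skill, is_woman.astype(float)])
model = LogisticRegression().fit(X, hired)

# The model learns a negative weight on the gender feature, so two equally
# skilled candidates receive different scores.
woman = model.predict_proba([[1.0, 1.0]])[0, 1]
man = model.predict_proba([[1.0, 0.0]])[0, 1]
print(f"hire probability at skill=1.0: woman={woman:.2f}, man={man:.2f}")
```

The point of the sketch is that nothing in the code is overtly sexist; the discrimination is inherited silently from the historical labels.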
Data also has ways of forgetting women. For example, the first iteration of Apple’s health app did not include women’s monthly health cycles. Additionally, when diagnosing cardiovascular disease, AI models have reproduced gender biases that exist in the real world and are less likely to take women’s symptoms seriously. “If women are invisible in the data, they will be invisible in the policy,” Rizk warned.
“If women are invisible in the data, they will be invisible in the policy.”
“If we don't adopt a feminist-sensitive approach to technology, we risk leaving behind a key part of the population. We also risk running into problems that will need to be fixed later, after they’ve already caused damage,” explained Rizk. “The important point is that feminist AI is proactive. It is transformational.”
The Feminist AI MENA hub works within the larger network, now labeled the “Catalyzing Inclusive AI Research Network,” with support from Canada’s International Development Research Centre (IDRC). Feminist AI research strives to take forward-looking steps that dismantle patriarchal structures, oppressive systems and historical inequalities inherent in technology and society, in both the digital and analogue worlds. The hub’s work supports the construction of inclusive systems grounded in feminist principles, systems that overcome biases, address intersectionality and ensure diversity in representation and justice in the building, deployment and impact of AI.
A MENA-Specific Approach to AI and Gender
From research to large-scale collaborations with NGOs and government partners, the Feminist AI MENA hub is working to catalyze inclusive AI for development. Rizk emphasized the importance of looking at AI and gender inequality in the MENA-specific context: “The MENA region has its own nuances which require a region-specific response.”
One example of the work supported by the hub is research developing Arabic feminist data sets as part of a larger project to apply data feminism principles to assess bias in English and Arabic natural language processing. Another is work to develop an AI tutoring system that helps teachers teach math in Arabic to girls of different ages in underprivileged community schools in Upper Egypt (Sa’eed), where girls unfortunately do not receive the same schooling opportunities as boys and require additional support. In both examples, AI is used as a tool that, if properly controlled for potential biases, promotes equal opportunity between the genders.
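As a rough illustration of what assessing gender bias in a text corpus can look like, the sketch below counts how often gendered words appear near an occupation term. This is a simplified stand-in, not the hub’s actual methodology; the term lists, the gender_association helper and the toy corpus are all placeholders, and a real audit would use large Arabic and English corpora and far richer word lists.

```python
# Illustrative only: a crude check of gendered associations around an
# occupation term. Term lists and corpus below are placeholders.
from collections import Counter

FEMALE_TERMS = {"she", "her", "woman", "هي", "امرأة"}
MALE_TERMS = {"he", "his", "man", "هو", "رجل"}

def gender_association(corpus_tokens, target_word, window=5):
    """Share of female- vs. male-coded terms appearing near a target word."""
    counts = Counter()
    for i, tok in enumerate(corpus_tokens):
        if tok != target_word:
            continue
        context = corpus_tokens[max(0, i - window): i + window + 1]
        counts["female"] += sum(t in FEMALE_TERMS for t in context)
        counts["male"] += sum(t in MALE_TERMS for t in context)
    total = counts["female"] + counts["male"]
    return {k: v / total for k, v in counts.items()} if total else counts

# Toy usage; a real audit would run this over a large corpus.
tokens = "she is a nurse and he is a doctor and he is a doctor too".split()
print(gender_association(tokens, "doctor"))
print(gender_association(tokens, "nurse"))
```

Skewed association shares of this kind in the underlying text are one of the signals that flow straight into models trained on that text.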
Encouraging STEM education for women is crucial to increasing gender balance in the design of technology. In the MENA region, the gender gap is much more pronounced in STEM work than in STEM education, a pattern termed “the gender paradox.” The absence of women in STEM work creates a “feedback loop” in which algorithms are not gender-sensitive and end up discriminating against women, a dynamic that is both a product of the culture and cycles back into it.
“If we don't adopt a feminist-sensitive approach to technology, we risk leaving behind a key part of the population.”
Examples of algorithmic bias in MENA appear in the hub’s research on gig work, which builds on earlier research on women in ride sharing in Egypt and on work with research partners in the region. In ride-sharing apps, bonuses are determined by algorithms based on the number of hours worked, which automatically disadvantages women, who put in fewer hours because of their home care responsibilities. To make up for that, women end up driving at odd surge hours, subjecting themselves to safety hazards, especially in remote areas with limited connectivity. Because they carry the labor of caregiving, women are likely to be penalized by ride-sharing app algorithms. This compounds the challenges of work that is already precarious, lacking job security, social protection and insurance. With the region experiencing the highest female unemployment rate and the lowest female labor force participation rate in the world, these women end up stuck between a rock and a hard place.
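A small numerical sketch makes the mechanism visible. The hours threshold and the distributions below are hypothetical, not any platform’s real bonus formula, but they show how a bonus keyed purely to weekly hours produces very different outcomes for drivers whose unpaid care work limits their hours, even when per-hour performance is identical.

```python
# Hypothetical illustration (not any platform's real bonus rule): a bonus
# that keys only on total weekly hours disadvantages drivers whose care
# work limits their hours.
import numpy as np

rng = np.random.default_rng(1)
HOURS_FOR_BONUS = 40  # assumed weekly-hours threshold

# Synthetic weekly hours: one group constrained by unpaid care work.
hours_unconstrained = rng.normal(45, 8, 1000).clip(0)
hours_constrained = rng.normal(30, 8, 1000).clip(0)

rate_u = (hours_unconstrained >= HOURS_FOR_BONUS).mean()
rate_c = (hours_constrained >= HOURS_FOR_BONUS).mean()

print(f"bonus rate, unconstrained drivers: {rate_u:.0%}")
print(f"bonus rate, care-constrained drivers: {rate_c:.0%}")
print(f"disparate impact ratio: {rate_c / rate_u:.2f}")  # far below the 0.8 'four-fifths' benchmark
```

A rule that never mentions gender can still produce a starkly unequal outcome once it meets an unequal distribution of care work, which is exactly the kind of structural bias the hub’s research sets out to surface.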
By performing evidence-based research directly in the region, the Feminist AI MENA hub can better support transformational technology development and bring those findings to the international feminist AI network. “Technology is a product of society, and should respond to the needs of society. What we hope for is that technology is informed by what is going on in reality.” Therefore, added Rizk, “the technology for the MENA region has to speak to the needs of the MENA region.”
Rizk and her colleagues plan to continue developing region-sensitive research and bringing their findings to policymakers, civil society and the international research network. Outside the hub, Rizk is taking these principles into the classroom through teaching the course Feminist AI: Technology, Gender and Development. “It gave students a different perspective on using technology,” Rizk said, describing the impact she saw in her students. “We had two male students conduct research on the need to use feminist AI principles in FinTech. To me, it was really fulfilling to have students be aware of how you could actually implement principles of responsible AI.”
“We want to raise awareness and deliver a message of fairness, justice and inclusion,” Rizk concluded. “To be a feminist, you must always be sympathetic to all marginalized communities, not just to women. Therefore, technology must be inclusive to all. We work towards that future.”
