“Why gender perspectives must be included in the study of artificial intelligence”
Artificial intelligence (AI) currently plays a central role in the digitisation and modernisation strategies of public administrations and companies throughout Europe, the United States and China. The potential improvements and advances in efficiency that the incorporation of AI can offer strategic sectors in different countries have made it indispensable in a new era of technological transformation. And while no one wants to be left behind, the main players of this new digital era have from the very beginning approached these technologies in significantly different ways.
While the United States and China have already embraced AI as one more component of their geopolitical strategies, the European Union (EU) is positioning itself as a global leader in its ethical use. According to the EU, in order to be considered ethical, any AI technology used in its territory must ensure respect for the fundamental rights of EU citizens. In this way, the EU hopes to avoid the potential harm that the misuse of AI can cause to its citizens and to find solutions to the major ethical concerns (bias, discrimination, algorithmic opacity, lack of transparency, privacy issues, technological determinism, etc.) that these emerging technologies bring with them.
But despite the best efforts of the EU and others to mitigate the harmful effects of AI, some of the technology’s inherent flaws have yet to be adequately addressed. One such flaw is gender inequality.
Prejudices and habits built in from the design phase
Given that technology is a human construct that is socially and culturally conditioned, any prejudices, habits and ideas that are not regularly and rigorously examined are destined to find their way into the design and use of new technologies. AI is no exception: if not approached from a gender perspective capable of taking these circumstances into account, analysing the different aspects involved and correcting them where necessary, all the prejudice, bias and discrimination present in our society are likely to be reproduced and even augmented.
Gender biases have been present in AI from its inception. This is partly due to the fact that, for decades, it has been almost exclusively the domain of men. This is illustrated by the choice of the term ‘intelligence’ to designate this new group of technologies.
Though the term is presented as universal, the ‘intelligence’ in AI has in truth always referred to the reproduction of human abilities associated with logical-mathematical thinking and, therefore, with traditional male rationality.
At the same time, other qualities such as feelings and care, historically attributed to women, have been excluded from the field. But while AI has only been able to reproduce the kind of skills traditionally associated with male thinking, this has been enough to earn it the label ‘intelligent’. This is by no means to suggest that women are not capable of this type of intelligence, far from it. The point here is to bring to light how certain qualities traditionally associated with masculinity are immediately equated with universal intelligence – without even seriously asking whether a machine that is only capable of computing data can truly be thought of as intelligent.
At the same time, the very impossibility of reproducing characteristics such as sensitivity, feeling, intuition, etc., until so recently relegated to the background due to their association with femininity, has led to a new appreciation for this type of intelligence. These traits are increasingly viewed as unique, characteristic and defining of human beings, even as machines have proven capable of reproducing logical thought. Whether there is truly such a thing as logical thought divorced from feeling and intuition is the subject of another debate.
Biases in the data that feeds AI
In addition to the gender bias present in AI since its inception, the data that feed into it and the algorithms that determine its operation present their own set of problems that negatively impact women in particular. One reason is that the data typically used for AI are obtained from the internet or from databases where men tend to be over-represented.
While 55 per cent of men globally have access to the internet compared with 48 per cent of women, the gap is much greater in parts of the world where equality remains a distant reality. In Africa, only 20 per cent of women have access to the internet, compared to 37 per cent of men. This phenomenon is known as the gender digital divide.
This divide renders the actual lives of women less visible, while their online depictions tend to be more stereotyped and filtered through a heavily masculinised lens. Various studies addressing this problem have revealed that women are frequently represented on the internet as highly infantilised, sexualised and precarious. Particularly well-known examples are Microsoft’s AI chatbot Tay, which quickly developed xenophobic, sexist and homophobic behaviour in its interactions with Twitter users, and Amazon, which in 2015 discovered that its AI system used for personnel selection discriminated against women.
Such complex problems require a multifaceted and holistic response that addresses their root causes. In the short term, the databases that feed AI must be audited to ensure that women are equally represented and that data are free of gender, or any other, biases.
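In practice, a first pass at such an audit can be as simple as measuring how each group is represented in a dataset and flagging departures from parity. The sketch below illustrates the idea; the field name, the parity baseline and the tolerance threshold are all illustrative assumptions, not a prescribed method, and a real audit would go far beyond raw head counts.

```python
from collections import Counter

def audit_gender_representation(records, field="gender", tolerance=0.1):
    """Flag groups whose share of the dataset deviates from parity
    by more than `tolerance`. All names here are illustrative."""
    counts = Counter(r[field] for r in records)
    total = sum(counts.values())
    shares = {g: n / total for g, n in counts.items()}
    expected = 1 / len(counts)  # parity baseline across observed groups
    flagged = {g: s for g, s in shares.items() if abs(s - expected) > tolerance}
    return shares, flagged

# Toy sample: 7 of 10 records are men, mirroring over-representation online
data = [{"gender": "M"}] * 7 + [{"gender": "F"}] * 3
shares, flagged = audit_gender_representation(data)
print(shares)   # {'M': 0.7, 'F': 0.3}
print(flagged)  # both groups deviate from the 0.5 parity baseline
```

Simple share counts like these cannot detect subtler problems such as stereotyped labels or skewed contexts, which is why the longer-term measures discussed below remain essential.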
Over the longer term, ensuring that women have equal access to the internet, to digital services and to electronic administration is crucial to reducing the gender digital divide.
This requires promoting women’s education across the board, but specifically in the area of digital skills, increasing the presence of women in key political decision-making and relevant areas in the technology sector, and actively combating sexist stereotypes and the objectification of women. Special attention must be paid to the particular vulnerability and discrimination faced by racialised, non-Western, non-urbanite and precarious women.
Likewise, if algorithms are not designed with a gender perspective in mind, they are likely to reproduce trends that negatively impact women, just as public policies do when designed without one. Mechanisms that put women on an equal footing (such as positive discrimination quotas) should be required for algorithms, just as they are in other political spaces.
Putting an end to the feminisation of assistive technologies
Finally, the inequality between men and women in the field of AI is also reflected in the design of its products and interfaces. Various studies have highlighted the overwhelming presence of feminine traits in chatbots and assistive AI technologies. The use of voices and names traditionally associated with women, such as Alexa and Siri among others, reproduces the already existing association between women and servitude. However, the transposition of roles between the analogue and digital spheres goes beyond characteristics such as name or voice. Robots with humanoid features (which are becoming more and more common) are often distinctly female in their appearance.
The feminine appearance of these robots more often than not reflects the established canons of beauty where non-normative bodies and racialised women, among others, have no place.
This can give rise to even more complex and harmful problems, such as robots used for sexual purposes that perpetuate violence against women. All these issues highlight the serious discrimination suffered by women, which new technologies reproduce and amplify. If our goal is to create egalitarian societies, all countries, companies and public administrations must approach the study and use of AI from a gender perspective before deploying such technologies.