Gender is the missing frontier at the UK’s AI Safety Summit

AI safety decision-makers cannot prioritize abstract, existential risks over existing, everyday harms to individuals and communities.

Expert comment | Published 1 November 2023

Amrit Swali

Former Research Associate, International Security Programme

Gender equality, and the recognition of gendered harms and risks, has long been an important commitment for the UK’s science and technologies agendas. This commitment must now extend to the UK’s approach to AI safety.

This week’s AI summit will focus on the safety of ‘frontier AI’, which the UK has defined as ‘highly capable general-purpose AI models that can perform a wide variety of tasks and match or exceed the capabilities present in today’s most advanced models’. The UK is approaching AI safety from a ‘risk’ lens, and the summit categorizes risks in two ways: ‘misuse’ and ‘loss of control’.

Thus far, discussions on these risks have centred on the existential and the catastrophic. In a recent speech, UK Prime Minister Rishi Sunak highlighted how AI could help build chemical or biological weapons or be weaponized by terrorist groups or criminals.

But such alarmist narratives distract from day-to-day risks and harms faced by women and marginalized genders, which can be amplified by technology.

Cyberspace and new technologies can reinforce and exacerbate existing power structures, discrimination and bias relating to gender and other identities.

This is a well-known problem in AI, where models are often trained on data sets that are likely to reflect existing systemic biases.

This can result in inaccurate and harmful outcomes – such as in recruitment, where the use of AI has already resulted in fewer women being recommended for roles.

The published programme for the AI summit refers only elliptically to ‘global inequalities’, but makes no specific mention of gender. In mapping and mitigating risks, it is essential that the summit – and AI safety more broadly – considers intersectional gendered risks and harms in frontier AI.

Failure to do so will mean missing the chance to agree on ways to mitigate significant AI risks.

Applying gender to the summit agenda

Gender equality is a well-accepted global governance objective: it is an international development commitment observed by many countries and is one of the UN’s Sustainable Development Goals.

Throughout the summit, UK policymakers should leverage this area of consensus to drive discussion of gendered dimensions to frontier AI misuse, applying a ‘lived experience’ perspective to specific categories listed on the summit’s agenda.

These include: risks to global safety from frontier AI misuse; risks from unpredictable advances in frontier AI capability; and risks from the integration of frontier AI into society.

Risks to global safety from frontier AI misuse

There are significant, established risks to global safety that have a gendered dimension.

For example, bioweapons can have severe impacts on reproductive health and fertility, affecting all genders differently. AI could be used to model pathogens for precision attacks targeting specific genders.

Frontier AI capabilities could also help malicious actors enhance the sophistication, scope and impact of cybercrime and exacerbate existing gendered harms in cyberspace – for example by generating and disseminating deepfakes, increasing online (sexual) harassment of women and children, and enabling abuses of privacy.

Risks from unpredictable advances in frontier AI capability

Risks from unpredictable advances in AI relate to unintended or unexpected outcomes.

This has already been demonstrated in the rapid scaling of models for public administration: systems operating across UK government departments, for example, have been accused of withdrawing benefits and denying marriage licences on the basis of biased decision-making.

These systems can exhibit bias along gendered, socioeconomic and racial lines given the context in which they are deployed.

Measures to mitigate such risks – by building in ethical considerations at the AI development and testing stage – may be difficult to implement, especially if training data sets are not being updated. This is why it is vital to ensure that human control, oversight and safeguards are emphasized throughout.

A related issue is access: open-source AI increases the possibility of such tools being misused by malicious actors.

At the same time, such models might be a powerful tool for breaking down gendered digital divides by democratizing access to and use of technology.

In either case, understanding gendered access to, use and perception of AI technology is vital to considering unexpected outcomes from frontier AI.

Risks from the integration of frontier AI into society

Any risk assessment of AI’s integration into society must consider how it would affect the most vulnerable and the most historically marginalized.

DSIT’s Emerging Processes for Frontier AI Safety paper notes that imbalanced or inaccurate data can lead to less accurate and unhelpful AI systems.

AI models used in medical settings, for example, can display damaging biases. One study showed that AI models for predicting liver disease were more likely to miss the disease in women than in men.

Similarly, genomic research is largely conducted on the genes of white people, meaning that the AI models developed from these data sets may not be effective in a multicultural society or on a global scale.

Models trained on unrepresentative data sets could also create a global AI divide, where some societies benefit from AI tools while others find them inaccurate or unhelpful.

AI used in the criminal justice system will also have been trained on data sets that reflect harmful gendered and racial incarceration trends.

Given these risks, the summit’s breakout discussions on improving AI safety would be enhanced by the adoption of an intersectional gender lens.

Developers and responsible scaling

While many AI developers have committed to responsible data collection practices, the concept of ‘responsibility’ is subjectively defined and selectively implemented.

Adopting a gender perspective can level up AI developer discussions on responsible scaling.

Committing to adopt feminist principles of data collection, for example, could help prevent harmful content from being included in training data in future.

However, commitments to gender equality go beyond data collection and input control to the treatment of the workers hired to filter harmful content out of training data. In the case of ChatGPT, OpenAI has been accused of exploitative and inequitable employment practices.

Role for national and international policymakers

An approach rooted in mapping and mitigating identity-specific harms will enable developers, policymakers and scientists to better understand and respond to AI risks and consider the needs of all those affected by its deployment.

The summit’s exclusivity is an obstacle to meaningful multistakeholder input, but breakout discussions could allow attendees to examine risk through an intersectional gender lens, informing more inclusive and responsive AI safety outcomes.

This would be consistent with the UK’s approach to science and technology, which has encouraged young girls to study STEM subjects, sponsored the participation of women fellows in UN cyber processes and made commitments to human rights, diversity, and gender equality in its National Cyber Strategy.

For UK policymakers, advocating for gender-sensitive and inclusive approaches in these discussions, in the AI Safety Institute and in future initiatives is a strategic opportunity to establish the UK as a leading advocate for responsible AI innovation.