Why AI must be decolonized to fulfill its true potential

Data gaps, western bias and extractive business models limit AI’s effectiveness and perpetuate historic harms, writes Mahlet Zimeta.

The World Today · Published 29 September 2023 · Updated 3 October 2023

Dr Mahlet Zimeta

Data and technology policy expert, Freelance

Artificial intelligence, incubated in the American private sector, is becoming a new kind of colonial hazard, reshaping people’s understanding of empire. Traditional decolonization, with its origins in the Atlantic revolutions of the 18th century, seeks to dismantle European imperialism, with the contemporary movement catalyzing a new generation of intellectuals around Black Lives Matter.

Decolonization involves reckoning with the factors that made European colonization possible and shaped how it happened, such as predatory business models and the violent imposition of social hierarchies.

A new vector of colonial harm

It also involves identifying the longer-term effects of these – such as modern-day economic and environmental injustices, or the loss of intellectual and cultural equity – ameliorating those harmful impacts and preventing them from happening again. Until broader decolonization is achieved, these same factors will affect how AI is developed and deployed, making AI a vector of a new kind of colonial harm.

But just as AI might reshape our conceptions of empire, it may also have the potential to forge new alliances for a new conception of freedom.

A high-profile area in which AI could have significant benefits for humanity is health. AI has the potential to increase speed, precision and innovation in health systems, from diagnostics through to the delivery of care. But health systems are fundamentally compromised by how European colonization erased other communities, and this has significant implications for contemporary health AI.

AI models are developed by training on data, so the quantity and quality of relevant data affects the accuracy and usefulness of any AI model. A good example of this is the use of AI in genomic research.

All living things have their own set of genes. The richness of genomic data enables scientists to identify patterns across generations, or genetic variations that might cause different effects. This makes genomic research a powerful tool in developing medicines, helping scientists understand which genes can cause or prevent disease in humans. The complexity of genomic data is well suited to how AI works: detecting deep patterns in large datasets.

However, a report in Nature Medicine in 2022 estimated that 86 per cent of genomic research in the world is carried out on genes of people with white European ancestry, roughly 12 per cent of the global population. This means that AI models developed from genomic data may not be effective for the rest of the world’s population, roughly seven billion people.

Data gaps

The missing genomic data could hold vital clues for developing new drugs and treatments that would benefit everyone, because some genetic variations might only exist in certain communities. So although the use of AI in genomic research could increase the speed of drug discovery, those gains are dwarfed by data gaps that limit what research can be done. Developing AI models on incomplete or unrepresentative data will widen the research gap.


Decolonizing AI is essential if it is to achieve its potential for public good in other areas as well. The colonial erasure of communities has led to the same sorts of under-representation in contemporary national statistics, raising similar challenges for the development of AI in the public sector.

Although the work of government could be made more efficient by the implementation of AI models, this is again undermined by structural flaws in population data. This affects government functions as diverse as education and justice, as well as transport and energy infrastructure, housing and employment policy, with associated costs and missed opportunities for us all.

While AI is often touted for its potential benefits to society, research led by Abeba Birhane found in 2022 that only about 15 per cent of the field’s most influential AI research papers connected their work to any societal need, and only 1 per cent considered potential negative aspects.

Most developments in AI are driven by the kinds of extractive business models that drove European colonization, with many of the same harmful impacts. In 2021, the landmark ‘stochastic parrots’ research led by Timnit Gebru warned that the performance gains underpinning what would become generative AI were disproportionately small compared with the increase in the operating costs and environmental impacts involved in developing and running them, ‘doubly punishing’ marginalized global communities.

Recent investigations by the Bureau of Investigative Journalism and by The Guardian have exposed how the monotonous and often traumatizing ‘ghost work’ of labelling harmful content and other datasets for corporate AI systems is typically done by workers in low-income countries paid as little as $10 a day – a form of labour now being offered to prison inmates in Finland for less than $2 an hour.


European colonization was not just about extraction, though; it was also about the suppression and control of colonized populations so that extraction could be maintained. In 2022, investigative work on AI colonialism led by Karen Hao for the MIT Technology Review revealed how AI tools were being used to exploit the global poor, introduce digital apartheid and create new forms of political disenfranchisement and coercive surveillance around the world.

In traditional sectors of the global economy, environmental, social and corporate governance (ESG) reporting has been developed to counter these sorts of colonial legacies in global supply chains and business activities. But the lack of equivalent ESG reporting for the digital economy undermines our shared progress towards sustainable development goals.

AI for the flourishing of all humans and non-humans

This is a global problem that will only get worse as more traditional ‘bricks and mortar’ organizations develop their digital capabilities and the global digital economy expands. If part of what made European colonization possible was a narrow imagination imposed on a wider world, then part of what is involved in decolonization is the fostering of a richer range of perspectives.

And so communities working to decolonize AI now include: researchers at DeepMind identifying tactics for de-centring western norms; communities at MozFest developing decolonial alternatives to voice and facial-recognition technology; and the Indigenous Protocol and AI Working Group exploring how indigenous epistemologies and ontologies can contribute to the development of AI for ‘the flourishing of all humans and non-humans’.


Decolonizing AI is not just about undoing the harms of the colonial past, it is about learning from the harms of the past to make sure AI helps build a better future for everyone. ‘Our destinies are intertwined,’ says the AI Decolonial Manyfesto. ‘We owe each other our mutual futures.’

European colonialism was maintained by competition between the imperial powers and by geographical separation between colonizers and the material consequences of colonization. But because of decolonization, ours is an era of international cooperation and multicultural perspectives, broadening and deepening all aspects of our lives. These gains must also underpin our ambition for AI.

Freedom is not just independence: it is flourishing through mutually respectful interdependence with each other and with the world around us. AI is poised to become a general purpose technology, rolling out across sectors, domains and borders with a speed and power that might exceed how European colonization swept across the globe. None of us can afford to get it wrong; decolonizing AI can help us get it right.

Note on the illustration: The illustration for this story has been produced using generative AI. We identified two main concerns with using AI for artwork: the ethics of attribution and of transparency. On transparency, we designed our prompts to ensure the resulting illustration is not mistaken for photojournalism or as depicting actual events or people. On the ethical pitfall of using generative AI models that exploit the creative efforts of humans without attribution, we chose the software deliberately. Adobe’s Firefly is trained on content from its own stock library. This meant the compositional output would be more limited in range, a trade-off we considered less consequential than the exploitation of others’ labour.