If dominated by major powers, AI development risks creating a new form of digital colonialism, particularly in Africa and other parts of the Global South. But a more optimistic future is imaginable, in which universal rules on AI are jointly shaped in a global public sphere drawing on many cultures and value systems.
American and Chinese artificial intelligence (AI) systems – their algorithms and their data infrastructures alike – are locked in a contest for supremacy that many in the AI and policy communities are following with interest. But for many countries around the world, the question of which model will prevail is secondary to an uncomfortable fact: both represent foreign technology. AI imposed from outside, and shaped by the language and social systems of a few powerful countries, risks becoming a form of digital colonialism that ignores diversity of geography, language and culture.
How can today’s post-colonial societies, such as many in Africa, avoid being recolonized, this time through foreign technology? AI systems are not neutral intermediaries. Like every technology, they are tools of political power, and attention should be paid not just to their technical implications but to their potential to disrupt and fragment societies in the same way that colonialism did in analogue contexts in the past. Colonial administrations in the 20th century sought to suppress the use of native African languages, and were especially anxious to promote the written use of European languages. ‘People were taught to feel ashamed for their own language,’ observes a researcher in natural language processing (NLP) – the branch of AI concerned with enabling computers to understand text – quoted in one academic article.
This risk of exclusion may even be accentuated by the advent of foundation and generative models in AI, for example if large language models (LLMs) such as ChatGPT rely on geographically, culturally or linguistically homogeneous sources: ‘LLMs model their output on the texts they have been trained on, which is more or less the writing of the entire Internet, including all the biases – the prejudices, racisms, and sexisms – that constitute much of it … in the future, language models themselves may take on the status of a surrogate public sphere.’
Viewing AI as a sociotechnical system – not just as a tool – brings the values underlying AI to the surface. It is essential that these values be diverse: representative not only of people on the West Coast of the US, but also of other nationalities and cultures, as well as of the formerly oppressed, including women and thought leaders from the Global South. Diversity and inclusivity in AI need to be enshrined within, and supported by, an internationally developed human rights framework. This also means casting a critical eye over current imbalances in the global development of AI – such as the frequent marginalization of non-Western voices – and recognizing that the problem’s sources lie in institutional structures and historical inequalities.
One way to address the issue is through the pursuit of what has been described as an ‘overlapping consensus’ – one that would draw its values substantially from the Global South and Europe, rather than just the US or China, and could thus inform a global AI-enabled ecosystem more equitable than one whose design excludes parts of the world.
Values in technology and regulation
For the past two decades, values have been exported globally through digital technology. In the case of social media, for instance, norms around freedom of expression, newsworthiness and privacy have been renegotiated through algorithms built in Silicon Valley. The new generation of AI tools is no different, and the risks associated with its influence grow the more frequently we outsource our decisions to the human-like answers these tools can give.
In Africa, this means AI tools may often ignore African values and instead reflect those of the countries leading AI development, most notably the US and China. In very simple terms, US systems tend to emphasize the autonomy of the individual and to commodify social relationships, while Chinese systems advance the value of social control. To date, African countries have often had to choose between these two competing blueprints, even though neither necessarily benefits local cultures or provides a public good. For example, where governments such as Senegal’s have seemed to embrace the Chinese model of digital sovereignty (by localizing government data onto domestic servers, for instance), such action has sometimes given the impression of performative policymaking for political ends. This may ultimately increase, rather than reduce, the hegemony of imported values, while strengthening foreign economic interests (Senegal’s new national data centre, opened in 2021, was Chinese-built).
Europe is a different case, in some ways less obviously invasive as an AI power, but also emblematic of the challenges and uncomfortable dilemmas African countries face as they seek to navigate the AI landscape and shape it to their advantage in the future. What Europe lacks in tech export capacity it makes up for in its world-leading regulation. In an attempt to boost member states’ technological autonomy, and insulate European citizens from US and Chinese AI tools, the EU is developing an AI regulatory framework that will include protections for individuals, markets and digital products. The most notable element of this initiative is the new EU AI Act, approved by the Council of the EU in May 2024. (See also, in particular, Chapter 3, ‘Regulating AI and digital technologies – what the new Council of Europe convention can contribute’, and Chapter 5, ‘Open source and the democratization of AI’.) The EU’s global influence has given rise to anticipation, at least in Europe, that the world (including Africa) will embrace the standards enshrined in the AI Act, including those around values such as privacy and autonomy of the individual.
Yet in its own way, the EU’s ostensibly progressive approach is also an unwelcome imposition of values on non-Western countries, and a form of domination based on paternalism. It means that Africans, for example, may be denied the right to govern their societies based on their own values of community and the equitable distribution of social goods. Although African countries are unlikely to be coerced into adopting EU regulations, in practice states may nonetheless choose to comply with the EU AI Act in order to access European markets – in much the same way as some African states have already adopted European cyber governance standards. In short, the regulatory power asymmetry between Europe and Africa, itself partly a historical legacy, may come into play again where AI regulation is concerned.
This is not to say that European values are bad per se. But the imposition of the values of individualism that accompany Western-developed AI and its regulations may not be suitable in communities that prize communal approaches. Just as dual-use biometric technologies can create unintended consequences – amplifying ethnic tensions, for example – the values currently underpinning AI deployment are likely to lead to increased inequality, alongside social, economic and political disruption, with technologically disadvantaged and under-represented populations in Africa faring the worst.
The need for homegrown solutions
Given that AI systems may have disproportionately negative impacts on historically disadvantaged groups, more attention needs to be paid to how technology affects the right to self-determination in post-colonial societies. African societies approach this area differently from the countries and jurisdictions dominating the current discourse. For instance, on the question of whether humans owe ethical obligations to robots, African ‘ubuntu’ values – which promote harmony, consensus, collective action and the common good – have thus far been excluded from the debate. Ethicists discussing the implications of robotics, in other words, have considered many variables but not how ubuntu fits into the picture.
Such dynamics confirm that we cannot expect solutions to come from the existing centres of power. The UK’s proposed AI audits and the EU’s comprehensive AI regulations are designed to protect European markets and to preserve the continent’s technological strategic autonomy and global dominance. Paternalism can also be observed in China’s collection of African biometric data to diversify AI training datasets: its AI labs need data containing black faces to train the algorithms they produce. The logistical efficiency that automation promises for the production and distribution of goods and services comes at the expense of African communal values. When Western companies harness machine learning to improve the productive efficiency of industrial agriculture, they disrupt the traditional societal structures that make African life meaningful. And in globalized economic systems, major decisions on resource allocation are taken far from individual producers and consumers, and have become opaque to them. These and other cases of value imposition by global AI superpowers show that Africa is ‘a theatre of operations rather than the focus itself’. When foreign values compete in this geopolitical theatre, they erode African collective values such as communalism – values that, despite their own downsides, give meaning to African life and support the political agency necessary to counteract external domination.
To secure such agency, multi-stakeholder approaches to AI governance are critical. Drawing on the wealth of scholarship and expertise on resisting colonialism that exists among the formerly oppressed, such approaches will need to challenge fundamental assumptions about proprietary research; they will also need to address issues such as lack of representation and the absence of mechanisms for shared ownership. This matters because AI governance discussions that include only regulators and tech companies miss critical voices: those of the individuals and communities most affected by the vulnerabilities AI could create. The decision to listen, learn and invite new leaders to the table could shape an AI-driven future of equity, compassion, human creativity and opportunity, rather than one of exclusion and exploitation.
An inclusive AI partly informed by ubuntu values would work both ways, not only benefiting Africa but also providing normative standards for the rest of the world, to everyone’s advantage. In this more equitable digital world, European regulators would not be alone in pushing back against US and Chinese hegemony; they would have the support of the Global South. For this to occur, there needs to be a global public sphere in which universal rules on AI can be debated and forged. While such a sphere would certainly include and respect European voices, its heart might lie in the southern hemisphere, with the debate led by the perspectives of communities that have historically faced oppression and colonialism.
In this way, the above-mentioned ‘overlapping consensus’ could bring together the best thinking from the Global South and Europe to create a safer, more sustainable and more equitable vision for the future of AI. Such a consensus, grounded in intercultural discourse, could ultimately address the unfair distribution of AI’s benefits and harms by confronting the systemic colonial power arrangements that lie behind that distribution.
In taking this approach, we will also build better AI: systems that spotlight historical inequalities and locate problems not just within technical systems, but within the social structures and institutions from which they originate.