Artificial intelligence is changing so rapidly that its would-be regulators are having trouble keeping up. But the potential impacts of AI on societies may be so transformative – whether for better or worse – that strengthening cooperative, global governance to ensure a future of responsible AI is an urgent necessity.
In 2023, Chatham House asked its network of digital technology and policy experts for their big questions on artificial intelligence (AI). This essay collection – the inaugural paper in a planned series of publications on AI – sets out to offer some answers. Written by leaders in their fields, the essays present a range of perspectives on the promise and pitfalls of efforts to govern this emerging technology. The collection brings together voices from industry and government, civil society and academia, and perspectives from Africa, Europe, Latin America, the UK and the US.
Governance of emerging technologies such as AI may prove to be one of the defining challenges for international relations in the 21st century. The competition for technological hegemony promises its winners economic advantage, the entrenchment of their values and norms, and an edge in military power. China and the US – locked in an increasingly tense rivalry in many areas, including technology – remain the most significant investors in AI development globally.
Yet if 20 years of digital technology development have proven one thing, it is that power derived through technology rarely maps neatly to geographies, markets or any existing set of international rules, norms or values. New centres of power have emerged. Governance of technology is usually retroactive. Institutions – whether democratic or autocratic, in politics, the media and across the economy – need time to come to terms with the changing technology landscape and to adapt accordingly. But technology advances rapidly, which means that decisions made in corporate boardrooms often precede and carry more weight than those made in parliaments, government ministries or regulatory agencies. Whether intentionally or inadvertently, companies are often in effect setting global standards on fundamental rights, on political and social norms, and on the assumptions, aims and values that shape the technology we use in modern life.
While quick to spot the opportunities, national governments in particular have been slow to rise to these challenges, and multilateral institutions even slower. Shaping foundational digital technologies – digital media, sharing platforms, cloud storage, encrypted messenger apps and now AI – remains a point of weakness for most governments, particularly democracies.
Why does this matter now in particular? There is an emerging consensus that this next wave of technologies raises the stakes considerably. Even the most sceptical observer of AI development would agree that AI will be responsible for significant upheaval: it will certainly disrupt economies, societies and many dimensions of physical and digital security; the impacts will likely be even broader as the technology continues to be deployed more deeply into our everyday lives. AI prophets might go further. They might promise, or warn of, a reassessment of the most fundamental aspects of global society – encompassing understandings of economic value, questions around the superiority of humans to machines, and even the survival of humans as a species.
How AI will transform the world is a geopolitical question. Conflict and competition will shape the technology in certain ways; cooperation will shape it in others. AI designed and built in a cutthroat marketplace will look different to AI dominated and shaped by monopoly power. AI development led by universities will not resemble that led by states, militaries, philanthropic organizations or technology companies. AI developed in China will be different to AI built in the US, Europe or India. Which of these trajectories are more or less likely, whether some might coexist, and how they can be steered are the questions at the heart of this essay collection.
Taken as a whole, the authors’ arguments on various dimensions of AI governance underscore the idea that ensuring collective human benefit should be the guiding principle for negotiations on the future of AI. Beyond a focus on risk aversion or harm prevention, the authors demand a clear articulation of the kind of world we should be aiming to bring about through this technology. Perhaps above all else, the collection is a call for clarity from those around the table about their aims, and about the realities of this technology revolution.
Whether change needs to come from the design of new institutions or regulatory frameworks, from multi-stakeholder consultation or from community leadership, the message is clear: current AI governance is insufficient. It is insufficiently incentivized, insufficiently resourced, insufficiently coordinated and insufficiently representative. AI governance will need new agreements, treaties and institutions: a CERN-like body for global cooperation on AI research, for instance, or new corporate models and multilateral treaties governing the use of AI.
Without a change, we risk repeating and entrenching blunders made in the provision of digital technology in recent decades. While the proponents of AI may be fond of emphasizing its novelty, there are deep and disconcerting continuities at the heart of the AI revolution. Access to digital technology remains wildly uneven around the world on any measure: internet connectivity, advertising spend, the availability of affordable mobile internet services, investment in digital infrastructure. Improved access to AI is essential: through skills development, infrastructural investment and the thoughtful use of open-source approaches. The race for market share between US and Chinese technology firms is a familiar story that carries lessons for the next generation of AI-enabled technology: among them, the often questionable effectiveness of regulation in anticipating technical developments, and the insufficient influence of global majority countries over the technology their citizens use.
Without a change, we may also miss the promise of these new technologies. While headline-grabbing warnings of the existential risks of AI have somewhat faded, democracies have for the most part retreated into their comfortable roles as regulators and rule-makers. The potential consequences are twofold. On the one hand, the development of blueprints for state-backed AI may be left to autocratic or authoritarian states, for which the potential of AI as a route to expanded power is irresistible. On the other hand, liberal democracies may fail to demand technology that reflects their norms and meets the standards and needs of their societies, and may fail to harness the power of AI to buttress liberal values and cultural norms or to transform public services. Without committed action to ensure its responsible development, AI may simply amplify the worst excesses of digital media and of state-led, technology-backed repression.
Competition, conflict and cooperation around the design, deployment and governance of emerging technology will remain central influences in global affairs. AI technology – on the battlefield and on the trading floor, in hospitals, newsrooms and classrooms – presents challenges and opportunities for states looking to advance their position in the world, or to respond to the concerns of citizens who expect their governments to protect and provide. Rising to this challenge – through clarity of mission and purpose, multi-stakeholder dialogue, and investment and innovation in governance – will help ensure this latest technology is a force for global good. My hope is that this collection will drive that agenda forward.