When the UK hosted its AI Safety Summit on 1–2 November 2023, the country’s prime minister, Rishi Sunak, used the occasion to interview the entrepreneur and chief of X, Tesla, SpaceX, Neuralink and The Boring Company, Elon Musk.
There was some trepidation about this. It was not just that Musk is a controversial figure, but that the prime minister – the democratically accountable leader of a G7 nation – was the interviewer rather than the interviewee, the questioner rather than the one with the answers. The tableau crystallized a shifting landscape – one where leaders of technology companies wield significant power, and where leaders of states seem to come to them for solutions.
The theorist Ian Bremmer and Mustafa Suleyman, the co-founder of DeepMind, argued in 2023 that we are living in a ‘technopolar’ world – where power is wielded not just through control of capital, territory or borders, but through control of computing capacity, algorithms and data. Under this model, tech companies significantly shape how ordinary people interact with the world, and are similarly consequential for labour markets and geopolitics.
Nowhere was this more clearly underlined than in Ukraine in 2022–23. Musk’s Starlink satellite internet services had emerged as a critical capability of the Ukrainian resistance, but their provision was dependent on a private company, leading to uncertainty over who called the shots on their use and availability.
If warfare is changing, so too are international norms: decisions affecting ordinary people in all sorts of ways are increasingly made in Silicon Valley boardrooms. Norms on privacy, access to information and freedom of expression are set out in terms of service for billions of digital platform and software users worldwide. Such consolidation makes perfect sense from the major tech firms’ perspectives; after all, local laws, languages and values create costly administrative inefficiencies in globally minded businesses.
But this narrative of tech power also misses some of the critical ways in which states still shape tech companies’ ability to act. Governments have not stayed on the sidelines in response to Big Tech’s more prominent role, or potential role, in geopolitical events. Indian state authorities, for example, are known for the frequency with which they shut down internet access to avoid social or political unrest; and India is increasingly shaping norms about the censorship of social media networks such as X (formerly Twitter). The Chinese state has long sought to challenge Western hegemony in internet architecture, and to influence global digital governance standards. Export controls imposed by the Biden administration in the US affect where and how the tech industry locates factories and develops advanced chips. The concept of the tech company, or even the private sector, as entirely separate from the state is not a reality everywhere.
State power and tech power interact, and have long done so. The question facing us in the years to come is how those relationships may change or break down. At the frontiers of technology – from artificial intelligence (AI) to quantum computing – state power can feel scarce. With some exceptions, when governments do come to the table, they arrive too late or too poorly staffed to be seen as equals: well-meaning bureaucrats at best, a handbrake on profit or progress at worst.
Chatham House is interested in posing questions about this – and, ideally, answering some of them. What would it take to achieve co-governance of technology by the state and the private sector, and how can states around the world adapt to the rising power of tech companies, collaborate with them, and coordinate responses and regulation? What is the extent of Big Tech’s power over policymaking today? States, after all, are politically answerable for many of the decisions affecting their citizens, even where those decisions are currently made in boardrooms, not in parliaments or ministries. If, as the economist Mariana Mazzucato suggests, governments need to learn how to row the boat so they can steer it, then states need to learn how to build and make tech, not just interact with it, to steer their way through 21st-century challenges.
In turn, it is a moment for industry to look at itself, and to ask whether it is able to deliver the public goods it sometimes touts, and how it can steward and respond to the consequences of vast technological change. How both sides broker the relationship between tech power and state power will shape the geopolitics of the future.
These questions are especially acute in a year in which half the world is going to the polls. Some people are concerned about a future of electoral ‘post-reality’ shaped by AI-enabled mis- or disinformation. Our window onto politics and candidates will be framed by the technology that mediates our access to news and information. In this collection of essays on AI’s implications for society and governance, and in our ongoing work at Chatham House, we explore these questions: looking at the merits of community-driven AI, unpacking the challenges around international cooperation and efforts to establish common rules, discussing AI ‘decolonization’, arguing the case for open-source AI development and more.
Bronwen Maddox
Director and chief executive, Chatham House