On 1–2 November, the UK will host its AI Safety Summit at Bletchley Park, bringing AI powerhouses like the US and China together with industry leaders, civil society and experts, in an attempt to take the lead on managing AI risks at the international level.
Today, UK Prime Minister Rishi Sunak previewed the summit by announcing a new UK AI Safety Institute, which would monitor AI development and risks and share its findings worldwide.
When the UK first announced the summit in June 2023, there was some criticism that it added another process to an already crowded landscape.
While there is a need to coordinate across these efforts, especially the existing Global Partnership on AI, the summit will have a distinct focus on ‘frontier’ AI risks – that is, the concern that the most powerful AI models could either be used for dangerous purposes or act in unanticipated ways.
The UK government highlights the potential for AI to help synthesize new bioweapons; others point to the likelihood that AI could generate sophisticated disinformation at scale, or evade human supervision once deployed.
Some are sceptical about these warnings, arguing their proponents have not outlined how such harms would occur in practice, and that they shift the focus from other risks – including threats to jobs and the risk of discrimination.
But others frame the summit as an opportunity – regardless of broader disagreements – to get major powers, and especially the tech companies that are developing the most powerful AI models, to slow down, coordinate, and seek to control risks at the international level.
Highly powerful AI is currently being developed by just a few countries and labs. Most of the world lacks insight into what’s happening.
The summit will not deliver an agreed new international regulatory framework. But it and the new UK Institute can be judged a success if they establish a shared international understanding of major AI risks, offer models to help address governments’ AI knowledge gaps, and kickstart a process that gives due prominence to Global South voices and other states without significant AI capacity.
A difficult context
Developing systems for governing AI internationally is challenging. The technology’s development has been rapid, outpacing experts’ anticipated trajectories, let alone efforts to develop regulation.
There is competition between countries to develop powerful AI, particularly between the US and China. And, unlike previous massive technological advances – nuclear, space or global telecommunications – AI is mostly being developed by private companies, which will need both to be bound by, and to inform, any regulatory system.
All this comes at a time when key post-1945 institutions struggle to govern more traditional international problems, from conflict and climate to global economic challenges, let alone issues as emergent and complex as AI.
And there is geopolitical competition over influence in international institutions, with China determined to have greater sway in the UN and other global bodies.
Regulation of digital technologies is no less a site of international competition. Experts often refer to the ‘Brussels Effect’ – whereby tech companies conform to the EU’s generally high regulatory standards to access its massive consumer market.
By contrast, the ‘Beijing Effect’ describes China’s approach to selling low-cost digital infrastructure, often to developing countries, in a way that arguably locks them into China’s regulatory tendencies around surveillance and data protection.
A London effect?
Could the UK hope to have a ‘London effect’ on global AI governance? Initially, it would rely largely on convening power and example-setting. The government has kept its stated objectives relatively modest, and the new Institute is a long way from a global governance model.
But the broader goal, of seeking to bring powerful AI firmly into the realm of global governance – rather than that of private tech companies and geopolitical competition between the US and China – is a positive one.
In doing so, the UK also looks to cement its position as a credible global leader in AI.
Some have suggested that AI should eventually be managed by a body like the International Atomic Energy Agency (IAEA), which monitors access to the materials required to develop nuclear power.
However, some argue this system is under strain, and that monitoring would be more challenging for AI inputs than for the highly specific materials required for nuclear power.
Others have used the example of the International Civil Aviation Organization (ICAO) – a body which monitors the domestic regulations states enforce and uses those to certify states as compliant with international standards.
Under such a model, a small group of countries could initially sign up to minimum standards, and then restrict AI imports and trade with states outside the regime.
Others cite UN processes, like the Open-Ended Working Group on Information and Communication Technologies – which meets regularly to discuss IT and cyber-security. While not perfect, the group has required governments to upskill in cyber technology, encouraged countries to take public positions on what responsible behaviour looks like, and enabled contributions from non-government stakeholders.
Ultimately all governance regimes present trade-offs – between being inclusive of all parties and moving quickly, between being flexible and being specific about the risks they address, and between generating sufficient buy-in and having enough teeth to enforce rules adequately.
These challenges are acute with frontier AI, which currently requires a level of computing power and knowledge such that only a few rich countries and labs can develop it, but which potentially presents risks to everyone.
What can the UK’s AI summit achieve?
The summit is not seeking – and was never likely – to establish a definitive international regulatory regime, but to kickstart a process towards eventually agreeing some form of governance.
One possible eventual outcome would be that the UK’s AI Safety Institute acts as a precursor to a body like the Intergovernmental Panel on Climate Change (IPCC) – an independent, international, expert-led body which could provide advice and clarity to governments about the pace of AI development and the scale of the risks.
The IPCC has faced criticism for its slowness and non-binding nature, but an approach like this could help address knowledge gaps about AI risk in governments.