As AI technologies are developed and deployed at scale, concern is growing about the risks they pose. In May, some industry leaders and scientists went so far as to claim that AI is as great a threat to humanity as nuclear war.
The analogy between the two fields is gaining traction, and influential figures, including OpenAI’s CEO Sam Altman and the UN Secretary-General Antonio Guterres, have proposed the establishment of an international agency for AI akin to the International Atomic Energy Agency (IAEA).
But the two are very different types of technology, and the nuclear governance model would not translate well to AI.
What the IAEA is
The IAEA was established in 1957 to promote the peaceful use of nuclear technology. US President Eisenhower proposed the agency in his 1953 ‘Atoms for Peace’ speech, in the hope that ‘… the splitting of the atom may lead to the unifying of the entire divided world.’
The agency is charged by its statute with promoting nuclear energy for peace, health and prosperity, and with ensuring, so far as it is able, that nuclear energy is not used in ways that further any military purpose.
The IAEA conducts safeguards inspections at civil nuclear facilities, such as nuclear power plants and research reactors, to ensure that nuclear materials in non-nuclear-weapon states are not diverted to military programmes.
The agency’s safeguards record has been extraordinarily successful, with the exception of Iraq in the late 1980s. It has discovered several instances of non-compliance and, except in the case of North Korea, has contributed significantly to reversing such behaviour and preventing proliferation, including thus far in Iran.
Existential fear of nuclear war
From early on in their development, nuclear weapons posed a known, quantifiable existential risk.
The nuclear bombing of Hiroshima and Nagasaki in August 1945 attested to the destructive, indiscriminate, and uncontainable nature of these weapons.
One of the key motivations for founding the IAEA and for arms control treaties such as the Nuclear Non-Proliferation Treaty (NPT) was the deep fear of nuclear war.
These fears were well founded. At the height of the Cold War, the US and the then Soviet Union were said to have enough nuclear weaponry to ‘destroy humanity as we know it’. Recent calculations suggest that the number of nuclear weapons required to destroy the conditions for human habitation is fewer than 100.
The risks posed by nuclear weapons’ very existence, and the threat of their use, are therefore existential; and the profound humanitarian consequences that would result from their use were a driving force behind the 2017 adoption of the Treaty on the Prohibition of Nuclear Weapons.
Fear of catastrophe is distracting efforts away from known risks
At the moment, there is no hard scientific evidence that AI poses an existential or catastrophic risk.
Many of the concerns remain hypothetical and are diverting public attention from the already pressing ethical and legal risks that AI poses and the harms that follow from them.
This is not to say that AI risks do not exist: they do. A growing body of evidence documents the harm these technologies can cause, especially to those most at risk, such as ethnic minorities, populations in developing countries and other vulnerable groups.
Over-dependency on AI, especially for critical national infrastructure (CNI), could be a source of significant vulnerability – but this would not be catastrophic for the species.
Concerns over wider, existential AI risks do need to be considered, carefully and step by step, as the evidence is gathered and analysed. But moving too fast to impose controls could also do harm.
AI is difficult, if not impossible, to contain
The technicalities of nuclear weapons are inherently different from those of AI. The development of nuclear weapons faces physical bottlenecks: their manufacture requires specific materials in specific forms, such as plutonium, highly enriched (above 90 per cent) uranium and tritium.
These materials produce unique, measurable signatures. The tiniest of traces can be discovered in routine inspections, and clandestine activities exposed.
Nuclear weapons cannot be made without these special materials, so controlling access to them physically prevents countries that are not permitted to acquire them from doing so. This is very different from AI, which is essentially software-based and general-purpose.
Although the development and training of AI can require heavy investment and supercomputers with tremendous processing power, its applications are widespread and increasingly designed for mass use across all segments of society. AI is, in that sense, the very opposite of nuclear weapons.
The intangible nature of AI would make it difficult, if not impossible, to contain – especially with the increase of open-source AI.
Safeguarding measures and verification methods akin to those employed by the IAEA would therefore not work for AI due to these inherent technical differences.
What could work?
Policy responses are necessary to address the risks of developing and deploying AI technologies, but governance models from outside the nuclear field offer better inspiration.
A solution similar to the US Food and Drug Administration (FDA) might provide a sensible approach to overseeing the release and commercialization of AI products.
This would consist of a scaled launch model, alongside robust auditing requirements and comprehensive risk assessments evaluating both the direct and indirect implications of the product in question.
The EU’s Reference Laboratory for Genetically Modified Food and Feed (EURL GMFF) also provides a useful model for thinking about some aspects of AI control and regulation.
National and international attempts to control and regulate human gene editing and human embryo research are also worth studying, as efforts to govern an amorphous technology across very different cultural contexts.