How to deal with military AI’s Oppenheimer moment

Private sector defence tech firms are necessarily opaque. But too much secrecy puts them dangerously beyond democratic control, writes Yasmin Afina.

The World Today, published 29 September 2023, updated 3 October 2023

Yasmin Afina

Former Research Fellow, Digital Society Initiative

The AI race in the security and defence sector is on. Amid mounting geopolitical tensions, research and development in this area will only increase as states seek to gain advantage over adversaries through cutting-edge technology. At the same time, there is growing concern at the influence and power the private sector wields over military AI.

A striking example of this close relationship came when Alex Karp, the chief executive of Palantir, became the first western executive to visit President Volodymyr Zelenskyy after war broke out in Ukraine. Palantir, which describes itself as building ‘world-class software for data analytics and data-driven decision making’, has recently signed a £75 million, three-year deal with the UK’s Ministry of Defence to aid military operations and intelligence.

Its reputation is less than sparkling. Following the outrage over the US Immigration and Customs Enforcement’s policy of separating immigrant families under the Trump administration, Palantir was reported to have enabled such practices by gathering information on undocumented immigrants, mapping out family relations and selecting targets.

Profoundly disruptive effects

Palantir also holds multi-million-pound contracts with NHS England and partnership agreements with Scuderia Ferrari, and it was implicated in the Cambridge Analytica scandal, in which data was harvested from Facebook users.

Palantir’s case attests to two worrying realities. First, while the private sector has always played a key role in international security and defence, the lines between the military and the civilian realms are being blurred more than ever.

This is exacerbated by the rise in dual-use technologies, which benefit both civilian and military communities and which many states, including NATO members and China, are pursuing. Such developments have both operational and legal implications, which require clarification to ensure they comply with applicable laws, ethical standards and policies.

Second is the amount of data the company holds. Such a concentration of data, and ultimately power, in a handful of private companies practically puts them on par with states.

Given the life-or-death stakes involved in the deployment of military AI, corporations lose the right to say ‘Oops’

Concerns arise over these companies’ accountability and the lack of democratic oversight to which they are subject. Opacity in the defence sector is understandable. The private sector is agile and can act more quickly, while government agencies tend to be hamstrung by lengthy deliberative processes surrounding innovation and procurement.

Yet these processes are necessary to ensure democratic control over defence matters is maintained. Efforts to increase transparency over military spending foster democratic accountability in decision-making, while corporate accountability has to be upheld through regulatory requirements and democratic oversight.

These processes are necessary to ensure corporations do not prioritize profit at the expense of compliance with laws, policies and ethical standards. Given the high stakes involved in the deployment of military AI, which can mean life or death for civilians, corporations practically lose the right to say ‘Oops’.

While the perfect blueprint does not exist, there are several considerations to bear in mind when seeking corporate accountability in the context of military AI. First, there are a number of exemplary mechanisms in other sectors, including the UN Guiding Principles on Business and Human Rights and environmental, social and governance (ESG) frameworks. Incentivization is key, and the security and defence industries would do well to draw inspiration and lessons learnt from these existing frameworks.

Civil society organizations are already alarmed by AI use for mass surveillance, such as Israel’s use of facial recognition technology

Second, states, which remain the main clients for military AI solutions, are bound by the social contract – at least in democracies – to uphold people’s fundamental rights. As such, not only do industries have a duty to act in the best interest of their shareholders and clients, they are also bound by applicable laws, policies and reputational considerations.

Strengthening existing processes to enable democratic oversight on security expenditures is necessary, with the eventual adoption of measures to clarify what this would look like in the context of military AI. Beyond parliamentary processes, there are calls to reform procurement and acquisitions as key enablers for accountability.

Third, risks surrounding profit-driven innovation must not only be identified but also addressed. While, for example, Palantir claims never to have had dealings in Russia, nothing prevents it from doing so in future, unless such matters are expressly stipulated in contractual agreements.

Yet this one-off, case-by-case solution is neither sustainable nor reliable. Not only are there risks of sales to adversary states – the arms trade, British manufacturers included, is no stranger to selling to human rights violators such as Libya and Saudi Arabia – but civil society organizations are already raising the alarm over AI use for mass surveillance and control. One example is the Israeli government’s use of facial recognition technology in the occupied West Bank and East Jerusalem, reportedly relying on CCTV cameras provided by Hangzhou Hikvision Digital Technology and TKH Security.

Beyond state clients, nothing prevents companies from supplying military-grade AI technologies to non-state actors such as paramilitary forces and armed groups. This, in turn, can be destabilizing for states, the effects of which can be profound, long-lasting and spill over borders. In the interest of international security, AI capitalism must be kept in check. Technological deregulation carries too high a risk.

AI’s Oppenheimer moment

In a recent article, Karp, Palantir’s chief executive, argued that with the development of military AI we are living through an ‘Oppenheimer moment’. He believes that while in the case of AI we must choose ‘whether to proceed with the development of a technology whose power and potential we do not yet fully apprehend’, other countries will not stop pressing forward in developing such technologies. Thus, if free and democratic societies are to prevail, they will have an obligation to build hard power through software.

Yet Karp fails to mention the deep concerns Robert Oppenheimer, widely considered the father of the atomic bomb, had about starting a nuclear arms race with the Soviet Union. Not only did the world witness the devastating humanitarian impact of the nuclear weapons dropped on Hiroshima and Nagasaki, but we are left with the long-lasting problem of nuclear arms control and disarmament.

The nuclear weapons policy landscape is deeply polarized, concrete progress has ground to a halt and, despite efforts made to advance nuclear disarmament, which led to the adoption of the nuclear weapons ban treaty, there is still much to do. Risks of a nuclear explosion from one of the more than 12,000 nuclear warheads remain, in a world where geopolitical tensions run high.

The AI genie is out of the bottle

Just like the technology that underpins nuclear weapons, the AI genie is out of the bottle. Military use, and even dependency on AI technologies, will only grow in scale. But if there is one thing the nuclear arms control field can teach us, it is that agreements and arms control measures are incredibly difficult to negotiate among states, even when the stakes are as concrete and as high as the lives of thousands, if not millions of civilians.

The presence and power of private industries will only exacerbate the complexity of the problem. Hence, establishing their responsibility and accountability in the military AI space through concrete regulatory frameworks is not merely desirable, it is a necessity.

Note on the illustration: The illustration for this story has been produced using generative AI. We identified two main concerns with using AI for artwork: the ethics of attribution and of transparency. On transparency, we designed our prompts to ensure the resulting illustration is not mistaken for photojournalism or as depicting actual events or people. On the ethical pitfall of using generative AI models which exploit the creative efforts of humans without attribution, we chose the software deliberately. Adobe’s Firefly is trained on content from its own stock library. This meant the compositional output would be more limited in range, a trade-off we considered less consequential than the exploitation of others’ labour.