For all of human history, politics has been fundamentally driven by conscious human action and the collective actions and interactions of humans within networks and organizations. Now, advances in artificial intelligence (AI) hold out the prospect of a fundamental change in this arrangement: the idea of a non-human entity having specific agency could create radical change in our understanding of politics at the widest levels.
Not least because of the influence of literature, cinema and television, popular thinking about AI can tend towards the fanciful. Fictional, apocalyptic depictions of war between humans and robots have influenced breathless coverage of sometimes relatively minor AI developments. Periodically, too, leading figures in the fields of science and technology have issued stark warnings that AI may pose an existential threat to human life. Together, these have given rise to a perception among the general public that a new form of intelligence that exceeds human intelligence is just around the corner – or even with us already.
Humans and limited forms of AI already coexist: AI technology helps us to navigate, to translate text and to find cheap flights, to give just a few examples; and – notwithstanding its known flaws and limitations – it looks set to be emblematic of a radically transformed future.
But the more extreme ideas of what advances in AI may mean for how humans live, work and engage are far removed from the current reality. The nature of AI in 2018 – and very likely for the foreseeable future – is somewhat mundane. Indeed, the field is seeing relatively minor advances that bring specific practical benefits in identified areas, rather than AI with general application. Services such as Google Translate, for instance, are undoubtedly useful, but the efficiencies created by such services do not yet hold out the prospect of noticeably changing the power balance at the international level. A truly non-human intelligence would likely do so, but a constructed system capable of operating at a level comparable to the human brain – an artificial general intelligence, or AGI – would require broad-based advances in every aspect of the field: hardware, software, and even our understanding of what cognition actually is.
The more prosaic advancements are not insignificant, however. Technological change does not have to be dramatic or sudden to create meaningful shifts in power balances or social structures. Indeed, focusing on the distant prospect of dramatic change may well distract from developing a more nuanced understanding of slower and subtler, but equally significant, changes.
This Chatham House report examines some of the challenges for policymakers, in the short to medium term, that may arise from the advancement and increasing application of AI. A fully comprehensive set of predictions for every possible ramification of AI for the world is beyond its scope. Significant areas not addressed here – including medicine, public health and law – might be fundamentally transformed in the coming decades by AI, with considerable impacts on the processes of the international system. Furthermore, as the report was being finalized, public attention increasingly turned to the possibility of AI being used to support disinformation campaigns or to interfere in democratic processes. We intend to focus on this area in follow-up work.
This report does not attempt to offer specific predictions for the progress of discrete technological avenues, or proposals for particular paths of technological development. Rather, it draws together strands of thinking about the impact that AI may have on selected areas of international affairs – from military, human security and economic perspectives – over the next 10 to 15 years.
The report sets out, first, a broad framework to define and distinguish between the types of roles that artificial intelligence might play in policymaking and international affairs: these roles are identified as analytical, predictive and operational.
In analytical roles, AI systems might allow fewer humans to make higher-level decisions, or automate repetitive tasks such as monitoring sensors set up to verify treaty compliance. In these roles, AI may well change – and in some ways has already changed – the structures through which human decision-makers understand the world. But the ultimate impact of those changes is likely to be attenuated rather than transformative.
Predictive uses of AI could have more acute impacts, though likely on a longer timeframe. Such applications may change how policymakers and states understand the potential outcomes of specific courses of action. If predictive systems become sufficiently accurate and trusted, this could create a power gap between actors equipped with them and those without – with notably unpredictable results.
Operational uses of AI are unlikely to fully materialize in the near term. The regulatory, ethical and technological hurdles to fully autonomous vehicles, weapons and other physical-world systems such as robotic personal assistants are very high – although rapid progress towards overcoming these barriers is being made. In the longer term, however, such systems could radically transform not only the way decisions are made but the manner in which they are carried out.
The report then turns to examine the near-term implications of AI applications in the military, human security and economic fields. Missy Cummings, looking at the military sector, concludes that truly autonomous weapons systems are still some distance away: a combination of operational and doctrinal issues has largely prevented their adoption so far, although remotely operated vehicles are increasingly prevalent for some applications such as aerial and undersea reconnaissance. She argues that a significant shift in the centre of gravity is under way between traditional defence industries and the non-defence technology industry, with implications for how military systems are designed and acquired.
Heather Roff argues that AI does have positive implications for human security, but that unlocking progress means first understanding the roles in which AI can be put to positive use – and, critically, understanding the difference between using data (which machines can sort effectively) and knowledge (which humans remain far better at). She concludes, furthermore, that in order to fully reap the potential benefits of AI in the realm of human security, proactive steps need to be taken to ensure equality of access to technology.
Kenn Cukier makes the case that AI is likely to reshape what work looks like, but that it is unlikely to fundamentally change underlying economic power structures. Artificially intelligent systems – both those employed in operational roles (like autonomous vehicles) and those in analytical and predictive roles – are likely, in his view, to create significant wealth, but the distribution of that wealth will not inherently become more equal for humans.
AI technology may have profound impacts on economic and geopolitical power balances, but it will require clarity of purpose to ensure that it does not simply serve to reinforce existing inequities. Building a framework for better managing the rise of artificially intelligent systems in the near term might also reinforce the process of mitigating longer-term risks. To this end, the report makes the following recommendations for governments and international non-governmental organizations, which will have a particularly important role in developing and advocating for new ethical norms:
- In the medium to long term, AI expertise must not reside in only a small number of countries – or solely within narrow segments of the population. Governments worldwide must invest in developing and retaining home-grown AI talent and expertise if their countries are not to depend on the expertise now concentrated largely in the US and China. And they should work to ensure that engineering talent is nurtured across a broad base, in order to mitigate problems of inherent bias.
- Corporations, foundations and governments should allocate funding to develop and deploy AI systems with humanitarian goals. The humanitarian sector could derive significant benefit from such systems, which might for example decrease response times in emergencies. Since AI for humanitarian purposes is unlikely to be immediately profitable for the private sector, however, a concerted effort needs to be made to develop them on a not-for-profit basis.
- Understanding of the capacities and limitations of artificially intelligent systems must not be the exclusive preserve of technical experts. Better education and training on what AI is – and, critically, what it is not – should be made as broadly available as possible, while understanding of underlying ethical and policy goals should be a much higher priority for those developing the technologies.
- Developing strong working relationships, particularly in the defence sector, between public and private AI developers is critical, as much of the innovation is taking place in the commercial sector. Ensuring that intelligent systems charged with critical tasks can carry them out safely and ethically will require openness between different types of institutions.
- Clear codes of practice are necessary to ensure that the benefits of AI can be shared widely while its concurrent risks are well managed. In developing these codes of practice, policymakers and technologists should understand the ways in which regulating artificially intelligent systems may be fundamentally different from regulating arms or trade flows, while also drawing relevant lessons from those models.
- Particular attention must be paid by developers and regulators to the question of human–machine interfaces. Artificial and human intelligence are fundamentally different, and interfaces between the two must be designed carefully, and reviewed constantly, in order to avoid misunderstandings that in many applications could have serious consequences.