1. Introduction: Artificial Intelligence and International Politics
Across the entire spectrum of human behaviours, politics may be one of the most difficult activities to automate. Politics, as it is commonly understood (as the mechanism by which competing objectives are weighed against each other1), is an inherently complex task that reflects the complexity of human behaviour on both an individual and a mass scale. This becomes all the more true at the level of international relations.
It seems unlikely that human-level artificial intelligence (AI) – so-called artificial general intelligence (AGI) – will emerge in the near future. Even if the state of the art advances much more quickly than anticipated, there is considerable resistance to the idea of turning responsibility over to machines. This can be seen playing out notably in the present-day debates over autonomous cars and robotic weapons systems. In this context, it is difficult to envisage the decision-making elements of politics being outsourced to machines in their entirety without also imagining an entirely different, speculative future.
While the prospect of a robotic (in the literal sense) president or foreign minister seems very distant, this does not mean that AI will not affect international politics in significant ways. Its impacts are likely to be more diffuse and subtle, manifested through changes in the ways in which human decision-makers are informed rather than through AI taking decisions itself. Consideration of the application of AI in international affairs should therefore extend to the structures that support decision-makers, and to the speed with which critically significant decisions are made.
In short, it seems safe to assume that artificially intelligent systems will not replace humans at the top level of decision-making. But they will be an increasingly significant part of the context in which human decision-makers operate. This evolution presents both huge opportunities and substantial risks, so considering the potential impacts at an early stage is critical.
How might AI fit into international relations?
Ultimately, AGI might be capable of executing any cognitive or operational task for which human intelligence is currently necessary. But given the likelihood that such AGI will take decades, or perhaps centuries, to develop, current analysts and policymakers might reasonably focus chiefly on the tasks that may be assigned to AI in the relatively near term.
Such tasks will depend heavily on the capabilities of AI. Machines are of course capable of processing enormous amounts of data exceptionally quickly. They can also store and access far greater amounts of data than can a human mind. With the correct software, a machine can recognize patterns in data much more quickly and accurately than any human can. But machines also operate within a very limited set of parameters – where a human child can instinctively recognize a cat from any angle, a computer (even after sorting thousands of cat images) can be flummoxed by seeing a cat whose face is temporarily hidden from view.2
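To make this brittleness concrete, consider a toy sketch: a 'recognizer' that scores inputs against a single fixed template clears its threshold on a complete view of the pattern but fails once part of the pattern is hidden. The feature vectors, template and threshold below are invented for illustration; this is a caricature of how an image classifier can fail under occlusion, not an implementation of one.

```python
# Toy illustration of classifier brittleness under occlusion.
# All values are invented; the vectors stand in for real image features.
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Similarity between two feature vectors, in [0, 1] for non-negative inputs."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

cat_template = [1.0, 1.0, 1.0, 1.0, 1.0, 1.0]  # learned 'cat' feature pattern
full_view    = [1.0, 0.9, 1.0, 0.8, 1.0, 0.9]  # cat seen clearly
occluded     = [1.0, 0.9, 1.0, 0.0, 0.0, 0.0]  # face hidden: half the features vanish

THRESHOLD = 0.9  # minimum similarity the system accepts as a match
for name, features in [("full view", full_view), ("occluded", occluded)]:
    score = cosine_similarity(cat_template, features)
    verdict = "cat" if score >= THRESHOLD else "not recognized"
    print(f"{name}: similarity {score:.2f} -> {verdict}")
```

Run as written, the full view scores about 1.0 and is recognized, while the occluded view drops to roughly 0.7 and is rejected – the same object, lost to a rigid pattern-matching rule that a child's perception would not even register as a problem.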
Consider, for example, the case of the first fatal crash of a vehicle being operated in ‘self-drive’ mode: a Tesla Model S, which in May 2016 drove at full speed into the side of a truck; its human driver was killed in the collision. According to investigators, the car’s sensors were confused by sunlight reflecting off the white paint of the truck’s trailer, which they were unable to distinguish from the sky. The system neither braked nor warned the human driver of the impending collision. Investigators concluded that the ultimate responsibility lay with the human driver, whose failure to properly oversee the operation of the vehicle led to the accident.3
Regardless of the legal responsibility, it is hard to imagine a human driver making that particular mistake. By the same token, it is impossible to imagine an AI system making the kinds of error that human drivers do frequently and with often horrific consequences – such as driving while tired, distracted or drunk. Machines and humans have different capabilities, and, equally importantly, make different mistakes based on fundamentally divergent decision-making architectures.4
One frequently mooted solution is to combine humans and machines in teams that allow them to operate in complementary fashion. Provided a suitable interface can be devised to mediate between the two – in itself no small obstacle – humans working with machines might combine the strengths of both while avoiding the pitfalls associated with either. Teams following this arrangement, dubbed ‘centaurs’ after the half-man, half-horse of Greek mythology, have already been trialled for military use by the US, and seem likely to be the goal for the near future at least.
The key with these blended approaches is a clear delineation of responsibility. Unless the human–machine interfaces – both at the individual operator level and higher up the chain – are designed and tested with extreme care, the teaming brings with it completely new terrain in which mistakes can be made and where responsibility can slip through the cracks. And as the scope of responsibility increases, so too do the consequences.
The next sections explore three categories in which AI is likely to be used in a particularly instrumental way in international politics and policymaking.
Analytical roles
Artificially intelligent systems are already found in analytical roles, combing through large datasets and deriving conclusions based on pattern recognition. These are precisely the ‘dull’ tasks (of the ‘dull, dirty and dangerous’ formulation5) that are generally regarded as the highest priority for automation.
The possibilities of such systems are iterative: they will grow as artificially intelligent software spreads and increases in capability. In Chapter 2 of this report, Missy Cummings explores in fuller detail the extent to which AI is being used within weapons systems, and the issues and challenges that this presents. But a few possible missions or roles can already be imagined without too much difficulty.
Monitoring the outputs of sensors set up to verify compliance with, for instance, a nuclear, chemical or biological arms control treaty might well be a deadening job for human analysts – albeit one that would require significant specialist training and experience. By contrast, a machine learning system6 set up to do the same would never tire or grow bored of its task. And while it might (especially in the process of learning) flag a large number of ‘false positives’, human specialists assigned to oversee and correct it would quickly improve its accuracy.
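The oversight loop just described can be sketched in a few lines of code: a simple statistical detector flags candidate anomalies in a sensor feed, a human specialist confirms or rejects each flag, and rejected flags tighten the alert threshold. Everything here – the synthetic feed, the z-score rule, the analyst_review stub and the numeric values – is an illustrative assumption, not any real verification system.

```python
# Minimal human-in-the-loop monitoring sketch: machine flags, human corrects.
import random
import statistics

random.seed(0)
baseline = [random.gauss(100.0, 5.0) for _ in range(500)]  # routine sensor activity
mean, stdev = statistics.mean(baseline), statistics.stdev(baseline)

threshold = 2.0  # z-score needed to raise an alert; starts deliberately low
readings = baseline + [random.gauss(130.0, 5.0) for _ in range(5)]  # 5 genuine events

def analyst_review(value: float) -> bool:
    """Stand-in for a human specialist; here, 'genuine' events exceed 120."""
    return value > 120.0

false_positives = 0
for value in readings:
    z = abs(value - mean) / stdev
    if z < threshold:
        continue  # machine sees nothing unusual
    if analyst_review(value):
        print(f"confirmed anomaly: {value:.1f} (z={z:.1f})")
    else:
        false_positives += 1
        # Each rejected flag nudges the threshold upward, so the system
        # grows more precise under sustained human correction.
        threshold = min(threshold + 0.1, 3.5)

print(f"false positives reviewed: {false_positives}; final threshold: {threshold:.1f}")
```

The point of the sketch is the division of labour: the machine never tires of scanning the feed, while the humans' corrections are fed back so that the nuisance alerts they must review steadily diminish.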
In a similar, albeit less dramatic, fashion, artificially intelligent processes might be very useful for streamlining the more mundane procedures of political exchange. Given the increasing amount of real-time data generated by industrial and commercial operations (through what is often referred to as the ‘internet of things’, or IoT), it is not difficult to imagine artificially intelligent systems monitoring trade and feeding into decision-making processes around economic policy.
In other words, AI will only become more important in how policymakers see and understand the world. In doing so, it will effectively expand their capacity for processing information, but at the same time it will introduce new uncertainties and complexities into established decision-making protocols.
Predictive roles
Another set of roles for AI might involve prediction rather than analysis. In other words, whereas analytical applications of AI are intended to streamline current operations, artificially intelligent systems may offer policymakers opportunities to anticipate possible future events.
One such example in the arena of international affairs would be the possibility of modelling complex negotiations. Along with using AI systems to monitor compliance and improve the efficiency of complex international instruments, parties to negotiations (whether economic or strategic in nature) might use sophisticated machine-learning methods to forecast others’ positions and tactics.
But a number of moderating factors must be considered. Notably, while predictive algorithms have been demonstrated with some success in some capacities, they are not yet necessarily more accurate than their human equivalents.7 Time, accumulated knowledge and increasingly powerful computer hardware may ultimately make them as accurate as (or more accurate than) human forecasters, but the nature of prediction makes it unlikely that there will be one clear standard of success on this front. Furthermore, as seen in the example of the fatal Tesla accident in 2016, the interface between machine and human understandings of the world creates new potential for miscalculations without necessarily providing a compensatory benefit. That potential is amplified by the fact that complex negotiations are by definition multi-party: the machine system and human operator are not simply trying to obey a set of rules more efficiently; they are trying to predict the actions of, and overmatch, one or more opponents (who may themselves also be using predictive algorithms).
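One common – though by no means definitive – way to compare machine and human forecasts is a proper scoring rule such as the Brier score, sketched below. The forecasts and outcomes are invented for illustration; as noted above, no single metric settles the question of whether an algorithm out-predicts an analyst.

```python
# Toy comparison of probabilistic forecasts using the Brier score (lower is better).
def brier_score(forecasts: list[float], outcomes: list[int]) -> float:
    """Mean squared error between predicted probabilities and 0/1 outcomes."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(outcomes)

# Probability that each of five negotiating parties accepts a proposal,
# as estimated by a hypothetical model and a hypothetical human analyst.
model_forecasts   = [0.9, 0.2, 0.7, 0.4, 0.8]
analyst_forecasts = [0.8, 0.4, 0.5, 0.5, 0.9]
actual_outcomes   = [1,   0,   1,   0,   1]  # what actually happened

print(f"model:   {brier_score(model_forecasts, actual_outcomes):.3f}")
print(f"analyst: {brier_score(analyst_forecasts, actual_outcomes):.3f}")
```

Even this simple rule embodies contestable choices – it rewards calibrated probabilities, but says nothing about which errors matter most in a negotiation – which is part of why a single standard of predictive success is unlikely to emerge.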
There may well be benefits arising from machines operating in such predictive capacities – not least the fact that the ability of an artificial system to store and compare numerous historical data points will almost certainly exceed that of a human or group of humans. But, as Heather Roff observes in Chapter 3, for those benefits to be shared equitably, the underlying technology must be accessible – and, by the same token, stringent measures must be taken to protect the personal data that feed into the algorithms. Striking the right balance will be difficult indeed.
It is worth noting, moreover, that AI might take on other predictive roles with a bearing on geopolitics, contributing for instance to more accurate forecasting of elections, economic performance and other relevant events. But such applications are functions less of machine learning than of the quantity of data available, and so should be considered chiefly in that light.
Operational roles
The final category is somewhat different, covering autonomous systems in the more traditional sense of robots. The implications of these applications are likely to be diffuse and indirect, but their potential significance warrants consideration alongside analytical and predictive functions.
Autonomous logistical systems are likely to have significant indirect implications for international politics. The day-to-day functioning of the international system would not be expected to change appreciably if truck drivers, ship crews or pilots were replaced with automatons, but the large-scale replacement of existing human labour in these capacities is likely to cause widespread economic and political disruption in the short to medium term, a prospect that Kenn Cukier addresses in Chapter 4.
Autonomous weapons are another significant issue, albeit, as Missy Cummings argues in Chapter 2, perhaps further from wide acceptance than is generally recognized. There is a considerable public debate about the ethics, morality and legality of the development and use of such weapons. The central issue in this debate is autonomy, which differentiates it from the debate concerning unmanned drones; the latter are remotely piloted rather than fully autonomous systems.
Ethical questions aside, autonomous weapons do not in and of themselves necessarily change the balance of power.8 An autonomous strike aircraft makes certain trade-offs relative to a piloted one – essentially greater endurance and expendability as against flexibility and versatility – but it is not inherently a more powerful, game-changing weapon. In the long term, the ability of autonomous systems to react faster than humans can might make a difference in some environments (in space, in cyberspace or with respect to hypersonic weapons9). Autonomy might also enable the development of new classes of weaponry – swarms of small, interlinked robotic vehicles could create wholly new paradigms of military capability, for example – but given both the logistical and ethical questions arising from such potential systems it is premature to declare the technology ready for deployment, let alone game-changing. One possible exception could be cyberweapons imbued with autonomy and machine learning, which could make them far more effective and adaptable than their existing counterparts.
But, as with civilian autonomous vehicles, the real impact is likely to be indirect for some time to come. A military with autonomous systems may be only marginally more capable than one without – at least in the near term. But even if a fundamental change in warfare driven by autonomy is not an immediate prospect, the norms and standards by which policymakers see and respond to threats are likely to change fundamentally.
How deployment may proceed
To be adopted in any of these three sets of roles, AI will have to demonstrate effectiveness comparable to or greater than that of humans, at a comparable cost. If human policymakers are already confident in their human analysts’ ability to operationalize a strategically important arms control arrangement, they are unlikely to turn these processes over to a wholly new and unproven system.
Deployment will not, in any event, simply be a case of handing over the keys or flipping a switch. There will be no ‘artificial analysts’ ready to take on human roles wholesale. Rather, AI will increasingly and incrementally be paired with human analysts to take on these tasks. These human–machine ‘centaurs’ offer the most promise in the foreseeable future, and represent the best chance for a measured transition away from sole human oversight. ‘Centaur’ pairings theoretically combine the best qualities of human and machine intelligence: the machine can process enormous quantities of data quickly, while the human can spot-check and correct where necessary, as well as understand, frame and respond to the results in ways that interface with existing policy mechanisms.
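In software terms, that division of labour often reduces to confidence-based triage: the machine handles every item it is sure about and defers the rest to a human. The sketch below assumes a placeholder classifier, an invented confidence floor and a stubbed-out review step; it illustrates the routing pattern, not any deployed system.

```python
# Minimal 'centaur' triage sketch: confident machine calls pass through,
# uncertain ones are routed to a human analyst.
import random
from typing import NamedTuple

random.seed(1)

class Assessment(NamedTuple):
    item: str
    label: str
    confidence: float  # the model's own certainty, between 0.0 and 1.0

def machine_screen(items: list[str]) -> list[Assessment]:
    """Stand-in for a trained model scoring a large batch of reports."""
    return [Assessment(item, random.choice(["routine", "flag"]), random.random())
            for item in items]

def human_review(assessment: Assessment) -> str:
    """Stand-in for an analyst's judgment on a hard case."""
    return f"analyst reviewed {assessment.item}: label '{assessment.label}' confirmed"

CONFIDENCE_FLOOR = 0.75  # below this, the machine defers to the human

for assessment in machine_screen([f"report-{n}" for n in range(10)]):
    if assessment.confidence >= CONFIDENCE_FLOOR:
        print(f"auto-processed {assessment.item}: {assessment.label}")
    else:
        print(human_review(assessment))
```

The design question flagged earlier – the clear delineation of responsibility – lives in that single threshold: set it, and the boundary between machine and human accountability, carelessly, and mistakes can slip through the gap between the two.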
Some efforts have already been made to begin to devise an ethical or legal framework for autonomous systems. Most recently, in May 2018 a new declaration on protecting the rights to equality and non-discrimination in machine learning systems was opened for signature by a group of international NGOs. Building on the framework of international human rights law, and emphasizing the responsibilities of public- and private-sector actors, the ‘Toronto Declaration’ aims to ensure that new machine learning technologies – and AI and related data systems more broadly – incorporate principles of respect for inclusivity and non-discrimination.10 Meanwhile, the UN has begun to convene working groups under the aegis of the Convention on Certain Conventional Weapons (CCW) to define the legality of the development and use of autonomous weaponry.11 These steps are welcome, but nascent.
This study does not attempt to cover the full breadth of the implications of AI technologies for international politics. There are significant areas – among them medicine, public health and law – where AI systems may be transformative in ways that directly affect the processes of the international system in the next decade or two. Nor does it consider the various ways in which AI might intersect with – or interfere in – democratic processes, a topic that warrants specific in-depth consideration in its own right. Rather, in focusing on the application of AI technologies from military, human security and economic perspectives, the report presents a snapshot of some of the nearer-term consequences of what might be one of the defining trends of the coming decades.