5. Conclusions and Recommendations
Across most, if not all, sectors, the future of human endeavour will see ever closer integration between humans and machines in both operational and decision-making roles. The driver of this change is – and will continue to be – the quest for greater efficiency, greater effectiveness and greater safety, all of which stand to be significantly improved through advances in AI. The challenge for policymakers, then, lies not in building the technology but rather in nurturing the governance frameworks needed to manage and regulate this integrated approach. The task is complex because of the manifold issues arising from such fundamental changes to the way we work, and is further compounded by the speed at which technology is evolving.
Managing the transition well will allow society to absorb the shock that increased automation and autonomy will have on the workplace. While this report has focused on AI with particular reference to the future of warfare, human security, and the economy and jobs, there are almost limitless potential impacts on the drivers of international politics. Some impacts are easy to envision at this stage – the use of AI to supercharge disinformation or to influence political processes such as election campaigns, for example. Others, like changes to legal or public health systems, might have more subtle or indirect effects.
The use of ‘centaurs’, in principle, could create a mechanism by which decision-making processes are enhanced but ultimate responsibility still lies in human hands. This is important when it comes to governance, as it allows for accountability, a key tenet of liberal democracy. The question of accountability is particularly salient due to the complexity of AI systems and the difficulty of translating their ‘thought’ processes for non-specialist audiences – what has come to be known as the ‘black box’ problem. While work is being done on addressing this issue, AI processes and decisions remain largely opaque, especially to policymakers.135 As AI further permeates people’s lives, the process of assigning responsibility for harm caused by decisions made by (or with) algorithms can be extremely complicated, especially in situations where the impact of these decisions is not immediate or direct. Throughout, efforts to ensure responsibility and liability must be balanced against the risk of stifling innovation or potential, representing a new dimension of old challenges for policymakers.136
Questions of accountability are particularly pertinent when considering the military applications of AI. In 2017 prominent technologists, including Elon Musk of Tesla and Mustafa Suleyman of DeepMind, published an open letter in which they urged the UN to find a way to protect society from the potentially negative developments and uses of lethal autonomous weapons systems.137 Indeed, the question of meaningful human control has been at the heart of discussions over how to legally and ethically control lethal autonomous weapons, including at the April 2018 UN Group of Governmental Experts meeting.
Moving towards a ban could be a useful strategy for policymakers at this time, even if its precise contours remain contentious. The dominant focus on ‘killer robots’ – to use the term adopted by campaign groups and the media alike – risks obfuscating a more nuanced discussion of the much broader, non-lethal applications of AI. Removing from the equation the fundamental question of whether a robot should have the authority to decide whether or not to take a life might therefore create space for a much-needed public discussion of other areas and applications of AI.
As Missy Cummings observes, though, the critical issue is that much of the underlying research and technology that would drive and enable these weapons is being developed in the private sector, beyond the purview of state regulation, so a ban – though it would deplete the market – would likely not prevent the capacity being taken forward. Indeed, the question of who aside from states would be purchasing the technology gives rise to other concerns.
The availability of AI technology raises other concerns, related to the affordability and accessibility of the technology as it advances, and to the implications for societies that do not have the capacity or the resources to develop their own AI systems, as Heather Roff points out. There is a risk that AI will deepen already pronounced inequalities between advanced and less developed economies. While some countries may benefit indirectly from advances in AI – such as through the benefits that AI may bring to the humanitarian sector – there may be significant repercussions for societies whose workforce is replaced or streamlined by its application. As Kenn Cukier sets out, some predict that the ‘Fourth Industrial Revolution’ ushered in by AI will create a range of new jobs that have not yet been imagined, while others see it destabilizing and undermining the current world order to detrimental effect. These concerns underlie efforts – such as the May 2018 Toronto Declaration on protecting the rights to equality and non-discrimination in machine learning systems – intended to avoid the deployment of AI systems designed without due regard for universal human rights.
Whichever predictions turn out to be true, managing the resulting changes to society will be integral to defining our subsequent relationship with AI. The challenge for policymakers is to grapple with the speed, scale and breadth at which AI can operate, and at which government, as a general rule, cannot. Failure to do so could mean that while the impact of AI will be international in scope, it may be divisive at a national level as governments struggle to get to grips with the range of possibilities that this technology offers.
Policy recommendations
Where AI is discussed in reference to public policy, the narrative tends to veer towards the extreme. Elon Musk’s widely reported warning, in August 2017, that AI represents ‘vastly more risk’ than North Korea is a case in point.138 While AI undoubtedly poses some significant risks that must be mitigated, discussion of these often crowds out sober analysis of the ways in which machine-aided decision-making is likely to change international politics in the relatively near term. Such analysis should have both near- and far-term goals. For the near term, the aim should be to achieve meaningful and measurable progress towards demystifying the technology and enabling productive conversations between those developing it and those who will be responsible for implementing and regulating it. At the same time, discussions around AI should not lose sight of the fundamental ways in which it may change the nature of international politics and power structures, and should aim to build up ethical and legal frameworks to manage those changes.
While no one can predict the exact trajectory that AI will take over the coming decades, it is clear that it will have an increasing and profound impact on society. To prepare for this transformation, this report makes a number of recommendations for policymakers:
- AI expertise must not reside in only a small number of countries – or solely within narrow segments of the population. As AI is entrusted with increasingly significant responsibilities, programmers and policymakers alike must be more aware of its potential impact on existing structural inequalities. The processes by which AI systems are developed and deployed must be as inclusive as possible in order to mitigate societal risks, and inherent bias issues, at the point of inception. Policymakers must invest in developing home-grown talent and expertise in AI if countries are to avoid dependence on the expertise currently concentrated in China and the US. This will mean investing in education at all levels, both to foster a pipeline of leading AI engineers and to ensure that the wider workforce develops skills – such as those drawing on emotional capital – that AI may not be able to replicate.
- Corporations, governments and foundations alike should allocate funding to develop and deploy AI systems with humanitarian goals. There are significant advantages that AI could bring to the humanitarian sector, including through the use of complex datasets and planning algorithms that could, for instance, improve response times in humanitarian emergencies. More research needs to be done with discrete sectors to consider the specific implications that advances in AI will bring to them. Because AI development for humanitarian purposes is unlikely to be immediately profitable for the private sector, a concerted effort needs to be made to develop such systems on a not-for-profit basis.
- Understanding of the capacities and limitations of artificially intelligent systems must not be the exclusive preserve of technical experts. The information technology revolution will undoubtedly require more STEM graduates, but AI development also demands ‘soft’ skills, both at the operator level and in integrating AI successfully into larger decision-making networks. Better education and training on what AI is – and, critically, what it is not – should be made as broadly available as possible.
- Developing strong working relationships between public and private AI developers, particularly in the defence sector, is critical, as much of the innovation is taking place in the commercial sector. The defence sector needs to find ways to harness the innovation emerging from the rapidly evolving commercial market for AI technology, which in some cases has greater resources to dedicate to development. Ensuring that intelligent systems charged with critical tasks can carry them out safely and ethically will require openness between different types of institutions.
- Given the broad applicability of the technology, clear codes of practice are necessary to ensure that the benefits of AI can be shared widely while its concurrent risks are well managed. Neither engineers nor policymakers alone possess the tools necessary to design, test and implement these codes – rather, they will require sustained and cooperative engagement between those communities. Policymakers and technologists should, moreover, understand the ways in which regulating artificially intelligent systems may be fundamentally different from regulating arms or trade flows, while also drawing relevant lessons from those models.
- Particular attention must be paid by developers and regulators to the question of human–machine interfaces. Artificial and human intelligence are fundamentally different, and interfaces between the two must be designed carefully, and reviewed constantly, in order to avoid misunderstandings that in many applications could have serious consequences.
It is essential for the public debate to move beyond an apocalyptic vision of robotic disruption on the one hand and a fanciful, automated idyll on the other. This is not a conversation about the future: AI is already in everyday use in mundane, unspectacular ways – in areas such as navigation, text translation and retail. As the integration of AI applications with daily life continues, it is important that governments and publics alike understand what this means, both now and in the future. Enabling an informed, nuanced and in-depth discussion will mean moving one step closer to that understanding.