3. Advancing Human Security Through Artificial Intelligence
In anticipation of this report, the text of this chapter was first published by Chatham House in May 2017.33
Introduction
Over the next two decades, human security will be confronted by significant challenges. Continuing global warming will bring higher temperatures, rising sea levels and more extreme weather events.34 These changes will lead to a scarcity of resources, particularly of water, food and energy.35 The hardest-hit areas of the globe are most likely to be those already suffering from various types of instability, violence and unrest, such as sub-Saharan Africa, Pakistan, and parts of the Middle East and North Africa.36 The confluence of climate and political refugees will undoubtedly compromise the ability of local and regional actors, as well as the international community, to secure individuals from fear and want.
Concomitantly, growing connectedness via social media and changes in labour and production due to advancing technology proliferation will also place new stresses on the world economy, as well as create new shifts in political and economic power. Microsoft has predicted that by 2025, 4.7 billion people will use the internet – just over half the world’s expected population at that time – and, of that number, 75 per cent of users will be in emerging economies.37 With an estimated 50 billion connected devices, all generating mass amounts of data, information will become an even more powerful tool for development, coordination, persuasion and coercion.38 Moreover, these individuals will enter new (and old) economic market sectors, and be faced with increasing automation and the stresses of wage devaluation.
In this future world, increasingly divided on demographic, economic and technological lines, achieving human security will not be without its difficulties. Systemic challenges, such as climate change and war, and more localized threats like social, economic or political disruptions are almost certain.
One way to meet these challenges is through novel applications of technology, and of AI in particular. AI holds much promise to enable the international community, governments and civil society to predict and prevent human insecurity. With increased connectivity, more sophisticated sensor data and better algorithms, AI applications may prove beneficial in securing basic needs and alleviating or stopping violent action.
This chapter lays out first the principles of the UN approach to human security, as well as more critical viewpoints. Next, it argues that many of the conflict and development problems facing the international community, states and civil society can be ameliorated or solved by advancements in AI. In particular, algorithms adept at planning, learning and adapting in complex data-rich environments could permit stakeholders to predict and coordinate responses to many types of humanitarian and human security related situations. Finally, the case is made that to ensure broad access, transparency and accountability, especially in countries that may be prone to human security emergencies, the relevant AI ought to be open source and sensitive to potential biases.
Human security
Human security is a concept that takes the human – as opposed to the state – as the primary locus of security. As former UN High Commissioner for Refugees Sadako Ogata has written, ‘Traditionally, security issues were examined in the context of “State security”, i.e. protection of the State, its boundaries, its people, institutions and values from external attacks. People were considered to be assured of their security through protection extended by the State.’39 Yet, with changes in the post-Cold War era, where external threats to state security declined and internal threats of intra-state violence increased, many policymakers, practitioners and scholars required a new lens through which to understand these internal conflicts.
Indeed, in 1994, the UN Human Development Report asserted:
[W]ithout the promotion of people-centred development, none of our key objectives can be met – not peace, not human rights, not environmental protection, not reduced population growth, not social integration. It will be a time for all nations to recognize that it is far cheaper and far more human to act early and to act upstream than to pick up the pieces downstream, to address the root causes of human insecurity rather than its tragic consequences.40
From 1994 onwards, many different avenues for examining the concept of human security emerged.41 Central to all, however, was the focus on the nexus between development, human rights (protection and promotion), and peace and security. The premise that people possess dignity logically entailed that they ought to be ‘free from fear’ and ‘free from want’.42 To establish what this expansive formulation meant, the 1994 Human Development Report identified seven elements comprising human security.43
Table 1: Human security dimensions
| Object of security | Content |
|---|---|
| Economic | Freedom from poverty |
| Food | Access to food |
| Health | Access to healthcare and protection from disease |
| Environmental | Protection from environmental pollution and depletion |
| Personal | Physical safety (e.g. freedom from torture, war, criminal attacks, domestic violence, drug use, suicide and traffic accidents) |
| Community | Survival of traditional cultures, ethnic groups and the physical security thereof |
| Political | Freedom to enjoy civil and political rights, freedom from political oppression |
Source: 1994 Human Development Report findings, as cited in: Paris (2001), ‘Human Security: Paradigm Shift or Hot Air?’.
Following from this, the UN also framed human security as emerging from the achievement of ‘sustainable development’ and various established international development goals.44 Human security should be seen as complementary to state security, and measures taken to uphold human rights and build local or regional security capacities through non-coercive measures will simultaneously generate greater stability and development.
However, despite the UN rhetoric, the notion is not without its critics. Some claim that it is ‘so broad that it is difficult to determine what, if anything, might be excluded from the definition of human security’.45 The problem, of course, is that if human security as a concept encompasses such extensive facets of human existence, in practice it means little and impedes the formulation of sound policy. Others point out that the two key elements that define human security have not been treated equally, with progress on the ‘freedom from want’ agenda subordinated to issues related to war and violence in the attempt to make ‘freedom from fear’ a reality.46 Such prioritization reflects the realities of power politics, and demonstrates how some states view their obligations towards capacity-building in areas that hold little, if any, strategic or economic interest for them. Indeed, even responses to global health crises appear to mirror power politics and national security interests.47
From a practical perspective, difficulties in adequately and appropriately responding to potential, emerging or ongoing human security crises are endemic. One might claim this is because the concept is overly expansive; but, this objection notwithstanding, the failure to achieve human security may have more to do with the inability of various stakeholders, such as the UN, civil society and nation states, to monitor, predict and react to a crisis. Because of the linkages between development, human rights and security, the number of different actors with varying priorities and knowledge bases is high. These actors become disconnected, and may even be forced to compete with one another for resources or funding. Lack of communication and information exchange between them only exacerbates these problems.
Thus, to counter some of these objections, especially in light of the challenges of the coming 10–15 years, it is necessary to devise novel approaches to ameliorating human insecurity and vulnerability. Specifically, this means taking a closer look at how new AI applications can help a variety of stakeholders to predict, plan and respond to human security crises.
Securing the human through AI
The expansive and interconnected set of factors that affect human security is not the only challenge to alleviating human insecurities.48 There are three antecedent constraints on human security-related activities: the inability to know about threats in advance; the inability to plan appropriate courses of action to meet these threats; and the lack of capacity to empower stakeholders to effectively respond. Tackling these constraints could save thousands of lives. The use of AI is one potential way to enable real-time, cost-effective and efficient responses to a variety of human security-related issues.
However, it should be noted that AI is not a panacea. As an inter- and multidisciplinary approach to ‘understanding, modelling, and replicating intelligence and cognitive processes by invoking various computational, mathematical, logical, mechanical, and even biological principles and devices,’ it is effective at carrying out certain tasks but not all.49 Much depends on the task at hand. For example, AI is very good at finding novel patterns in mass amounts of data.50 Where humans are simply overwhelmed by the volume of information, the processing power of the computer is able to identify, locate and pick out various patterns. Moreover, AI is also extremely good at rapidly classifying data. Since the 1990s, AI has been used to diagnose various types of diseases, such as cancer, multiple sclerosis, pancreatic disease and diabetes.51 However, AI is not yet able to reason as humans do, and the technology is far from being a substitute for general human intelligence with common sense.
In short, AI looks to find various ways of using information and communication technologies, and sometimes robotics, to aid humans and complete tasks. How the AI is created (its particular architecture) and its purpose (its application) can vary significantly. For the purposes of this chapter, however, the tasks to which AI is particularly well suited in the human security domain are those related to planning and pattern recognition, especially given big data problem sets. In view of the considerable current capabilities in these areas, it is reasonable to estimate that in the coming years AI will be able to overcome the three constraints on human security-related activities mentioned earlier.
Knowledge
The ability to generate knowledge is no easy feat. Knowledge is subtly different from mere data, which are just an amalgamation of discrete and observable facts or inputs that lack meaning without analysis and context. Only when sets of data are given meaning do they become information, which feeds into and builds knowledge.
There are two obstacles to developing knowledge to tackle human security challenges. The first is the sheer amount of data that future generations will generate. Everything from the output of individual wearable devices to content created on new communication or social media platforms will saturate the world in an ocean of bits and bytes. Ways will be needed to make these data, flowing from billions of new devices and millions of new users, intelligible.
Second, because human security crises can emerge from anywhere and result in a range of physical, economic or social impacts, there will be an urgent need to disentangle discrete flows of data specific to the various vulnerabilities. Such data flows can be specific to one type of phenomenon, such as the prediction of extreme weather events,52 or can be more diffuse, such as searching and correlating various events or combining datasets to look for indicators of conflict onset or escalation.
AI applications related to search, classification and novel pattern recognition can help to correlate and extract content and meaning from multiple sources. For example, the Early Model-Based Event Recognition using Surrogates (EMBERS) application forecasts key events up to eight days before they happen with a 94 per cent accuracy rate. EMBERS is a ‘24x7 forecasting system for significant societal events using open source data including tweets, Facebook pages, news articles, blog posts, Google search volume, Wikipedia, meteorological data, economic and financial indicators, coded event data, online restaurant reservations (e.g. OpenTable), and satellite imagery’ to forecast events and notify users in real time.53 This far outstrips current abilities in traditional political science for prediction and explanation of war, where scholars trudge through and manually code content analysis.54 In the future, it is more likely that scholars or practitioners will use intelligent artificial agents to process real natural language to comprehend text, rather than merely looking for word frequencies and correlations, thereby deepening the capabilities of programs like EMBERS even further.55
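To make the mechanics of such early-warning systems more concrete, the sketch below fuses several open-source signal streams into a single anomaly score that flags a possible upcoming event. It is a minimal illustrative toy in Python, not the EMBERS methodology: the signal names, weights, window length and alert threshold are all invented for the example.

```python
"""Toy early-warning signal fusion, loosely inspired by systems such as
EMBERS. Illustrative sketch only; not the EMBERS methodology."""
import numpy as np

def zscore_latest(series, window=28):
    """Standardize the most recent observation against a trailing baseline."""
    baseline = series[-window - 1:-1]
    std = baseline.std()
    return 0.0 if std == 0 else (series[-1] - baseline.mean()) / std

def unrest_warning(signals, weights=None, threshold=2.0):
    """Fuse open-source signals (e.g. daily counts of protest-related tweets,
    news mentions, search volume) into one score and flag a possible event
    when the weighted anomaly score is unusually high."""
    names = list(signals)
    weights = weights or {n: 1.0 / len(names) for n in names}
    score = sum(weights[n] * zscore_latest(np.asarray(signals[n], dtype=float))
                for n in names)
    return score, score >= threshold

# Synthetic daily counts for one city: the final day spikes across all streams.
rng = np.random.default_rng(0)
signals = {
    "protest_tweets": np.append(rng.poisson(40, 60), 130),
    "news_mentions": np.append(rng.poisson(5, 60), 22),
    "search_volume": np.append(rng.poisson(100, 60), 260),
}
score, alert = unrest_warning(signals)
print(f"fused anomaly score={score:.1f}, alert={alert}")
```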
Another area for the use of AI in human security is health. There are various applications and abilities in this domain but a few are of particular note. First, AI’s ability to classify and identify images allows it to recognize patterns more quickly and accurately than people. This has been particularly true in the diagnosis of certain types of cancer.56
However, one need not be in a state-of-the-art facility or hospital to receive this type of care. Mobile phones are increasingly being used for bioanalytical science, including digital microscopy, cytometry, immunoassay tests, colorimetric detection and healthcare monitoring. The mobile phone ‘can be considered as one of the most prospective devices for the development of next-generation point-of-care (POC) diagnostics platforms, enabling mobile healthcare delivery and personalized medicine’.57 With advancements in mobile diagnostics, millions more people may be able to monitor and diagnose health-related problems, especially given the estimated increased use in mobile data and devices.
Moreover, with increased connectivity through social media, AI can leverage big data in ways that encourage the uptake of preventive measures. For instance, one application uses machine learning to identify, in real time, areas or establishments that may be causing food-borne illness.58 The application alerts health inspectors to potential outbreaks as they emerge so that they can take immediate action. In essence, the ability to know what is happening, when and where is the first step in addressing vulnerability.
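As a rough sketch of how such a system might rank establishments for follow-up, the example below trains a small text classifier on a handful of invented review snippets and scores incoming reviews; the cited application’s actual data, features and model are not reproduced here, and the establishment names are hypothetical.

```python
"""Toy illustration: flag establishments for food-safety inspection from
review text. Invented data and labels; for illustration only."""
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny hand-labelled corpus: 1 = review suggests food-borne illness.
reviews = [
    "great service and the pasta was delicious",
    "got terrible food poisoning after the chicken here",
    "my whole family was vomiting the night after eating here",
    "lovely atmosphere, friendly staff, will come back",
    "stomach cramps and nausea within hours of our meal",
    "best brunch in town, fresh ingredients",
]
labels = [0, 1, 1, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(reviews, labels)

# Score new reviews and rank establishments by the strongest risk signal,
# so that inspectors can prioritise visits.
incoming = {
    "Cafe A": "the soup tasted off and I felt sick afterwards",
    "Cafe B": "wonderful desserts and quick service",
}
risk = {name: model.predict_proba([text])[0, 1] for name, text in incoming.items()}
for name, p in sorted(risk.items(), key=lambda kv: -kv[1]):
    print(f"{name}: illness-risk score {p:.2f}")
```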
Planning
In addition to acquiring and contextualizing knowledge, it is also essential to have the ability to plan an appropriate response. Planning algorithms can enable users to carry out complex, multi-stage actions quickly, reasonably and reliably. The need for this facility can be illustrated by the UN’s average response times for new peacekeeping missions. The UN estimates that when a new crisis emerges – that is, a crisis that involves violence and mass threats to human rights – the time required to plan and field a credible peacekeeping mission is six to 12 months.59 There are two reasons for this. The first is the strict structure and process of formulating peacekeeping missions.60 The second is that attempts ‘to develop better arrangements for rapid deployment have been repeatedly frustrated by austerity and a zero-growth budget’.61 In short, politicking within the bureaucracy and budgetary constraints limit the UN’s ability to act swiftly.
Additionally, there are serious problems related to logistics once a mission is approved. The ability to rapidly and reliably estimate, plan and deliver equipment, supplies and services is ‘a constant demand across all field operations’. As such, the UN created a Department of Field Support in 2007, whose focus is on developing a standardized approach to forecasting and planning for new operations; human resource planning; supply chain logistics; and evaluative service centres for mission re-tasking and re-planning.62
While politics may get in the way of ensuring human security, there are technological solutions that may help. Specifically, advancements in planning algorithms are promising, particularly in emergency response situations. Emergency logistics scheduling is an application that deals with ‘the need to identify, inventory, dispatch, mobilize, transport, recover, demobilize, and accurately track human and material resources in the event of a disaster.’63 Depending upon the type of disaster or crisis, various linear or nonlinear, single- or multi-objective algorithms are presently available for this purpose. These algorithms can identify ideal station points,64 routing paths for distribution and evacuation,65 the amount of relief required,66 and scheduling.67
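By way of illustration, a minimal single-objective version of such a relief-distribution problem can be written as a linear programme: decide how much aid each depot ships to each affected area so that demand is met at minimum transport cost. The depots, demands and costs in the sketch below are invented, and operational emergency-logistics models are considerably richer.

```python
"""Minimal relief-distribution sketch as a linear programme (transportation
problem). All figures are invented for illustration."""
import numpy as np
from scipy.optimize import linprog

supply = np.array([120, 80])          # tonnes available at depots D1, D2
demand = np.array([60, 70, 50])       # tonnes required at areas A1, A2, A3
cost = np.array([[4.0, 6.0, 9.0],     # transport cost per tonne from each
                 [5.0, 3.0, 7.0]])    # depot (rows) to each area (columns)

n_depots, n_areas = cost.shape
c = cost.ravel()                      # decision variables x[d, a], flattened row-wise

# Depot capacity: for each depot d, sum over areas of x[d, :] <= supply[d]
A_supply = np.kron(np.eye(n_depots), np.ones(n_areas))
# Area demand: for each area a, sum over depots of x[:, a] >= demand[a]
# (written as -sum <= -demand for the <= form used by linprog)
A_demand = -np.kron(np.ones(n_depots), np.eye(n_areas))

res = linprog(c,
              A_ub=np.vstack([A_supply, A_demand]),
              b_ub=np.concatenate([supply, -demand]),
              bounds=(0, None), method="highs")

plan = res.x.reshape(n_depots, n_areas)
print("shipping plan (tonnes):\n", plan.round(1))
print("total transport cost:", round(res.fun, 1))
```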
In the coming years these algorithms are likely to improve, and as they undergo further refinement, for example through evolutionary methods such as genetic algorithms, learning and adaptation, their capabilities will sharpen. Indeed, there is no ostensible reason why the primary objectives of the UN’s Department of Field Support could not be met by planning AI, thereby automating the majority of its tasks. Doing so would reduce the time spent on forecasting and planning, cut costs related to human resources and training, and remove many redundancies and barriers in supply chain logistics. This would improve the efficiency of any service centres and potentially allow forces to be deployed rapidly at reduced cost.
Governments, NGOs and civil society groups can also avail themselves of this AI. NGOs and civil society groups may in fact be best placed to trial these technologies, as they are often not hampered by political obstacles or byzantine bureaucratic rules, and so have greater flexibility to try out new approaches and build confidence in them. This would help immensely in various human security-related situations, such as complex humanitarian crises – those that combine political and natural disasters. In such cases, these groups could examine vast amounts of data on available resources, use satellite imagery or images from surveillance aircraft to map affected terrain and locate survivors, and thereby estimate resource requirements given limitations on time, goods and manpower.
Empowerment
As human security has such a broad definition, there are almost limitless ways in which AI can help individuals to be more secure. The key is that such applications empower actors and enable them to make better decisions. How AI can do this without exacerbating existing inequalities or unintentionally creating situations of insecurity is a further consideration. As UN General Assembly Resolution 66/290 states, ‘Human security calls for people-centred, comprehensive, context-specific and prevention-oriented responses that strengthen the protection and empowerment of all people and all communities,’ acknowledging that it also ‘equally considers civil, political, economic, social and cultural rights’.68
Empowerment is thus not easily achieved. If all human rights are of equal value, then trade-offs between them are not easily resolved. Furthermore, it is unclear how AI might contribute to or detract from such rights. For example, AI’s ability to find patterns in big data is an asset in diagnosing diseases such as cancer, but it may not be desirable when the pattern that it finds is controversial in some way, such as if it is obviously racist, sexist or extremist. Such patterns may well exist because of the available data or because of existing inequalities or systemic biases in a society. AI could merely be making visible the tyranny of the majority by classifying particular people, groups or behaviours in various categories.69 Take, for instance, Microsoft’s Twitter chatbot Tay, which within 24 hours of being deployed was turned into a racist, sexist and genocidal bot by the volume of such phrases ‘fed’ to it by other Twitter users. Microsoft had to deactivate the bot immediately; when it was accidentally reactivated a few weeks later, it once again began posting inappropriate tweets.70
In the most concerning cases, AI could actually disempower people. This was demonstrated by an algorithm used in the US to predict recidivism rates, which systematically mis-scored black defendants across metrics such as the likelihood of reoffending or of committing violent acts.71 These scores were considered as evidence in sentencing recommendations, and because of systemic race and gender biases against classes of individuals, those being sentenced were unfairly and systematically sanctioned.
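One simple way to begin interrogating such a tool is to compare its error rates across demographic groups; a disparity in false positive rates was central to the reporting on the recidivism algorithm. The sketch below performs that comparison on made-up outcomes and predictions, purely to illustrate the audit step.

```python
"""Illustrative fairness audit: compare false positive rates across groups.
All outcomes and predictions below are invented."""
import numpy as np

def false_positive_rate(y_true, y_pred):
    """Share of people who did NOT reoffend (y_true == 0) yet were flagged
    as high risk (y_pred == 1)."""
    negatives = (y_true == 0)
    return (y_pred[negatives] == 1).mean()

# Hypothetical outcomes (1 = reoffended) and tool predictions (1 = high risk).
y_true = {"group_a": np.array([0, 0, 0, 1, 0, 1, 0, 0]),
          "group_b": np.array([0, 0, 1, 0, 0, 0, 1, 0])}
y_pred = {"group_a": np.array([1, 0, 1, 1, 1, 1, 0, 1]),
          "group_b": np.array([0, 0, 1, 0, 1, 0, 1, 0])}

for group in y_true:
    fpr = false_positive_rate(y_true[group], y_pred[group])
    print(f"{group}: false positive rate = {fpr:.2f}")
```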
Thus, it is necessary to interrogate the purpose and effects of AI applications as they relate to empowerment and human security. To facilitate this, one might use particular principles as normative guides. An example might be applying something like a Rawlsian principle of justice – which aims to give the greatest benefit to the least advantaged members of society – to AI applications.72 This would provide a general and high-level principle with which to test various context-specific cases and estimate the likely effects of a given AI application. To succeed, it would require AI developers to adopt an attitude that reflects both their technical know-how and a consideration of broader social factors. In particularly sensitive applications, such as those touching on potential human rights transgressions, further scrutiny would be warranted to ensure that the data provided were robust and diverse, and that the application was designed to be mindful of the value of these rights. Recent work by the US Federal Trade Commission and the White House on the need for further regulation of big data and algorithm-based decisions, such as through best practices, codes of conduct and even existing or new laws, is also important.73
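As a rough sketch of how a Rawlsian test could be operationalized in practice, the example below screens hypothetical deployment options with a maximin rule, preferring the option under which the worst-off group fares best. The options, groups and benefit scores are all invented.

```python
"""Maximin (Rawlsian) screen over candidate deployment options.
Benefit estimates per group are hypothetical."""

# Projected benefit score per population group for three candidate ways of
# deploying the same AI application.
options = {
    "option_1": {"urban": 0.9, "rural": 0.2, "displaced": 0.1},
    "option_2": {"urban": 0.6, "rural": 0.5, "displaced": 0.4},
    "option_3": {"urban": 0.7, "rural": 0.6, "displaced": 0.3},
}

def maximin_choice(options):
    """Return the option whose least-advantaged group fares best."""
    return max(options, key=lambda name: min(options[name].values()))

best = maximin_choice(options)
print("maximin choice:", best,
      "| worst-off group benefit:", min(options[best].values()))
```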
Equitable, transparent and accountable
Ultimately, as more data are used to influence decisions, and as algorithms are increasingly utilized to shape, guide or make these decisions, humans must be vigilant in asking for transparency, accountability, equity and universality from these applications. These are all elements of ensuring a just distribution and accounting of the benefits of AI. From a policy standpoint, it is essential to know what data are used, an AI model’s guiding assumptions, as well as the kinds of practices that developers utilize. An understanding of these components will allow for accurate estimates of the likely effects of an AI application.
Since technology is not value neutral, but is a human creation guided by particular (and often implicit) values, policymakers ought to ensure that such technologies and their benefits are accessible to everyone in an open-source format. Human security is not for an elite few, and so the capabilities of AI must be within everyone’s grasp.74 When it comes to applications related to disaster relief, conflict prevention, human rights protection and justice, it is imperative that wider schemes of data sharing are employed by individuals, groups, NGOs and governments. At the same time, it is imperative that data, in their sharing and acquisition, are protected to the greatest possible extent. Health, in particular, is one immediate area of focus for privacy, transparency and accountability policies, best practices and regulation.75
These concerns are important to human security activities. Take, for instance, the movement to require biometric identification to receive humanitarian aid. While the intention is to track individuals to reduce fraud, these data could also be used for political oppression. UNHCR, for example, argues for increased biometric identification, but these data are shared by a variety of actors for multiple purposes.76 Yet there is no discussion of data protection, and there is a gap with regard to policy guidance or compliance. Thus, in situations of complex humanitarian disasters, where refugee safety is certainly of concern, regulations need to be established.77
In short, AI that enables human security must, by its very nature, be aimed at minimizing human insecurity and maximizing human empowerment, and be as equitable, transparent and accountable as possible. The consequences of algorithms misclassifying or failing to plan appropriately could be catastrophic. Good policy, regulation and accountability measures therefore need to be in place, ranging from pre-emptively instituting sets of best practices to remedial or coercive measures after the fact. Whatever the situation, humanity must not walk into a future increasingly influenced by AI and claim the equivalent of an AI ‘Twinkie Defense’.78 AI that is sensitive to context, vulnerability and capacity-building, and guided by good judgment, foresight and principles of justice, would be most beneficial for all.