The ‘border of the future’ is expected to be ‘heavily dependent on digital systems, data analytics and automation-at-scale to both improve facilitation and mitigate risk’. Work has already started on scoping out potential uses for artificial intelligence (AI) in immigration systems and asylum decision-making processes. For example, at the height of the COVID-19 pandemic, when many states restricted movement across borders, scenario-planning led by OECD member countries examined near-future propositions for heavily automated border management.
Rapid shifts towards automation in various sectors, including at borders, raise questions about how to guarantee that international legal standards are carried through into the AI era. This research paper offers a snapshot of the near-future outlook for AI in national systems that receive and process refugee protection claims, including when individuals seek asylum at borders, and associated concerns under international law. It explores emerging approaches to mitigating legal and ethical risks associated with AI in public decision-making, with the aim of supporting policymakers and civil society in thinking through effective safeguards in the asylum context and identifying gaps still to be addressed.
The appeal of AI and automated decision-making
There is no single definition of AI, but it can be usefully described as ‘a set of computational technologies that are inspired by, but typically operate quite differently from, the way people use their nervous systems and bodies to sense, learn, reason, and take action’.
The ability of AI to approximate human decision-making has created demand for ‘automated’ or ‘algorithmic’ processes that can support or act in the place of human decision-makers. AI’s appeal extends across many sectors in which data-informed decisions are made, including public administration.
Typically, AI-related technologies in the public sphere aim to improve the quality, accuracy, consistency, efficiency, effectiveness or timely delivery of government functions. To date, algorithms have more often been used to support decision-making that is high-volume, routine in nature, and readily codified in expert systems.
Now, machine learning and other advanced and emerging techniques such as neural networks and natural-language processing are offering opportunities for AI to analyse vast quantities of data and identify patterns and correlations that can support strategic planning, inform investigations, and enable problem-solving in critical fields of government. In other words, AI is becoming a feature of decision-making in situations that are inherently complex.
Why do asylum and refugee protection test AI?
The power of states to control their borders is tempered by their obligations under international law. Under international refugee and human rights law, states must not return individuals to countries where there are substantial grounds for believing they will face a real risk of persecution, torture or other serious human rights violations (this prohibition, known as the principle of ‘non-refoulement’, extends to pushbacks at borders). To prevent refoulement, states are expected to adopt a range of legal and practical interventions, such as establishing national systems (known as asylum systems) to assess claims to refugee status and other forms of international protection in line with international legal standards, including the principle of non-discrimination.
This means states need to design and administer government policies – including technological systems – in line with their legal obligations. This is a complex and politically sensitive task for many governments. The question of when states may limit entry at borders came into sharp relief during the COVID-19 pandemic, when the vast majority of countries imposed additional entry restrictions in order to ‘health-proof’ their borders. These restrictions included the use of remote surveillance technologies, from temperature checks to location-tracking for quarantine, that in effect brought border security into people’s homes.
The expansion of technology into health surveillance across borders follows decades of demand for more secure borders to combat terrorism and transnational crime, and for greater control over migration flows.
These factors, including the current pandemic, create ‘moral panics’ that are often used to scapegoat migrants and refugees. The same factors are also contributing to an increase in measures to limit access to asylum, making it harder for asylum seekers to leave countries of risk and enter countries of safety, putting international law and the principle of non-refoulement under pressure. The constraints on access to asylum include extremely limited resettlement opportunities. As a result, migrants (including asylum seekers) are increasingly taking risky voyages to reach destination countries, including using people-smugglers.
This complex environment is a driver for technological innovation, and is focusing renewed attention on how the principle of non-refoulement applies at borders. New and emerging technologies will operate in an environment where, if international legal standards are not rigorously applied to AI tools and the ecosystems in which such tools are introduced, there may be real human consequences.
Introducing AI systems into this field presents significant human rights-related challenges. At the same time, the existing legal protections against bias, unlawful decisions and refoulement are already under pressure. As the UN Special Rapporteur on contemporary forms of racism flagged when looking at the challenges of introducing new technology in this field: ‘Executive and other branches of government retain expansive discretionary, unreviewable powers in the realm of border and immigration enforcement that are not subject to the substantive and procedural constraints typically guaranteed to citizens.’ These challenges can be exacerbated at scale when AI systems offer seemingly simple solutions to complex problems.
This means that asylum and refugee protection will form one of the test cases for global and national governance of AI, and for whether human rights-compliant AI can be achieved. As the UN secretary-general has said: ‘As refugees go, so goes the world.’