Refugee protection in the artificial intelligence era

A test case for rights
Research paper | Published 7 September 2022 | Updated 8 September 2022 | ISBN: 978 1 78413 532 4 | DOI: 10.55317/9781784135324
[Image: Aerial photo showing a boat carrying migrants stranded in the Strait of Gibraltar.]

Madeleine Forster

Former Academy Associate, International Law Programme

Artificial intelligence (AI) is being introduced to help decision-making in high-risk fields. This includes decision-making about asylum and refugee protection, where automated ways of processing people and predicting risks in contested circumstances hold great appeal.

This field, even more than most, will act as a test case for how AI protects or fails to protect human rights. Wrong or biased decisions about refugee status can have life-and-death consequences, including the return of refugees to places where they face persecution, contrary to international law. Existing refugee decision-making systems are already complex and often flawed, including through a lack of legal remedies – problems that can be exacerbated when overlaid with AI.

This paper examines the primary protections being proposed to make AI more responsive to human rights, including the proposed EU AI Act. Can innovation and the protection of human rights really be combined in asylum systems and other domains that decide the future of vulnerable communities and minorities? This is a question not just for governments but also for private sector providers, which have independent human rights responsibilities when supplying AI products in a politically charged and changeable policy field.