AI is well and truly on the radar of asylum and border authorities. At the request of the European Commission, in May 2020 the global advisory firm Deloitte identified a shortlist of AI capabilities that could become operational in EU national asylum systems within five years. However, recommendations from consulting firms and the creation of pilot schemes trialling AI do not necessarily mean that AI will become a key feature of asylum policy in the near future. If legal and regulatory systems work as planned, not all of AI’s potential uses will be implemented. In other words, a challenge for the AI era is to ensure that public authorities have not only the space and opportunity to investigate possible technical aids, but also the obligation to abandon initiatives that fail to meet legal standards or to command public trust (regardless of any financial pressure to continue with a particular solution).
What is clear is that as the range of available AI methods continues to grow, so does the range of possible interventions across the asylum decision-making cycle. Some current and expected near-future applications of AI in asylum systems are listed below, spanning two broad categories: decision-making support; and identity verification and risk analysis.
i. Decision-making support in asylum systems
In 1986, in an article aptly titled ‘The British Nationality Act as a Logic Program’, a research group set out to translate key legal standards in UK citizenship legislation into computer code. The act was, they said, ‘a rich domain for developing and testing artificial intelligence technology’ precisely because it contained ‘vague’ phrases, such as ‘being of good character’, that were not defined in the legislation and would require factual and legal interpretation; such standards are difficult to put into code. When applied by decision-makers, these and similar vague phrases often carry unspoken values (such as what constitutes a ‘well-founded fear’ of being persecuted in refugee claims), or can be applied in ways that discriminate on the basis of race or other characteristics.
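The contrast the authors identified can be sketched in miniature. The original article expressed the act’s provisions as a logic program; the Python below is a simplified, hypothetical analogue (the Person fields, function names and paraphrased rules are illustrative, not the authors’ code). A crisp statutory condition, such as citizenship by birth under section 1(1) of the act, lends itself to mechanical encoding, while the undefined ‘good character’ standard leaves nothing for a program to evaluate.

```python
from dataclasses import dataclass
from datetime import date

COMMENCEMENT = date(1983, 1, 1)  # date the 1981 Act came into force

@dataclass
class Person:
    born_in_uk: bool
    date_of_birth: date
    parent_citizen_or_settled: bool  # a parent's status at the time of birth

def citizen_by_birth(p: Person) -> bool:
    """Simplified paraphrase of section 1(1): a person born in the UK
    after commencement is a British citizen if, at the time of birth,
    a parent is a British citizen or settled in the UK. Every condition
    is a checkable fact, so the rule translates cleanly into code."""
    return (
        p.born_in_uk
        and p.date_of_birth >= COMMENCEMENT
        and p.parent_citizen_or_settled
    )

def of_good_character(p: Person) -> bool:
    """The act requires 'good character' for naturalisation but never
    defines it. There is no fact a program can simply look up; the
    standard demands discretionary factual and legal judgement."""
    raise NotImplementedError("requires human assessment")
```

The first rule runs unaided; the second can only hand the question back to a human decision-maker. That gap, flagged by the 1986 authors, is the same one that vague standards in asylum law present today.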
Today, AI can approximate some forms of human thinking and intelligence, but the technological capacity to reliably encode the complex legal tests that determine a person’s refugee status, or their need for protection against refoulement under international law, still does not exist. These assessments require decision-makers to have regard to the possible future risks facing individuals who are refused entry or returned to their country of origin, and they also rest on complex and nuanced tests for confirming identity and credibility. Any effort that relied solely on AI to decide refugee status, or to reject claims for other forms of international protection based on future risk of human rights abuses, would be highly controversial.