Because technological adoption can spread quickly and widely across borders, the ethical and legal concerns attached to AI cannot be addressed by domestic efforts alone. Although most multilateral AI governance initiatives are in their infancy, standard-setting for AI in the asylum sector is poised to develop significantly in three notable areas: multilateral data-sharing frameworks; high-level initiatives; and development assistance in migration contexts.
Multilateral frameworks for data sharing
Multilateral frameworks that set the terms for data exchange between states are beginning to include guidance and minimum standards for automation and AI. These include frameworks on information exchange in immigration, which also have implications for refugee protection. Pioneering efforts at standard-setting can be helpful, but they can place a significant burden on less-resourced countries that seek to adopt AI early, potentially without the necessary human and legal safeguards against harm.
For example, the International Civil Aviation Organization (ICAO) has recently revised its standards for the collection and analysis of Passenger Name Records (PNRs). PNRs are compiled by airlines for a variety of commercial purposes, and are increasingly used to facilitate the implementation of UN Security Council resolutions on the prevention of terrorism and to apply risk-based security controls to airline passengers.
The revised ICAO standards take on board a 2017 opinion of the European Court of Justice and the EU data protection regime (cited above in the section ‘meaningful human control’). This is a positive step: the standards recommend that states ‘base the automated processing of PNR data on objective, precise and reliable criteria that effectively indicate the existence of a risk, without leading to unlawful differentiation’, and that they discourage ‘decisions that produce significant adverse actions affecting the legal interests of individuals based solely on the automated processing of PNR data’.
These ‘legal interests’ should logically include the ability to seek asylum and protection against refoulement in both transit and destination countries. However, given the expanding number of situations in which cross-border data exchange will rely on automated processing, protecting freedom of movement and the ability to leave a place of risk in an era of automated risk assessment requires further attention. Data sharing without safeguards can also place asylum seekers and their families at risk of harm if the information shared reveals that they have left their country of origin and sought asylum from persecution. Other very practical questions concern how to ensure that all states are equipped to assess risks objectively, prevent bias, identify political or other interests in data and machine learning, and provide meaningful opportunities for appeal, human oversight and intervention. This seems all the more necessary given that emerging technologies will be promoted by large, well-resourced early adopters of AI through bilateral and multilateral frameworks.
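By way of illustration only, the sketch below shows one hypothetical way the ICAO principle that adverse decisions should not rest ‘solely on the automated processing of PNR data’ might be expressed as a human-in-the-loop safeguard: an automated screen may flag a record against pre-defined, recorded criteria, but any decision requires a human reviewer. All field names, criteria and thresholds here are invented for the purpose of the example and do not describe any actual PNR system.

```python
# Illustrative sketch only: a hypothetical human-in-the-loop gate for
# automated risk flagging. Criteria, field names and thresholds are
# invented and do not reflect any real PNR screening system.
from dataclasses import dataclass, field
from typing import Callable, Optional


@dataclass
class PassengerRecord:
    # Minimal, invented stand-in for a PNR-style record.
    record_id: str
    itinerary_risk_score: float   # output of some assumed upstream model
    document_mismatch: bool


@dataclass
class ScreeningOutcome:
    record_id: str
    automated_flag: bool
    reasons: list = field(default_factory=list)
    human_decision: Optional[str] = None  # set only after human review


def automated_screen(record: PassengerRecord) -> ScreeningOutcome:
    """Apply pre-defined, objective criteria and record the reasons for any
    flag, so the basis of the automated output remains auditable."""
    reasons = []
    if record.itinerary_risk_score > 0.8:  # illustrative threshold
        reasons.append("itinerary risk score above threshold")
    if record.document_mismatch:
        reasons.append("travel document mismatch")
    return ScreeningOutcome(record.record_id,
                            automated_flag=bool(reasons),
                            reasons=reasons)


def decide(outcome: ScreeningOutcome,
           human_review: Callable[[ScreeningOutcome], str]) -> ScreeningOutcome:
    """No adverse action follows from the automated flag alone: a flagged
    record must pass through a human reviewer before a decision is recorded."""
    if outcome.automated_flag:
        outcome.human_decision = human_review(outcome)  # e.g. 'cleared'/'referred'
    else:
        outcome.human_decision = "cleared"
    return outcome


if __name__ == "__main__":
    record = PassengerRecord("R-001", itinerary_risk_score=0.9,
                             document_mismatch=False)
    # Placeholder reviewer; in practice this would be a trained officer with
    # access to the recorded reasons, and the individual would retain a right
    # of appeal against the final decision.
    result = decide(automated_screen(record), human_review=lambda o: "referred")
    print(result)
```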
High-level initiatives on AI
Numerous initiatives have begun the search for common ground on ethical principles for AI. Many of these aim to bring ethical principles together into normative frameworks, and to reinforce commitment to existing human rights law and standards so as to provide a legal foundation for ethical considerations. Some initiatives are expected to focus on technologies, such as FRT, used at borders. A perceived need to counter China’s growing capacity to export surveillance technologies, including FRT, was one impetus for the US joining the Global Partnership on AI (GPAI) with like-minded states in June 2020.
Multilateral efforts may help to promote minimum standards for AI where serious human rights concerns are associated with particular technologies and actors. However, commentators rightly fear that high-level principles may fall short of providing enforceable rights and safeguards for non-citizens (such as asylum seekers) when translated into domestic frameworks, given already high levels of public tolerance for new technologies at borders.
Development assistance frameworks for migration management
There are long-standing debates about whether development aid can – or should – be linked to specific goals of donor countries, including the management and reduction of refugee and migrant flows. Meanwhile, technological assistance for immigration infrastructure, border management and refugee systems is already common.
Donor countries and international organizations offering technological assistance are required to exercise due diligence to ensure that such assistance and cooperation do not result in human rights violations abroad. Appropriate due diligence includes the transparent and effective use of HRIAs.
As bilateral or multilateral assistance begins to incorporate new technological functions, attention will need to be paid to how these applications operate at both an individual and a system-wide level. Among other aspects, due consideration will need to be given to any impacts on the global movement of people – especially when technical assistance is coupled with agreements on the sharing of personal data.