Meaningful human control
To avert some of the risks associated with fully automated decision-making, many policymakers emphasize the importance of human control over AI systems. This is evident, for instance, in the European Commission’s draft AI Act. There has also been an identifiable shift in AI policy thinking towards augmenting rather than replacing human action and expertise, given doubts about whether AI tools can apply the same logic as humans, and concerns about trust and accountability (reflecting the sense that a human or institutional subject should remain accountable and liable for outcomes).
This position is replicated in public statements from some immigration authorities. Australian officials, for example, have publicly affirmed that even where AI technologies are used to inform a decision, a human decision-maker will make the final determination. UK officials have also given evidence that data-based risk profiling is not used in immigration to make solely automated decisions, and that procedures retain a ‘human in the loop’. Products for border security functions are often developed on the proviso that they will provide advice and assistance only to a final human decision-maker.
But the conditions for meaningful and effective human control are still being worked out, including through the courts. One case with relevance for the asylum sphere occurred in 2017, when the European Court of Justice examined a proposed EU–Canada agreement to authorize the cross-border transfer of airline-collected passenger data (known as Passenger Name Records or PNRs) for security pre-screening purposes. The bilateral agreement anticipated that the data would be assessed using automated methods (including algorithms, but not necessarily advanced AI). Given that the data collected could then be used by a human decision-maker to make binding decisions about an individual’s authority to enter Canada, the court provided a non-exhaustive list of recommendations to ensure the data transfer agreement met EU data protection standards, including in the following areas:
- Non-interference with privacy: Pre-established models and criteria should be specific and reliable, making it possible to arrive at results targeting individuals who may be under a ‘reasonable suspicion’ of participation in terrorist offences or serious transnational crime. The results should be ‘non-discriminatory’ to prevent indiscriminate interference with the right to privacy.
- Human control: Given the potential for error, any ‘positive result’ must be subject to an individual re-examination by non-automated means before the relevant action adversely affects the air passenger in question (a schematic illustration of this requirement is sketched after this list).
- Reliability: The reliability of automated models must be subject to review under the EU–Canada agreement.
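The human-control condition, in particular, can be read as a simple gating rule: an automated ‘positive result’ is never, on its own, sufficient to trigger adverse action. The following is a minimal, hypothetical Python sketch of that rule; the data fields, function names and review step are illustrative assumptions, not a description of any actual PNR screening system.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ScreeningResult:
    """Hypothetical output of an automated pre-screening step (illustrative only)."""
    passenger_id: str
    flagged: bool         # the automated 'positive result'
    model_rationale: str  # criteria that triggered the flag

def decide_action(result: ScreeningResult,
                  human_review: Callable[[ScreeningResult], bool]) -> str:
    """Adverse action requires an individual, non-automated re-examination.

    `human_review` stands in for a case officer's own examination of the file;
    it returns True only if the officer, having reviewed the underlying
    information, confirms the concern identified by the automated model.
    """
    if not result.flagged:
        return "no-action"
    # The automated flag alone never triggers adverse action against the passenger.
    return "refer-for-adverse-action" if human_review(result) else "no-action"
```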
While principles-based requirements for information exchange are welcome, there is increasing consensus that effective human control will be difficult to achieve where an AI tool contains complex algorithms that operate as a ‘black box’ system. In such cases, a tool may not be fit for purpose (and so should be subject to a ban or moratorium) until solutions such as ‘explainable AI’ are well developed. The UK Information Commissioner’s Office has recommended that organizations looking to use AI tools should select tools that can already be reviewed easily for accuracy or where the logic of the rules used can be explained.
Human control is also not a perfect protection against harm. To act as a safeguard against mistakes with significant consequences, effective human control needs to be just that – effective. Determining factors in this respect include the decision-maker’s expertise and capacity to consider, review and make decisions that are appropriately informed by, but independent of, an AI analysis. This in turn depends on what non-AI-derived information is available, the scope of the human decision-maker’s legal and actual authority to reject AI-generated results, and the human decision-maker’s level of professional knowledge. Many national asylum systems already suffer from under-resourcing and limited training for decision-makers, leaving rejected asylum claims vulnerable to being overturned on appeal. Other common characteristics of asylum systems, such as the potential for political pressure on decision-makers, limit the suitability of relying on a ‘human in the loop’ to act as a control mechanism for automated functions.
Legislative safeguards
Courts are generally sympathetic to public authorities’ reasons for wanting to introduce AI systems, but have ruled against governments in several recent cases because of a lack of legal safeguards and legislative oversight. Courts are placing the burden back on authorities and providers of AI to justify the need for such systems and to work through legislative oversight processes, rather than leaving individuals to challenge AI systems through the courts.
For example, UK police were trialling facial recognition technology (FRT) in public spaces to identify persons on watchlists. In deciding that this use of FRT did not meet standards of legality, the Court of Appeal found flaws in the process for authorizing the use of FRT and identified a lack of clear, authorized criteria for deciding how individuals would be selected and placed on a watchlist that would then be fed into the FRT system. The existence of border watchlists and stop-lists populated mostly by individuals from minority communities has raised similar concerns about unlawful discrimination.
UK courts have also recently affirmed that the cost of creating and maintaining a technological system is not a sufficient rationale for failing to correct unfairness in policy, legislation and technology.
Responsible innovation
Canadian and Australian immigration authorities have both explored the possibility of automating so-called ‘neutral’ or ‘positive’ decisions in immigration visa application triaging and identification matching. The logic for this is that AI tools would not be used to make a final (or near-final) decision that could negatively affect a person’s legal rights or obligations, thereby potentially generating a reason to seek review of the decision under domestic legal frameworks. Used in a non-discriminatory way, AI assistance in the triage and prioritization of cases in decision-making streams could speed up the processing of asylum claims – to the benefit of both refugees and governments.
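Confined in this way, automation amounts to a routing and fast-tracking step that can only ever produce a ‘positive’ or ‘neutral’ outcome. The sketch below illustrates that logic in a highly simplified, hypothetical form; the function, score and threshold are assumptions for the example, not any authority’s actual workflow.

```python
# Hypothetical 'positive/neutral-only' triage: the model may fast-track
# straightforward cases, but it never refuses an application - anything it
# cannot clear stays with a human caseworker.

def triage(model_score: float, fast_track_threshold: float = 0.9) -> str:
    """Route an application to a processing stream.

    `model_score` is an illustrative model estimate that a case is
    straightforward and complete; the tool issues no negative outcomes.
    """
    if model_score >= fast_track_threshold:
        return "fast-track"         # 'positive' automation: quicker processing
    return "standard-human-review"  # everything else: human decision-maker

print(triage(0.95))  # -> fast-track
print(triage(0.40))  # -> standard-human-review
```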
Caution may be needed, though, as intermediary steps (such as triage and prioritization) in asylum processes can themselves produce negative consequences and be subject to appeal. The UK recently removed algorithms from use in immigration triage after legal actions seeking their disclosure for bias testing were commenced, although the authorities denied the tools were biased. Separately, a Dutch court found that AI used to identify risk factors for welfare fraud that would then be investigated by a human could – unless safeguards were put in place, including greater transparency and testing for discrimination – still arbitrarily interfere with the right to privacy under European human rights law.
Multi-stakeholder development?
In debates on responsible and trustworthy AI, significant emphasis is placed on the need for multi-stakeholder engagement to create AI that is more likely to be beneficial, lawful and ethical, and that does not build in risks from the outset. Such debates consider the use of multi-disciplinary teams in the design of AI processes, and the potential for co-creation or consultation with stakeholders to avoid the introduction of human rights blind spots ex ante.
There have been comparable efforts in the justice sector, including from judges, to identify uses for AI that would receive broad support within the sector (including from civil society), and to identify AI tools that present ‘red flags’, in order to steer policymakers on the types of tools that should or should not receive investment.
There are several other potentially significant benefits to the co-creation of AI systems in the asylum sector. Firstly, such systems, if well designed, could support dignified application procedures in what is currently an adversarial, contested and undignified space. Members of displaced communities trying to enter a country for safety are often desperate and have suffered trauma. Overly intrusive searches for data in phones, social media and other sources to create risk profiles also cause asylum seekers to eschew technology that can and should help them to remain connected to friends and family.
Secondly, private firms are more alert than ever to their human rights duties and impacts on vulnerable communities. Effective business human rights due diligence should be informed by consultation with potentially affected groups. The UN Guiding Principles on Business and Human Rights urge businesses to pay special attention to human rights impacts on individuals from groups or populations that may be at heightened risk of vulnerability or marginalization; this requires a tailored understanding of the rights of those groups and an appreciation of the risks generated by AI in the specific contexts in which automated processes will be used.
But the highly changeable nature of asylum policy – along with trends towards more restrictive borders, as seen at the height of the COVID-19 pandemic – is likely to stymie collaboration, creating real risks for community groups and technology firms that might wish to partner with governments. Policy shifts towards more restrictive immigration regimes may well create legal and reputational questions for private sector partners that design, license and operate AI-based systems, given corporate responsibility to respect international human rights law standards as well as domestic privacy and non-discrimination legal regimes.
For example, since 2013, the US Immigration and Customs Enforcement (ICE) agency has used a computerized risk classification assessment to help determine whether to detain or release a person pending an immigration (including asylum) hearing. As an existing tool embedded within an established system, ICE’s computerized assessment has presented researchers with the challenge of improving its operational effectiveness. Proposals have included the potential integration of predictive analytics that can better account for ‘equity factors’, including the rate at which asylum is ultimately granted. In theory this could assist officers in reducing the number of people detained. However, researchers have found that, across several US administrations, the variables and weightings have been manipulated to adapt the tool’s outputs to the immigration policy agenda prevailing at any given time – including the former Trump administration’s ‘no release’ policy on the mandatory detention of illegal immigrants. The result has effectively been to remove the tool’s option to recommend against detention. Arguably, this has undermined the goals of a risk-based approach, which in theory should allow individuals who are considered low-risk to remain in the community pending immigration proceedings.
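The dynamic described by these researchers is easier to see in schematic form: in a weighted risk score, recalibrating the weights or the decision threshold can make one outcome effectively unreachable without any visible change to the tool itself. The sketch below is purely illustrative and is not ICE’s actual model; the factors, weights and threshold are invented for the example.

```python
# Hypothetical illustration (not ICE's actual model): a weighted risk score that
# recommends detention or release. Reweighting the factors or moving the release
# threshold far enough means the 'release' recommendation is never produced,
# even for the lowest-risk profile.

def recommend(factors: dict[str, float], weights: dict[str, float],
              release_threshold: float) -> str:
    score = sum(weights[k] * factors.get(k, 0.0) for k in weights)
    return "release" if score < release_threshold else "detain"

low_risk = {"criminal_history": 0.0, "flight_risk": 0.1, "community_ties": 1.0}

# Original calibration: low-risk individuals can be recommended for release.
w1 = {"criminal_history": 3.0, "flight_risk": 2.0, "community_ties": -1.0}
print(recommend(low_risk, w1, release_threshold=1.0))   # -> "release"

# Recalibrated so that no realistic profile falls under the threshold: the tool
# still runs, but it effectively never recommends against detention.
w2 = {"criminal_history": 3.0, "flight_risk": 2.0, "community_ties": 0.0}
print(recommend(low_risk, w2, release_threshold=0.0))   # -> "detain"
```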
The dilemma for business is clear: although tools must be able to adapt to changing official policy, this adaptability poses significant challenges. Should firms accept that if government policy becomes increasingly anti-immigration and runs contrary to international legal obligations, technology tools should follow suit?
Human rights compliance for businesses performing state functions
Governments are increasingly outsourcing refugee protection, border management and the development of AI systems, often at the same time. This creates a complex ‘government–business’ nexus in which private entities have significant involvement in, and sometimes control over, the design and implementation of policy, but do not necessarily face the same level of accountability for respecting and protecting rights.
The UN Office of the High Commissioner for Human Rights (OHCHR) has put it bluntly and in practical terms. If states are going to rely on the private sector to deliver public goods or services, they have to be able to oversee such processes and demand accuracy and transparency around human rights risks. If not satisfied that the risks can be mitigated, states should not use private contractors to deliver public goods or services.
A non-exhaustive list of tools that can increase confidence in private sector delivery of public services via AI systems includes the following:
1. Mandatory human rights impact assessments
Smart regulatory mixes for AI already include risk and impact assessments. But as indicated above, these tools will not be able to look at all risks to individuals in a way that meets international legal obligations. To address the issue, in Europe the Committee of Ministers of the Council of Europe has advocated the use of compulsory human rights impact assessments (HRIAs) for all public sector AI systems, in addition to any data protection, social, economic or other impact assessments required under existing law. Standalone HRIAs or assessments embedded in other tools are also relevant for businesses – which have their own duties under multilateral frameworks and domestic and regional legal systems – as well as for international organizations.
To truly mitigate AI-related risks in the context of asylum processes, assessments need to be broad enough to address system-based effects, and – as the Committee of Ministers has put it – include an evaluation of the ‘possible transformations that these systems may have on existing social, institutional or governance structures’. Timing is also critical. Assessments should occur ‘regularly and consultatively’ throughout the design and deployment processes, notably ‘prior to public procurement, during development, at regular milestones, and throughout their context-specific deployment in order to identify the risks of rights-adverse outcomes’.
2. Third-party audits and ongoing independent review functions
Complementing HRIAs, the proposed EU AI Act requires developers to prove compliance with certain standards, such as those relating to accuracy. But it allows developers and users to self-certify and self-monitor such compliance. This approach takes into account the likelihood that a vast number of AI systems will become part of everyday life, making it difficult to demand and regulate independent third-party review or oversight of all systems.
However, it will remain crucial to have third-party audits and independent oversight in some domains, including asylum. Advocates of safeguards have argued that the opaque, discretionary and often discriminatory nature of decision-making in asylum and border control, along with the growing role of for-profit private entities in government, demands independent oversight.
In addition, independent reviews and supervisory functions can alleviate the burden on individuals to challenge AI-related assessments, particularly when the people involved may already be constrained by limited access to domestic justice mechanisms, a lack of resources, and language barriers. The New Zealand Algorithm Charter for government use of AI recommends the use of peer review for algorithms and encourages departments to ‘act on’ the results of those reviews. This tends in the right direction, but the effectiveness of this voluntary charter has not yet been tested. Nor is it an alternative to ensuring mandatory access to legal remedies for those affected by wrongful decisions.
3. Minimum viable fairness and accuracy levels for products
The next stages in AI policy development will need to define the level of acceptable risk against human rights standards. This is not easy.
Computer science research is actively looking to improve the accuracy, verifiability and reliability of AI tools after they leave the training environment. There are even hopes that AI can help to eliminate profiling based on generalized assumptions relating to race, ethnicity or other factors. The argument is that if a tool can search in a discriminatory way for patterns that indicate risk, it should also be able to look for patterns that reveal discrimination; however, some doubt that ‘fair learning AI’ is really feasible.
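One way to picture the ‘fair learning’ argument is that the same aggregate pattern-finding can be turned back on a tool’s own outputs to test for disparate treatment across groups. The sketch below is a minimal, hypothetical illustration of such a check; the group labels, records and simple parity gap are assumptions for the example, not a validated fairness methodology.

```python
# Illustrative check: rather than profiling individuals, the same aggregate
# analysis can test whether a tool's outputs flag some groups far more often
# than others. The records and group labels are invented for the example.

from collections import defaultdict

def flag_rates_by_group(records):
    """records: iterable of (group_label, was_flagged) pairs."""
    flagged, totals = defaultdict(int), defaultdict(int)
    for group, was_flagged in records:
        totals[group] += 1
        flagged[group] += int(was_flagged)
    return {g: flagged[g] / totals[g] for g in totals}

def demographic_parity_gap(rates: dict) -> float:
    """Difference between the highest and lowest group-level flag rates."""
    return max(rates.values()) - min(rates.values())

rates = flag_rates_by_group([("A", True), ("A", False), ("B", True), ("B", True)])
print(rates, demographic_parity_gap(rates))  # {'A': 0.5, 'B': 1.0} 0.5
```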
For public sector applications being developed or introduced now and in the near future, something more will be needed. Courts have said clearly that they expect public bodies introducing new technologies to ‘satisfy themselves that everything reasonable which [can] be done [has] been done’ to prevent unlawful bias or other flaws. This includes products obtained from private vendors or developers.
Meeting this requirement is also challenging for public sector entities, as the UK’s Centre for Data Ethics and Innovation has highlighted in a public enquiry.
The challenges for asylum systems are likely to mirror those in other domains. Are trade-offs between accuracy and transparency lawful and justifiable? Can a level of accuracy and assurance against discrimination be maintained over time, and how would this be monitored and evaluated? How would problems be rectified?
The ‘right’ level of transparency
There is currently a strong focus on defining transparency standards for public sector AI, with some encouraging indicators in evidence but a lot of questions still to be addressed.
The draft EU AI Act proposes a public register for high-risk AI uses, with notice to be provided to the people and entities affected. This echoes calls by advocates for a public register of AI systems used in asylum cases. In the UK, the government is expected to accept the recommendation of the national Commission on Race and Ethnic Disparities that transparency be mandatory for all public sector organizations ‘applying algorithms that have an impact on significant decisions affecting individuals’. In New Zealand, the government department responsible for asylum and immigration has signed up to the New Zealand Algorithm Charter (July 2020), which requires departments to maintain transparency by ‘clearly explaining how decisions [that affect individuals] are informed by algorithms’, and makes recommendations on how to achieve this. But in all cases, the scope of the obligation (e.g. is a decision ‘informed by’ AI if the AI performs a sorting function?) has yet to be defined and tested.
For the proposed EU AI Act to meet EU data protection standards, transparency requirements should be sufficient to allow for independent review and should apply both to final decisions and to ‘intermediary’ processes (such as vulnerability assessments that rely on profiling). Recent European jurisprudence has highlighted how much transparency about AI used within a broader system matters, both to final outcomes and to the risk of human rights harm. A Dutch court was not given access to a system used to help identify welfare recipients who should be investigated for welfare fraud. Without some level of access to ‘independently verifiable information’ about how the system worked, the court could not assess whether the interference with private life was necessary and proportionate, and so found a violation of the right to privacy under European human rights law; it also found that there was at least a possibility of discriminatory interference in the private lives of welfare recipients. The fact that the tool did not itself produce a final decision did not alter the court’s assessment, particularly as the tool was being trialled in economically deprived neighbourhoods.
Big questions that remain include what exceptions to evolving transparency standards should be permitted. For example, when can authorities use AI without notifying individuals or providing access to information about how AI assessments were made (and on the basis of what data)?
Even without the introduction of AI, the permissible scope of limitations on data protection – and of secrecy provisions around the disclosure of evidence – is still being litigated in a number of jurisdictions, including in relation to data and evidence in immigration deportation proceedings within Europe. Introducing automated decision-making into systems veiled in secrecy is fraught with risk, and there is a hard-fought debate about the right to information and the ability to challenge decisions.
Any carve-outs or exceptions to AI transparency introduced in legislation should be strictly limited. They must be narrow, align with legal standards, and not ultimately undermine avenues for independent and judicial review, so that those affected can still assert their rights and seek remedy where necessary.
Striking a balance between rights and interests is likely to become more, not less, complex with the introduction of AI technologies. Particular factors to consider will include the use of predictive analytical tools based on profiling, the reliance of automated tools on increasingly large datasets controlled by private sector actors, and the increasing presence of AI across large-scale, complex IT and decision-making systems.