Both governments and companies should provide suitable access to remedy when AI goes wrong. This entails effective reparation, accountability for those responsible and measures to prevent recurrence.
6.1 Remedies: the landscape
Little attention has been given to developing a scheme of remedies for cases in which AI goes wrong. Responsibility needs to be clarified, and transparency is required to assess whether and how AI has gone wrong.
While AI governance principles commonly include a principle of accountability, this often refers to impact assessments, audit or oversight, rather than a requirement of remedy in the event of harms. Many sets of AI governance principles in fact have no provision for remedy. As the UN special rapporteur on contemporary forms of racism has pointed out, ‘[e]thical commitments have little measurable effect on software development practices if they are not directly tied to structures of accountability in the workplace’.
To some extent, legal remedies for wrongs caused by the application of AI already exist in tort law (negligence) and administrative law, particularly where those wrongs are on the part of public authorities. However, the law and its processes will need to develop metrics for evaluating AI. For example, English administrative law typically has regard to whether the decision-maker took the right factors into account when making their decision; but an AI system relies on statistical inference rather than explicit reasoning, so it may not be possible to identify which ‘factors’ it took into account at all. Other features of AI, such as the opacity of systems, the imbalance of information and knowledge between companies and users, the scalability of errors and the rigidity of decision-making, may also pose challenges. As yet, there is no clear ‘remedy pathway’ for those who suffer abuses of human rights as a result of the operation of AI.
Those at greatest risk from harms caused by AI are likely to be the most marginalized and vulnerable groups in society, such as immigrants and those in the criminal justice system. This makes it all the more important to ensure that avenues for remedy are accessible to all, whatever their situation.
There has already been some litigation challenging the application of AI by reference to human rights law or its local equivalent. Notable cases include:
- In 2016, State of Wisconsin v Eric L Loomis challenged the use of COMPAS, an AI-based risk assessment tool, in sentencing defendants in criminal cases. The COMPAS risk assessment estimated the risk of recidivism by comparing the defendant with other individuals with a similar history of offending. The Supreme Court of Wisconsin held that a court’s consideration of a COMPAS risk assessment is consistent with the defendant’s right to due process, provided that the risk assessment is used alongside other factors and is not determinative of the defendant’s sentence.
- In May 2017, teachers in Houston successfully challenged the use of an algorithm known as EVAAS, developed by a private company to measure teacher effectiveness. The aim of the algorithm was to enable the Houston Independent School District (HISD) to terminate the employment of teachers whose performance was deemed ineffective. The US district court denied HISD’s application for summary judgment against the teachers’ claim, finding that the teachers were ‘unfairly subject to mistaken deprivation of constitutionally protected property interests in their jobs’, contrary to the Due Process Clause of the Fourteenth Amendment of the US Constitution, because they had no meaningful way to ensure correct calculation of their scores, nor any opportunity to independently verify or replicate those scores. Following that ruling, the case was settled and HISD abandoned the EVAAS system.
- In March 2018, Finland’s National Non-Discrimination and Equality Tribunal decided that a credit institution’s decision not to grant credit to an individual was discriminatory. The tribunal ruled that the decision was based not on the individual’s own credit behaviour and creditworthiness, but on assumptions drawn from statistical data and information on payment defaults relating to other people, grouped by criteria such as gender, first language, age and residential area. The tribunal prohibited the credit institution from using this decision-making method.
- In February 2020, the Hague district court ordered the Dutch government to cease its use of SyRI, an automated programme that reviewed the personal data of social security claimants to predict how likely people were to commit benefit or tax fraud. The Dutch government refused to reveal how SyRI used personal data, making it extremely difficult for individuals to challenge the government’s decisions to investigate them for fraud or the risk scores stored on file about them. The court found that the legislation regulating SyRI did not comply with the right to respect for private life in Article 8 ECHR, as it failed to strike a fair balance between the benefits SyRI brought to society and the interference with the private life of those whose personal data it assessed. The court also found that the system was discriminatory, as SyRI was deployed only in so-called ‘problem neighbourhoods’, a proxy for discrimination on the basis of socio-economic background and immigration status.
- In August 2020, R (Bridges) v Chief Constable of South Wales Police was the first challenge to AI invoking UK human rights law. South Wales Police was trialling live automated facial recognition technology (AFR) to compare CCTV images of people attending public events with images of persons on a watch-list database; if there was no match, the CCTV images were immediately deleted from the AFR system. The complainant challenged AFR’s momentary capture of his image and its comparison with the watch-list database, by reference to Article 8 ECHR and the UK Data Protection Act. The Court of Appeal found that there was no proper basis in law for the use of AFR, and that its use consequently breached the Data Protection Act. The court declined to find that the police’s use of AFR struck the wrong balance between the rights of the individual and the interests of the community, but it did find that South Wales Police had failed to discharge the statutory Public Sector Equality Duty: in buying the AFR software from a private company and deploying it, the force had failed to take all reasonable steps to satisfy itself that the software did not have a racial or gender bias (notwithstanding that there was no evidence to support the contention that the software was biased). The case therefore temporarily halted South Wales Police’s use of facial recognition technology, but left open the possibility of its reintroduction on a proper legal footing and with due regard to the Public Sector Equality Duty. Indeed, South Wales Police has since reintroduced the technology for use in certain circumstances.
- The Italian courts, having held in 2019 that administrative decisions based on algorithms are illegitimate, reversed that view in 2021. The courts welcomed the speed and efficiency of algorithmic decision-making but clarified that it remains subject to the general principles of administrative review in Italian law, including transparency, effectiveness, proportionality, rationality and non-discrimination. Those challenging public decision-making are entitled to call for disclosure of the algorithms and related source code in order to challenge decisions effectively.
- In July 2022, the UK NGO Big Brother Watch lodged a legal complaint with the UK Information Commissioner concerning the alleged use of facial recognition technology by Facewatch and the supermarket chain Southern Co-op to scan all supermarket visitors and to maintain and assess profiles of them, in breach of data protection and privacy rights.
6.2 Remedies: human rights law
Human rights law requires both governments and companies to provide a suitable right to remedy in the event of a breach of their obligations and responsibilities. Remedy comprises effective reparation, appropriate accountability for those responsible and measures to prevent recurrence. The availability of remedy is crucial if human rights or ethical principles are to have real impact in the face of countervailing commercial considerations.
This means that, at all stages of design and deployment of AI, it must be clear who bears responsibility for its operation. In particular, clarity is required on where the division of responsibilities lies between the developer of an AI system and the purchaser and deployer of the system, including if the purchaser adapts the AI or uses it in a way for which it was not intended. Consequently, purchasers of AI systems will need adequate understanding or assurance as to how those systems work, as was demonstrated for the public sector in the Bridges case, discussed above. In that case, the court also held that commercial confidentiality around any AI technology does not defeat or reduce the requirement for compliance with the Public Sector Equality Duty.
Complainants need to know how to complain and to whom, and to be confident that their complaint will be addressed in a timely manner. Remedy relies on transparency and explainability – complainants should have enough information to understand how a decision about them was made, and the role and operation of AI in the decision-making process. They may need access to data on how the AI was designed and tested, how it was intended to operate and how it has operated in the specific case, as well as information on the role of human decision-making or oversight in the process.
Remedy may be provided by the courts, by other governmental mechanisms such as regulators, ombudspersons and complaints processes, as well as by non-governmental mechanisms such as corporate remediation processes. The UN Guiding Principles recommend that all businesses ‘establish or participate in effective operational-level grievance mechanisms’. Such mechanisms should be legitimate (i.e. enabling trust); accessible; predictable; equitable; transparent; rights-compatible; a source of continuous learning; and based on engagement and dialogue with stakeholders.
There are challenges in designing appropriate grievance mechanisms for addressing harms caused by AI. Remedial systems that rely on individual complaints tend to be better at addressing significant harms suffered by a few than harms suffered by many. But AI, with its capacity to operate at scale, risks infringing the rights of large numbers of people – for example, by using personal data in violation of the right to privacy or by engaging in widespread discriminatory treatment. Many of those affected may be vulnerable or marginalized, including asylum-seekers and those in the criminal justice system. Consequently, there needs to be provision both for individual complaints and for group or representative complaints directed at a whole system rather than a single decision. Ombudsmen, national human rights institutions and civil society organizations should be adequately equipped to support victims’ complaints and to challenge AI systems that are systematically causing harm. Remedies should comprise both adequate reparation for victims and requirements to improve, or end the use of, the AI systems concerned so as to prevent recurrence of any harm identified.
Similarly, a business should be able to hold other companies accountable where their AI has harmed its operations. This may be because the business has purchased an AI system that has not functioned as intended, or because another company’s AI has in some way interfered with its operations.
Many challenges are expected in this field in the coming years. The guiding principle should remain provision of an effective right to remedy, including for breach of human rights responsibilities.