Key principles of human rights law have an important role to play in determining AI governance standards.
There are three dimensions to AI governance: (i) the substantive standards, or principles, that the developers and implementers of AI should meet; (ii) the processes to ensure that substantive standards are met; and (iii) accountability and remedies for any breach of those standards.
In each of these dimensions, AI governance is immature because technology and its uses have developed much more rapidly than the rules constraining them. Human rights law offers baseline standards for all three dimensions.
4.1 Principles: the landscape
AI ethical principles from companies, civil society and intergovernmental organizations have proliferated in recent years, and their number, diversity and overlap have caused more confusion than clarity. There are common themes, such as data protection, understandability, transparency for accountability and tackling bias, but the precise meaning of each of these terms varies. Some of the principles identified – such as beneficence and non-maleficence – are so abstract that they are not easily translated into practical use in governance. There is no unifying theme across rival sets of ethical principles, and their representativeness is contested: most stem from Europe and North America, from separate corporate and national contexts, and from men.
Some assert that, without unanimity as to what it entails, ethics offers a lexicon that can be used to give a veneer of respectability to any corporate activity. In the words of Philip Alston, ‘as long as you are focused on ethics, it’s mine against yours. I will define fairness, what is transparency, what is accountability. There are no universal standards.’
4.2 Principles: human rights law
To date, there are no international human rights treaties that specifically address the impact of AI, but existing human rights laws apply to applications of AI. The former UN high commissioner for human rights, Michelle Bachelet, clarified that AI can have significant impacts on the implementation of many human rights, including privacy, health, education, freedom of movement, freedom of assembly and association, and freedom of expression. Bachelet noted that inferences and predictions about individuals made by AI may profoundly affect not only those individuals’ privacy but also their autonomy, and may raise issues regarding freedom of thought and opinion, freedom of expression, the right to a fair trial and other related rights. Uses of faulty data may result in bias or discrimination, as may faulty AI tools. Uses of AI in the criminal justice process may lead to violations of the rights to privacy, fair trial, freedom from arbitrary arrest and detention and even the right to life.
While all rights are relevant, this section provides an overview of key rights that should form the basis of any safeguards for AI development.
4.2.1 Privacy
The challenges presented by AI
AI is having a huge impact on privacy and data protection. Far more information about individuals is collated now than ever before, increasing the potential for exploitation. A new equilibrium is needed between the value of personal data for AI on the one hand and personal privacy on the other. There are two parallel challenges to overcome: (i) AI is causing, and contributing to, significant breaches of privacy and data protection; and (ii) use of extensive personal data in AI decision-making and influencing is contributing to an accretion of state and corporate power.
Examples of breaches of privacy and data protection include:
- AI’s requirement for data sets may create an incentive for companies and public institutions to share personal data in breach of privacy requirements. For example, in 2017, a UK health trust was found to have shared the data of 1.6 million patients with Google’s DeepMind, without adequate consent from the patients concerned.
- AI may facilitate the harvesting of personal data without adequate consent. Between 2013 and 2018, Cambridge Analytica collated personal data of up to 87 million Facebook users without their knowledge or consent for use in political advertising.
- The practice of using publicly available images to create AI facial recognition databases raises major privacy concerns. Projects such as Exposing.ai aim to highlight the privacy implications of extant large facial recognition datasets. Some large companies, including Microsoft and Facebook, have closed their facial recognition operations. Clearview AI’s provision of facial recognition technology for law enforcement purposes – via a database of 10 billion images gleaned from the internet – has been found in breach of privacy laws in several countries, including Australia, Canada, France and the UK.
- AI lends itself to bulk interception and assessment of online communications. In 2021, the ECtHR found that the UK’s former regime for bulk interception, using digital and automated methods, lacked necessary end-to-end safeguards for compliance with privacy rights.
- ‘Smart’ devices, such as fridges and vehicles, may collate data on users not only to improve performance but also to sell to third parties. If not properly secured, such devices may also expose users to surveillance by hackers. In 2017, for example, the German authorities withdrew the ‘My Friend Cayla’ doll from sale over fears that children’s conversations could be listened to via Bluetooth.
AI impacts privacy in several ways. First, its thirst for data creates compelling reasons for increased collection and sharing of data, including personal data, with the aim of improving the technology’s operation. Second, AI may be used to collate data, including data of a sensitive, personal nature, for purposes of surveillance. Third, AI may be used to develop profiles of individuals that then form the basis of decisions on matters fundamental to their lives – from healthcare and social benefits to employment and insurance provision. As part of this profiling, AI may infer further, potentially sensitive information about individuals without their knowledge or consent, such as conclusions about their sexual orientation, relationship status or health conditions. Finally, AI may make use of personal data to micro-target advertising and political messaging, to manipulate and exploit individual vulnerabilities, or even to facilitate crimes such as identity theft.
International human rights law
The human right to privacy currently entails that any processing of personal data should be fair, lawful and transparent, based on free consent or another legitimate basis laid down in law. Data should only be held for a limited period and for specific purposes, with those purposes not to be lightly changed. Data should be held securely, and sensitive personal data should enjoy heightened protection. Privacy entails that individuals should know that their personal data has been retained and processed, and that they have a right both to rectify or erase their personal data and to limit how it is used. Privacy further entails that individuals must not be exposed to mass surveillance or unlimited profiling. Personal data should not be transferred, particularly overseas, unless similar standards will be upheld by the recipient of that data.
Human rights law is already the widely accepted basis for most legislation protecting privacy. The EU’s General Data Protection Regulation (GDPR) is founded on the right to protection of personal data in Article 8(1) of the EU Charter of Fundamental Rights – this is an aspect of the right to privacy in earlier human rights treaties. Privacy and data protection is one of the European Commission’s Seven Principles for Trustworthy AI, while most statements of AI principles include a commitment to privacy.
Application of human rights law to the challenges presented by AI
With the development of AI, it is becoming apparent that changes need to be made to the contours of the right to privacy.
There is growing awareness of the tension between privacy’s requirement to restrict flows of personal data on the one hand, and economic and commercial arguments in favour of free flow on the other. There are many sound reasons for improved data accessibility: fostering developments in AI innovation; facilitating increased use of AI; and preventing data restrictions from distorting markets or acting as a barrier to competition and innovation.
Privacy should not be viewed as static: it is flexible enough to adapt and develop, through new legislation or through judicial interpretation, in light of rapidly changing technological and social conditions. Individual privacy remains vital to ensuring that individuals do not live in a surveillance state, and that they retain control over their own data, and over how and by whom it is seen and used. This is critical at a time when the value of privacy is being steadily and unconsciously diluted.
The human right to privacy should be used to resolve competing interests in an AI-dominated world – whether those interests are commercial, individual or technical. For example, rather than privacy impeding the transfer of anonymized data for use in AI data sets, the balancing between rights and interests allowed by the human right to privacy could be used to set appropriate limits on data-profiling and micro-targeting.
4.2.2 Equality: discrimination and bias
The challenges presented by AI
Because AI generally operates by applying rules to the treatment of people, rather than by assessing each individual on their merits, it carries significant risks of embedding discrimination, as the rules that it applies may distinguish between people, directly or indirectly, by reference to protected characteristics. Indeed, examples of such bias and discrimination in the use of AI abound:
- In 2015, researchers found that female job seekers were much less likely than males to be shown adverts for highly paid jobs on Google.
- In 2016, researchers found that an algorithm used to determine offenders’ risk of recidivism often overstated the risk that black defendants would re-offend, and understated the risk of reoffending by white defendants.
- In 2017, Amazon abandoned its automated recruitment platform, built on observing patterns in applicant CVs over the previous years, having been unable to prevent it from discriminating on the basis of gender or from making other inappropriate recommendations.
- In 2018, Immigration New Zealand suspended its use of data-profiling, which had been predicting likely healthcare costs and criminality of immigrants on the basis of demographics including age, gender and ethnicity.
- In 2019, researchers found that AI widely used to allocate healthcare in US hospitals was systematically discriminating against black people, by referring them on to specialized care programmes less frequently than white people. The algorithm was predicting future healthcare costs as a proxy for illness, using past costs for individuals in similar situations. This failed to take account of the fact that less money had been spent historically on caring for black patients.
- In 2020, the Austrian public employment service (AMS) began using an algorithm that enabled it to classify jobseekers according to their likelihood of successful re-employment. The algorithm has been criticized for discriminating on the basis of gender, disability and other factors, and for intersectional discrimination. AMS has suspended use of the algorithm pending the outcome of legal challenges.
AI makes it difficult to assess whether discrimination has occurred. An individual usually becomes aware of discrimination by comparing their treatment, or its outcome, with that of other people. But when complex AI is used to make each individual a personalized offer (for example, on social security payments) or decision (for example, on school or college entry), that individual may have no means of knowing what criteria were used, nor how their result differs from others. Consequently, individuals may not know, or have any accessible way of finding out, whether they have been disadvantaged or how.
AI developers have learned from past problems and gone to considerable lengths to devise systems that promote equality as much as, or more than, human decision-making. Nonetheless, several features of AI systems may cause them to make biased decisions. First, AI systems rely on training data to build the decision-making algorithm. Any imbalance or bias in that training data is then likely to be replicated, and exaggerated, in the AI system. If the training data is taken from the real world, rather than artificially generated, AI is likely to replicate and exaggerate any bias already present in society. Second, AI systems rely on the instructions given to them, as well as on their own self-learning. Any discrimination or bias introduced by the designer risks being replicated and exaggerated in the AI system. Third, AI systems operate within a context: a system deployed amid social conditions that undermine certain groups’ enjoyment of rights will produce biased outcomes. Without human involvement, AI is currently unable to replicate contextual notions of fairness.
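The first of these mechanisms can be shown in a few lines of code. The sketch below is purely illustrative – synthetic data, hypothetical group labels and a deliberately crude ‘training’ step, not a depiction of any system discussed in this paper – but it demonstrates how a model fitted to biased historical decisions simply replays that bias when applied to new cases.

```python
# Illustrative sketch: a 'model' trained on biased historical hiring decisions
# learns to reproduce the disparity embedded in its training data.
# All data is synthetic; groups, thresholds and scores are hypothetical.
import random

random.seed(0)

def make_records(n=10_000):
    """Synthetic past decisions in which group B was shortlisted less often
    than group A at every skill level (the bias we want to expose)."""
    records = []
    for _ in range(n):
        group = random.choice(["A", "B"])
        skill = random.random()                   # 0.0 (low) .. 1.0 (high)
        threshold = 0.5 if group == "A" else 0.7  # biased historical threshold
        records.append((group, skill, skill > threshold))
    return records

def train(records):
    """Deliberately simple 'training': learn one skill cut-off per group from
    the historical outcomes (the lowest skill score ever shortlisted).
    Real systems usually encode group membership indirectly, via correlated
    features, rather than using it explicitly as here."""
    cutoffs = {}
    for group, skill, shortlisted in records:
        if shortlisted:
            cutoffs[group] = min(cutoffs.get(group, 1.0), skill)
    return cutoffs

def selection_rate(records, decide):
    """Share of positive decisions per group under a given decision rule."""
    rates = {}
    for group in ("A", "B"):
        subset = [r for r in records if r[0] == group]
        rates[group] = sum(decide(r) for r in subset) / len(subset)
    return rates

history = make_records()
cutoffs = train(history)

# New applicants drawn from the same population: the learned rule simply
# replays the historical bias (group B needs a higher skill score).
applicants = make_records()
print("historical shortlisting rates:", selection_rate(history, lambda r: r[2]))
print("model shortlisting rates:     ",
      selection_rate(applicants, lambda r: r[1] > cutoffs[r[0]]))
```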
International human rights law
Human rights law provides standards of equality and non-discrimination by which to assess AI. It requires that all individuals’ rights be respected and ensured ‘without distinction of any kind, such as race, colour, sex, language, religion, political or other opinion, national or social origin, property, birth or other status’. The law prohibits not just direct discrimination (i.e. treating people differently on prohibited grounds), but also indirect discrimination (i.e. treating people the same, but in a way that puts people from a protected group at a disadvantage without objective justification) and structural discrimination (i.e. creating structural conditions in society that prevent certain groups from accessing the same opportunities as others). Acknowledging that equality does not always mean treating everyone the same, discrimination law provides structured tests for assessing and preventing unlawful treatment.
This ban on discrimination has formed the basis for well-developed understandings of, and jurisprudence on, non-discrimination in both the public and private sectors. Human rights law obliges governments both to ensure there is no discrimination in public sector decision-making and to protect individuals against discrimination in the private sector. Human rights law does not forbid differential treatment that stems from factors other than protected characteristics, but such treatment must meet standards of fairness and due process in decision-making (see below).
Application of human rights law to the challenges presented by AI
Human rights practitioners are accustomed to considering the prohibition of discrimination by reference to well-established tests, and to resolving tensions between non-discrimination and other rights such as freedom of speech. Adopting the standards that are well established and internationally accepted in human rights law minimizes the need for fresh debates on highly contested concepts in ethics (what is ‘justice’? what is ‘fairness’?). Further, it avoids the confusion that would arise from imposing parallel standards of discrimination, outside human rights law, specifically in the field of AI.
International human rights law does not simply require governments to ban discrimination in AI. As the UN special rapporteur on contemporary forms of racism has observed, human rights law also requires governments to deploy a structural understanding of discrimination risks from AI. To combat the potential for bias, the tech sector would benefit from more diversity among AI developers; more guidance on bias detection and mitigation, and on the collection and use of data to monitor for bias; and more leadership by example from the public sector. AI developers and implementers must consider holistically the impact of all algorithms on individuals and groups, rather than merely the impact of each algorithm on each right separately. Algorithms should be reviewed regularly to ensure that their results are not discriminatory, even though obtaining data for comparison purposes may be challenging. Vigilance is needed to ensure that other factors are not used as proxies for protected characteristics – for example, that postcode is not used as a proxy for ethnic origin.
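The kind of regular review described above can be made concrete. The sketch below is a hypothetical illustration only – synthetic decisions, and invented group and postcode labels – of two simple checks an auditor might run: comparing selection rates across groups (a disparate-impact style ratio), and testing whether an ostensibly neutral attribute such as postcode tracks a protected characteristic closely enough to act as a proxy for it.

```python
# Illustrative sketch of a periodic bias audit over an algorithm's outcomes.
# The decisions, postcodes and group labels below are synthetic.
from collections import defaultdict

# Each record: (postcode_area, ethnicity, decision), where decision is True
# if the algorithm granted the benefit or service in question.
decisions = [
    ("N1", "group_x", True),  ("N1", "group_x", True),  ("N1", "group_y", True),
    ("N1", "group_x", False), ("E5", "group_y", False), ("E5", "group_y", False),
    ("E5", "group_y", True),  ("E5", "group_x", False), ("E5", "group_y", False),
    ("N1", "group_x", True),
]

def selection_rates(records, key_index):
    """Share of positive decisions for each value of the chosen attribute."""
    totals, positives = defaultdict(int), defaultdict(int)
    for record in records:
        totals[record[key_index]] += 1
        positives[record[key_index]] += record[2]
    return {k: positives[k] / totals[k] for k in totals}

by_ethnicity = selection_rates(decisions, 1)
by_postcode = selection_rates(decisions, 0)

# Disparate-impact style check: ratio of the lowest to the highest selection
# rate across groups (values well below ~0.8 usually warrant investigation).
ratio = min(by_ethnicity.values()) / max(by_ethnicity.values())
print("selection rate by ethnicity:", by_ethnicity)
print("selection rate by postcode: ", by_postcode)
print("min/max rate ratio (ethnicity):", round(ratio, 2))

# Proxy check: how strongly does postcode area predict ethnicity? If each
# area is dominated by one group, postcode can stand in for ethnicity.
composition = defaultdict(lambda: defaultdict(int))
for area, ethnicity, _ in decisions:
    composition[area][ethnicity] += 1
for area, counts in composition.items():
    dominant_share = max(counts.values()) / sum(counts.values())
    print(f"postcode {area}: dominant group share = {dominant_share:.0%}")
```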
Legislators, regulators (such as the UK’s Equality and Human Rights Commission) and courts need to consider the methodology for ensuring and overseeing compliance with the right to non-discrimination with regard to AI. New tools may be necessary to detect discrimination, as AI systems operate differently and are generally more opaque than non-AI decision-making processes. To be able to review the operation of AI effectively, the law and the courts may have to take more account of statistical method as well as context, while also adopting more standardized thresholds where possible and appropriate. In parallel, AI developers need to ensure that automated decision-making matches its human equivalent by developing capacity to take account of a rich complexity of factors relevant to the circumstances of the individual. Legal and technical communities should work together to find adequate ways of reducing discrimination in algorithmic systems, including by embedding transparency and contextual approaches.
4.2.3 Autonomy
The challenges presented by AI
AI poses two principal risks to autonomy. First, empathic AI is developing the capacity to recognize and measure human emotion as expressed through behaviour, expressions, body language, voice and so on. Second, it is increasingly able to react to and simulate human emotion, with the aim of eliciting empathy from its human users. Empathic AI is beginning to appear in a multitude of devices and settings, from games and mobile phones to cars, homes and toys, and across industries including education, insurance and retail. Research is ongoing into how AI can monitor the mental and physical health of employees.
Some empathic AI has clear benefits. From 2022, EU law requires that new vehicles incorporate telematics for the detection of drowsiness and distraction in drivers. Besides the obvious safety benefits for drivers and operators of machinery, empathic AI offers assistive potential (particularly for disabled people) and prospects for improving mental health. Other possible enhancements to daily life range from recommended remedies for ailments to curated music streaming.
However, empathic AI also carries major risks. The science of emotion detection and recognition is still in development, meaning that, at present, any chosen labelling or scoring of emotion is neither definitive nor necessarily accurate. Aside from these concerns, empathic AI also raises significant risks of both surveillance and manipulation. The use of emotion recognition technology for surveillance is likely to breach the right to privacy and other rights – for example, when used to monitor employee or student engagement or to identify criminal suspects. More broadly, monitoring of emotion, as of all behaviour, is likely to influence how people behave – potentially having a chilling effect on the freedoms of expression, association and assembly, and even of thought. This is particularly the case where access to rights and benefits is made contingent on an individual meeting standards of behaviour, as for instance in China’s ‘social credit’ system.
Regarding manipulation, empathic AI blurs the line between recommendation and direction. Algorithms may influence individuals’ emotions and thoughts, and the decisions they make, without them being aware. The distinction between acceptable influence and unacceptable manipulation has long been blurred. At one end of the spectrum, nudge tactics such as tailored advertising and promotional subscriptions are commonly accepted as marketing tools. At the other, misrepresentation and the use of fake reviews are considered unacceptable and attract legal consequences. Between those extremes, the boundaries are unclear.
Retail and other commercial sectors are increasingly harnessing empathic AI technology. For example, just as advertising has long sought to take advantage of mood and feeling to promote sales, micro-targeting could be taken a step further by including emotion detection as one of its parameters, with the aim of persuading an individual to book a holiday or sign up for a therapy class, among other things. There are currently no parameters by which to assess the acceptable limits of influence, even as persuasive tactics edge further towards manipulation.
In social media, too, AI offers potential for emotional manipulation, not least when it comes to politics. In particular, the harnessing of empathic AI exacerbates the threat posed by campaigns of political disinformation and manipulation. The use of AI to harness emotion for political ends has already been widely reported, including through the deployment of fake or distorted material, often micro-targeted, to simulate empathy and inflame emotions. Regulation and other policies are now being targeted at extreme forms of online influence, but the parameters of acceptable behaviour by political actors remain unclear.
Empathic AI could have major impacts on all aspects of life. Imagine, for example, technology that alters children’s emotional development, or that tailors career advice to young people in an emotionally empathic manner that appears to expand choice but in fact limits it. Vulnerable groups, including minors and adults with disabilities, are particularly at risk. Researchers working on very large language models have argued for greater consideration of the risks of human mimicry and abuse of empathy that such models create.
The draft EU Artificial Intelligence Act would ban the clearest potential for manipulation inherent in AI by prohibiting AI that deploys subliminal techniques to distort people’s behaviour in a manner that may cause them ‘physical or psychological harm’. The Act would also limit the uses of individual ‘trustworthiness’ profiling. As most empathic AI involves the use of biometric data, it is likely to be subject to the Act’s enhanced scrutiny for ‘high-risk’ AI. However, empathic AI that operates on an anonymous basis may not be covered.
International human rights law
As well as privacy, human rights law protects autonomy. It protects the right to freedom of thought and the right to hold opinions without interference, as well as the better-known and -understood rights to freedom of expression, freedom of assembly and association, and freedom of conscience and religion. The EU Charter of Fundamental Rights also protects the right to ‘mental integrity’. Prior to recent technological developments, the rights to freedom of thought and opinion were underexplored. Further guidance is now emerging: for example, the UN special rapporteur on freedom of religion or belief has recently issued guidance on freedom of thought.
Children’s rights merit special consideration in this area. In addition to questions over privacy and the ability of minors to give consent when providing personal data, the UN Committee on the Rights of the Child has called for practices that rely on neuromarketing and emotional analytics to be prohibited from engaging directly or indirectly with children, and for states to prohibit the use of emotional analytics to manipulate or interfere with children’s right to freedom of thought and belief.
Application of human rights law to the challenges presented by AI
There are considerable concerns about the extent to which emotion recognition, capture and simulation may infringe human rights, in ways that are not necessary or proportionate to perceived benefits.
At present, challenges to autonomy are generally viewed through the prism of privacy and data protection. While this enables consideration of the impacts of surveillance, it is not a sufficient framework by which to consider issues of manipulation. Empathic AI can still be effective without capturing personal data – examples include billboards that adapt their advertising according to the reactions of people walking past, stores that adjust their marketing after capturing shoppers’ reactions in real time, and bots that reflect unnamed users’ emotions in order to influence their decision-making.
Initiatives to set limits on simulated empathy, such as the technical standard under development by the IEEE, ought to take account of the absolute nature of the rights to freedom of opinion and freedom of thought, as well as the right to mental integrity and the rights of the child. Further legislative and judicial consideration is needed to establish precisely what constraints human rights law imposes on potentially manipulative uses of AI, and precisely what safeguards it imposes to prevent the erosion of autonomy.
Meanwhile, some are reaching their own conclusions on empathic AI. For example, a coalition of prominent civil society organizations has argued that the EU’s Artificial Intelligence Act should prohibit all emotion recognition AI, subject to limited exceptions for health, research and assistive technologies. In June 2022, Microsoft announced that it would phase out emotion recognition from its Azure Face API facial recognition services. In that announcement, Microsoft cited the lack of scientific consensus on the definition of ‘emotions’, the difficulty of generalizing across diverse populations, and privacy concerns, as well as the potential for the technology to be misused for stereotyping, discrimination or unfair denial of services.
4.2.4 Equality: implementation of economic and social rights
International human rights law protects a wide range of economic and social rights, and provides an anchor for sustainable development. Just as AI offers opportunities to help achieve the UN Sustainable Development Goals (SDGs), so it offers significant potential to improve the implementation of rights such as those to education, health, social security and work. Equality is key to achieving this potential: not just through the avoidance of discrimination, but through AI that benefits all communities and through the provision of equal opportunity for all in accessing those benefits. Failure to realize such opportunities risks not only entrenching but exacerbating current social divisions.
Ideally, such provision would begin with research into AI technologies that would help to implement the SDGs, and funding for the development and rollout of those technologies. The challenges are to incentivize developments that benefit all communities, as well as those that are most profitable; and to ensure that no AI systems operate to the detriment of vulnerable communities.
4.2.5 Fairness and due process in decision-making
AI decision-making brings a risk that the ‘computer says no’ in respect of significant life decisions, without possibility of review or challenge. Aside from discrimination, this also raises questions as to fairness of process and quality of decision-making in AI systems. It concerns both whether the use made of AI to reach the decision was fair, and whether AI reached or contributed to a fair decision in the specific case – and if not, what the recourse might be.
In making decisions, AI may segment people by reference to a wide range of factors and without consideration as to whether segmentation is appropriate in the particular case. These factors may be unrelated to the decision in question, but decisions that treat some people unfairly in comparison to others may still result. For example, if a travel insurance provider were to double the premiums offered to people who had opted out of receiving unsolicited marketing material, it would not be discriminating on the basis of a protected characteristic. Its decision-making process would however be biased against those who have opted out.
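The travel insurance hypothetical above can be expressed as a short pricing rule. The sketch below is entirely invented – hypothetical customers, parameters and pricing logic – but it shows how a factor unrelated to risk (opting out of marketing) produces systematically different outcomes for otherwise identical customers, without any protected characteristic being used.

```python
# Illustrative sketch of the hypothetical above: a travel insurance quote that
# doubles the premium for customers who opted out of marketing. The customers,
# rates and adjustments are entirely hypothetical.
from dataclasses import dataclass

@dataclass
class Customer:
    age: int
    trip_days: int
    opted_out_of_marketing: bool

BASE_RATE_PER_DAY = 4.0  # hypothetical base premium per day of travel

def quote(customer: Customer) -> float:
    premium = BASE_RATE_PER_DAY * customer.trip_days
    if customer.age >= 65:
        premium *= 1.5               # risk-related adjustment
    if customer.opted_out_of_marketing:
        premium *= 2.0               # unrelated to risk: the unfair segmentation
    return round(premium, 2)

alice = Customer(age=40, trip_days=7, opted_out_of_marketing=False)
bob = Customer(age=40, trip_days=7, opted_out_of_marketing=True)

print(quote(alice))  # 28.0
print(quote(bob))    # 56.0 - double, although the risk profile is identical
```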
Where an individual’s human rights are affected by a decision made by a public authority, they should be able to seek a remedy and will usually be able to challenge the decision in public law – for example, by way of judicial review. Decision-making processes need to be sufficiently transparent to enable such review. Individuals should know who the decision-maker is and the factors on which the decision is based, and should be able to verify the accuracy of any personal data used in the process. There should be adequate human involvement or oversight – while acknowledging that human involvement may not be essential in every case and is not necessarily a failsafe.
International human rights law stipulates requirements for fairness in legal proceedings. Public and private law bases of challenge to decisions commonly reflect these requirements, and they can provide the basis for guidelines on minimum standards for transparency, human control and accountability through possibility of review for all AI activities.
4.2.6 Other rights
AI, used in different contexts, may have serious implications for the full range of human rights.
For example, the use of AI for content curation and moderation in social media may affect the rights to freedom of expression and access to information. The use of analytics to contribute to decisions on child safeguarding, meanwhile, may affect the right to family life. The use of facial recognition technology risks serious impact on the rights to freedom of assembly and association, and even on the right to vote freely. In extreme cases – for example, in weapons for military use – AI risks undermining the right to life and the right to integrity of the person if not closely circumscribed. In each of these areas, existing human rights can form the basis for safeguards delimiting the appropriate scope of AI activity.