Human rights have been wrongly overlooked in AI governance discussions. They offer clarity and specificity, international acceptance and legitimacy, and mechanisms for implementation, oversight and accountability.
In the 1940s, there was fervent belief that human rights would be central to world peace and to human flourishing, key not only to safeguarding humanity from catastrophe but to the enjoyment of everyday life. Supporters of the ‘vast movement of public opinion’ in favour of human rights at that time would be amazed at their relative absence from today’s debate on AI.
3.1 Human rights overlooked
AI governance has much to gain from a multidisciplinary (and potentially interdisciplinary) approach, drawing from, among others, philosophy, human rights law, science and technology studies, sociology, statistics, diverse impact assessment and audit practices, and stakeholder theory. However, with some exceptions, the human rights framework has been overlooked as an existing and flexible baseline for AI governance.
AI governance initiatives are often branded as ‘AI ethics’, ‘responsible AI’ or ‘value sensitive design’. Some of these initiatives, such as the Asilomar AI Principles, are statements drawn primarily from the philosophical discipline of ethics. Many are multidisciplinary statements of principle, and so may include human rights law as an aspect of ‘ethics’. For example, the UNESCO Recommendation on the Ethics of Artificial Intelligence lists ‘[r]espect, protection and promotion of human rights and fundamental freedoms and human dignity’ as the first of its ‘values’ to be respected by all actors in the AI system life cycle. And the Institute of Electrical and Electronics Engineers (IEEE)’s Standard Model Process for Addressing Ethical Concerns during System Design lists as its first ‘ethical principle’ that ‘[h]uman rights are to be protected’.
Many sets of AI governance principles produced by companies, governments, civil society and international organizations fail to mention human rights at all. Of those that do, only a small proportion (around 15 per cent) take human rights as a framework. Most national AI strategies do not engage with human rights in depth.
Why, then, are human rights not central to AI governance?
First, in many arenas, human rights are simply omitted from discussions on AI governance. Software developers and others in the AI industry generally do not involve anyone from the human rights community in discussions on responsible AI. There is a marked lack of human rights-focused papers or panels at the largest international conferences on responsible AI. Corporate-level discussion of AI ethics and their implementation often fails to refer to, or engage with, human rights. Job advertisements for corporate AI ethics specialists usually make no reference to human rights. Governments focused on AI ethics may not involve human rights lawyers in policy development until a late stage, if at all. In contrast, human rights are often the focus of civil society and academic discussions in different venues – and with different participants – to those where corporate and public sector AI governance decisions are made. Notable exceptions are discussions hosted by international organizations such as the UN and the Council of Europe, where human rights law forms a well-established shared lexicon; and the European Union, which has placed human rights at the core of the draft Artificial Intelligence Act.
Second, certain myths about human rights too often lead to their being disregarded by those involved in AI governance discussions. The following are some of the most common.
3.2 Myths about human rights
Myth 1. ‘Ethics holds all the answers’
Ethics and human rights are distinct disciplines with valuable, complementary roles to play in AI governance. Both share the rationale of curbing state and corporate power by acting as a bulwark for the interests of the individual, but they offer different means of reaching that end. Neither can substitute for the other or be considered to the exclusion of the other: the two disciplines must be considered together.
Ethics plays an important role in preceding and supplementing regulation. It has been the subject of much pioneering research and implementation in the field of AI governance. However, ethics is a branch of philosophy, not a system of norms: multiple versions are possible, and – despite, or perhaps exacerbated by, the efforts to draft so many sets of AI ethics principles – there is currently a lack of international consensus as to what precisely AI ethics entails. Significant differences of both substance and terminology between these sets of principles make it difficult for companies and public bodies to understand their responsibilities, and for individuals to know what standards to expect.
The malleability of ethics means that it is difficult for civil society to hold other actors to account. Some technology companies face criticism for so-called ‘ethics-washing’ undertaken for reputational purposes, and for exerting undue influence on some ethics researchers through funding. Courts and tribunals do not award remedies for breaches of ethical principles. Moreover, while ethical principles are intended to ensure that technology reflects moral values, a focus on ethics may dampen the appetite for legal regulation.
Although in some environments the branding of ‘ethics’ may be politically more palatable than that of human rights, what matters most is that human rights are considered at all – whatever the branding. To avoid conceptual confusion, human rights ought to be regarded as parallel to ethics rather than as a mere element of it. Any principles and processes of ethics should complement, rather than compete with, the existing human rights legal system. Conflicts between norms are damaging, as they undermine the legal certainty and predictability of regulation on which states, businesses and individuals rely.
Current popular support for AI ethics in principle, without a shared understanding of what AI ethics means in practice, has similarities with support for human rights in the 1940s. There was widespread support then for the concept of human rights, to prevent a repetition of the atrocities of the Second World War and to end domination and repression. However, there was no specific understanding or consensus on what exactly ‘human rights’ meant. Establishing agreement on the content of the Universal Declaration of Human Rights – and later the International Covenant on Civil and Political Rights and the International Covenant on Economic, Social and Cultural Rights – required worldwide canvassing, expert input, negotiation and political compromise. There is no evidence that reaching universal agreement on AI ethics without reference to the already-agreed human rights framework would be easier, or less politically charged, than those 20th-century debates.
Myth 2. ‘Human rights prevent innovation’
Human rights do not prevent innovation or undermine a ‘move fast and break things’ ethos, save that they entail compliance with minimum standards and therefore forbid certain egregious activities. Most innovators want a level playing field, and to avoid being undercut by actors with lower standards or being caught in a ‘race to the bottom’ with unscrupulous competitors. Innovators want to know how they can meet shared standards and inspire trust in their products. Human rights provide an appropriate basis for standards and processes internationally. For businesses, considering human rights from the outset of AI development and deployment may help to foster customer trust and minimize potential costs and time expended in litigation at a later stage.
Myth 3. ‘Human rights are complex and entail expensive legal advice’
While human rights can appear complex to non-specialists, initiatives such as the UN’s B-Tech project show how the technology industry and investors can implement their human rights responsibilities. Routine inclusion of human rights in computer science and coding training could reduce the perception of complexity. In reality, human rights are no more complex than any equivalent system of rules or principles: they consist of clear rules, with steps to be followed in implementing them. While novel situations will still pose challenges, human rights have been developed over many years and are inherently flexible to adapt to such challenges. In this way, human rights have answers for many situations, in terms of steps to follow or outcomes to reach.
A business trying to establish ethical credentials needs advice to do so effectively – this is the case whatever the source of the rules it follows. Following human rights standards means following relatively clear, existing rules and minimizing the chances of public censure or litigation for failure to comply.
Myth 4. ‘Human rights are about governments’
Human rights are not commonly part of the lexicon of AI developers and corporate ethics advisers – particularly outside the EU – because they are seen as regulating government, rather than corporate, activity.
While states are the primary bearers of duties under international human rights law, all companies have responsibilities to respect human rights. The Guiding Principles on Business and Human Rights of the Office of the UN High Commissioner for Human Rights (OHCHR), unanimously endorsed by the UN Human Rights Council (HRC) and General Assembly (UNGA) in 2011, state that governments are obliged to take reasonable steps to ensure that companies and other non-state actors respect human rights, and that companies have a responsibility to respect human rights in their activities worldwide, including through due diligence and impact assessment. Consideration of human rights impacts ought therefore to be a standard part of corporate practice.
However, the extent of corporate responsibilities is only patchily understood. This situation is changing gradually, as businesses find it in their interests to take account of human rights impacts. Increasingly, national laws, investors’ environmental, social and governance (ESG) or equivalent frameworks, and civil society and public pressure are obliging companies to give due regard to human rights. The European Commission’s proposed directive requiring mandatory human rights and environmental due diligence by larger companies based or active in the EU would be transformative, and should herald a consistent approach within the EU.
Myth 5. ‘Human rights are radical’
There are two dimensions to this particular myth. The first is that – in line with popular news coverage – human rights are relevant only to extreme situations, such as the treatment of criminals, immigrants or terrorists. This view is simply wrong: human rights are about everyday protection from harm and discrimination for every adult and child, about living free from state interference, and about being provided with basic entitlements. In democracies, most people have a general, unspoken assumption that their human rights will be respected: for example, that if arrested they will be treated with dignity; that if prosecuted they will be granted a fair trial in a language they understand; or that if voting their vote will be secret and will be counted fairly. Human rights routinely inform new legislation and policies, from data protection to social housing and social security. They are not often politically controversial, and are newsworthy only on the rare occasions when they are denied, or when they are portrayed as an obstacle to popular policies. The human rights law framework is not a radical philosophy, but a check and balance against discrimination or indignity in policy development.
The second dimension to this myth is a misconception that human rights are absolutist in nature: that, for example, they prohibit developments such as facial recognition technology. The desire for quick political soundbites encourages absolutist positions that can do human rights a disservice. The reality is more nuanced. For example, many civil society organizations currently assert that all facial recognition technology is contrary to human rights law. But this is shorthand for asserting that facial recognition as commonly configured (i.e. involving the mass capture and retention of personal data, and potentially discriminatory judgements made without regard to human rights considerations) is contrary to human rights law. In fact, human rights law does not lead to the conclusion that facial recognition, properly configured and constrained, should be banned where there are good reasons of safety or security for using it. Rather, in this case as elsewhere, human rights law balances rights and interests to reach nuanced judgements.
Myth 6. ‘Human rights are vague’
There is a perception that human rights norms are too vague to guide AI. For example, some advocates of ethics argue that human rights are unable to provide guidance when values conflict. These objections are largely unfounded. One strength of human rights law is its system for weighing competing rights and interests, whether the balance is to be struck between competing individual rights or with other collective or societal interests.
Many human rights are framed in terms that make this balancing explicit. For example, Article 21 of the International Covenant on Civil and Political Rights states that the right of peaceful assembly shall be subject to no restrictions ‘other than those imposed in conformity with the law and… necessary in a democratic society in the interests of national security or public safety, public order (ordre public), the protection of public health or morals or the protection of the rights and freedoms of others’. In considering whether this right has been violated, the UN Human Rights Committee will consider first whether there has been an interference and then, if so, whether that interference is lawful and both ‘necessary for and proportionate to’ one or more of the legitimate grounds for restriction listed in the article. UN human rights bodies and national and regional courts have developed extensive jurisprudence on the appropriate balancing of rights and interests, combining flexibility with predictability. These well-established, well-understood systems have repeatedly proven themselves capable of adaptation to new policy tools and novel situations. For example, the European Court of Human Rights (ECtHR) recently developed new tests by which to assess bulk interception of online communications for intelligence purposes.
The impact of AI is a novel but not insurmountable challenge, as emerging jurisprudence is already demonstrating. Indeed, one strength of international human rights law is its capacity to develop incrementally both as societal standards progress and in the face of new factual situations.
Myth 7. ‘Human rights get it wrong’
Some may consider that human rights protect the wrong values, apply protection in the wrong ways or are too rigid to adapt to technological or social developments. For example, some policymakers and academics have suggested that the individual right to privacy should be replaced or augmented by a concept of collective interest in appropriate data handling that is sensitive to the interests of minority groups. Group privacy may be a useful political concept in assessing the appropriate limits of state or corporate power resulting from mass collection and processing of data, but it cannot substitute for human rights law. Such claims underestimate the flexibility of human rights law and its processes, including due diligence and human rights impact assessment, to secure the protection of human rights for all rather than only for those who claim infringement. The right to privacy is capable of evolution in light of competing interests: it enables a balance to be struck between privacy and the public interest in data-sharing and accessibility, while safeguarding the interests of groups categorized as such by AI through its insistence on freedom from discrimination and on fairness and due process in decision-making. There may be scope for greater empowerment of data subjects and/or group enforcement of rights; but it would be rash to abandon many years of judicial interpretation and scholarship – including concerns about the displacement of individual rights by group rights – by adding new legal constructs or replacing existing rights with them.
Myth 8. ‘Human rights are organized around national models’
Human rights obligations are primarily owed by a state’s government to people within that state’s territory or jurisdiction. These jurisdictional limitations are under pressure: for example, UNGA has stressed that arbitrary surveillance and collection of personal data can violate various human rights, including when undertaken extraterritorially. Regarding businesses, the corporate responsibility to respect human rights applies in respect of all individuals affected by a company’s operations, regardless of location. In practical terms, businesses should consider their human rights responsibilities towards everyone impacted by their work, in any country.
Myth 9. ‘Human rights entail greater legal risk’
Human rights are legal rules, and so entail accountability through courts and tribunals. But that accountability does not hinge on whether an organization pays attention to human rights; it hinges on whether the organization has breached a rule of law. Considering human rights will not place a company or government at greater risk of human rights claims. On the contrary, addressing human rights issues should help to protect against potential claims.
3.3 What human rights have to offer
Human rights law provides a means to define the harm that AI should avoid. It focuses on the interests of each individual and addresses the most pressing societal concerns about AI, including non-discrimination, fairness and privacy. It provides an excellent starting point for assessing whether, and to what extent, AI is ‘for good’. Economic and social rights offer a basis for considering the societal distribution of AI’s potential benefits.
Human rights offer a framework for regulating AI that already exists as a system of international, regional and domestic law, commanding international legitimacy and providing a shared language across the world. This framework should be adopted in respect of AI not only because of its intrinsic merit, but because the current geopolitical stasis is likely to prevent effective multilateral cooperation on new normative frameworks. The focus of discussion should be not on whether human rights can or should be applied to AI, nor on potential alternatives, but on how the existing framework of human rights does apply in the field of AI. This is already the focus of international organizations at both regional and global level.
Human rights crystallize a set of ethical values into international norms. The system is not perfect, and was not created with AI in mind, but is a universally agreed blueprint for the protection of human values and the common good that has proven itself capable of adaptation to new circumstances. It avoids the need for fresh theoretical debates on the relative merits of different approaches. As a set of norms, human rights avoid the allegation – often levelled at ethics – of being vague and malleable enough to suit corporate interests.
Human rights are relatively clear. It is possible to list comprehensively the legally binding international, regional and domestic human rights obligations that apply in each country in the world. The meaning of those obligations is reasonably well-understood.
The human rights approach has proved relatively successful over more than 70 years, developing incrementally with the benefit of several generations of academic scholarship, governmental negotiation, civil society engagement and court rulings from many parts of the world. It has evolved in tandem with societal development, its impact gradually increasing without attracting widespread calls for abandonment or radical change.
Human rights provide processes and accountability as well as principles
Human rights law is accompanied by a vast range of practical tools for implementation, political oversight and legal accountability that are absent from ethics. Breaches of human rights entail legal as well as political avenues of redress. The international human rights framework includes a range of remedial mechanisms with practical effect, ranging from civil society advocacy, through domestic and international courts, to scrutiny by UN bodies and other states. In many parts of the world, violations of rights by government may be challenged in court with legally binding effect – acting as an important constraint on state power.
As companies and governments already have human rights commitments, their use of AI will be scrutinized by human rights mechanisms in any case, including through claims made to domestic courts in the event of alleged breach. Human rights have already formed the basis for high-profile rulings on, for example, image databases and uses of facial recognition technology.
Human rights have international acceptance and legitimacy
International human rights law benefits from a higher degree of international acceptance and legitimacy than any other system of values. Governments in every continent know and understand the core human rights treaties. Every state is party to at least some of them, and several have near-universal ratification. This remains the case despite an apparently waning commitment to the universality of human rights in the rhetoric of certain countries. Human rights have played a role, to a greater or lesser extent, in shaping the policies and activities of governments around the world.
UN processes affecting all states, such as the HRC’s Universal Periodic Review and the UN treaty bodies’ periodic examinations of states’ compliance, entail that every UN member state engages with the international human rights architecture. Regional treaties that have strong local support reinforce these UN instruments in some parts of the world. International human rights law has constitutional or quasi-constitutional status in many countries, notably in Europe, embedding it deep into systems of governance. Civil society uses the human rights law framework as a basis for monitoring state and corporate activities worldwide.
This international legitimacy has given human rights a significant role in the production of internationally negotiated sets of AI governance principles. For example, the OECD AI Principles call on all actors to respect the rule of law, human rights and democratic values throughout the AI system life cycle. As discussed previously, UNESCO’s Recommendation on the Ethics of Artificial Intelligence names human rights and fundamental freedoms as the first of the ‘values’ around which it is crafted. The Council of Europe’s Committee on Artificial Intelligence (CAI) is working on a potential legal framework for the development, design and application of AI, based on the Council’s standards on human rights, democracy and the rule of law. Although the universality of human rights is increasingly contested, there is still, to a large degree, a global consensus on the continued relevance of long-agreed human rights commitments.
Human rights achieve a balance between universality and sensitivity to national contexts
International human rights law offers a degree of discretion to governments as to how they implement each right, within certain parameters. This flexibility is known in Europe as the ‘margin of appreciation’ – now enshrined in the preamble to the European Convention on Human Rights (ECHR) – and has a similar effect in the UN human rights system. It varies according to the specific right in question and the impact of any interference: for example, human rights law offers governments no discretion in implementing bans on torture or slavery, but European human rights law permits governments a narrow margin of appreciation concerning general bans on protest, and a wider margin concerning whether to sanction protestors who intentionally disrupt ordinary life.
Human rights are necessary but not sufficient for AI governance
International human rights law may not currently address all the potential harms to people caused by AI. But it is adaptable to new circumstances and changing social norms: the ECHR, for example, is ‘a living instrument which… must be interpreted in light of present-day conditions.’ The UN secretary-general’s High Level Panel on Digital Cooperation has called for an urgent examination of how human rights frameworks apply in the digital age.
Human rights law may develop through new attention to existing rights. For example, the rights to freedom of thought and opinion are absolute. However, their parameters remain relatively unclear because they were largely taken for granted until challenged by the emergence of a technologically enabled industry of influence. Further, new contexts may lead to new understandings and formulations of rights. For example, explainability and human involvement – commonly discussed elements of AI ethics – are not usually considered elements of human rights, but might be found in existing requirements that individuals be provided with reasons for decisions made concerning them, and that they be able to contest those decisions and secure adequate remedies. The Council of Europe’s work on a potential convention is likely to clarify the application of human rights to AI, as human rights litigation is already beginning to do.
The development of human rights law and its subsequent interpretation take time, yet technology moves quickly. Human rights in their current form, while essential, are not sufficient to act as an entire system for the ethical management of AI. Human rights should rather be the starting point for normative constraints on AI, the baseline to which new rights or further ethical guardrails might appropriately be added, including any ethical principles that businesses or other entities may choose to adopt.
The second half of this paper explores the contributions of human rights in detail and concludes by recommending practical actions to place human rights at the heart of AI governance.