To place human rights at the heart of AI governance, companies, governments, international organizations, civil society and investors must take effective practical steps.
As AI begins to reshape the human experience, human rights must be central to its governance. There is nothing to fear, and much to gain, from taking human rights as the baseline for AI governance.
Failure to take account of human rights means setting aside well-established, widely acknowledged parameters of liberty, fairness and equality, as well as processes and accountability for their implementation. It involves creating confusing and inadequate alternatives to existing norms. It also duplicates much of the work of developing those norms, the processes for their implementation and the remedies for their breach.
If human rights are to be placed at the centre of AI governance, the following practical actions are necessary.
For companies:
- Continue to promote AI ethics and responsible business agendas, while acknowledging the important complementary role of existing human rights frameworks;
- Champion a holistic commitment to all human rights standards from the top of the organization. Enable a change of corporate mindset, such that human rights are seen as a useful tool in the toolbox rather than as a constraint on innovation;
- Recruit people with human rights expertise to join AI ethics teams to encourage multi-disciplinary thinking and spread awareness of human rights organization-wide. Use human rights as the common language and framework for multi-disciplinary teams addressing aspects of AI governance;
- Conduct human rights due diligence and adopt a human rights-based approach to AI ethics and impact assessment. Create decision-making structures that allow human rights risks to be monitored, flagged and acted upon on an ongoing basis;
- Ensure uses of AI are explainable and transparent, so that people affected can find out how an AI or AI-assisted decision was, or will be, made; and
- Establish a mechanism for individuals to seek remedy if they are dissatisfied with the outcome of a decision made or informed by AI.
For governments:
- Ensure adequate understanding of human rights among government officials and place human rights at the heart of AI regulation and policies, whether via a dedicated office or via existing mechanisms;
- Equip teams involved in government procurement of systems and services with expertise in AI and human rights. Use contracting policy and procurement conditions to increase compliance with human rights standards among businesses;
- Establish a discussion forum on AI governance that engages all stakeholders, including human rights advocates, to foster better understanding and mutual benefit from others’ perspectives;
- Ensure that technical standards bodies, AI assurance mechanisms and devisers of algorithmic impact assessment and audit processes give due regard to human rights when developing and monitoring standards for AI governance;
- Consider cross-cutting regulation to ensure that AI deployed by both the public and private sectors meets human rights standards;
- Put in place human rights-compatible standards and oversight for algorithmic impact assessments (AIAs) and audits, as well as adequate provision of remedy for alleged breaches;
- Educate the public on the vital role of human rights in protecting individual freedoms as AI technology develops. Offer guidance to schools and teachers so that children have an understanding of human rights before they encounter AI;
- Ensure that all uses of AI are explainable and transparent, such that people affected can find out how an AI or AI-informed decision was, or will be, made;
- Provide adequate resources for national human rights bodies and regulators, such as the UK's Equality and Human Rights Commission, to champion the role of human rights in AI governance. Ensure these bodies are included in discussions on emerging tech issues;
- Incentivize AI development that benefits society as widely as possible and contributes to implementation of the UN's Sustainable Development Goals (SDGs); and
- Liaise with other governments and international organizations with a view to harmonizing understanding of the impact of international human rights law on the development and implementation of AI (for example, through use of soft law and guidance).
For the UN and other international/regional organizations:
- Adopt consensus principles on AI and human rights that clarify the duties of states and responsibilities of companies in this field, as well as the requirements for remedy. Publish a sister document to the UN’s Guiding Principles on Business and Human Rights to outline these principles, accessible to all stakeholders including software developers and engineers;
- Establish a new multi-stakeholder forum that brings together the tech and human rights communities, as well as technical standards bodies, to discuss challenges around the interaction of human rights and technology, including AI. A regular, institutionalized dialogue would raise levels of understanding and cooperation on all sides of the debate, and would help prevent business exploitation of legal grey areas;
- Ensure, via the UN secretary-general’s envoy on technology, that all parts of the UN (including technical standards bodies and procurement offices) align with the OHCHR in placing human rights at the centre of their work on technology;
- Continue to promote UNESCO’s Recommendation on the Ethics of Artificial Intelligence, including the international human rights obligations and commitments to which it refers, facilitating knowledge-sharing and capacity-building to enable effective implementation in all states;
- Advance dialogue and coherent approaches to the implications of AI for human rights, via treaties or soft law, and support national governments in their governance of AI;
- Conduct human rights due diligence before deploying AI; and
- Integrate AI into development and capacity-building activities to accelerate implementation of the SDGs.
For civil society and academics:
- Push for inclusion in the AI governance conversation, including by fostering connections with the software development community and corporate public policy teams;
- Debunk human rights myths. Explain to a wide array of audiences (including business leaders, investors and governments) that human rights are reasonable, not radical; and that human rights do not stymie innovation but establish a level playing field in guarding against egregious development;
- Demonstrate the positive role of human rights as a regulatory system by reference to existing processes of human rights due diligence and remedy;
- Encourage inter-disciplinary engagement at universities and raise awareness of human rights in technology-focused studies – for example, by introducing human rights as an element of computer science degrees and coding ‘bootcamps’;
- Facilitate collaboration between civil society and the software development community on the development and use of AI to achieve the SDGs; and
- Test the implications of human rights for AI through strategic litigation.
For investors:
- Include assessment of the implications of AI for human rights in ESG or equivalent investment metrics.