Regulators and companies should follow human rights process requirements as they devise and implement AI governance processes.
5.1 Processes: the landscape
The processes that governments and companies should follow in order to meet AI governance standards are evolving rapidly.
5.1.1 Regulation
Governments are increasingly considering cross-sectoral regulation of AI on the basis that statutory obligations would help create a level playing field for safe and ethical AI and bolster consumer trust, while mitigating the risk that pre-AI regulation applies to AI in haphazard fashion. The EU is furthest along in this process, with its draft Artificial Intelligence Act that would ban the highest-risk forms of AI and subject other ‘high risk’ AI to conformity assessments. In the US, Congress is considering a draft Algorithmic Accountability Act. The British government, having considered the case for cross-cutting AI regulation, has recently announced plans for a non-statutory, context-specific approach that aims to be pro-innovation and to focus primarily on high-risk concerns.
While the British government, among others, has expressed concern that general regulation of AI may stifle innovation, many researchers and specialists make the opposite argument. Sector-specific regulation may not tackle AI risks that straddle sectors, such as the impact of AI in workplaces. Well-crafted regulation should only constrain undesirable activity, and should provide scope for experimentation without liability within its parameters, including for small companies. Moreover, it is argued that responsible businesspeople would rather operate in a marketplace regulated by high standards of conduct, with clear rules, a level playing field and consequent consumer trust, than in an unregulated environment in which they have to decide for themselves the limits of ethical behaviour. Most decision-makers in industry want to do things the right way and need the tools by which to do so.
In addition to regulating AI itself, there are also calls for regulation to ensure that related products are appropriately harnessed for the public good. For example, the UK-based Ada Lovelace Institute has called for new legislation to govern biometric technologies. Similarly, there is discussion of regulation of ‘digital twins’ – i.e. computer-generated digital facsimiles of physical objects or systems – to ensure that the vast amounts of valuable data they generate are used for public good rather than for commercial exploitation or even public control.
Some sector-specific laws are already being updated in light of AI’s expansion. For example, the European Commission’s proposal to replace the current Consumer Credit Directive aims to prohibit discrimination and ensure accuracy, transparency and use of appropriate data in creditworthiness assessments, with a right to human review of automated decisions. An analysis of legislation in 25 countries found that the number of pieces of primary legislation containing the phrase ‘artificial intelligence’ grew from one in 2016 to 18 in 2021, many of these specific to a sector or issue. Governments are also considering amendments to existing cross-sectoral regulation such as GDPR, which does not fully anticipate the challenges or the potential of AI.
5.1.2 Impact assessments and audit
The most rapid area of growth concerns algorithmic impact assessments (AIAs) and audits, which attempt to assess and manage ethical risks in the operation of algorithmic systems. While the terminology is not used consistently, AIAs tend to assess impact prospectively (i.e. before a system is in use), while audits are retrospective (i.e. looking back at a period of use).
A number of bodies are currently developing template risk assessments for use by creators or deployers of AI systems. For example, the US National Institute of Standards and Technology (NIST) has released a draft AI Risk Management Framework. The Singapore government is piloting a governance framework and toolkit known as AI Verify. The EU’s Artificial Intelligence Act would encourage conformity assessment against technical standards for high-risk AI. The British government is keen to see a new market in AI assurance services established in the UK, by which assurers would certify that AI systems meet their standards and so are trustworthy. The UK’s Alan Turing Institute has proposed an assurance framework called HUDERIA. Technical standards bodies are developing frameworks, such as the IEEE’s Standard Model Process for Addressing Ethical Concerns During System Design (IEEE 7000). There are academic versions, such as capAI, a conformity assessment process designed by a consortium of Oxford-based ethicists, and the European Law Institute’s Model Rules on Impact Assessment. There are also fledgling external review processes such as Z-Inspection.
Larger businesses have, meanwhile, established their own assessment processes. For example, Google conducts ethical reviews of AI applications it plans to launch. IBM has an AI Ethics Board providing centralized governance, review and decision-making. Rolls-Royce’s Aletheia Framework comprises a 32-step practical toolkit for organizations developing and deploying AI.
Typically, AIA processes invite AI developers, providers and users to elicit the ethical values engaged by their systems, refine those values and then assess their proposed or actual AI products and systems (both data and models) against those values, identifying and mitigating risks. Some models take a restrictive view of ethics, focusing primarily on data governance, fairness and procedural aspects rather than the full range of rights. A further tool proposed for data governance is data sheets or ‘nutrition labels’ that summarize the characteristics and intended uses of datasets, to reduce the risk of inappropriate transfer and use of those datasets.
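By way of illustration, the sketch below shows how such a ‘nutrition label’ might be captured as a simple structured record that travels with a dataset. It is a minimal, hypothetical example – the field names are not drawn from any official template – intended only to show how the characteristics and intended uses of a dataset could be recorded in machine-readable form.

```python
# Illustrative sketch only: a minimal dataset 'nutrition label' captured as a
# structured record. Field names are hypothetical, not taken from any template.
from dataclasses import dataclass, field, asdict
import json


@dataclass
class DatasetLabel:
    name: str
    description: str
    intended_uses: list[str]
    out_of_scope_uses: list[str]          # uses the data collectors advise against
    collection_method: str
    known_limitations: list[str] = field(default_factory=list)
    licence: str = "unspecified"
    contact: str = "unspecified"

    def to_json(self) -> str:
        """Serialize the label so it can be published alongside the dataset."""
        return json.dumps(asdict(self), indent=2)


if __name__ == "__main__":
    label = DatasetLabel(
        name="loan-applications-2021",                 # hypothetical dataset
        description="Anonymized consumer loan applications collected in 2021.",
        intended_uses=["research on credit-scoring fairness"],
        out_of_scope_uses=["individual creditworthiness decisions"],
        collection_method="export from an internal system; identifiers removed",
        known_limitations=["under-represents applicants with no credit history"],
        licence="internal use only",
        contact="data-governance@example.org",
    )
    print(label.to_json())
```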
Some governments are introducing impact assessments which are either mandatory or carry strong incentives for compliance. For example, Canada’s Directive on Automated Decision-Making requires Canadian government departments to complete and publish an AIA prior to production of any automated decision system. The US’s draft Algorithmic Accountability Act, proposed in Congress in 2019 and again in 2022, would require impact assessment of significant automated decisions taken by larger entities. In the UK, the Ada Lovelace Institute has published a detailed proposal for an AIA to be completed by any organization seeking professional access to the National Health Service (NHS)’s proposed National Medical Imaging Platform – the first known AIA for data access in a healthcare context.
While the identification and addressing of ethical risks is a positive step, these processes come with challenges. Risk assessment of AI can mean identifying and mitigating a broad range of impacts on individuals and communities – a task that is potentially difficult, time-consuming and resource-intensive. The identification and mitigation of ethical risks is not straightforward, particularly for teams whose prior expertise may be technical rather than sociological. Extensive engagement with stakeholders may be necessary to obtain a balanced picture of risks. Resourcing challenges are magnified for smaller companies.
Identification of risks may not even be fully possible before an AI system enters into use, as some risks may only become apparent in the context of its deployment. Hence the importance of ongoing review, as well as review at the design stage. Yet, once a decision has been made to proceed with a technology, many companies have no vocabulary or structure for ongoing discussion of risks. In cases where an AI system is developed by one organization and implemented by another, there may be no system for transferring the initial risk assessment to the recipient organization and for the latter to implement ongoing risk management.
Once risks have been identified, the models offer limited guidance on how to balance competing priorities, including on how to weigh ethical considerations against commercial advantage. Subtle calculations cannot easily be rendered into the simple ‘stop’ or ‘go’ recommendation typically required by corporate boards.
Similarly, the audit process presents challenges: auditors may require access to extensive information, including on the operation of algorithms and their impact in context. There is a lack of benchmarks by which to identify or measure factors being audited (such as bias), while audits may not take account of contextual challenges.
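To illustrate the measurement problem, the sketch below computes one metric sometimes used in bias audits – the gap in favourable-outcome rates between demographic groups, often called the demographic parity difference. It is a minimal, hypothetical example: the data, group labels and tolerance threshold are all assumptions, precisely because no agreed benchmark yet exists.

```python
# Minimal sketch of one possible audit check: demographic parity difference,
# i.e. the gap in favourable-outcome rates between demographic groups.
# The 0.1 tolerance used below is illustrative only; no agreed benchmark exists.
from collections import defaultdict


def demographic_parity_gap(outcomes: list[int], groups: list[str]) -> float:
    """Return the largest difference in favourable-outcome rates across groups."""
    positives, totals = defaultdict(int), defaultdict(int)
    for outcome, group in zip(outcomes, groups):
        totals[group] += 1
        positives[group] += outcome
    rates = [positives[g] / totals[g] for g in totals]
    return max(rates) - min(rates)


if __name__ == "__main__":
    # Hypothetical audit sample: 1 = favourable automated decision, 0 = unfavourable.
    outcomes = [1, 1, 0, 1, 0, 0, 1, 0]
    groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
    gap = demographic_parity_gap(outcomes, groups)
    print(f"demographic parity gap: {gap:.2f}")
    print("flag for review" if gap > 0.1 else "within illustrative tolerance")
```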
British regulators have identified various problems in the current AIA and audit landscape, including a lack of agreed rules and standards; inconsistency of audit focus; lack of access to systems being audited; and insufficient action following audits. There is often inadequate inclusion of stakeholder groups; a lack of external verification; and little connection between these emerging processes and any regulatory regimes or legislation. Recent UK research concluded that public sector policymakers should integrate practices that enable regular policy monitoring and evaluation, supported by institutional incentives and binding legal frameworks; set clear algorithmic accountability policies with a clear scope of algorithmic application; and ensure proper public participation and institutional coordination across sectors and levels of governance.
It may be that many algorithms designed without regard to human rights will fail AIAs or audits. As awareness of human rights grows, much current AI may therefore need adjusting. The Netherlands Court of Audit, having developed an audit framework, recently audited nine algorithms used by the Dutch government. It found that six of those nine failed to meet the requirements of the audit framework on such matters as privacy protection, absence of bias and governance processes.
Overall, without rigorous implementation of clear standards and external involvement or accountability, there is a risk of ‘ethics-washing’ rather than genuine mitigation of risks.
5.1.3 Prohibition
Governments and companies are beginning to prohibit forms of AI that raise the most serious ethical concerns. However, there is no consistency in such prohibitions and the rationale behind them is often not openly acknowledged.
For example, some US states have banned certain uses of facial recognition technology, although the technology remains in widespread use in other states. The EU’s Artificial Intelligence Act would prohibit certain manipulative AI practices and most use of biometric identification systems in public spaces for law enforcement purposes. Twitter decided to ban political advertising in 2019.
5.1.4 Transparency
A further approach is public transparency, through measures such as registries and the release of source code or algorithmic logic (the latter required in France under the Digital Republic Law). In November 2021, the UK government launched the pilot of an algorithmic transparency standard, whereby public sector organizations provide information on their use of algorithmic tools in a standardized format for publication online. Several government algorithms have since been made public as a result.
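As an illustration only, the sketch below shows what a standardized, machine-readable transparency record for a public sector algorithmic tool might look like. The organization, tool and field names are hypothetical and do not reproduce the UK standard’s actual schema.

```python
# Illustrative sketch of a standardized, machine-readable transparency record
# for a public sector algorithmic tool. The organization, tool and field names
# are hypothetical and do not reproduce the UK standard's actual schema.
import json

transparency_record = {
    "organisation": "Example Borough Council",         # hypothetical publisher
    "tool_name": "Housing Repairs Triage Assistant",   # hypothetical tool
    "purpose": "Prioritize repair requests by predicted urgency.",
    "role_in_decision": "decision support; final decision taken by a caseworker",
    "data_sources": ["historical repair tickets", "property condition surveys"],
    "human_oversight": "caseworker can override every recommendation",
    "risks_and_mitigations": [
        {"risk": "older housing stock deprioritized",
         "mitigation": "quarterly review of outcomes by property age band"},
    ],
    "contact": "transparency@example.gov",
}

# Publishing the record as JSON keeps it consistent across organizations and
# easy to aggregate into a public register.
print(json.dumps(transparency_record, indent=2))
```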
5.1.5 Procurement conditions
There is likely to be a rapid growth in the imposition of conditions in the sale of algorithmic systems, particularly where purchasers such as governments and local authorities will be seeking to use those systems in the public interest. Authorities are likely to impose contractual conditions requiring the system to respect stipulated criteria on such matters as bias and transparency. For example, the City of Amsterdam has developed contractual terms requiring suppliers of AI and algorithmic systems to meet standards of explainability and transparency, including on what data is used and how bias is counteracted. Such conditions imposed by the public sector may have the effect of driving up standards more widely.
5.2 Processes: human rights law
5.2.1 Governmental duty to protect against breaches
Governments have a duty both to comply with human rights in any uses of AI they adopt – for example, in public decision-making – and to protect individuals from abuses of human rights by companies and other non-state actors. States must take ‘appropriate steps to prevent, investigate, punish and redress such abuse through effective policies, legislation, regulations and adjudication’.
Governments are expected to find the appropriate mix of laws, policies and incentives to protect against human rights harms. A ‘smart mix’ of national and international, mandatory and voluntary measures would help to foster business respect for human rights. This includes requiring companies to have suitable corporate structures to identify and address human rights risk on an ongoing basis, and to engage appropriately with external stakeholders as part of their human rights assessments. Where businesses are state-owned, or work closely with the public sector, the government should take additional steps to protect against human rights abuses through management or contractual control.
Governments’ human rights obligations mean that they cannot simply wait and see how AI develops before engaging in governance activities. They are obliged to take action, including via regulation and/or the imposition of impact assessments and audits, to ensure that AI does not infringe human rights. Governments should ensure that they understand the implications of human rights for AI governance, deploying dedicated capacity-building efforts or establishing a technology and human rights office where a gap exists.
There is an urgent need for governments to devise regulation that is both effective in ensuring that companies do not infringe individuals’ human rights when designing and implementing AI systems, and that provides for effective remedies in the event of any such infringement. Given the ambiguity of commitments to ethics and the strength of countervailing commercial considerations, a purely voluntary approach is unlikely to protect individuals’ human rights adequately. Indeed, some argue that states are obliged to enact legally binding norms to protect human rights in light of the challenges posed by AI systems. Governments should regulate to either prohibit or require constraints on applications of AI, such as biometric technologies, that risk interfering with human rights in a manner clearly disproportionate to any countervailing legitimate interest.
Governments should ensure that AIA and audit processes are conducted systematically, employing rigorous standards and due process, and that such processes pay due regard to potential human rights impacts of AI: for example by making assessment of human rights risks an explicit feature of such processes. To incentivize corporate good practice, demonstrate respect for human rights and facilitate remedy, states should also consider requiring companies to report publicly on any due diligence undertaken and on human rights impacts identified and addressed.
Supervision by regulatory and administrative authorities is an important element of accountability for compliance with human rights responsibilities, in parallel with legal liability for harms. As some European countries and the EU begin to implement mandatory human rights and environmental due diligence obligations for larger businesses, human rights experts are exploring administrative supervision of corporate duties as a complement to liability for harms in the courts.
Governments have legal obligations not to breach human rights in their provision of AI-assisted systems. Anyone involved in government procurement of AI should have enough knowledge and information to understand the capacity and potential implications of the technology they are buying, and to satisfy themselves that it meets required standards on equality, privacy and other rights (such as the Public Sector Equality Duty in the UK). Governments should negotiate the terms of public–private contracts and deploy procurement conditions to ensure that AI from private providers is implemented consistently with human rights. They should also take steps to satisfy themselves that this requirement is met. Public procurement is a means of encouraging improvements to human rights standards in the AI industry as a whole. It is important also to ensure that AI systems already adopted comply with human rights standards: the experience of the Netherlands demonstrates that systems adopted to date can be problematic.
5.2.2 Corporate responsibility to respect human rights
The UN’s Guiding Principles on Business and Human Rights are clear that ‘business enterprises should respect human rights’. In other words, companies (particularly large ones) should avoid infringing human rights and should address any adverse human rights impacts resulting from their activities. Companies should have a policy commitment to meet their human rights responsibilities, approved at senior level, publicly available and embedded in the culture of the business. Companies must also have an ongoing due diligence process of human rights impact assessment, tracked for responsiveness and reported externally, which allows them to identify, mitigate and remedy human rights impacts. By pursuing a responsible business agenda and identifying and mitigating risks, companies can forestall problems and save themselves the time, money and acrimony of litigation.
Due diligence in the AI context is particularly challenging because of two distinguishing features. First, AI’s capacity for self-improvement may make it difficult to predict its consequences. Second, AI’s human rights impact will depend not only on the technology itself, but also on the context in which it is deployed. In light of both these factors, due diligence on AI applications that may affect human rights must be extensive and involve as wide a set of stakeholders as may be affected by the AI. Further, given the risk of unanticipated consequences, AI must be reviewed regularly once in operation. Hence, the former UN high commissioner for human rights called for comprehensive human rights due diligence to be conducted ‘when AI systems are acquired, developed, deployed and operated’, with that due diligence to continue ‘throughout the entire life cycle of an AI system’ and to include consultations with stakeholders and involvement of experts. At present, many companies lack structures and processes to detect and act on human rights issues on an ongoing basis. The former UN high commissioner also called for the results of due diligence to be made public.
Some companies’ AIAs are explicitly labelled as human rights assessments, such as Verizon’s ongoing human rights due diligence. Other AI ethics assessments, such as that adopted by the IEEE and the proposed AIA for the National Medical Imaging Platform, look similar to human rights due diligence but are not labelled as such. Google reviews proposals for new AI deployment by reference to its AI Principles, a process that can include consultation with human rights experts.
Whatever the labelling, certain features of human rights impact assessment are commonly omitted from corporate processes:
- Transparency. General statements of corporate intention and activity are easier to find than public statements of human rights risks actually identified and mitigated through due diligence processes.
- Scope. Some corporate processes only cover specific issues, such as bias and privacy, rather than the full range of human rights, or make only brief mention of other rights.
- Effect. It is often not clear what effect impact assessments have on the company’s activities. Human rights due diligence requires that human rights risks be mitigated, whereas some business processes seem to entail balancing risks against perceived benefits.
- Duration. Human rights due diligence includes a requirement for ongoing review post-implementation, whereas many corporate reviews appear to focus only on product development. Ongoing review is particularly important in light of AI’s capacity for self-improvement over time. Otherwise, there is a risk that assessments give algorithmic processes a veneer of legitimacy rather than genuinely having an impact on activities. This risk is amplified when there is no transparency about the process, its results or impact.
In addition to ensuring the adequacy of their impact assessment processes from a human rights perspective, companies should foster a pro-human rights culture throughout their organization. This means ensuring that AI teams are representative of society’s diversity and the diversity of intended consumers, such that equality is ‘baked in’ to system design. It means engaging adequate internal and external expertise to conduct human rights due diligence and impact assessments, including through involvement of stakeholders, and commitment at board level to addressing human rights impacts identified. It also means public reporting of any human rights risks and impacts identified and measures taken. It may mean providing training on human rights for all those working on AI – including technical experts, engineers and devisers of technical standards. It must include ongoing monitoring of human rights impacts over time and preparedness to address new concerns that may arise.