AI has the capacity to transform human life – both for better and for worse.
AI is increasingly present in our lives, and its impact will expand significantly in the coming years. From predictive text to social media news feeds to virtual home and mobile phone voice assistants, AI is already a part of everyday life. AI offers automated translation, assists shoppers buying online and recommends the fastest route on the drive home. It is also a key component of much-debated, rapidly developing technologies such as facial recognition and self-driving vehicles.
There is no single agreed definition of AI: it is a general term referring to machines’ evolving capacity to take on tasks requiring some form of intelligence. The tasks that AI performs can include generating predictions, making decisions and providing recommendations. This means that AI may make decisions itself, or provide information for use in human decision-making.
AI systems are algorithmic – the algorithm being the computational process or set of rules that the computer follows to calculate a result. To learn, AI generally relies on synthesising and making inferences from large quantities of data. It is the machine’s capacity to learn by itself how to do tasks better, rather than simply following instructions, that distinguishes AI from traditional computer programmes. Contrary to popular myth, this capacity for self-improvement does not place AI beyond constraint: self-learning systems can still be bound by rules.
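The distinction between following fixed instructions and learning from data can be illustrated with a minimal sketch (the function names, the data and the toy linear model are illustrative only, not drawn from any real system): a traditional program applies a rule written in advance by its programmer, while a learning system infers its own rule from example data.

```python
# A traditional program: the rule is written by the programmer in advance.
def rule_based(x):
    return 2 * x + 1  # fixed instruction; the program never changes it

# A "learning" program: the rule is inferred from example data.
# Here, ordinary least squares fits a straight line to observed (x, y) pairs.
def learn_rule(data):
    n = len(data)
    mean_x = sum(x for x, _ in data) / n
    mean_y = sum(y for _, y in data) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in data)
             / sum((x - mean_x) ** 2 for x, _ in data))
    intercept = mean_y - slope * mean_x
    return lambda x: slope * x + intercept

# Given examples consistent with y = 2x + 1, the learner recovers the rule
# without it ever being written down explicitly.
predict = learn_rule([(1, 3), (2, 5), (3, 7)])
print(predict(4))  # → 9.0
```

Real AI systems fit vastly more complex models to far larger datasets, but the principle is the same: the behaviour of the system is shaped by the data it learns from rather than spelled out line by line in advance.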
Governments are among the largest adopters of AI, deploying it to assist in making decisions that can have major consequences for the lives of individual citizens. For example, governments are using AI to assist with decisions on entitlement to immigration status, welfare benefits, school entry and priority vaccinations. They are adopting it to assist with provision of justice, in both civil and criminal processes. And they may be using AI to assist in delivery of critical infrastructure and national security.
AI is likely to pervade almost every domain of human activity, and to become increasingly important as technology evolves towards greater interoperability, including through the development of the metaverse. This paper discusses general features of AI, but by no means diminishes the need for parallel sector-specific discussion. The use of AI in the healthcare system, in social media or in the criminal justice process, for instance, each raises specific human rights issues that need to be addressed in context, alongside the overarching issues discussed here.
2.1 What potential does AI hold for human rights and the common good?
Due to its speed and its power of self-learning, AI has the capacity to transform our societies. It can operate faster – and potentially better – than any human. It can achieve scientific breakthroughs, calculate fair distributions and outcomes, and make more accurate predictions.
AI holds enormous potential to enable human development and flourishing. For example, AI is accelerating the battle against disease and mitigating the impact of disability; it is helping to tackle climate change and optimise efficiency in agriculture; it can assist distribution of humanitarian aid; it has enormous potential for improving access to, and quality of, education globally; and it can transform public and private transport. AI could help to ensure that policing is fair and respectful of human dignity. It may make workplaces more productive, reduce the load of manual labour and help developed countries to manage the challenges of an ageing population. To give one specific example, the AI programme AlphaFold is predicting the structures of both human and animal proteins with tremendous speed and remarkable accuracy, with potentially transformative effects on medical treatments, crop science and plastic reduction.
In short, when properly managed, AI can enable delivery of the UN’s Sustainable Development Goals (SDGs) by the 2030 deadline, boost the implementation of economic, social and cultural rights worldwide, and support improvements in many areas of life.
To achieve these aims, AI must be harnessed for the good of all societies. Doing so involves not only goodwill, but also ensuring that commercial considerations do not entirely dictate the development of AI. Provision of funding for AI research and development outside the commercial sector will be invaluable, as will access to data for AI developers such that they may generate applications of AI that benefit people in all communities.
Just as the industrial revolution brought progress at the expense of upheaval in traditional ways of living, so will AI bring change to our societies. Work must be done now to mitigate the risk of negative impacts. Governments must anticipate and manage the changes that widespread use of AI will herald. They must consider both the implications of AI for their own public policymaking, which may be subject to judicial review, and how to govern a society in which AI is increasingly being developed by the private sector and becoming a feature of life for the world’s population. This includes governance not only of AI itself but of its implications for current ways of life. For example, governments should address the risk that AI will upend current practices and norms in the workplace, through mass unemployment and an erosion of employees’ bargaining power relative to employers. Governments should be taking active steps to ensure the benefits of AI are distributed equitably, avoiding the division of society into ‘winners’ and ‘losers’ from emerging technology. To preserve and promote the public interest, governments must not allow companies to develop AI in a policy and regulatory vacuum.
2.2 What are the key human rights and ethical challenges posed by AI?
Evidence abounds of problematic uses of AI. At one end of the spectrum, AI is being deliberately used as a tool of suppression: for example, the Chinese government’s use of AI to conduct mass surveillance of its Uyghur minority. Some types of AI could be used deliberately to limit people’s freedom to express themselves and to meet with others, to monitor the general public for compliance with behavioural rules, to detect ‘suspicious behaviour’ or to restrict access to society’s benefits to a privileged few.
Many AI tools infringe human rights as a collateral consequence of their operation. AI risks embedding and exacerbating bias and discrimination, invading privacy, reducing personal autonomy and making society more, rather than less, unequal. For example, AI sentencing tools may discriminate against minorities, potentially turning back decades of progress towards equality. AI in healthcare may harm human health if algorithms are incorrect or biased, while AI in welfare provision or migration may make unfair decisions on eligibility. AI tools may infer sensitive information about individuals in violation of their privacy.
Even an AI tool designed with the intention of implementing scrupulous standards of fairness may fail if it cannot replicate the complex range of factors and the subtle, context-specific nature of human decision-making. Unchecked, AI systems tend to exacerbate structural imbalances of power and to disadvantage the most marginalised in society.
Further, some AI tools may have outputs detrimental to humanity through their potential to shape human experience of the world. For example, AI algorithms in social media may, by distorting the availability of information, manipulate audience views in violation of the rights to freedom of thought and opinion, or prioritize content that incites hatred and violence between social groups. AI used to detect aptitudes or to select people for jobs, while intended to broaden human horizons and ambition, risks doing the opposite. Without safeguards, AI is likely to entrench and exaggerate social divides and divisions, distort our impressions of the world and thus have negative consequences for many aspects of human life. These risks are amplified by the difficulty of identifying when AI fails, for example when it is malfunctioning, manipulative, acting illegally or making unfair decisions. At present, companies rarely make public their identification of mistakes or errors in their AI. Consumers therefore cannot see which standards have been met.
Finally, AI may entrench and even exacerbate social divides between rich and poor, worsening the situation of the most vulnerable. As AI development and implementation is largely driven by the commercial sector, it risks being harnessed for the benefit of those who can pay rather than to resolve the world’s most significant challenges, and risks being deployed in ways that further dispossess vulnerable communities around the world.