Challenges of AI

What are the practical, legal and ethical implications of artificial intelligence (AI) and how can regulation help meet these challenges?

Explainer, updated 19 January 2023

This article explains the challenges associated with the funding, development, supply and regulation of artificial intelligence (AI). It deals with narrow AI, that is, task-specific systems and applications.

The article is not concerned with artificial general intelligence (AGI), a future form of AI that could meet or exceed the full capabilities of the human mind.

Definition of AI

There is no universally accepted definition of AI, but in the UK’s Industrial Strategy White Paper, AI is defined as ‘technologies with the ability to perform tasks that would otherwise require human intelligence’. 

AI makes decisions using algorithms that either follow rules written by humans or, in the case of machine learning, analyse large quantities of data to identify and follow patterns. Because machine learning models derive their own rules and patterns from data, often through many layers of processing, they are opaque compared with traditional rule-following computing.
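
To make the distinction concrete, the sketch below contrasts a hand-written rule with a 'model' whose cut-off is derived from past data. The loan-approval scenario, figures and function names are invented for illustration; real machine learning systems learn far more complex patterns, which is precisely why their decisions are harder to inspect.

```python
# Minimal sketch contrasting rule-based and learned decision-making.
# The loan-approval scenario and all figures are hypothetical.

def approve_by_rule(income: float, debt: float) -> bool:
    """Rule-following approach: a human writes the decision logic explicitly."""
    return income > 30_000 and debt / income < 0.4

def fit_threshold(history: list[tuple[float, bool]]) -> float:
    """'Learning' approach: derive a cut-off from past outcomes.

    Here the 'model' is just the midpoint between the average income of
    previously approved and rejected applicants; real machine learning
    extracts far richer patterns, which makes its decisions harder to explain.
    """
    approved = [inc for inc, ok in history if ok]
    rejected = [inc for inc, ok in history if not ok]
    return (sum(approved) / len(approved) + sum(rejected) / len(rejected)) / 2

history = [(45_000, True), (52_000, True), (18_000, False), (24_000, False)]
learned_cutoff = fit_threshold(history)

print(approve_by_rule(35_000, 10_000))  # decision follows a stated rule
print(40_000 > learned_cutoff)          # decision follows a learned pattern
```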

Today AI applications are common in many economic activities including online shopping and advertising, web search, digital personal assistants, language translation, smart homes and infrastructure, health, transport and manufacturing. 

Risks and benefits of AI

AI has the potential to bring huge advantages, for example in medical science, education, food and aid distribution, more efficient public transport and in tackling climate change. 

Used well, it could help humanity meet the UN’s 2030 Sustainable Development Goals and make many processes swifter, fairer and more efficient. It is a technology which is likely to be as transformative to human history as was the Industrial Revolution.

However, there are serious ethical, safety and societal risks associated with the rapid growth of AI technologies. 

Will AI be a tool that makes rich people richer? Will it exaggerate bias and discrimination? Will AI decision-making create a less compassionate society? Should there be limits to what decisions an AI system can take autonomously, from overtaking a car on the motorway to firing a weapon?

And if AI goes wrong – for example if a self-driving car has an accident – who should be liable? 

To ensure AI is used safely and fairly, up-to-date and rigorous regulation is needed. 

Regulation of AI

AI creates serious regulatory challenges due to the way it is funded, researched and developed.  

The private sector drives progress in AI, and governments mostly rely on big tech companies to build their AI software, furnish their AI talent, and achieve AI breakthroughs. In many respects this is a reflection of the world we live in, as big tech firms have the resources and expertise required.

However, without government oversight the future application of AI’s extraordinary potential will be effectively outsourced to commercial interests. That outcome provides little incentive to use AI to address the world’s greatest challenges, from poverty and hunger to climate change.

Government policy on AI

Currently governments are playing catch-up as AI applications are developed and rolled out. Despite the transnational nature of this technology, there is no unified policy approach to AI regulation, or to the use of data. 

It is vital that governments provide ‘guardrails’ for private sector development through effective regulation. But this is not yet in place, either in the US (where the largest amount of development is taking place) or in most other parts of the world. This regulation ‘vacuum’ has significant ethical and safety implications for AI. 

Some governments fear that imposing stringent regulations will discourage investment and innovation in their countries and lose them a competitive advantage. This attitude risks a ‘race to the bottom’, where countries compete to minimize regulation in order to lure big tech investment. 

The EU and UK governments are beginning to discuss regulation but plans are still at an early stage. Probably the most promising approach to government policy on AI is the EU’s proposed risk-based approach. It would ban the most problematic uses of AI, such as AI that distorts human behaviour or manipulates citizens through subliminal techniques. 

And it would require risk management and human oversight of AI that poses high risk to safety or human rights, such as AI used in critical infrastructure, credit checks, recruitment, criminal justice, and asylum applications.

Meanwhile, the UK is keen to see the establishment of an AI assurance industry that would provide kitemarks (or the equivalent) for AI that meets safety and ethical standards.

Despite these policy developments, there remain fundamental questions about how to categorize and apply risk assessments, what an AI rights-based approach could look like, and the lack of inclusivity and diversity in AI.

AI ethical issues

AI has serious ethical implications. Because an AI system develops its own learning, those implications may not be evident until it is deployed. The story of AI is littered with ethical failings: privacy breaches, bias, and AI decision-making that could not be challenged.

It’s therefore important to identify and mitigate ethical risks while AI is being designed and developed, and on an ongoing basis once it is in use. 

But many AI designers work in a competitive, profit-driven context where speed and efficiency are prized and delay (of the kind implied by regulation and ethical review) is viewed as costly and therefore undesirable. 

Designers may also lack the training, tools or capacity to identify and mitigate ethical issues. Most come from an engineering or computing background and do not reflect the diversity of society.

Shareholders and senior management will also naturally be hostile to criticism which could affect profits.

Once an AI application has been designed, it is often sold to companies to fulfil a task (for example, sifting employment applicants) without the buyer being able to understand how it works or what risks may come with it.

Ethical frameworks for AI

Some international bodies have made efforts to create an ethical framework for AI development, including UNESCO’s Recommendation on the Ethics of Artificial Intelligence, and the IEEE’s Global Initiative on Ethics of Autonomous and Intelligent Systems. And some companies have developed their own ethical initiatives.

But these frameworks overlap with one another, differ in their detail and are voluntary. They set out principles for creating ethical AI, but provide no accountability in the event that an AI system goes wrong.

Ethical roles in the AI industry are a potentially important new profession, but the field is underfunded and under-resourced. There is widespread agreement that ethics is important, but a lack of consensus on how it should be enforced.

Government use of AI

It’s equally important that the way governments use AI is understood, consensual and ethical, complying with human rights obligations. Opaque practices by governments may feed the perception of AI as a tool of oppression. 

China has some of the clearest regulation of the private AI industry in the world, but the way its government has deployed AI tools in the surveillance of citizens has serious civil liberties implications.

China’s exports of AI to other countries are increasing the prevalence of government surveillance internationally.

Privacy and AI

Probably the greatest challenge facing the AI industry is the need to reconcile AI’s need for large amounts of structured or standardized data with the human right to privacy. 

AI’s ‘hunger’ for large data sets is in direct tension with current privacy legislation and culture. Current law in the UK and Europe limits both the potential for sharing data sets and the scope of automated decision-making. These restrictions limit the capacity of AI.

During the COVID-19 pandemic, there were concerns that it would not be possible to use AI to determine priority allocation of vaccines. (These concerns were allayed on the basis that GPs provided oversight of the decision-making process.)

More broadly, some AI designers said they were unable to contribute to the COVID-19 response due to regulations that barred them from accessing large health data sets. It is at least feasible that such data could have allowed AI to offer more informed decisions about the use of control measures like lockdowns and the most effective global distribution of vaccines.

Better data access and sharing are compatible with privacy, but require changes to current regulation. The EU and UK are considering what adjustments to their data protection laws are needed to facilitate AI while protecting privacy.

Bias in AI

Cases of bias have continually arisen in AI applications. 

Facial recognition is an area where research has highlighted significant risks of bias and discrimination. Many such systems have been trained on culturally biased image data sets representing mostly Caucasian male faces.

Google’s image databases, for example, have been shown to be US- and Western-centric, and are often accused of reinforcing racist and sexist stereotypes.

As a consequence of latent bias in most common datasets, rates of inaccuracy and false identification are significantly higher for non-Caucasian groups and for women.

Another case of AI bias has seen unions take legal action against ride-hailing firm Uber. The case alleges racial bias in Uber’s driver verification software, leading to the unfair dismissal of drivers. Some studies have shown that facial recognition software has a lower success rate with darker skin tones.

Additionally, AI bias can manifest when AIs are used as proxies in important decision-making.

In 2018 Amazon was forced to admit that its AI recruitment tool was deeply flawed. The AI had been developed to efficiently filter job applicants by observing patterns in resumes submitted to the company over the previous decade.

Most of these historic applications came from men, leading the AI to effectively teach itself that applications by males were superior to those submitted by females.

It is highly unlikely that bias can be entirely eliminated from AI. Historical data will always incorporate inbuilt prejudices, and society will never be entirely free of bias. Finally, attempts to over-compensate and remove data points may result in new, unforeseen discrimination. The best way to manage bias is therefore to subject AI models to continuous review.
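
Continuous review can be as simple as regularly measuring how a model's error rates differ between groups and flagging disparities above an agreed tolerance. The sketch below assumes a hypothetical hiring model and a small, labelled audit sample; the function, data and 10-percentage-point threshold are invented for illustration.

```python
# Hypothetical bias audit: compare a model's error rates across groups.
# The model, audit data and tolerance threshold are invented for illustration.
from collections import defaultdict

def audit_error_rates(predictions, labels, groups):
    """Return the error rate for each group in the audit sample."""
    errors, counts = defaultdict(int), defaultdict(int)
    for pred, label, group in zip(predictions, labels, groups):
        counts[group] += 1
        if pred != label:
            errors[group] += 1
    return {g: errors[g] / counts[g] for g in counts}

# Example audit sample: model predictions vs. reviewed "correct" outcomes.
preds  = [1, 0, 1, 0, 0, 1, 0, 0]
truth  = [1, 0, 1, 1, 0, 1, 1, 0]
groups = ["men", "men", "men", "women", "men", "men", "women", "women"]

rates = audit_error_rates(preds, truth, groups)
if max(rates.values()) - min(rates.values()) > 0.10:  # agreed tolerance
    print("Disparity detected, review the model:", rates)
```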

AI and climate change

AI could have both positive and negative impacts on the environment.   

AI has beneficial applications in efforts to reduce carbon emissions, helping to design lower carbon manufacturing methods, ‘smart’ power grids and more efficient infrastructure.  

But AI is also a carbon emitter, requiring considerable computing power. There are concerns that current machine learning models are using and storing more and more data, generating significant carbon emissions and electricity costs in the process.

AI practitioners are attempting to better consider emissions in their algorithm design, but it is difficult to measure the exact carbon impact of the infrastructure around the development and deployment of AI. 
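
One rough way practitioners approximate a training run's footprint is to multiply hardware power draw by training time and by the carbon intensity of the local grid. The figures below are illustrative placeholders, not measurements, and the calculation ignores much of the real impact (hardware manufacturing, data storage and inference at scale).

```python
# Back-of-the-envelope estimate of training emissions.
# All numbers are illustrative placeholders, not real measurements.

gpu_count = 8                 # accelerators used for training
power_per_gpu_kw = 0.3        # average draw per device, in kilowatts
training_hours = 72           # wall-clock training time
pue = 1.5                     # data-centre overhead (cooling, networking)
grid_kgco2_per_kwh = 0.4      # carbon intensity of the local electricity grid

energy_kwh = gpu_count * power_per_gpu_kw * training_hours * pue
emissions_kg = energy_kwh * grid_kgco2_per_kwh

print(f"~{energy_kwh:.0f} kWh, ~{emissions_kg:.0f} kg CO2e for this training run")
```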

Environmental responsibility remains largely absent from ethical concerns regarding AI systems. So far, most AI strategies focus on harnessing the economic potential of AI rather than safeguarding the environment. 

AI and social media

The algorithms that decide what we see in our social media feeds are forms of AI. They are largely driven by the commercial interests of advertisers, measured through metrics like clicks.

In pursuit of clicks, social media firms have the power, and arguably the incentive, to use AI to not only predict their users’ behaviour but to influence it too – shaping the terms upon which people discover goods and services, and even the way they participate in political debate. 
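
A heavily simplified sketch of such a feed-ranking step is shown below. The posts, scores and field names are invented; the point is only that ranking purely on predicted engagement will systematically push the most provocative content to the top.

```python
# Toy feed ranking driven purely by predicted engagement.
# Posts, scores and field names are invented for illustration.

posts = [
    {"title": "Local council publishes budget report", "predicted_clicks": 0.02},
    {"title": "You won't BELIEVE what this politician said", "predicted_clicks": 0.11},
    {"title": "Measured analysis of the new policy", "predicted_clicks": 0.03},
]

def rank_feed(posts):
    """Order posts by predicted clicks alone, with no quality or diversity signal."""
    return sorted(posts, key=lambda p: p["predicted_clicks"], reverse=True)

for post in rank_feed(posts):
    print(post["title"])
# The sensational item ranks first, not because it is accurate or useful,
# but because the objective being optimized is engagement.
```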

There is a risk that AI algorithms in social media intensify existing bias, prioritize sensational, extreme and misleading content and shelter users from alternative viewpoints.

They have the capacity to manipulate our personal and political viewpoints, and so to distort our democracies and our societies.

It’s important for everyone to be alive to this risk and so to be careful how much weight they attach to what they read online. Both regulators and companies are seeking ways to minimize these risks, through new laws, processes and oversight. The media and civil society have important roles in fact-checking and providing authentic and diverse news, comment and viewpoints.

How to build trust in AI

It will be hard to build trust in AI while there is no regulatory framework governing its funding, design and use, and while so many challenges around ethics, bias and privacy remain unresolved.

Public trust has already been eroded in big technology firms’ use of data. Scandals like Cambridge Analytica’s harvesting of Facebook profiles have shown how data can be used as a powerful tool of manipulation, which AI will only make more efficient.

Other threats are less well-known or debated. But global action to establish clear, effective regulation and accountability could help build confidence in the safe and ethical use of AI. 

AI remains an expert-led domain. The debate on its risks and benefits is dominated by technical and legal communities, and citizens are often unaware of common applications and when an AI has been involved in a transaction or choice. 

There is therefore an urgent need to launch an inclusive dialogue to raise awareness about how this technology is used today and to collectively define where AI should or should not be deployed. 

In parallel, the benefits associated with safe and well-regulated AI systems should be clearly understood by all. Trust in AI will be best delivered by clear regulation informed by an engaged and consulted public.