This article explains the challenges associated with the funding, development, supply and regulation of artificial intelligence (AI). It deals with narrow AI, that is, systems and applications that are task-specific.
The article is not concerned with artificial general intelligence (AGI), a hypothetical future AI that could meet or exceed the full capabilities of the human mind.
Definition of AI
There is no universally accepted definition of AI, but in the UK’s Industrial Strategy White Paper, AI is defined as ‘technologies with the ability to perform tasks that would otherwise require human intelligence’.
AI makes decisions using algorithms that either follow rules written by people or, in the case of machine learning, review large quantities of data to identify and apply patterns. Because machine-learning systems often comprise many layers of processing and derive their own patterns from data rather than following explicit instructions, they are opaque compared with traditional rule-following computing.
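To make that distinction concrete, the short sketch below contrasts a decision taken by an explicit, auditable rule with one taken from a threshold learned from data. It is a minimal, hypothetical illustration: the loan-approval scenario, function names and figures are assumptions for the example, not drawn from this article.

```python
# Hypothetical sketch: rule-following vs pattern-learning decisions.

# Rule-following approach: the decision logic is written out explicitly,
# so a human can read and audit it line by line.
def approve_by_rule(income: float, debt: float) -> bool:
    return income > 30_000 and debt / income < 0.4

# Machine-learning approach, reduced to a single learned threshold:
# the decision boundary comes from historical examples rather than from
# instructions written by a person, so its reasoning is harder to inspect.
def learn_income_threshold(examples: list[tuple[float, bool]]) -> float:
    approved = [income for income, ok in examples if ok]
    rejected = [income for income, ok in examples if not ok]
    # Place the boundary midway between the average approved and rejected incomes.
    return (sum(approved) / len(approved) + sum(rejected) / len(rejected)) / 2

history = [(20_000, False), (28_000, False), (45_000, True), (60_000, True)]
threshold = learn_income_threshold(history)

print(approve_by_rule(40_000, 10_000))  # True - transparent, auditable rule
print(50_000 > threshold)               # True - but this boundary was learned from data
```

Real machine-learning systems stack many such learned parameters in successive layers, which is why their decisions are far harder to explain than the single threshold shown here.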
Today AI applications are common in many economic activities including online shopping and advertising, web search, digital personal assistants, language translation, smart homes and infrastructure, health, transport and manufacturing.
Risks and benefits of AI
AI has the potential to bring huge advantages, for example in medical science, education, food and aid distribution, more efficient public transport and in tackling climate change.
Used well, it could help humanity meet the UN’s 2030 Sustainable Development Goals and make many processes swifter, fairer and more efficient. It is a technology likely to be as transformative for human history as the Industrial Revolution.
However, there are serious ethical, safety and societal risks associated with the rapid growth of AI technologies.
Will AI be a tool that makes rich people richer? Will it exacerbate bias and discrimination? Will AI decision-making create a less compassionate society? Should there be limits to what decisions an AI system can take autonomously, from overtaking a car on the motorway to firing a weapon?
And if AI goes wrong – for example if a self-driving car has an accident – who should be liable?
To ensure AI is used safely and fairly, up-to-date and rigorous regulation is needed.
Regulation of AI
AI creates serious regulatory challenges due to the way it is funded, researched and developed.
The private sector drives progress in AI, and governments mostly rely on big tech companies to build their AI software, furnish their AI talent, and achieve AI breakthroughs. In many respects this is a reflection of the world we live in, as big tech firms have the resources and expertise required.
However, without government oversight the future application of AI’s extraordinary potential will be effectively outsourced to commercial interests. That outcome provides little incentive to use AI to address the world’s greatest challenges, from poverty and hunger to climate change.
Government policy on AI
Currently, governments are playing catch-up as AI applications are developed and rolled out. Despite the transnational nature of this technology, there is no unified policy approach to AI regulation or to the use of data.
It is vital that governments provide ‘guardrails’ for private sector development through effective regulation. But this is not yet in place, either in the US (where the largest amount of development is taking place) or in most other parts of the world. This regulation ‘vacuum’ has significant ethical and safety implications for AI.
Some governments fear that imposing stringent regulations will discourage investment and innovation in their countries and lose them a competitive advantage. This attitude risks a ‘race to the bottom’, where countries compete to minimize regulation in order to lure big tech investment.
The EU and UK governments are beginning to discuss regulation, but plans are still at an early stage. Probably the most promising approach to government policy on AI is the EU’s proposed risk-based approach. It would ban the most problematic uses of AI, such as AI that distorts human behaviour or manipulates citizens through subliminal techniques.
And it would require risk management and human oversight of AI that poses high risk to safety or human rights, such as AI used in critical infrastructure, credit checks, recruitment, criminal justice, and asylum applications.
Meanwhile, the UK is keen to see the establishment of an AI assurance industry that would provide kitemarks (or the equivalent) for AI that meets safety and ethical standards.
Despite these policy developments, there remain fundamental questions about how to categorize and apply risk assessments, what an AI rights-based approach could look like, and the lack of inclusivity and diversity in AI.
AI ethical issues
AI has serious ethical implications. Because AI develops its own learning, those implications may not be evident until it is deployed. The story of AI is littered with ethical failings: privacy breaches, bias, and AI decision-making that could not be challenged.
It’s therefore important to identify and mitigate ethical risks while AI is being designed and developed, and on an ongoing basis once it is in use.
But many AI designers work in a competitive, profit-driven context where speed and efficiency are prized and delay (of the kind implied by regulation and ethical review) is viewed as costly and therefore undesirable.
Designers may also not have the training, tools or capacity to identify and mitigate ethical issues. The majority are from engineering or computing backgrounds, and do not reflect the diversity of society.
Shareholders and senior management will also naturally be hostile to criticism which could affect profits.
Once an AI application has been designed, it is often sold to companies to fulfil a task (for example, sifting employment applicants) without the buyer being able to understand how it works or what risks may come with it.
Ethical frameworks for AI
Some international bodies have made efforts to create an ethical framework for AI development, including UNESCO’s Recommendation on the Ethics of Artificial Intelligence, and the IEEE’s Global Initiative on Ethics of Autonomous and Intelligent Systems. And some companies have developed their own ethical initiatives.
But these frameworks overlap, differ in their detail and are voluntary. They set out principles for creating ethical AI, but provide no accountability in the event that an AI goes wrong.
Ethical roles in the AI industry are a potentially important new profession, but the field is underfunded and under-resourced. There is widespread agreement that ethics is important, but a lack of consensus on how it should be enforced.
Government use of AI
It’s equally important that the way governments use AI is understood, consensual and ethical, and complies with human rights obligations. Opaque practices by governments may feed the perception of AI as a tool of oppression.
China has some of the clearest regulation of private-sector AI in the world, but the way the government has deployed AI tools in the surveillance of its citizens has serious civil liberties implications.
China’s exports of AI to other countries are increasing the prevalence of government surveillance internationally.
Privacy and AI
Probably the greatest challenge facing the AI industry is reconciling AI’s need for large amounts of structured or standardized data with the human right to privacy.
AI’s ‘hunger’ for large data sets is in direct tension with current privacy legislation and culture. Current law in the UK and Europe limits both the potential for sharing data sets and the scope of automated decision-making. These restrictions are limiting the capacity of AI.
During the COVID-19 pandemic, there were concerns that it would not be possible to use AI to determine priority allocation of vaccines. (These concerns were allayed on the basis that GPs provided oversight of the decision-making process.)
More broadly, some AI designers said they were unable to contribute to the COVID-19 response due to regulations that barred them from accessing large health data sets. It is at least feasible that such data could have allowed AI to offer more informed decisions about the use of control measures like lockdowns and the most effective global distribution of vaccines.
Better data access and sharing are compatible with privacy, but require changes to existing regulation. The EU and UK are considering what adjustments to their data protection laws are needed to facilitate AI while protecting privacy.