AI governance must balance creativity with sensitivity

Artificial intelligence (AI) can transform lives, but policy and regulation must ensure it benefits all rather than maintaining the status quo for a privileged few.

Expert comment
Published 7 June 2023

Yasmin Afina

Former Research Fellow, Digital Society Initiative

The mass adoption of generative AI – programmes designed to generate ‘new’ content – in some parts of the world has sparked important debates on the responsible development and deployment of this technology, and how to mitigate its risks.

Just as Rishi Sunak is set to discuss the UK’s role in developing international regulation of AI with Joe Biden, a number of industry leaders have co-signed a one-sentence statement warning of the ‘risk of extinction from AI’.

While regulatory responses have been slow compared to the mass deployment of generative AI, policies and legislative processes are starting to take shape across many jurisdictions.

But despite the unprecedented and transnational nature of this technology, common approaches to its governance are still few and far between, with solutions mainly conceptualized in line with domestic or regional policy frameworks.

Different approaches are starting to emerge

The European Union (EU) is about to adopt the much-anticipated EU AI Act which complements its already significant body of tech regulations, such as the recent Digital Services and Digital Markets Acts.

Taken together, these regulations reveal the characteristically generalist, careful, and protective approach EU lawmakers take to tech regulation, in contrast to their US and UK counterparts, whose approach is more sector-based and self-regulatory (i.e. industry-led rather than government-led).


Similarly, on oversight and enforcement, the EU’s proposed establishment of a centralized body (the ‘AI Board’) reflects its more prescriptive approach, compared to the more decentralized approaches of the US and the UK.

The UK’s stance is laid out in the government’s policy paper released in early 2023, as well as in its broader international technology strategy. The latter spotlights ‘the role of interoperable non-regulatory governance tools, such as assurance techniques and technical standards, in enabling the development and deployment of responsible AI’.

This discrepancy in regulatory approaches raises a number of important questions about implementation and enforcement, particularly given that a high concentration of AI research is in the US.

Given the US’s more light-touch approach to risk assessments, compliance with ethical and legal considerations may come as an afterthought rather than being embedded in the technology’s development and testing phases.

There are also concerns regarding inclusion, particularly of those most at risk from AI technology, such as ethnic minorities, populations in developing countries, and other vulnerable groups.

Such trends risk spilling beyond US borders and creeping into AI labs across the world, especially as AI technologies are usually designed for deployment at massive scale, and there is the ‘arms race’ dynamic to consider as well.

Governance exists, but needs reworking 

It is important that proposed approaches to governing these technologies account for the unique nature of AI, as traditional governance alone may not work for such a general-purpose and disruptive technology.

Existing regulatory frameworks, notably human rights, provide a strong foundational basis to address these issues, but they require a measure of creativity to fill in the gaps, provide clarity, and yield robust and equitable governance tools.

New domestic and international institutions may be needed to monitor and govern this technology moving forward. And policymakers must show creative thinking and open-mindedness as they seek to address both the novel issues AI technologies present and the deep-rooted societal issues exacerbated by their development and deployment.

Because of AI’s experimental nature, agile and continuous monitoring mechanisms are needed. In line with the open letter signed by major stakeholders in this space, industry must not prioritize the rapid and premature commercialization of AI products at the expense of safety, ethical, and human rights considerations.

One proposed approach to striking a balance between risk mitigation and commercialization is a model like that of the US Food and Drug Administration (FDA): staged product launches informed by technical understanding and risk assessments, coupled with continuous oversight and auditing to monitor and evaluate direct and side effects.

Such an approach enables a holistic review of the risks stemming from AI technologies and their (unintended) consequences, ranging from social justice concerns, such as OpenAI’s reported exploitation of underpaid Kenyan workers to filter traumatic content from ChatGPT’s training datasets, to intellectual property issues and the environmental costs of developing and running these technologies.


Policymakers must also consider, early on, any adequacy issues arising from diverging approaches to AI regulation. More research, such as the latest report from the International Centre of Expertise in Montreal on AI (CEIMIA), is needed to draw out lessons and best practices across jurisdictions on how to ensure legislation and frameworks complement, rather than compete with, one another.

Possible avenues could include bilateral and multilateral agreements, both binding and non-binding, laying out shared understandings and mechanisms for resolving contentious situations.

Finally, there is a need for new organizations to monitor AI development across sectors, foster cross-pollination across applications, and build on success stories of AI governance. Notable examples include how the research and education sector is grappling with the copyright, plagiarism, and transparency issues raised by ChatGPT, and how cities approach transparency and democratic oversight in deploying AI.


To facilitate such discussions, one approach could be a body similar to the Intergovernmental Panel on Climate Change (IPCC), with a scientific mandate to inform states of current knowledge and advance possible responses. Such an intergovernmental process could help the policymaking community keep abreast of technological progress and devise well-grounded, evidence-based responses in such a noisy and highly dynamic space.

The will for regulation is strong

AI is everywhere all at once but, unlike with previous technological frenzies such as the rise of social media, governments across the world are taking concrete steps to regulate the development and deployment of AI early in the technology’s rapid evolution.

High-profile players in the industry are also showing support for regulation, flagging personal concerns, and engaging in the debate, a rather uncommon phenomenon in the tech space.

Countries and communities may each have their own unique relationship with AI, but it is important to maintain a set of universal guarantees and human rights safeguards for all, especially those most at risk from AI development. A creative but sensitive, realistic, and scientifically grounded approach to governing AI is a vital next step.