Future AI regulation needs to be global in reach yet agile enough to allow each jurisdiction to tailor laws to local circumstances. The Council of Europe’s new AI treaty offers a binding framework for ensuring AI regulation upholds existing standards on human rights, democracy and the rule of law – not just in Europe, but in all countries that share the same values.
A generation-defining technology and its challenges
Artificial intelligence (AI) is not in itself a new phenomenon. AI is already at the core of most of our everyday digital tools, including social media platforms, anti-virus software, virtual assistants and navigation software. But the rapid rise of new ‘generative AI’ models – which can produce various types of content, including text, imagery, audio and video – has captured the headlines, leading to alarmed calls from some quarters for caution and even for bans on AI’s use. Suddenly, AI isn’t just running in the background: it is a disruptive power in need of attention.
New technologies often transform societies and economies, and may demand new governance models. The industrial revolutions of the 18th to 20th centuries offer parallels to what we are witnessing today. Then, too, the reaction to new technologies was sudden and mixed, ranging from euphoria to panic. In 1832, home-based textile workers in the Swiss region of Zürich set fire to a mechanical weaving factory for fear of losing their jobs. Wilhelm II, the last German kaiser, is quoted as saying ‘the car has no future, I believe in the horse’ – even as enthusiasts chased cars in the street, delighting in the smell of exhaust fumes and oil.
Data has often been described as the ‘new oil’ of the digital revolution. To extend the metaphor, perhaps AI systems are its ‘new engines’: machines that process data to power applications, with the promise (or threat) of automating or replacing repetitive or laborious cognitive work. This includes highly skilled work, from writing and translation to marketing and decision-making in specialized fields. The implications are clear: like generation-defining technologies of the past, AI-driven tools will drastically change societies and economies, eliminating some professions and creating new ones. They will shift the balance of economic and political power and challenge existing orders, both locally and globally. And, as with any technological revolution, they will produce not only winners but also losers.
The long-term risks of AI development are still uncertain. But two decades of digital technology, data capture and machine processing all point to change across almost all industries. Traditional service providers and products will be squeezed or forced out of the market by newer, more efficient ones. Automated decision-making can be opaque, and if decisions are no longer comprehensible or predictable, challenges around the rule of law, liability and autonomy are likely to emerge. First movers in AI-driven fields may establish dominant market positions through economies of scale, building on the emergent data monopolies of the past decade. Should data-rich and resourceful private companies continue to lead the way, there is a risk that society will become more dependent on powerful tech giants for stewardship of education, social services and healthcare provision, or for management of complex transport systems and energy flows.
Regulating AI, regulating the uses of AI
Whether addressing short-term risk or long-term uncertainty, well-conceived AI regulation is a necessity. The question now is what effective AI regulation might look like. And for that, we might draw lessons from one of the last great technological upheavals: the development of the internal combustion engine.
Today, no single ‘engine law’ regulates all aspects and impacts of engines. Rather, over time we have created a sophisticated system of technical, legal and social norms that regulate the use of engines depending on context. The focus of regulation is mostly not on the engines themselves, but on the machinery they power and the risks associated with its uses. We have separate regulations for the people who operate machinery, and further rules for the fuels and infrastructure involved. These multiple sets of rules vary according to the domain of application, and they can also differ from country to country. They necessarily take into account different cultural tolerances of risk, different degrees of aversion to regulation, and different views of the respective roles of the state and the individual in managing risk. In addition, the extent to which rules are harmonized between jurisdictions depends significantly on the area of application: for example, local road traffic regulation varies far more widely than international air traffic regulation.
However, this analogy is far from perfect. AI systems have many properties quite unlike those of physical engines. They are digital tools that can proliferate quickly, can be copied at will, and can be transported more or less instantaneously across national borders. Above all, they can evolve and learn. Their functioning is more abstract than that of engines, and at the same time more complex. Sometimes not even the creators of AI systems understand what the systems do or how they produce their results. Moreover, the same AI systems can be used in very different contexts and for different purposes.
Any set of rules for AI that seeks to do justice to the nature of AI, and to be appropriate to the risks, must therefore be just as dynamic and agile as the technology itself. AI technologies pose global issues across states and regions, and therefore require a concerted response. But a ‘concerted’ or harmonized response is not the same as a uniform one: we may need to develop a system of technical, legal and cultural norms for AI applications that is at least as differentiated as those for engines. These norms will need to be based not solely on the technical features and capabilities of each system, but also on the risks associated with its application in any specific context.
The need for a common framework
When discussing the need for ‘new’ rules of the game for AI, we must not forget that existing national and international norms – including those protecting fundamental rights, human dignity and democracy – are applicable to new technologies.
For a number of years, international organizations such as the OECD, the Council of Europe, UNESCO and the International Telecommunication Union (ITU) have worked on AI to understand its challenges and identify regulatory gaps, and have developed various soft law instruments accordingly. For instance, since 2018 the Council of Europe has developed soft law instruments on, inter alia, the use of AI in the judicial system and the human rights impacts of algorithmic systems. Since 2021, the EU has also worked on an AI Act. Designed to regulate AI in the internal market of the EU while respecting fundamental human rights and democracy, the act was approved by the Council of the EU in May 2024. It contains, among other protections, special safeguards for some general-purpose (‘horizontal’) systems capable of being adapted to many uses, and for AI tools and applications deemed high-risk.
While the AI Act is a milestone in its own right within the EU, equally significant has been a parallel push by the Council of Europe to establish the building blocks of a global regulatory regime for AI. From 2019 to 2021, the Council’s Ad Hoc Committee on Artificial Intelligence (CAHAI) examined the feasibility and potential elements of a legal framework covering the development, design and application of AI. This work drew on multi-stakeholder consultations and was informed by the Council’s own standards on human rights, democracy and the rule of law as they pertain to AI, as well as by equivalent standards elsewhere. On the basis of the CAHAI’s findings, in June 2022 the Committee of Ministers of the Council of Europe mandated the Committee on Artificial Intelligence (CAI) – a new committee superseding the CAHAI – to negotiate a binding international agreement on the development, design and use of AI. The terms of this mandate required the framework to be based on the Council’s existing norms on human rights, democracy and the rule of law while also being conducive to innovation.
A common framework with a global reach
From the beginning, the ambition of the Council and its member states had been to develop not just a legal framework for Europe, but the first legally binding international AI treaty of global reach. The idea was that such a treaty, though European in origin, would be open to any countries that uphold the principles of human rights, democracy and the rule of law.
This ability of a European treaty to shape global AI governance is supported by precedent in other domains. Council of Europe frameworks have a history of success: the Convention on Cybercrime (2001), ‘Convention 108’ on data protection (1981) and its revised version, ‘Convention 108+’ (2018), provide exemplary global vehicles for cooperation among some 100 states. Such instruments are binding intergovernmental agreements which democratic states around the world can sign up to.
Input from diverse stakeholders shaped the drafting of the AI framework convention from the outset. A number of non-European states participated in its early development, while others joined later during the negotiations. (By the time the CAI agreed a draft treaty on 14 March 2024, after 19 months of intense negotiations, the list of official non-European participants consisted of Argentina, Australia, Canada, Costa Rica, Israel, Japan, Mexico, Peru, the United States and Uruguay.) The CAI also included observers from civil society, academia, the business sector and the technical community. Reflecting the rationale that individual states would need the approval of their parliaments to ratify the convention, a subgroup of potential future state parties was created and tasked with drafting the articles of the convention. Drafts were presented and explained in plenary sessions to all stakeholders, who could submit written and oral comments and propose text changes before and after every drafting group meeting. In this way, all stakeholders provided wide-ranging input and feedback on the draft text until the very end of the negotiations.
Contrary to some expectations, the negotiating parties intended neither to create substantive new human rights nor to undermine the scope and content of existing applicable protections. Instead, they agreed a set of legally binding obligations and principles under which each party’s existing commitments in respect of human rights, democracy and the rule of law would be applied to the new challenges raised by AI. Agreement by all parties on the need for a graduated and differentiated approach was important for ensuring that any future regulation and related measures would address, and be proportionate to, context-specific risks and impacts.
It was also clear that a framework convention designed to set the tone for AI governance in many jurisdictions over the coming decades could never anticipate, and was not intended to regulate, all aspects of AI in detail. Rather, it was (and is) meant to be supplemented – as in the earlier analogy of engines – by further technical, legal and sociocultural norms governing specific uses of AI in specific contexts and countries. These norms will need to be developed and adapted continuously in each country or jurisdiction.
In addition to bridging gaps between legal systems, one of the biggest challenges during the negotiations for the AI convention had been to manage the expectations of some European states and civil society actors on regulation of the private sector in important non-European nations. Governments and civil society in Europe needed to realize that it is not possible simply to transfer the European system and its unique logic – based on the European Convention on Human Rights and the European Court of Human Rights in Strasbourg – to a global instrument. In order for the new AI convention to become an instrument of global reach, it needed to leave as much flexibility as possible for potential future parties to implement its principles while remaining compliant with their own national legal and regulatory frameworks. The more flexible the convention, in other words, the more countries would likely be able (and willing) to accede to it. Notwithstanding these factors, consensus on a set of core principles remained essential for brokering agreement and ensuring the alignment of signatories.
Despite several critical moments during the negotiations, when it seemed impossible to bridge the differences between the expectations of some European states and stakeholders and the realities in other countries, in the end the will and commitment on all sides to draw up an agreement with a global reach prevailed. The Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law was adopted in Strasbourg on 17 May 2024. The convention obliges all future parties to address the risks from activities by both public and private actors within the lifecycle of AI, taking into account the respective roles and responsibilities of all stakeholders. It gives parties the flexibility to meet their obligations under the convention according to their own domestic legal and institutional frameworks. A periodic reporting mechanism will cover the measures taken by each signatory; this should both increase the accountability of states and help to ensure a dynamic approach to AI in the future. The convention’s follow-up mechanism will also offer new opportunities for cooperation with states that have not yet ratified the treaty – this will further contribute to its potential global reach.
Beyond a common framework
Yet while establishing a common language and a binding commitment to shared values and fundamental principles is a necessary first step for regulating AI globally, it is not a sufficient one. New technologies demand agile and adaptive approaches to their governance. New AI applications are emerging every day. The boundaries between the state and the private sector, between the national and the international, between different sectors, and even between science and business are becoming increasingly blurred. Beta versions and trial applications of AI are almost certain to have seismic effects on societies, and the speed of change may present difficulties for rigid and slow-moving decision-making bodies. Many of society’s governance mechanisms have barely changed in decades or even centuries, and are reaching their limits in keeping pace with the evolution of digital technology.
Debate continues on the forms that AI governance and regulation might best take. Solutions could include the use of observatory models, risk mitigation, standards or watchdogs, among other options. However, this author’s instincts, and the evidence from other successes in technology governance, suggest that best practice might include the following elements: interdisciplinary, multi-stakeholder processes; the establishment of sector- and application-specific regulatory priorities; and dynamic and agile legislative and executive processes that embrace the logic of the digital revolution rather than rejecting it. Digital governance must develop in smaller and faster steps, perhaps through regulatory ‘updates’ or ‘releases’ – similar to those seen in software – that respond to technical developments as they arise. We may even need to use AI systems themselves to develop regulatory frameworks that can cope with AI.
Whatever options are considered, given the fast-evolving and transnational nature of AI, collective governance requires shared international values and norms as well as binding commitments to respect and live up to them. The Council of Europe convention on AI provides a compelling route to get us there.