The European Parliament today passed its landmark AI Act – a sweeping piece of legislation targeting the risks posed by the fast-moving technology. It threatens an outright ban on artificial intelligence (AI) applications that carry unacceptable risks for the safety, livelihoods and rights of EU citizens (examples include cognitive behavioural manipulation, social scoring and biometric identification).
It also places significant obligations on the use of AI in ‘high-risk’ applications, such as health, critical infrastructure, border control, education, justice and the everyday services relied on by European citizens. The law will apply to businesses operating in the EU and, critically, to the tech giants behind the AI products used by Europeans every day.
Following the Digital Markets and Digital Services Acts in 2022, the AI Act is the last technology-related legislation passed under the 2019–24 European Parliament and Commission as part of their mission to create a ‘Europe fit for the Digital Age’. It concludes a mandate characterized by matching increased scrutiny of tech with efforts towards innovative digital policymaking.
The big question is whether the so-called Brussels effect will be felt in AI – whether the new regulation will have global consequences for the development of this technology.
There is an important precedent of EU regulation having an impact beyond EU borders. The 2016 General Data Protection Regulation (GDPR) gradually led to worldwide change as platforms rolled out compliance globally. Debate remains over how significant its impact on privacy has been, as internet users instead drowned in a wave of opt-in consent popups, but the European regulation’s global reach is unquestionable. Within two years, global technology giants like Meta and Microsoft had updated their services, and privacy standards and awareness are now par for the course in most jurisdictions.
Perhaps above all else, the critical role played by EU regulation globally is in raising the profile of its subject. The AI Act will do precisely that. As AI filters into everyday life, its application in surveillance, health, education and law enforcement will be more closely scrutinized as a result of the EU’s decision to flag the risks. Whether for governments looking for solutions or citizens looking for recourse, the AI Act will shine a clear light on some of the risks associated with AI applications.
The EU’s new AI Office will also help lead global change, though perhaps not through its enforcement powers. Sceptics have pointed out that initial estimates for its budget come to less than half the £100m committed to the UK’s AI Safety Institute.
But in its ambition to break with tradition and focus on hiring serious technology talent (at serious technology salaries) – ‘Oppenheimers’, in the words of one of the act’s architects – it joins the UK in prototyping a novel institution for the governance of fast-moving technology. Both institutes could become the global standard for technology regulation, and their networking could lead to greater global coordination on AI.
Lastly, the act is a clear statement that Europe believes it can both regulate AI and remain open for business. Significant debate around the bill has focused on the risk of hamstringing the continent’s AI businesses. The act has changed time and again to accommodate the concerns of France, Germany, Italy and others, for whom the total domination of early digital technology markets by US companies was a mistake they were unwilling to make twice.
The hope is that a commitment to co-regulatory approaches, iterative change and novel policy instruments like ‘sandboxes’ will strike the right balance between supporting new markets for rigorous and safe AI applications, and giving industry the freedom to explore and experiment with new products, services or businesses under regulators’ supervision.