In November 2023, the UK will host a major global summit on artificial intelligence (AI), recognizing the transformative impact that AI could have on our economy, society and international affairs.
The summit will focus on ‘frontier risks’ from AI: risks that arise from the training and development of the most advanced AI models, rather than from specific applications. In May 2023, over 350 industry leaders signed an open letter warning that AI could pose an existential threat to humanity.
International cooperation appears limited. Diverging approaches to regulation and AI governance have taken root across the world. Competing structures are now evident, from the US and UK’s self-regulatory, decentralised model to Europe’s risk-based, prescriptive approach enshrined in the EU AI Act. There is also the state-led, information-control model adopted by China.
This discussion will draw on experts to explain the extent of the threat and provide recommendations to participants at the AI summit.
Key questions the panel will cover include:
What would constitute success for the UK summit?
How should states work with private companies and civil society to manage the opportunities and risks of AI?
Does the UK government’s focus on ‘frontier’ risks resonate with other countries, and how could a shared international picture of AI risk be developed?
As with all member events, questions from the audience drive the conversation.