Olivia O’Sullivan
Alright, good evening, everyone, and a very, very warm welcome. Thank you so much for joining us. My name is Olivia O’Sullivan, and I am the Director of Chatham House’s UK in the World Programme. So, we’re gathering here a week before the UK hosts its AI Safety Summit at Bletchley Park, but we’re also gathering here in a year that has been possibly the biggest ever for the development of AI as a technology. So, as many people in the audience will know, large language models can now pass legal exams, AI systems can detect the structure of proteins in ways that have evaded Scientists for years. And this could mark a turning point, not just in the history of the technology, but in the history of our human relationship with technology.
The potential benefits are massive, but so are the risks. As the UK Government has said, there are risks of potentially synthesising new bioweapons, potential for new, highly effective disinformation, or even, some argue, of AI systems themselves becoming difficult for humans to supervise or control. Meanwhile, the world is geopolitically fractured into different blocs on this. The US is a leader in innovation and digital technology. China is an industrial powerhouse, developing its own advanced AI industry, and the EU is the world’s largest regulator, and one of the biggest consumer markets for digital technologies in the world.
But most of the world doesn’t fit into those blocs and nor does the UK. So, within that context, the UK is seeking to play a role in the governance of this technology, and hosting the world’s first AI Safety Summit at Bletchley Park next week, bringing together over 100 different representatives, including from the US, the EU, potentially China, from the private sector, experts, and civil society. The focus of the Summit is on frontier risks, so specifically risks arising from the development of the most advanced AI models.
Now, as an international affairs think tank, Chatham House is not a, kind of, technological research centre, but we see technology as fundamentally geopolitical. We want to track the latest technological developments that could reshape the Global Order, and we see AI in that frame. So, colleagues in programmes like International Law, International Security, and our Digital Society Initiative are doing great work on this issue. I encourage you to check it out. We, in the UK in the World Programme, are particularly interested in how the UK is seeking to position itself as a leader in AI governance and to explore what this might look like.
So, today’s discussion is going to focus on, firstly, what are frontier risks from AI? Why do they matter? Why should we care? What does a successful Summit look like in the opinion of our panel, and what sort of institutions and processes do we want to see come out of the Summit, in order to govern these kinds of frontier risks?
So, we’re delighted to have a really strong panel to speak to you about this today. We have a couple of our panel joining online, and others here in person. So, firstly, we have Professor Yoshua Bengio, who’s recognised worldwide as one of the leading experts in artificial intelligence, known for his pioneering work in deep learning. He’s a full Professor at the University of Montreal, and a winner of the Turing Award, the Nobel Prize of Computing. In 2022, he became the most cited Computer Scientist in the world. Francine Bennett is the Interim Director of the Ada Lovelace Institute. Prior to that she worked at a biotech company that uses AI to find treatments for rare diseases.
Zoe Kleinman is Technology Editor at the BBC, a leading Technology Journalist, with more than ten years of broadcasting experience. She brings tech stories to global audiences across BBC News, World Service, and Radio 4’s Today Programme. Jean Innes is the CEO of the Alan Turing Institute, she’s worked across the public, private and non-profit sectors, to use data science and AI to solve real-world challenges. Last but not least, it is a big panel today, Katie O’Donovan is Director of Public Policy at Google UK, and she’s responsible for engagement with the UK Government, and also work on Google’s, kind of, approach to responsible innovation.
So, before I kick off and put questions to each of the panel, a brief word on how today will work. Apologies if you’re an old hand at these events, but today’s discussion is on the record, it is being recorded. Please do feel free to tweet about it, using the #CH_Events, and the handle, @ChathamHouse. I’m going to turn to each speaker on our panel and ask them a question, and then we’ll have a, kind of, brief discussion within the panel, but I will get to the point where I open up to audience questions as quickly as possible after that. So, do be thinking about the questions that you would like to ask. If you’re here in person, when we get to that point, please raise your hand and a microphone will come to you. If you’re online, please submit questions using the Q&A box that will appear at the bottom of your Zoom feed.
I think that’s all the housekeeping that I need to do. Once again, a really warm welcome. We’re very grateful to have you all here, we’re very grateful to our panel for addressing this issue at such a timely moment. So, I’m going to start, if that’s all right, with Professor Bengio. Professor Bengio, I’m going to sneak in two questions to you. So, firstly, as I’ve said, the UK Summit is focusing on risks from frontier AI. Can you tell us a bit about what a frontier risk looks like in practice and why ordinary people should care? And then on the basis of that, what would you like to see come out of the UK’s Summit? Over to you.
Professor Yoshua Bengio
The risks are pretty broad, and we can talk about examples, but, you know, I won’t be able to cover all the things that can go wrong. So, let me mention a couple that you already talked about. So, the programming abilities of AI systems are growing rapidly and right now, they’re not as strong as the best programmers, but when they are, which could be anytime in the coming few years, there are clear risks in terms of cybersecurity. So, that’s an example of a particular kind of risk within a broader category, which is misuse: bad actors, terrorists, using these AI systems for purposes that could really harm society.
You mentioned bioweapons. This is another area where there is a lot of concern. There was a recent paper on the use of AI for designing toxic chemicals that could be chemical weapons, and it is actually very easy to do using current AI systems. You don’t even need to look into the future. One of the things I’m most worried about is systemic risks to our society, maybe destabilising our financial systems, democracy, job markets.
For democracy, you can think of the misinformation and disinformation that already exists but could be amplified with AI tools. We now have AI systems that can manipulate language well enough to pass for human, so they could be used to scale up the troll armies of various entities. They could be used to fake, even more convincingly with video and speech, things Politicians don’t actually say or do. So, there are many, many such misuse and systemic risks that need to be better understood.
And then, as you mentioned, there’s the issue of loss of control. Actually, one thing that connects many of these risks is that we don’t know right now how to build an AI system that will behave as intended. So, if “intended” means some particular task, but also, acting in a way that’s aligned with our values, norms, ethics, laws and so on, we don’t know how to do that, and we don’t see, like, oh, this is something we’re going to fix next year, or something. And yet, these systems are being developed very quickly, there’s a lot of competition, maybe a race to the bottom, where safety isn’t the priority right now, it’s winning that competition. And maybe this is going to transform into a competition between countries, so there are geopolitical questions as well, and this is very worrisome for a lot of people, including myself, and many experts in the field.
Olivia O’Sullivan
Given those challenges, what would you like to see come out of the UK Summit? I mean, it sounds like it’s going to be very difficult to create a global governance system that will manage, or govern, those risks. So, what would you like to see?
Professor Yoshua Bengio
Well, we’re going to need to start with small steps that can be implemented quickly. International treaties and agreements take a lot more time than national efforts and regulation. And even national regulation, you know, some of it could be very bulky, I’m thinking, for example, about the EU AI Act, which is nice and it’s moving in the right direction, but it took too many years to, you know, build it, and it’s not yet really adapted to the situation with frontier AI systems.
So, what we need in general are very high-level principles in those laws, that then a regulator could quickly adapt, and a regulator would have enough power – just, like, thinking about, in the US, the FAA and the FDA and things like that. They can react quickly to something that goes bad, a bad chemical, a problem with a plane and so on. There will be new misuses or dangers, or things we didn’t foresee, and we need the regulators to be able to do that, to adapt quickly.
And also, there are simple things that can be done quickly, like, registration of the largest models and the computational capabilities that are necessary for training these systems. Now, we’re talking about, like, billion-dollar costs for training this next generation of systems. So, there are not many companies that can do it. We need to make sure we track what they’re doing, and “we” being society, democratic governance, our governments, so that we create a licensing and registration regime under which approval can be pulled if a system is not safe. And as the regulator gets to understand better, because of the progress of science, how to evaluate potential harms and decide the thresholds of what is acceptable and what is not acceptable, that system, that regulatory body, can become more complex.
But clearly, we’ll need governments to take ownership, to develop their internal capabilities, to do that regulation, to do the research, to figure out how we should regulate, how we should make sure that, more broadly, these systems are under democratic oversight. Not just from each country’s own government, but, more broadly, civil society, academics with expertise who are neutral, independent Auditors, the international community. We need to make sure that developing countries, maybe through the UN, have a voice in how these systems are developed. There’s a lot that needs to be done, but we should start small and not wait to have built a very complicated, global governance system before we start doing things.
Olivia O’Sullivan
Thanks very much for that, Professor Bengio. So, I’m going to turn to Francine now from the Ada Lovelace Institute. So, Professor Bengio framed some really challenging risks there that we need to, kind of, somehow gain democratic oversight of. At an event last week on this topic, at Chatham House, a participant said, “It’s difficult to govern AI at the speed of democracy, let alone at the speed of multilateral governance.”
Francine Bennett
Hmmm hmm.
Olivia O’Sullivan
And I know Ada Lovelace has published, you know, some thoughts on what they would like to see from the Summit. Can you tell us what you would see as a successful outcome from the Summit, and do you think it’s focusing on the right things?
Francine Bennett
Yeah, it’s a really good question, and I’ll just take the second question first. We’re really happy to see engagement and interest by regulators in regulating AI and regulating technology better. The Ada Lovelace Institute’s mission is to make data and AI work for people and society. Obviously, a big part of that is it being safe. We would say that the focus of the Summit is actually a bit too narrow in that regard. So, Professor Bengio made a very good case for the frontier risks. We would say frontier models, or those, sort of, most advanced models, are part of what we want to think about in terms of safety, but if we only think about that, we risk just forgetting about the broader set of risks and safety questions that we want to think about near at hand: the algorithms that are part of our everyday lives already, and that, with increasingly capable models, will become increasingly part of our lives. And we don’t have to think about catastrophic risks to need to think about risks and harms and to have a better life for ourselves now.
So, I think you only get good outcomes by thinking about the broadest range of benefits and risks, and not just focusing on the outer edge – and, actually, not letting a focus only on the frontier risks stop us getting on with the national regulation and the, sort of, near at hand things that we know we need to do. So, we know that, for example, our regulators probably aren’t capable, right now, of regulating the uses of algorithms within their scope. We should get on with that. We know how to fix that, we’ve got some institutions, but they need more capacity. And we know that it would be helpful to understand more about how AI and algorithmic systems are used in society, and to have more of a vision of what positive would look like to work towards, and to try and build that shared vision.
So, actually, I think Professor Bengio and I probably say a lot of the same things about the outcomes that we would want, even though we come from a slightly different framing of the risks that we would pay attention to.
Olivia O’Sullivan
That’s really useful to understand. I mean, I think, would you see – are there things that the Summit can achieve? You were, kind of, getting at this already, but that would – you know, the types of measures that Professor Bengio was talking about, independent Auditors, more democratic oversight, regulatory control of these, kind of, private – it’s only a few private labs that right now have the capacity to develop these really powerful models. Do you think outcomes from the Summit can, sort of, valuably govern those risks and the more everyday risks that you talked about, or do you think those processes need to be distinct?
Francine Bennett
I think they shouldn’t be distinct, they’re very intertwined, actually. And, you know, by understanding much more about these models, you both understand the outer – oh, I’m getting a thumbs up, fantastic – the, sort of, outer, catastrophic risks, but also the day-to-day. What – how are we going to use this tomorrow? How do we want this to be managed tomorrow? What path do we want to be on, as a society, to make these tools work for us, in whatever sense we mean working for us?
Olivia O’Sullivan
Thanks. I’m pleased we’ve got some agreement on the panel already. I might try to get you to disagree with each other later, but…
Francine Bennett
I think you might have.
Olivia O’Sullivan
…I’m going to turn – Jean, I’m going to turn to you. So, a lot of the, kind of, challenges here, you know, seem to be around how we get governments to work with private sector, to work with civil society, to, kind of, manage these risks. Do you think there are, sort of, specific – what do you think are some, kind of, good ways for all of those actors to work together? What are some best practices that governments can encourage, from, kind of, both private and civil society actors in this area?
Jean Innes
Can I throw another risk on the table?
Olivia O’Sullivan
Yeah.
Jean Innes
I worry about us worrying so much about risk that we don’t use these technologies, and the Turing Institute, the Alan Turing Institute, is fundamentally optimistic about what these technologies can bring to society, but we need to manage the risks, in order to unlock those benefits. You asked about the role of government working with the private sector, with civil society, and I think, there’s a very lively debate about risks, which can get polarised, but it’s not either/or. We have to address all of them, and the thing is, that’s quite hard and so, you need to bring all the voices to the table. So, Big Tech, start-up scene, civil society. We’re here at Chatham House, which has a fine tradition of, sort of, debating how we should run ourselves as a society. Start-ups, we’ve got a very vibrant start-up scene in the UK.
And so, it’s when you bring those parties together that you start navigating what’s essentially – it’s properly hard, but really worth it and we need to get on with it. Which, to pick up the other point about pace, having worked in government, I think there is a fundamental difference in pace between the old world of you regulate, review in three to five years and then, you know, sort of, consider what your next steps are. We need to move a lot faster than that, and that’s why I’m really pleased about the focus that the government has brought to this complicated set of questions.
Olivia O’Sullivan
Thanks very much, Jean. Zoe, I’m going to turn to you now. So, Jean and some of the other panellists have talked about the value of bringing in lots of different voices. One of the, kind of, maybe slightly controversial things that the UK Government has sought to do with the AI Summit is involve China in various ways. You know, this isn’t just about, sort of, our domestic, civil society, private sector, government relationship, it’s also a geopolitical question. The UK hosting this Summit suggests that we as the UK are looking to play a global role in governing the risks from AI. Do you think that’s realistic, or is AI governance going to be dominated by actors like China or the US, who have, you know, more capacity to develop these systems, and are arguably locked in quite significant geopolitical tension with each other? How do you see that playing out?
Zoe Kleinman
Well, I think it’s very easy to have a bit of a downer on the UK Government’s ambitions here, and to think, you know, we’re small, we can’t compete, why are we even doing this? But actually, I think it’s very much being driven by the Prime Minister, Rishi Sunak, who is obsessed with AI. You know, people who know him will say that he is obsessed with it, and I think he’s absolutely right to be, because it is coming down the track at all of us very fast. And I suppose the argument is – thumbs up from Professor Bengio – you know, if it’s coming down the track at you anyway, you might as well try and be involved in attempting to, sort of, harness it, and make sure that it’s coming at you in the right way.
I think the UK’s being ambitious, absolutely, but, you know, I think it is a player here. It does have a presence here, we have a lot of R&D here. We’re not big, we don’t have the deep pockets of Big Tech, we don’t have the enormous infrastructure of Big Tech. There is no UK Amazon Web Services, for example. However, what we do have is innovation, we’ve got brains, we’ve got talent here, and I keep hearing this over and over again, that, you know, we are certainly not in the same league, but we are at the table. And I think it’s a very admirable attempt to place us in this race, in a position that we can play as a, sort of, part of an arbiter.
I mean, lots of people are saying to me, you know, “It shouldn’t really be geopolitical at all. We’ve got all of these different AI Acts and regulations and things, flying in from different territories.” And what we should really have, people say, is a UN-style regulator. You know, Mustafa Suleyman, the co-founder of DeepMind, has described a, sort of, need for a, kind of, climate change type body. He’s compared the risks of AI to the risks of climate change and said it needs to be managed in that way. It’s a global thing, it’s not a UK thing, or a US thing.
You mentioned China being a controversial guest. I don’t actually think it’s controversial at all. I think it would be mad to leave China out, because China is a massive, massive player here, and traditionally, far more secretive than the West. And I think the danger, really, that we face with the Summit next week is more the other way. You know, what if China doesn’t come? What if all we’ve got is a cosy room full of US Big Tech mates, who all know each other anyway and are all talking about this anyway? We’re not going to get the diversity of thought that’s really going to bring about change, unless we invite, you know, other people who maybe we have a difficult relationship with, but we’re all still – in this particular scenario, we’re all trying to do the same thing. And I think it’s really important that we hear from them, hear what their thoughts are, and hear what they are doing.
You know, in a way, it won’t happen, but, you know, we should invite Russia, we should invite North Korea. We need to know what those guys are doing, right? We don’t know, and they’re not going to tell us, but this – if this is truly going to be a global conversation, then these are also big players and we mustn’t forget that, and we mustn’t make it too cosy, I think.
Olivia O’Sullivan
Thanks very much for that, Zoe. I heard some murmurs from the audience at some of those suggestions, so do hold your thoughts for questions. And I know some other members of the panel want to come in on the geopolitics of this, but first I’m just going to turn to Katie. So, certainly, the geopolitics of this are controversial, but arguably the relationship between the state and the private sector, especially Big Tech, has, sort of, been controversial over the decades, in terms of how we formulate regulation.
And there are people who warn of, kind of, regulatory capture, if Big Tech voices are too, sort of, closely involved in the conversation about how we govern these risks. On the other hand, as a lot of members of the panel have said, these really powerful frontier AI models are mainly being developed by just a few private labs, because of the type of computing capacity and investment that you need. So, can you tell us a bit about how you think – what you think is a constructive relationship between government and the private sector on these risks? What would you like to see happen?
Katie O’Donovan
Well, I think – I mean, boringly, I’d like to echo a lot of the comments from other panellists, in that I think the UK Government is to be praised for having the Summit. I think they’ve done so on a quick turnaround time, in a really vibrant international environment, where lots of different organisations are thinking about what they should be doing on AI. And bringing together – I think it is really important, it’s been reflected in some of the comments so far, bringing together not just the Technologists – and I do think it’s important that the tech companies are there, because, at the moment, this is where the technology is being created, and this is where a lot of the expertise sits. But if you have that alongside civil society, academia and the other governments, I think that’s the right framework for thinking about how, you know, we respond really nimbly and rapidly to things that are moving quickly.
So, I don’t know how you would have a successful, meaningful Summit without the companies there, and I think that provides a, kind of, really important anchor point. However, there’s lots of other people there and there’s lots of other challenges. There’s, you know, G7, OECD, UN processes, that will also play a part in this. I think it’s obvious that the UK and the US Government have worked closely together, which I think is helpful, but the US Government themselves have their own White House principles that have led onto this Summit.
I think it’s interesting, this question of whether the Summit itself looks just at the frontier risks, or whether it looks at the broader risks, and I think, you know, there’s arguments to be made on both sides. But where we think about some of the broader risks, I do think, actually, UK regulators are already thinking about how they address those. And so, you know, for example, there’s already AI used in many products that we all use as consumers, whether it’s Google Maps or different kinds of chatbots, that sort of thing. And actually, the UK regulators are stepping up – you know, they’ve already opened engagement with some platforms – and I think you see from both Ofcom and the CMA and others a real appetite to actually get stuck in already on the AI that we’re using.
There’s a really important body in the UK, which is little known, I’ll say, the DRCF, the Digital Regulation Cooperation Forum, which brings together the main UK regulators to think about how they have the capacity, the expertise and the speed to look at not just AI, but wider tech regulation. And I think that’s where the UK, actually, has a real potential, as well, not just to engage internationally, but to set some of those standards and those work programmes.
Olivia O’Sullivan
That’s really useful, thank you. Thanks to everyone on our panel for the, kind of, responses to those opening questions. I know, Professor Bengio, you wanted to come in, particularly on the geopolitics of this. You know, the question of whether tensions between the US and China, or just the dominance of the US and China in this area, is going to, kind of, undermine any attempts at global governance. So, I’ll come to you for a response to that and then I’ll come to the rest of the panel for reaction to some of the other answers panel members have given. So, it’s over to you, Professor Bengio.
Professor Yoshua Bengio
Yeah, I think it’s great the UK is taking leadership here, rather than leaving it all in the hands of the US, as far as Western nations are concerned. It is going to be a lot healthier for geopolitical, kind of, stability and governance if we end up with multilateral agreements that involve not just the, you know, two powers of AI, but a broader set of countries, and there are reasons for this. Like, if all of the decision making is happening in these two countries, you know, the smaller voices are not going to be heard. There’s also something I call the single point of failure problem, which is something that can threaten our democracies, and also comes up in the, kind of, scenario making about loss of control.
We want to make sure in the future that the powers that control AI are diverse. ‘Cause if all the power is, say, in the worst case, concentrated in one country, like the US, and let’s say there’s a change of government to a populist government that wishes to use technology for its own political benefit, or even military benefit, we could all lose. Instead, say the US agrees with a number of other countries about some principles of how the power that we’re bringing into the world is going to be managed, in a way that’s aligned with, let’s say, the UN Declaration of Human Rights, something on which there’s broad agreement. For example, that AI is going to be used for peaceful reasons, or, you know, at most, for defending against attacks, but not to attack other countries. There are things that it’s going to be easier to agree upon if the circle is larger, and that’s very important.
Olivia O’Sullivan
Thanks, Professor Bengio, for that. I’m going to come to audience questions shortly, but I do just want to turn to the panel. I mean, Francine, Jean, you’ve worked on these issues for a long time. That was a kind of spirited defence of the idea of, kind of, multilateral governance, of, kind of, international regulation of Big Tech. But this has been a challenge, I mean, not just for AI, for social media, for other, kind of, ways that technology has shaped societies and our lives. You know, do you think – what do you think, kind of, the prospects for success are here? We’ve – to either of you, you know, it’s good to hear in a way, you know, speaking from a Chatham House perspective, a vote of confidence in multilateral governance here, in multilateral regulation, but it hasn’t been easy to develop that kind of governance of Big Tech so far. So, what are your thoughts on challenges in the past and prospects for success in the future?
Francine Bennett
I have a very initial thought on that, which is, obviously, this is hard. It’s complicated, it’s a messy technology, with multiple uses, and we’re trying to work out what to do. One really, really important thing this time, which we got very wrong last time, I would say, is having a global voice and a public voice in the conversation, in a serious way. So, to Professor Bengio’s point, you know, from different countries, this is going to look really different. If you ask somebody from Nigeria how these technologies are going to play out, I think their perspective on the harms and benefits is going to be very different. And I think we can’t reach a stable agreement on what this should look like by only bringing in the technological powers, and so we need, actually, a lot more public voice in this, to make it successful.
And having good domestic regulation, and to your point about DRCF, we’re, sort of, going round saying, “What should our new institutions be and our new rules?” And that is great, but we also have some existing institutions and some existing rules, which, actually if we get those really right, that’s a very solid building block for getting towards the brand new, I’d say.
Jean Innes
Could I, I suppose, give an example of why the global conversation is so important? The Lloyd’s Register Foundation did a world risk poll, and it asked people, “Do you think AI, overall, is going to be of benefit to you and your community over the next 20 years?” It was about two in five who thought yes, so, you know, kind of, edging positive, but still not great. But interestingly, it was very, very clear that the countries which are more involved in developing these technologies were much more optimistic about them.
And just bringing that to life for a moment, a huge amount of excellent technical work was put into stopping the large language models from producing content that is troubling. But the way that was done was by outsourcing the data labelling to people in other countries, where individuals had to look at some extremely troubling content, in order to build the safety tools that mean we can all use ChatGPT without facing this stuff. So, that’s just an example of where, by bringing the technologists to the table, you can trace that supply chain of the realities of using these technologies. And I’m afraid it just means it’s difficult, but it does mean that inclusive conversation is incredibly important, because we need to retain society’s trust on this.
And if I may just cite some work that actually we did in partnership with the Ada Lovelace Institute. We went out to the great British public, I think 4,000 representative individuals, and asked them, “What would you like to see to help you trust this stuff?” And they said, “Number one, we’d like laws and regulations that, basically, make it safe.” Good point, we’re on it. And number two, they said, “We would like to be able to appeal a decision made by an algorithm. We want to go to an individual.” And I find that quite clarifying in terms of a really human, relatable response to what can feel like quite an abstract technological problem. These are the sorts of things we need to build.
Olivia O’Sullivan
That’s really useful, yeah, and I think that desire for a, kind of, human in the decision-making system, it’s really interesting to see that that is quite common, but, of course, we also have to think about the humans who are part of making technology safe. As you say, even social media content moderation also relies on quite, you know, poorly paid and difficult work in the Global South. So, thinking about how we make things safe, and all the ways we involve humans in AI systems, I think is a really useful point.
Would any of our panel like to make any other final points? Otherwise, I will open up for questions. Katie, you…
Katie O’Donovan
I just wanted to touch very…
Olivia O’Sullivan
…might want to come in.
Katie O’Donovan
…very quickly on the multilateral approach, which I think is really welcome, and I think the points about, you know, who’s in the room, whose voices are heard, who’s shaping those decisions, whose impression of the potential of AI counts, are really relevant.
I think, with regard to the Summit and just broader conversations, it’s worth being specific about the different roles, though. Because I think, you know, when the world is agreeing, kind of, “What are the principles we want to govern, you know, frontier AI?” you know, that’s absolutely right for multilateral organisations, and for, you know, really inclusive conversation. When you’re thinking about, “Let’s evaluate the risk, and is there a shared vision” – not shared vision, sorry, a shared understanding of the research and a shared area of concern – again, I think that’s, you know, something that you wouldn’t want to limit to a narrow number of countries.
I think when you then look at, kind of, product, deployment and use, I think there it is worth thinking really carefully about, you know, who has the expertise and the resources to do that, and how do we do that in a way that absolutely guards against the risk, but helps people realise the potential? Because if you’re looking at AI that has potential to help identify or look at drug discovery, or look at different disease treatment, then we need a way that also can bring that potential to the right communities at the right time.
Olivia O’Sullivan
Yeah, really useful, thank you. I’m going to suggest that we open it up for questions now. There’s a few online, but let’s start in the room. So, please do raise your hand. Please could I encourage people to ask questions, rather than make comments, and if you are comfortable doing so, please do introduce yourself and say where you’re from. So, I’m going to suggest we go to the person here on the end. I’ll take a few at a time, so ask your question, and then I’ll go to the gentleman in the tie. Over to you.
Dr Alexi Drew
Good evening, well, I’m Alexi Drew. I work as a Technology Policy Advisor at the International Committee of the Red Cross. My question is, are we able, currently, to technically, or politically, or perhaps both, track, measure or record the potential harm that might be caused by AI systems? I guess the point here is that in order to be able to really measure pros and cons, surely we need to be able to be certain that we can gather the data that allows us to do so first.
Olivia O’Sullivan
Great, thank you. I’ll take a couple more and then I’ll put that to the panel. So, if we – yeah, go straight to the gentleman here.
Christopher Townsend
Thank you. I’m Christopher Townsend, just a private citizen here. We regulate banks ‘cause we don’t want them to fail, we regulate aircraft ‘cause we don’t want them to fall out of the sky, we regulate medicine ‘cause we don’t want people to abuse pills. I’m – I want to understand what the big risk is that we’re regulating against from a, sort of, consumer point of view, if the panel have got any thoughts on that? Thank you.
Olivia O’Sullivan
Great, thank you. If I – I’m going to take one more. Let’s go to the lady in the shirt here. Oh, thank you, doing our job for us.
Lara Turner
Hello, I’m Lara Turner. I’m a master’s student in cyber policy and strategy. I would like to ask the question, what are your general thoughts on compute monitoring, and whether you think it’s achievable to get further progress in this area in the coming AI Safety Summit?
Olivia O’Sullivan
Great, thanks, Lara. Let’s take those together. So, are we able to track or record the harms that we’re talking about? Question from the gentleman on, kind of, what is the big harm? Like, how would you explain it in the way that we say we regulate planes so they don’t fall out of the sky? And then a question from Lara on monitoring compute. So, just to make sure I’ve got it clear, the, kind of, computing capacity required for the really powerful AI models is massive. So, what’s, sort of, the state of – or what kind of progress can we make on monitoring access and development of that input to these models? I’m going to open that up, ‘cause I think all of our panel might well have responses. Let me go – I’ll go to Zoe, and then Professor Bengio. Zoe, over to you.
Zoe Kleinman
Thank you. I’ll answer the question about what we should be worried about, because I sometimes feel that this discussion gets quite dystopian and sci-fi quite quickly. You know, lots of powerful, generally men, will tell you that it’s killer robots we need to worry about and existential threat, which obviously is part of the story. But I feel like there are many more immediate and, kind of, mundane harms, if you like, that we need to worry about before the killer robots turn up, and those are things like, what will you do when an AI tool is making decisions about you, and it makes what you think is the wrong decision? How do you challenge it? Where do you go? How do you redress that?
Somebody told me today that her son is in trouble at school because they think that ChatGPT wrote his essay, and he insists that it didn’t, but the onus is on him to try to prove that he didn’t use it, and how is he going to do that? You know, this is a 14-year-old boy, right, this is difficult and low level, but also, immediate, you know, unpleasantness that I think we’re going to face initially. And the other thing, I think, that is even more worrying and, well, a challenge, is how dramatically I think it’s going to change work, particularly the sort of admin, office-based work that millions of people do.
I had a demo last week of Microsoft’s Copilot, which is essentially ChatGPT tech put into Microsoft Office apps, and I watched it draft emails, replying to email chains I hadn’t read. It summarised meetings that I hadn’t been to. It wrote a PowerPoint presentation for me in 43 seconds, based on a document it had drafted earlier about a fictional product that, you know, the demo was about. I mean, it was an absolute game changer. It was very impressive to watch, very impressive.
And I had so much feedback from people when we ran the story last week, saying, “This is going to save me so much money.” One lady messaged me and said, “This is going to save my business.” Which is great, in some ways, but, you know, if your job is to do those PowerPoints, what are you going to be doing instead? Microsoft will say, “This is taking the drudgery out of work,” right? “This is going to get rid of all the boring stuff that you don’t really want to be doing anyway.” And that’s great, but what if there’s nothing left for you to do? I think, you know, another, kind of, immediate and everyday harm is the jobs market.
Now, new jobs will emerge, that’s what they say, right, and they did with the internet. You know, 20 or 30 years ago, if I’d said to you, “search engine optimisation,” that would have meant nothing to anybody, and now it’s a whole profession. So, we know that that’s coming, but in the short-term, I think that’s going to be a very bumpy ride for a lot of people.
Olivia O’Sullivan
Yeah, thanks very much, Zoe, worrying for us all. Professor Bengio, can I bring you in to respond to those questions? You had your hand up, I think.
Professor Yoshua Bengio
Yes. I’ll try to respond quickly to all three questions. So, regarding whether we’re able, technically or politically, to track the potential harms. At least for a lot of the harms having to do with, you know, making sure that AI does things that are aligned with our norms and morals and so on, no. The answer is a big no, and that’s one of the big reasons why I think at the Summit we’re going to try to encourage countries to invest in R&D, to improve this technical ability. And then we also don’t have the governance tools to do this right, so we need to do a lot more work there.
Regarding the big risks. I mean, I already talked about them a bit earlier, but besides the things that Zoe talked about, which are very important for people and which they’re going to feel in their lives, I would say the big short-term risks that are coming are potential national security risks, with terrorists, bad actors and so on, using these systems. So, how do we make sure we reduce the chances of that happening? This is, you know, a big question.
And then, finally, about the compute monitoring, I think that’s one of the most important rules that governance could put in place, in order to increase safety. ‘Cause right now, it takes huge amounts of compute, which we should be able to track quite easily, ‘cause there are very few companies, like, three companies in the world, that can build the required chips. And right now, there are less than a handful of companies that really are able to train those systems, and you need this large compute. So, for the frontier AI risks, which I admit is only part of the picture, there are things that we can do on compute monitoring, and that should be part of the agenda for regulation.
Olivia O’Sullivan
Can I ask you, Professor Bengio, would you favour restricting access to that level of compute, you know, if some kind of governance regime could be agreed?
Professor Yoshua Bengio
Well, that’s the point. That’s the point of any kind of licensing or registration, that governments can pull the rights to use something that is not properly, you know, designed in terms of safety, just like we do for any other product. So, the government needs to know, so, monitor, but also be able to say, “Oh no, like, this transaction of 20,000 GPUs is something that, you know, we want more scrutiny over before we allow it,” for example.
Olivia O’Sullivan
Thanks very much. Did you want to come in, Francine, then?
Francine Bennett
Yeah, could I pick up on the question about the big risk? And I’d actually push back a bit on your framing that there is one big risk with each of those technologies. I mean, in each of those, actually, I think it’s already more complicated. So, you take an example like cars: we have cars, we know how to regulate cars. You think, okay, we don’t want cars to crash, but actually, we want a bunch of other things as well. We want them to have low emissions, we want them to have good seatbelts. We have this whole ecosystem of roads that they’re allowed to drive on and places they are not allowed to drive. We have rules about where you can park and when, and you get in trouble if you do it wrong.
There’s this whole complicated ecosystem of things, just with a relatively simple, relatively single use tool, and it’s kind of the same for AI. We want lots of the benefits from AI, we want to mitigate against all the harms that we can think of, and that means quite a complicated ecosystem of things. You can still talk about them as interlinked, but there’s not one rule and one harm that we’re thinking about.
And then, that goes onto your question about, you know, the measuring and monitoring of harms in a Red Cross context, for example. In the same way as you couldn’t count all the harms of cars in one metric, you can’t count all the harms of AI that you might want to track in a single metric. But you can have, sort of, theories of different types of harms that you think are important and work out how to track them. And I think we should start to get more norms about how we measure these things, how we evaluate them pre-deployment, and also, how we monitor post-deployment, to know if things are working or not working. And one potentially really good outcome of things like the Summit is that they could kick off that conversation about what that looks like, longer term, for us.
Olivia O’Sullivan
Thanks very much. I’m going to put a couple of questions that have come in from people online to the panel. So, David Stakes is asking about, kind of, the role of the UN in this. So, some people have suggested that a good outcome from the Summit might be something like the Intergovernmental Panel on Climate Change, or a kind of similar UN body, to do precisely that role, of tracking and communicating risks. So, David is asking, “Would it be beneficial if the UN took charge with a new agency, as the, kind of, existing multipolar body, that, you know, arguably has more legitimacy and inclusivity than anyone else?” So, I put that to the panel and I – another question that’s come in online, a nice blunt one, Antonello Guerrera, “Could AI kill democracy?” So, who would like to take either of those two?
Jean Innes
Can I take a run at putting together a few threads? The collection of harms is really interesting, because that’s about looking at the evidence and collecting information and knowing what you’re trying to deal with. And we have made a small start on this: some work done by the Turing Institute, the National Physical Laboratory and the British Standards Institution has produced something called the AI Standards Hub. And it sounds very worthy, and it is. It’s a place where you can collect information, including information about regulatory approaches. And there’s actually something called the Online Harms Observatory, where you try and, basically, gather information so that you can look at what you’re dealing with. Now, that is a, sort of, grass roots up effort that’s actually very well regarded internationally, and a lot of other countries are interested in it. But it’s a grass roots effort, to start putting information into the system, so that we know what we’re dealing with.
Picking up the point about, is the UN the answer? I mean, the UN is actually doing good work on this. It’s brought together several conferences and panels, and I think it’s just not an either/or. I think it’s about, sort of, recognising – and there isn’t a clean answer to this, but there will need to be one – I mean, there’s just this, sort of, simple truth about AI, that it doesn’t recognise geographical borders. And so, you do have to think multilaterally, and, you know, the UN is, I think, already doing some excellent work in that space. So, it’s about bringing it together – it’s a complicated topic – and then you bring the actors together, which puts together all the elements needed to, sort of, minimise the risks and maximise the benefits.
Olivia O’Sullivan
That’s very much, Jean. And Professor Bengio, you had your hand up for this one. Do you want to come in?
Professor Yoshua Bengio
Yeah, so I’ll just say plus one to an IPCC-like organisation for AI harms and risks. Regarding democracy, I think there’s a very important train of thought here that I’d like to share, which is, AI systems are going to be more and more powerful in the future, they’ll have more and more capability. And any powerful tool can be used by humans, organisations, countries, companies, to accrue more power, and the more powerful the tool, the more we might end up with an excessive concentration of power.
You have to remember that democracy is the antithesis of concentration of power, it means sharing power. So, even our market system requires, you know, avoiding too much concentration of power. So, if we’re not careful, we might end up with some organisations having too much dominance, either economically, politically, or militarily, or all three. And it’s not going to happen, you know, in one day, but I think we need to think of powerful tools like AI in the future as something that fundamentally threatens democracy, and that means, equivalently, democracies need to put in place the right protections against these concentrations of power.
Olivia O’Sullivan
Thanks very much. Do you want to come in, Katie?
Katie O’Donovan
Yeah, I think about that question from a slightly different perspective, and I think for me, it draws out some of the tendencies we sometimes see in the conversation about AI risks and frontier AI, and I think it’s been touched on, I think, Jean, you mentioned it, or maybe it was Zoe actually, in terms of, you know, thinking about the existential, thinking about the very, very scary. And, you know, the question is a challenging one, and it makes you, sort of, stop in your tracks and say, “Gosh, you know, could AI, you know, be the – spell the end of democracy?” And that’s, you know, that’s an arresting proposition.
Actually, I sometimes think, when we think about things in such abstract terms, we, sort of, forget our own agency, and we forget the expertise and the history that institutions have, and that we all have, around dealing with these issues. And, you know, I’ve worked at Google long enough that I’ve seen, sort of, the Brexit referendum, and, you know, the multiple general elections that we’ve had through that time, and all of those have presented challenges that have involved technology. And I think those are challenges that we, as a company who allow political ads on our systems, needed to respond to, and to check that we had, firstly, the right rules in place and we had the right transparency, we had the right engagement with the Electoral Commission. As we, sort of, approach elections that will, you know, perhaps involve AI, we look at, you know, whether we have the right rules in place for adverts that might have been generated with AI, or indeed, building technology that lets you understand when a voice is artificially created, or an image has been artificially created, and I think those are really practical tools.
But we also, in the UK, have really, really robust electoral law, and I think that’s something that we shouldn’t forget about. That when you’re talking about practical application of new technology, it still exists in that framework that we’ve lived with for hundreds of years. So, if you put out a piece of election material, you have to say, you know, who it’s published by. That’s accountable to the Electoral Commission, and that’s the same if it’s a leaflet coming through your door as it is if it’s a Tweet or something else.
So, I think the risks shouldn’t be minimised, and I think, you know, the fora like this, or the Summit next week, are really, really important. But we also need to remember, what are the institutions and the historic perspectives and, you know, democratic and societal efforts that we can combine together to make that framework right? Rather than, kind of, think only in existential terms.
Olivia O’Sullivan
Yeah, thanks very much for that. Jean, quickly – and then…
Jean Innes
I would just say, I do think something’s changed. This isn’t about leaflets through doors. It’s personalised, very bespoke, beta tested on vast volumes of citizens, and I think we probably do need to recognise that, and to, sort of, gear up to respond.
Katie O’Donovan
But we have the institutions in place, and we have, I think, learnings and expertise, like…
Jean Innes
We certainly have institutions and learnings, but the pace of change is just extraordinary. I mean, on the question about risk, moving from elections to cybersecurity, I think I’m already experiencing more phishing attacks, and I think the volume of, sort of, cyberattacks out there is growing. So, I think there is something we have to face up to about the pace of change, and how that interacts with some of our systems. We haven’t got time to, sort of, fact check everything, and there are just some new realities that we have to come together and recognise.
Katie O’Donovan
I’m absolutely all for coming together and recognising, but I think with agency and experience, and a bit of optimism that we have perhaps some of the right tools to solve them, or maybe…
Jean Innes
I am fundamentally optimistic, and, I suppose, the other thing is, there’s never zero risk. We live in a world of risk. Antimicrobial resistance is something that concerns me, I’m a Chemist by training. So, we live in a world of risk, but we do have to evolve and respond.
Katie O’Donovan
Hmmm, yeah, no, absolutely, I think – I agree with that.
Olivia O’Sullivan
I’m going to come back to the room for questions, and then I’ll try to sneak in a few more from people online. I’m going to go to the woman in the glasses at the back there first, and then, I’ll take a couple more questions, but if you ask your question first.
Meg Davis
Hi, I’m Meg Davis, a Professor at University of Warwick. Thank you, wonderful discussion. I’m wondering if you could reflect a bit on the role of the private sector in, sort of, framing this discussion and in shaping legislation to protect its own interests, and, in particular, there’s been some criticism about the use of frontier AI as a framing device for this Summit. You know, coming from open AI, which is, you know, had – kind of, spoken out of two sides of their mouths in terms of regulation and what they’re willing to subject themselves to. So, what are your reflections on the role of the private sector in all this? Thanks.
Olivia O’Sullivan
Great, thank you. I’ll take a couple more. I’ll go the gentleman in the red top there. Thank you.
Luke Arkush
I’m Luke Arkush, Chatham House member. My question is, AI has been used for many years in tools like the iPhone, autorecognition, Google. Can you give some good examples of what AI has already been doing to help make our lives better, so as to cheer people up?
Olivia O’Sullivan
Okay, cheer us up a bit. Can I take from Joyce here?
Joyce Hakmeh
Thank you very much, Joyce Hakmeh from Chatham House. So, looking beyond the Summit, there have been some announcements that at the Summit there will be the announcement of establishing the AI Safety Institute, and potentially, a global AI research body. So, I’m interested in the views of the panellists about what the unique added value is that these bodies can bring, and how they can complement existing initiatives, whether at the multilateral or regional level.
Olivia O’Sullivan
Thanks very much, Joyce. I’m going to sneak in a question from online, I’m just conscious of time. So, it’s, kind of, similar to the first question that was asked. Chris Middleton, online, has, kind of, mentioned that “The White House’s AI Bill of Rights was introduced a while back, a kind of voluntary code. Since then, we’ve seen a shift to growing calls for more kind of muscular regulation, even among IT leaders. So,” kind of, “what has changed?” Why have we, kind of, gone from a conversation about a voluntary code to calls for regulation, including from IT leaders and the private sector?
So, the role of the private sector in framing the discussion, possibly to their own benefit, in response to that – I hope that’s an accurate summary of your question. Then, cheer us up a bit, please, with some examples of AI helping us. What is the added value of a new or additional international body? And then perhaps some more reflections on why we have, kind of, shifted from maybe voluntary frameworks to calls for regulation, even from IT leaders. Zoe, you’ve got your hand up, so why don’t you come in and respond to whichever of those you prefer?
Zoe Kleinman
I – oh, sorry, I can put my hand down now, can’t I? I just wanted to say something quickly about the push for private companies to be involved. I mean, this is really not uncommon, we see this all the time. You know, they want to be involved, because they want to shape it. If it’s going to affect them, they want to shape it. You can bet your bottom dollar that there is a hell of a lot of lobbying going on constantly from all of these firms. I think they’ve looked at the lessons of the past, somebody mentioned before, you know, we look at how disastrously the whole social media thing went, when they said, “We don’t need regulation, we’re fine, we can do it ourselves,” and then failed repeatedly. So, I think everybody wants to avoid that situation.
And also, in a way, it takes responsibility away from them, doesn’t it? Because they then say, “Well, do you know what, we followed your rules, we’ve done everything you said,” and there’s less of an onus on accountability, perhaps, if the rules are set for these big companies. That said, they are the companies that are building these tools, and so it’s absolutely right that they need to be part of the discussion, and be at the table, because they know what they’re capable of building, what they are building, and how quickly it’s going to change.
And I think that is the big fear with any kind of regulation of tech, is that it cannot keep up with the pace of change. You know, this is evolving so very rapidly, and we haven’t even had – ChatGPT is the thing that is the, kind of, the poster child, isn’t it? Because it was, I think for so many people, the first time they’d knowingly interacted with AI. It’s not even been out for a year, right? You know, we are so – we are moving so fast in this world, and when you think about regulation and how traditionally slowly that’s moved, you know, the Online Safety Bill in the UK here has been years in the making, and it’s only just about to come in now. And arguably, when the conversation began, we were in a totally different landscape. So, I think all regulators are going to struggle with that, and they’re cautious about it.
Olivia O’Sullivan
Thanks very much, Zoe. And Professor Bengio, let me bring you in now.
Professor Yoshua Bengio
Thank you. So, I agree with Zoe. I would add that there’s something that’s been discussed that I think is very interesting. For example, it was discussed at the US Senate hearings, where I was a witness. It’s the idea of strengthening the liability risk from the point of view of the company, so that they would have an incentive to invest a lot more in protecting the public in all the ways that AI could be harmful. So, yes, governments need to invest, companies need to invest, but how do you force a company to invest 30% of, you know, their effort on safety? Well, I don’t think we can, like, quantify that, but we can scare them into doing it using laws. So, I think that’s something we should do.
About the – whether an AI Safety Institute, or something like an IPCC-like thing on AI safety would complement existing initiatives, I think it would. There are a number of things that are – have happened at the multilateral level, but that don’t include any safety component. So, I’ve been for two years leading one of the Working Groups of the Global Partnership on AI, which now has 30-ish countries, I think, including from – some from the Global South. And we’ve looked at a number of issues around AI, but safety hasn’t been on the radar screen of any organisation yet. The OECD has done also quite a lot in terms of the governance regulation, UNESCO as well, so there are lots of initiatives.
But I think, one way or the other, we need to make sure to bring the, kind of, summary of the science that’s important regarding these bigger risks to decision makers. And, you know, that includes the understanding of current harms that we already have some studies on, like, you know, discrimination and so on, but we need to complement that with the question of safety. Which is something the media has been talking about a lot, with all the letters and so on in the last six months, but not so much the, kind of, more evidence-based, scientific evaluation that decision makers need to be able to refer to, like the IPCC.
Olivia O’Sullivan
Thanks very much, Professor. We’re bumping up on time, so I’m going to come to the panel in the room now. Perhaps I could encourage final closing comments and maybe we can help out the gentleman in red with some positives from AI, to close?
Jean Innes
So, just to share with you two very specific examples. People often talk about health, sort of, reading of eye scans, or prediction of heart problems, or cancer. I know, because I was associated with a team that did it, that machine learning and data science were used to materially improve our ability to respond during the pandemic, and to make sure that we had an understanding of where the pressure points in hospitals were, and where the demand was, so that we got what was necessary to the right places. So, a very real and very valuable contribution.
If I may, my other point of optimism is, we have very, very highly stressed public services at the moment. So, if you think about some of these tools reducing admin burdens on hardworking Doctors or, sort of, taking the pressure off those Clinicians, and letting them spend more time with patients, that’s where one starts to feel more optimistic about what these things can do – well, at least I do.
Olivia O’Sullivan
Thanks very much. Katie, and then Fran, very quick closing remarks.
Katie O’Donovan
I think my reason to be optimistic, or my favourite use of AI, is Google Translate, and it’s not just for the, kind of, French or Spanish to English. But actually now, you know, working with local universities, we can take a dialect from Uganda and make the whole internet, and therefore some of the world’s knowledge, accessible to people in their local dialects, and I think that’s absolutely game changing.
Olivia O’Sullivan
Fran, you’ve got the final word.
Francine Bennett
Great. Yeah, I mean, I’m very glad about – well, I’ve got a pacemaker, so I’m actually very glad about, certainly, machine learning. I don’t know if it counts as AI, but it’s keeping me alive right now, so I’m very glad about that. Yeah, and more broadly, you know, that illustrates that these technological advances can be very useful when deployed in the right way, and safety tested very well, I hope, to do good things for us. So, what I want us all to figure out together is, how do we design and manage these tools, through their life cycle, to do good things for us, and not have the bad things?
Olivia O’Sullivan
Thanks very much. Right, we’re a few minutes over time, I’m not going to try to sum up. I think all that is left to say is a really warm thank you to our panel, online and in person. I hope that was enlightening and interesting to people, in this context, a week before the Summit, but also in this year, where we have seen these, kind of, unbelievable advancements in this technology. Thank you all so much for coming and thank you for your wonderful questions. I’m sorry I didn’t get to them all. A round of applause for our panel [applause].