Bronwen Maddox
Hello, everyone. Great to see the room absolutely packed. A very warm welcome to the many, many people joining us online. Well, I’m very pleased to welcome you to this session on Global AI Governance: What is the UK Role? and we are going to be talking about the UK, but the questions go much, much wider.
I’m Bronwen Maddox, the Director, if we haven’t met, and I can’t think, seriously, of a better trio of people, I’m not even going to call you a panel, to come, to assemble, to talk about these things today. We’re doing this in partnership with Microsoft and the Tony Blair Institute, and very glad to have worked with them both on this event and with Microsoft on many questions of research and convening ahead of the UK’s Summit that the Prime Minister has called to discuss AI and governance later this year in the autumn. So thank you very much indeed for that.
Before I introduce the trio, we’re just going to have a short video from Chloe Smith, the Secretary of State, for Digital Sci – sorry, Department for Science, Innovation and Technology, the new formulations of this Whitehall machinery, always catch me out. And she’s going to speak to us for a few minutes, particularly about the plans that the government has for that summit and then we’re going into our discussion. So, if we can have that video now.
The Rt Hon Chloe Smith MP
Good afternoon, and a warm welcome to everyone attending today’s discussion on Global AI Governance. My thanks to Brad Smith and the Microsoft Team, the Tony Blair Institute for Global Change, and Chatham House for holding these valuable discussions in the run-up to the UK’s hosting of the Global AI Safety Summit.
The questions to which we’re seeking answers today, about how we govern and regulate AI, could not be more pertinent nor more pressing. Artificial intelligence is no longer the stuff of science fiction; it’s here in our pockets, our cars, our offices, our hospitals, and our homes. Its adoption is arguably no less significant than the groundbreaking inventions of the battery, the microchip, or the World Wide Web. AI is fast becoming part of our daily lives and will continue to become even more integral to our economy and our society as this technology rapidly advances.
The government has long recognised its transformative potential, and we’ve sought to be ahead of the curve. That’s evidenced in the £2.5 billion we’ve invested since 2014 in building a thriving AI ecosystem and the AI sector deal that we announced all the way back in 2018, which we backed up with the National AI Strategy three years later. Fast-forward to the here and now and the UK is at the very forefront of AI, not just in its application, but in its governance, as well.
The Prime Minister couldn’t have been clearer when he set out our national ambitions on artificial intelligence earlier this month: to lead at home, to lead abroad, to lead change in our public services. We will fulfil those ambitions, starting with our AI white paper. It shows how we intend to address AI’s inherent risks, but also create a regulatory environment which fosters innovation and growth. It advocates a proportionate and agile approach, recognising the need for a regime that can keep pace with the rapid way in which AI is advancing.
We recognise, as well, when it comes to AI governance, the government cannot and should not go it alone. That’s one of the reasons why we’ve established the Foundation Model Taskforce to drive UK capabilities so that we can be standard-bearers for the safe development and deployment of AI. And I’m delighted that we recently announced Ian Hogarth as the Chair of that Foundation Model Taskforce.
As an esteemed entrepreneur, investor, and the co-author of the annual State of AI Report, Ian brings a wealth of knowledge to the role. Under his stewardship, the taskforce will focus on navigating the complex challenges posed by frontier AI, bringing together expertise from right across government, industry, and academia. It will collaborate closely with leading tech AI companies like Google DeepMind, OpenAI, and Anthropic, who’ve agreed to share access to their models for safety research.
We’ve said all along that this has to be an international effort, and it’s one that we’re proud to lead. We’ve been engaging with international bodies, including the Council of Europe, the G7, the Global Partnership on AI, and the OECD, and we’ve committed to working hand-in-hand with our partners in the US, through the Atlantic Declaration that the Prime Minister recently agreed with President Biden.
As I mentioned earlier, the UK has also committed to holding the first major global summit on AI safety this autumn, assembling partners from across the globe to consider the risks of AI, including frontier systems, so that we can overcome these challenges together. The summit will consider AI risks and discuss how they can be mitigated through internationally co-ordinated action. That will help to ensure that AI develops and is applied safely, not just here, but around the world, that its benefits are fully realised tomorrow because of the guardrails we put in place today.
By effectively addressing the risks, we can seize the many opportunities that AI has to offer, from transforming our NHS with the discovery of new drugs, new treatments, and new ways of supporting patients, to helping us race ahead to net zero and build a greener, fairer, more efficient economy. In that sense, the conversation on AI is not just a technical one; it’s a social one, a cultural one. It asks us to reconsider our relationship with technology and to imagine the kind of future we want to build with it, but we can only do that by working together.
Indeed, Tony Blair’s and William Hague’s comprehensive report, A New National Purpose, is compelling in the way it sets out the true promise of this technology and the need for the UK to remain at the forefront of the AI revolution. We’re listening to some of the most valued voices in tech, like Baroness Lane-Fox, who knows from first-hand experience how businesses can use technology to innovate, grow, and rapidly expand. And we’re working with our partners in academia and in world-renowned think tanks like Chatham House to ensure that the views of leading experts and thinkers on AI are placed at the heart of this debate.
Michelle Donelan and I want all of you to be part of this journey because, together, we can ensure that artificial intelligence helps us realise not just a more prosperous, more dynamic economy, but a stronger country, and, ultimately, a better world. Thank you.
Bronwen Maddox
Thank you to the team for playing that, thank you very much, Chloe Smith, for giving it. I can’t fire questions at her, so we’re going to go straight into this discussion.
Well, let me briefly introduce, though they need very little introduction, Brad Smith, President of Microsoft, who – I think you said this is the 18th country where you’re bringing some of these themes – has written and spoken very widely for Microsoft and for the industry about these questions. And you may have come across his book Tools and Weapons on the promises and threats of the digital age, and indeed, the podcast of the same name.
Tony Blair, Former Prime Minister, also of the Tony Blair Institute, and I’m really struck by how Tony Blair and the Institute, which has dedicated itself to governance and leadership, has seized this subject and been right out in front on it.
And Martha Lane Fox, a Peer, President of the British Chambers of Commerce, a leading figure in the UK and international tech industry, and also, we were discussing earlier, other things. Not only Former Director of Twitter, but Chancellor of the Open University and a Co-Founder and Chair of Lucky Voice, which has revolutionised the karaoke industry, but we’re not going there today, we’re starting on these bigger and more serious, anyway, questions. Thank you all very, very much for coming here.
Let me start with you all by just asking if you can give us – and this is the hardest thing – your calibration of how big this challenge is. We’ve had these extraordinary statements, all the more striking for coming from people who lead technology companies, warning apocalyptically about the threats to the human race and about the ways that technology may now run ahead, seemingly – you know, if you like – without limit on the threat side. And people also pouring out words and words, to my mind most movingly and excitedly, actually at the moment in some of the pharmaceutical and medical diagnostics industries, about the potential. How do you see it and its significance? Brad.
Brad Smith
We’ve often had this conversation, a few of us inside Microsoft: what invention do we compare this to that seems the most apt? And I think – the one that I’ve come to, others as well, I think is the printing press. It was perfected by Gutenberg in 1452 and it fundamentally changed humanity’s ability to write, to create, to share knowledge, and to wreak havoc and bring good, as people were able to write and read more books.
And I would just analogise briefly that, in some ways, it made England and Great Britain a global power because, by the year 1500, the Netherlands and England were consuming more books than any other country in the world, and of course it turned out that the fears of some also proved to be true.
There was a concern in the Ottoman Empire, and it’s what led them to ban the printing press 20 years after it was invented, that the clerics would lose control of religion, that the rulers would lose control of their people, and the Calligraphers would lose their jobs. And, to some degree, Martin Luther proved that indeed, with the power of the printing press, the clerics did lose control of religion, and the Holy Roman Empire proved that the rulers did then lose control of people, and absolutely the Calligraphers lost their jobs. So, it’s profound, almost everything that’s good and everything that’s bad about the world in which we live today could in some ways be taken back to that invention. This will do the same thing.
Bronwen Maddox
Okay. Tony, do you want to comment on that?
The Rt Hon Tony Blair
Yeah. First of all, by the way, thank you to Chatham House for having us here and it’s a great pleasure to be with Brad and with Martha and you couldn’t have two more expert people, and, you know, personally, I don’t think we’ve had enough of experts, one. So, Brad, that’s an in-joke for the British, but – and so I’m not going to talk so much, ‘cause I’m not qualified to talk about some of the technical detail of this, but what I do want to say is that Brad used the analogy of the printing press, I use the analogy of the 19th Century Industrial Revolution, but you can get the magnitude of that from these analogies of what we’re talking about. I think this will change everything.
And the fascinating thing – my Institute actually published a paper on this about three or four years back – is how long did it take the world of politics to catch up with reality in the Industrial Revolution? And the answer is it took a long time, and in the end, when it did catch up – and things move much faster today – everything changed. I mean, everything changed and the modern state was born out of it, modern political parties were also born out of it.
So, I think this is – we’re at the start of the revolutionary change and the essential thing is that we comprehend it and get our heads around it from the public policy point of view. And, you know, sometimes when people – particularly from my own political persuasion on the centre left, as it were, you know, they say, “Well, I don’t know, we’ve got so many difficulties, we’ve got public spending pressures and high taxes and low growth and low productivity. What’s our – how can we be ambitious? What’s our mission in this world of turmoil?” and I say, “This is your mission, this is going to change everything.”
So how you understand, master and harness this technology revolution will define the place of this country and the shape of the world. So get your heads round that and stop spending your time thinking about a little bit more on tax or a little bit less on tax, a little bit more on spending, a little bit less on spending. That is not what the future’s going to be about; it’s going to be about this, understanding it, and dealing with it, accessing its opportunities, and I’m an opportunity person on it, but mitigating its risks.
Bronwen Maddox
Thank you very much indeed. Martha.
Baroness Martha Lane Fox CBE
Well, sometimes, I think the best thing is just to play with the stuff, right? And one of the things that I think has happened in the last year is that you are able to play with an AI interface in a way that you have never been able to before.
Bronwen Maddox
That’s a really good point.
Baroness Martha Lane Fox CBE
There’s a – you know, in a sector, there’s lots of talk about AI. You have AI conferences, every tech sector gig would have someone talk about AI, investors chucking their money into the sector, but what has happened, in my opinion, in the last six to 12 months, is that you can – anybody can play with it.
In fact, more than that, to build on what Tony said, I think it’s a dereliction of duty not to play with it, if you’re in a leadership position, because if you don’t understand what’s possible, then you’re not going to be able to start to understand what the right policy decisions are to make, or what the right commercial decisions are to make.
But it can also show you just the challenge that we have, sort of, basing policy decisions on technology that right now is very patchy, no disrespect, Brad, but it’s very patchy, and I’ll give you two examples. So, in 40 minutes, and this is not an MLF exaggeration, which I can be prone to, I managed to take all of the content that I had created in my lifetime that’s out on the web and turn it into a podcast in my own voice. It might take slightly longer for these two, but for my limited catalogue, it was about 40 minutes, and a pretty good quality podcast. Don’t worry, I’m never going to release them, but then – so then I thought, “Well, that’s interesting,” right? That’s a completely different level of productivity for me as MLF, and then I was thinking, “Well, if I was starting a business now, what would that mean?” I wouldn’t have a customer service team, I probably wouldn’t have a finance team, I wouldn’t have many of the operations. That completely shifts how you build things in the future, so I totally and completely agree with my other part of the trio.
The bit that I think we still have to remember is the quality of the data right now, and where we’re at in the lifecycle, which does add to the challenge. Just, you know, to be silly, I put in this morning, “Tell me a joke about Brad Smith, Martha Lane Fox and Tony Blair.” Well, it turns out there are no jokes about us three, so that’s not great. They were awful; I’m not even going to bother to repeat them. So I just poked it a bit more, ‘cause in my experience of playing with this stuff, if you keep poking it then it gets a bit better, and I said, “Well, just one common purpose.” Well, obviously the common purpose was, “They’re all interested in tech and they’ve all been tech leaders,” and then I poked it a bit further and I said, “Yeah, but what do they like doing when they’re not in tech?” and apparently the common thing between us is we all like going to the beach.
So, you know, the robots aren’t taking over immediately, but I’m not diminishing the importance of this moment. I’m just saying that we – this has been happening for a long time. It’s now that the scales have fallen off certain eyes because we have a much more direct interface into it and we need to take it very seriously.
The Rt Hon Tony Blair
I didn’t know that about you, Brad.
Brad Smith
I did not know that about myself. That may have been a hallucination.
Baroness Martha Lane Fox CBE
I think we’re def – well, not for me.
Bronwen Maddox
Or you are allowed to say it’s wrong and that…
Baroness Martha Lane Fox CBE
Yes.
Brad Smith
Exactly.
Bronwen Maddox
…is still part of it. Alright, so it’s big and you’ve put a lot of eloquence and subtlety into the ways in which it is going to change things. What is the first thing that governments ought to do now? We’ve had the pitch from Chloe Smith and indeed from Rishi Sunak about what the UK Government wants to do, we have the summit coming up. What is the most – well, I’m going to sit with the first thing, which is not always the same as the most important thing, but where do governments start in this? Tony, let me start with you and then…
The Rt Hon Tony Blair
Yeah, I think – I mean, we tried to set this out in the paper, but I think it’s – probably where government should start is with how it organises itself, both to comprehend better what is happening and to start to work through some of the applications of this. Because, you see, I think it’s – in the end, you do have to reimagine what the state looks like, or how your public services function, in a really profound way.
So, I think, you know, the first thing is to make sure governments properly organise, and, you know, we suggest a whole series of things in the paper, which I think – you know, one thing just to pick out is that the Foundation Model Taskforce that the government’s establishing should report to the Prime Minister. I mean, this has got to be driven from the centre, understood by the centre.
But I think then secondly, which is maybe where you were thinking we should get to, is, “Well, then, what do you do about the regulation part of this?” and, you know, I think the government’s ambition, in trying to make us a leader in this field, is a perfectly sensible ambition. But I think to do it, we have to recognise two things. Number one: this technology is – these guys will explain it far better – changing very fast, and even those people inventing it aren’t sure where its next iterations will go. And secondly, Britain’s not going to be able to come up with its own thing and just insulate itself, right? So we are going to have to have close co-operation and partnerships with other key players within this, including, of course, the US and the EU.
So it’s going to be – you know, I think this ambition, that the UK leads in this field, is completely possible, but it’s got to organise itself properly. And when it gets into what is going to be a very, very difficult field of regulation, it’s got to keep a very open mind and a close relationship with those who are engaged in the technology. As Martha was saying to me earlier, make sure that civil society and outside voices are heard, but also be aware of the fact we’ll need to do this in co-operation with others.
Bronwen Maddox
And Brad, that’s others within the country, as well as internationally, isn’t it? We were talking upstairs, just briefly before, about what companies might want from governments in this.
Brad Smith
Yeah, and I think – this focus on organisation first, I think, is really interesting, and it is one thing you see in governments, which makes perfect sense. It’s not something that I would have thought of initially, being in the private sector. You really do have to decide in each government how you’re going to organise yourself to manage this.
I think the same is true in companies and, you know, fundamentally, one path here is, and it’s a point that you’ve made, start using it. Use it in part because to use it is to understand it, and it demystifies it. You start to identify the fundamental processes that can be improved, how you boost the productivity of people, how you just serve citizens or consumers or customers in a more effective way. But then it does help you identify the problems that you need to manage, as well, and so you start to identify the risks.
Bronwen Maddox
Which are what? I’d like to dig into this a bit because this all – it can sound very abstract, and I know for those wrestling with it in government at the moment, it does feel rather abstract. They know they want to do something and they have three months to work out what, before this summit, but they would be helped by you, obviously.
Brad Smith
Well, the first thing I would say is, you know, for a number of years, people have been focused on what I will call applications that are powered by AI. And there’s been a real almost consensus globally on the principles, which point to the problems people need to make sure that they avoid. You don’t want to have bias in the computer systems that are deciding who gets a loan or who gets an application approved. You know, so you have to worry about bias and discrimination, you have to protect privacy, you have to focus on cybersecurity.
We need to ensure that this technology is inclusive, meaning it’s broadly accessible, it’s easy for people to use, it’s as easy for people with disabilities as without. You have to ensure that it’s transparent, and I think, above all, you actually have to ensure that these systems remain accountable to people and the people who are using them, who are creating them, are accountable to the public.
Now, where the debate has really changed, since the 30th of November last year, when ChatGPT was released, is that in addition to these applications, people have realised, “Oh my gosh, there are these so-called frontier or foundational models that are so much more powerful and can do so many more things than we thought we would see in the year 2023.” And this is where I think there’s this fundamental connection with safety, and, like other products that are extraordinarily useful, but also potentially dangerous – electricity, a commercial aircraft, a high-speed train, just to give three examples – you need to develop a safety approach. And it starts with what do you want to test? Who’s going to test it? How are they then going to measure and reduce the risks that may be associated with it? Do they have to get a licence to deploy it? How do they monitor it after it gets deployed?
And then the last thing I would say is, everybody’s so focused on these big models like GPT-4, for example, that it’s easy to forget that, in some ways, this is like the engine in a car. You know, you can’t have a safe car if you have an unsafe engine, but there’s a lot more to the car than the engine, and what really matters, in practice, is how the model is deployed for specific uses. And so what we do with OpenAI, what we do for Microsoft is, we have a Deployment Safety Board and we look at the specific scenario. So, I think this is where the UK can go in really sponsoring a summit around safety. What the world needs to see emerge, in effect, is a new paradigm for how to manage the safety of this new technology.
Bronwen Maddox
Martha, you’re nodding, and I want to – you’ve started all kinds of businesses from lastminute.com way back to others more recently. How does the government do this in a way that actually gets the best out of things and encourages people to start new things and doesn’t choke it all off?
Baroness Martha Lane Fox CBE
I was actually reflecting on the journey that I had in creating the Government Digital Service and gov.uk here in the UK because gov.uk was born out of some work I did, and I think that some of the lessons from that project are directly applicable to some of the ways I think you could work in this arena, as well. You know, we worked directly for the Prime Minister, we had massive Civil Service support.
Exactly to Tony’s point, you need to have those two pieces of authority to get anything done, in my experience. It was a hugely entrepreneurial enterprise, but we brought in a lot of people from the outside in order to create government services in a way that had never been done before. We put an actual citizen in a room with a person who was building the policy. We started with DVLA, something that wasn’t going to be completely catastrophic for people if it didn’t quite go right, but no one in government had ever before met the person who had been trying to renew their driving licence by linking up along the chain.
Now, I don’t say that to make you laugh particularly, it was actually quite depressing. But the reason I mention it is because I think there are ways, to Brad’s point, that the UK can become a kind of indicator of what these cases might look like, and to build some very specific examples to just have those minimum viable products in what public services and safe public services look like.
You know, the thing in all of this that I think makes a large constituent of people very, very anxious is about the increasing inequalities that are going to be, in my opinion, inevitable out of this, not only two-tier businesses, those that get it, those that don’t. And I’m President of the British Chambers of Commerce and it’s one of the top issues that I hear again and again from our 100,000 members, “What do I do?” or “I’m using it amazingly and I’ve taken out half my team,” so one or the other, right?
Or – so that’s one aspect of concern, but the other aspect of concern is for the communities of people that are continually screwed over by bad public services because of bad data. And there are hundreds of examples, whether – you know, you think about the example of the child benefit scandal in the Netherlands, which actually led to the resignation of the Dutch Prime Minister because the algorithm was based on bad data that meant it was discriminatory against a whole community of users who had been told that they had been defrauding the system when they hadn’t, through to, you know, can you imagine if Windrush had been based on an algorithm? It was pretty bad already, so, I think it’s incredibly important that the UK stakes out a role in this safety aspect.
I think we’ve done a lot of work actually in digital government in this country we can be proud of. People have copied gov.uk all over the world, but you need to bring outside people and clean up the data and do some actual use cases and show what some of the ways that we can change are, and that’s a very tangible thing.
Bronwen Maddox
It’s fascinating, and one of the things you’re saying, it seems to me, is the government should use this.
Baroness Martha Lane Fox CBE
Yes.
Bronwen Maddox
Not just regulate it, but use it itself.
Baroness Martha Lane Fox CBE
Yes.
Bronwen Maddox
And move fast to use it, and another one is, I think, picking up on what Brad was saying, to go into the different areas, saying, “Look, there’s not one answer to regulation, but it’s actually what you do about transport or about education data, or you’ve actually got to start getting to grips with these particular subject areas,” if you like.
Tony, what do you think the biggest threats are, and, with that, some of the most urgent things that government needs to do to keep people safe?
The Rt Hon Tony Blair
Well, I think it is urgent because, you know, as Brad was saying earlier, this is a technology that obviously – because it’s a general-purpose technology – can be used for good and used for bad. So, I mean, there are risks that fall into the category of wrongdoing, I mean, you can – there can be a whole generation of bioweapons that’s born as a result of this. There can be interference in elections. You know, there are – I think you can identify categories of risk fairly easily, then you’ve got to decide what you actually do about that.
But then, as Martha was just saying, we will want gov – government will want business to be efficient. Business will have to employ this technology, but when it employs the technology, it will do so often by making redundant some of its workforce, potentially quite a lot of its workforce, potentially, as I understand it, as much in white collar industries as not. So this is a – you know, this – the social and political implications are going to be huge.
And I think that – I mean, I think the other thing that’s really important therefore, from my experience of government, and the trouble with – you know, the trouble with politics, as I always say to people, it’s the one really important job in the world where you put people in place with absolutely no qualifications to do it. And it’s a, sort of, fascinating thing ‘cause the skillset that takes you to government is not the skillset that really helps you in governing, and, you know, the fact is…
Bronwen Maddox
Can I be really clear? Are you talking about Politicians or Civil Servants?
The Rt Hon Tony Blair
No, I’m talking about Politicians.
Bronwen Maddox
There you go.
The Rt Hon Tony Blair
The Politicians who – ‘cause when you’re a – when you’re winning power, you’re the great campaigner, you’re the great – I mean, it’s presentational skills. When you’re in government, you actually become the Chief Executive, and it’s a different skillset.
And, you know, we work with governments around the world in our Institute and, you know, I always say to the leaders, “You’re in a completely different situation now you’ve come into power.” It’s all about, you know, focus, it’s about strategy, it’s about getting the right team in place, it’s about prioritisation, and above all, it’s about getting the right policy and understanding the way the world’s changing. Politics often doesn’t work like that. You know, as I say to people, you start in politics, well, this was my experience anyway, you’re least capable and most popular, and then you end, you know – but the – you know, so by the time you actually get to know how to do the job, they want you gone, but there it is.
So, the thing is, you’ve got to – the most important thing in this is that the nature of the dialogue between government and not just the people in the sector, but outside, has got to be so much richer and deeper, and, you know, it’s all about understanding. Because if you don’t understand it in the right way, you will definitely – I mean, I talk to Politicians about this the whole time and the trouble is, the default position for a Politician is they get regulation – you know, that they get, they always get that – but what will be really difficult for them is to get the reimagining part of this. And that is where you’ve got to have the conversation with the people who are going to be, you know, owning and implementing a lot of this technology and the people who are going to be the potential beneficiaries or victims of it, and it requires – as I say, it requires a quite different sort of dialogue between the state and the citizen.
Bronwen Maddox
Martha, Tony mentioned in the middle of that the disinformation point, and you were talking right at the beginning about your own ability to create your own podcast from all your past. What do you think everyone, including companies and governments, ought to do about this? Because the capacity of these systems to mimic, to reproduce – can you even call it mimicry when it’s a new creation that is indistinguishable from the original? What should be done about this? Should we worry, or not? Do we use it, do we need both of these? How do we need to contain this?
Baroness Martha Lane Fox CBE
It’s not one thing, and I wish I had that. I mean, I don’t think I’d be sitting here quite so smugly, if I had that one answer. I’d be feeling like I needed to be executing it and doing it right now because it’s so important. It’s a myriad of different things. I think it’s across the piece in a business – as both Brad and Tony have mentioned – trying to stay relevant and on top of what you’re seeing in your business, implementing those policies and trying to see what’s coming at you.
You know, to use the example of Twitter, we spent a lot of time – regardless of your take on what Twitter is now like as a company, when I joined the board in 2016, there was a massive amount of work going on to try and manage misinformation, which was before the election at that point and so on. And, you know, that shifted in technical ability, but also what was coming at us, just in that period of time, and it shifted again dramatically, clearly, in the last year again.
So, I wish I could give you a silver bullet answer, I can’t. I think that we have – it has to be a joined-up approach between companies thinking about it, government. There will be regulation, of course, there’s some coming in our online harms bill. I’m dubious about how effective that’s going to be, it’s already legislation from the past, not for the future, and that’s exactly, kind of, the point we’re talking about.
That legislation is a brilliant example of something that starts with, you know, ‘let’s not harm children on the internet’ – fairly uncontroversial, you’d have thought we could get round that quickly – and it’s morphed into this massive piece of legislation under which it’s practically, you know, hard to get redress as a citizen and hard to deal with as a company. No one is being bad in that scenario, we wanted to do the right thing, but it’s become slightly unmanageable, and so, that’s the complexity, I think.
Bronwen Maddox
And that’s a really, really good example of how – just how hard, even with the best will in the world, legislation and legislators find it to keep up with this.
Brad, you’ve given this a great deal of thought in your writing, in Microsoft.
Brad Smith
The first thing I would say is I think this is very important, and I think it’s especially important for the UK and the US, since we both have elections in 2024. I think we should start by recognising that we need to address especially what we call foreign cyber influence operations, namely efforts by the Russians, the Chinese or the Iranians to try to disrupt public opinion and impact the sway of an election.
And we should recognise, in my view, that whereas we’ve largely been able to defeat the Russians’ traditional cyberattacks in a place like Ukraine, the Russians, without AI, are very, very good in this space. They probably spend about a billion dollars a year, they have a complete ecosystem, they pump out the information in 23 languages, and we’ve probably built what I think today is the best capability in the world to track this. AI will make them better.
Now, they don’t actually need to be able to get that much better to continue to be successful and it’s really important to remember that. So, project number one between now and 2024, let’s make it hard for them to use AI to get better, and I think we’ll see an initiative emerge quickly that fundamentally will involve watermarking and controls, so that when they’re using technology from companies to, say, create synthetic audio or video, you know, it gets watermarked, so people know that it’s been created synthetically. I think that we can use watermarking to make it harder for them to change other, say, video content using AI or other technology.
Project number two, which I really regard, in some ways, as the lynchpin for everything, is the ability to detect these kinds of activities much more rapidly. And AI should be an enormous help because we have so much data today, and what we don’t have is enough humans to go through it all, but with the power of AI, we can add to that human capacity and detect what they’re doing more quickly.
The third thing we need to do, and this, in my view, is the hardest, is then to figure out how we use the knowledge we gain, when we detect what they’re doing, to disrupt it. And that really involves two things: how do we talk about it, how do we issue warnings? Believe me, for a company like Microsoft, the hardest thing is, like, how do we use our voice to tell the public what we see? We do that with other forms of cyberattacks, but this is new terrain, and it’s not easy terrain to sort out.
And what do we do, for example, if you’re Twitter or LinkedIn or Facebook or some other platform, and we see content that has been altered with, say, the intent to deceive? I think we would benefit if governments would revise laws that in certain instances would make that an unlawful act. But we’ll have to decide, do we take it down, do we make it harder to find in a search index, do we relabel it, so that it is what we know it to be, namely altered content? And we do need to sort this out, I will say by the beginning of the year, if we are going to protect our elections in 2024.
Bronwen Maddox
Thank you for that and for reminding us that actually, these questions are coming right up at us, as elections keep doing, but the impact is really just, you know, months away, in a sense.
I want to ask you all before we go to general questions, and I know there are going to be a lot, and there’s a lot online, as well, do keep them coming; do you think we should create a new agency to regulate this? And my colleagues at Chatham House will know I’m no fan of going around creating hypothetical new agencies, but – and I can tell already, from a lot of the questions, that that is the way people are thinking. Some people talk about something like the International Atomic Energy Agency, the nuclear watchdog, for this stuff, and I would like to know where your instincts are, maybe no more than that at this point? Martha.
Baroness Martha Lane Fox CBE
I think I would say yes, I think that the capacity to change, kind of, institutional knowledge and skills at speed is a pretty big ask, from where we sit right now. And being able to inject enormous energy into joining up the civil society, academic and corporate worlds together, and quickly, is maybe most likely if we are creating a new structure. But then maybe it might not need to live forever, it could be a five-year process to then put skills back into countries and departments. Can’t be the only thing that happens, but I think it might be one of the things that are necessary.
Bronwen Maddox
Tony.
The Rt Hon Tony Blair
Yeah, I agree, and just subject to getting really good people to run them.
Bronwen Maddox
Well, that is a big line of discussion of how to – the world’s technocrats who can manage these things are in short supply. Brad.
Brad Smith
Yeah, I would – I agree. I mean, we won’t need an agency to regulate every aspect of everything, which is probably a good thing, but I do think that there are certain uses of AI, certain foundational models and certain applications in critical areas, including, say, using AI to control critical infrastructure, where we should probably want a licensing regime, and we should probably want it at a national level, and we should probably want an international agency, as well.
Bronwen Maddox
Licensing of – and I’m just looking at a question, it’s actually from Duff Mitchell here. No, sorry, it’s from someone who was asking, “Should people be licensed?” Are you talking about companies, people?
Brad Smith
I’m talking about companies, not people. Look, the only test I’ve failed in my life was when I moved here in 1989 and failed the driver’s test. I was told to feed the wheel and, as an American, I didn’t even know it was hungry. So, let’s leave the individuals alone for a while, let’s get the companies under control, and figure out what kind of system we need to protect safety.
But I do think we’re going to want to do this on an international basis as quickly as we can, because what we’re fundamentally talking about, especially if licensing goes to, say, the processes used to develop safety – imagine ten different countries each saying that the safety process in product development needs to be different. It’s just – it’s not just that it slows innovation, in my experience it creates the risk of errors when you have people tripping over each other. So I think we’re going to have real pressure for international harmonisation very quickly.
Bronwen Maddox
Okay, thank you for that, and I think your phrase, “Let’s get the companies under control,” is a striking one from the President of a leading company.
Let’s go to questions, there are going to be ah, loads and loads and loads. Okay, brilliant, I’m spoilt for choice. Right, here on the aisle and then here, I’m going to take them in pairs. Here on the aisle, back, yeah, and then here on the aisle, yeah.
Timothy Folaranmi
Thank you, sorry, so my question is…
Bronwen Maddox
Oh, would you like to say who you are? You don’t have to, but we’d love to know.
Timothy Folaranmi
Oh, my name’s Timothy Folaranmi. I’m a Lawyer at Mishcon de Reya. My question is obviously AI’s here, it’s impacting everybody’s lives, but how can we form – in terms of education, how can we help children and young people, sort of, understand AI and how it works because of the impact it’s going to have on job opportunities in certain industries? What can we do, from an educational standpoint, to ensure that they are equipped to navigate this, sort of, future AI that we are already in?
Bronwen Maddox
Thank you. Here on the aisle.
Elizabeth Seger
Hi, my name’s Elizabeth Seger from the Centre for the Governance of AI, and this goes back to a comment that was made about the Global AI Summit, specifically that out of the summit, the world needs a new paradigm for thinking about and managing AI safety. I was wondering if you could expand on that paradigm shift, what do you see as our current paradigm and ideally what paradigm should we be switching into?
Bronwen Maddox
Okay, thank you very much, and thanks for pressing on – this is a subject riddled with abstractions and metaphors, thank you for pressing on that, and I’m going to add a third one online from Dina Mufti saying, “Which jobs will be in – displaced in the UK and which jobs will be needed?” to bring us down to the core of it. Who would like to start? You don’t have to answer all of it, but I’m going to…
Baroness Martha Lane Fox CBE
Thank goodness.
Bronwen Maddox
Right, we’ve got young people and how they’re going to understand it, new paradigm, and UK jobs.
The Rt Hon Tony Blair
So I think in education, actually, AI has got a lot of applications in education itself. And I was talking a couple of days ago to someone who is speaking at the conference we’re having in a few weeks’ time, Sal Khan, who runs the Khan Academy, which does extraordinary work on creating programmes for young people. And actually, the way that we teach will also be hugely affected by this and that will be one of the ways, by the way, in which young people learn about the technology and about its possibilities.
But I think, in the end, we will probably reform the way we educate and the curriculum and everything about it. So, I think this is going to be – this is why I think it’s – when I talk about a reimagining of the state and public services, I really mean that. I think it will be – you know, we’ll have to alter everything that we do in order to take advantage of it, and in order, obviously, to – as I said earlier, to mitigate the risks.
But I think, you know, making sure that young people are then growing up with the skills they will require, the risk is that you end up with a deep divide that all the inequalities from society get deeper and this is why the public policy aspect of this is really important. But I don’t think you can engage in the public policy properly unless you actually understand it, and the only thing I’d say, ‘cause there are people better qualified, Brad, particularly on the – what we should get out of this AI summit, but I think we’re – I think we’ll be lucky to get a new paradigm out of a summit at the end of this year. What I hope we will get is an exposing of the political class to the full magnitude of this challenge, and if we get that as a first step, I think we’ll be doing pretty well.
Bronwen Maddox
So, in answer: getting young people to understand was trumped by getting Politicians to understand, you know, almost.
The Rt Hon Tony Blair
Well, I mean, the young people, a lot of them will – by the way, will take to this – I mean, as we all know who’ve got kids or grandchildren, I mean, they’ll take to it pretty fast.
Bronwen Maddox
Martha.
Baroness Martha Lane Fox CBE
Just two points. I’m really struck by how there was this huge push to put coding into the curriculum, based again on a good – probably good – genesis, that children should be well-equipped in the digital world, and how completely misplaced that was, in my opinion – completely irrelevant, absolutely irrelevant, and very quickly. And I’ll maybe say that as someone who kept a book of Java programming on her desk in her company to scare the tech team – I can’t code at all.
But I – the serious point is that so quickly the skills are not going to be the technical skills that we’re embedding in the system now, they’re going to be either intervention, and Brad is far better placed to talk about that than me, or the human skills that we’re always going to need.
The Rt Hon Tony Blair
The creative skills.
Baroness Martha Lane Fox CBE
The creative skills, the curiosity, you know, you will always be okay if you keep asking questions and work really hard, it’s, sort of, more the philosophy, I believe. So, I think it’s dangerous to try and design around any one bit of technology when the technology is moving so quickly.
Just to quickly talk about British business, ‘cause I’m President of the British Chambers of Commerce, with 100,000 members around the world: 80% of our members right now say that skills and people are their biggest concern, their biggest concern. We have a million unfilled jobs in this country and people can’t get the people they need, they can’t find them, they can’t retain them, they can’t retrain them, so it’s all across the piece.
So, arguably, you could say, maybe this is an exciting new world where we will be able to fill more jobs and we will have the capacity to do more things. But I think it’s probably naïve not to put that against a backdrop of rapidly shifting sands. And I think the thing that matters is making sure that we don’t create this two-tier economy and that we can equip businesses to understand more of this stuff, to be able to invest in it wisely, to be able to have the regulation that helps them to make the right decision. So, it’s really fundamental that we help British businesses navigate this landscape, which comes back partly, I would argue, to something we want out of this summit.
The Rt Hon Tony Blair
Yeah, and, Brad, just before you answer, could I just literally ask you this in that third question, because I think a lot of people think, okay, when you have these technology revolutions, people get displaced, but then new jobs come along, and the one equals the other. Are we in that scenario or not?
Brad Smith
We’re absolutely in that scenario, in my view, and I – when we wrote our book in 2019, we actually had this – in one of the chapters about AI, we said, “What’s a job that’s going to be eliminated?” They said, “The job of taking orders in a fast-food restaurant,” and actually you now read in the news that that is exactly what’s happening, but think about it for a minute. How little real value is added by human beings? I listen to you tell me what you want, I punch it on a keypad, you look at it, and then you give me your credit card. Not really doing that much – and it’s not the worst thing in the world. If those are the jobs that go away, let’s use people to do better things that will be more fulfilling.
But then there are jobs that exist in 2023 that literally no one had ever heard of a year ago, it’s called being a Prompt Engineer. What’s a Prompt Engineer? It’s somebody that works with one of these generative AI systems, GPT-4, ChatGPT, others, and they’re basically – they’re learning how to use this system to do whatever their employer wants it to do, and there’s a real art and science and skillset that is being developed to do that.
And just as we saw employers really increase investment in employee training in the 90s when PCs entered the workplace, we’re going to see this not just in schools and universities, it’s going to be businesses and governments. So I think we’re going to need to invest more in this kind of new skill.
If I just go to the other question, though, what’s the new paradigm? What I keep trying to think about in part is what’s the existing paradigm that might be the most helpful, the most thought-provoking, the most relevant? And, you know, Sam Altman has said – and I’ve, you know, had a number of conversations with him about this – he suggested the International Atomic Energy Agency, and I think that’s interesting, but I’m sort of myself coming to the International Civil Aviation Organization. It’s been around since 1944, it’s a UN agency headquartered in Montreal, but it goes back to the fundamental safety process used to create aircraft, and we have complete interoperability with 192 countries, so you can get on a plane and go from one country to another.
But I think that the question that you asked is ripe for a lot of good thought, including by academics and researchers and social science research and the like. But let’s not assume that this is so different that we can’t learn from some things that we already have.
Bronwen Maddox
Okay, thank you for that. Let’s take some more. I’m going to go – let’s go here on the aisle and then here in the front row.
Rodrigo Rodriguez Fernandez
Yeah, hi, Rodrigo Rodriguez Fernandez, International SOS. Given the UK’s powerhouse status in research and academia, how would you see the UK leveraging this so that, rather than creating, let’s say, a new Silicon Valley, we create a new Silicon Bridge between low- and middle-income countries and high-income countries, especially in things like global health equity?
Bronwen Maddox
Thank you very much, and I’m just noting from Jim Scott online, “I’m not hearing anything regarding non-Western countries, are we just going to hope they will not develop AI?” So broaden it that way and here.
Emma Ross
Thank you. Emma Ross from the Global Health Programme at Chatham House. Given what’s at stake, I just wondered what your feeling is for the appetite for how interventionist the governance system can be – as far as getting ready for what’s coming and mitigating it, or something more directional. Is there an appetite to take charge of it a bit more on the front end? Where are people thinking, or where are you thinking, on how interventionist the governance system needs to be for this? Is this a special case or should this be, you know, the norm?
Bronwen Maddox
Thank you very much indeed for that. Okay, UK powerhouse, non-Western countries, how interventionist? Anyone? Martha.
Baroness Martha Lane Fox CBE
Yes, I agree with you about the UK powerhouse. It makes me scream at my radio or scream at my iPad or whatever I’m listening to when I hear Politicians say, “We need to be the next Silicon Valley.” I don’t want to be the next Silicon Valley in this country, we can bring something more and better to it. We can wreak less havoc on the world and think about it more consequentially. So I agree with you; it links, for me, directly to your point about the bridge between other countries and how we can show and develop more collaborative, inclusive design.
And one of the things I feel it would be a dereliction of duty if I didn’t mention, sitting here, is just about bringing more diverse voices, full stop, into this landscape. I don’t just mean at the summit and bringing academia and the groups of vulnerable people, I mean actually diverse voices. Because if you look at who the creators of this stuff are and where the power is already going very, very fast, it is not in the places that I think will unlock the most benefit for the maximum number of humans, and that matters.
So I think we need more younger people, more people from different backgrounds, more people from different countries, more people from different genders, and that is so fundamental if we’re going to get this right, and I think the UK can showcase some of that, build some of it. And kind of to my point earlier, one of the things I think should come out of the summit is some actual development of the UK’s good examples of how things can change and shift to show what’s possible.
Bronwen Maddox
That’s great, thank you. Brad, do you want to – interventionist, UK powerhouse, non-Western countries.
Brad Smith
First of all, I think it’s very important to be proactive. Second, I think it’s good to build on strengths. There is no country that is better positioned than the UK, but we’re all, sort of, starting this race. And if I were to be candid, a lot of what is said in this room today, I’ve heard said in 17 other countries, as well. Everyone is “that powerhouse.”
So, how do you really make yourself successful? And I think it’s – you start to do what you suggested, what are our strengths, and in particular what are the strengths where we can use AI to make them even stronger? And, you know, from a Microsoft perspective, having been in this country for 40 years, as an individual who spent four years living here, the United Kingdom has an extraordinary strength in science. You know, in biology, in, you know, chemistry, you know, in physics, you know, in meteorology, all of these enormous things, and it’s true in the universities and it’s true in the companies, and every one of these fields is going to be transformed by AI. The field of scientific computing is just going to revolutionise science, I would argue. Make sure you’re at the forefront of that, and that the government is actually stimulating and providing funding to get off to an early start and make it go faster.
And then, on the other side, I do think that it is good to lean in on safety and to be proactively interventionist, of course in a balanced way – you don’t want to stifle innovation. But if there’s one thing I hear around the world, and I completely actually agree with it, people say, “Let’s not make the same mistake with AI that we did with social media,” and then you, sort of, have a conversation, “Let’s talk about the mistake we made.”
I think one of the mistakes we made, and I think we should just say we all made it, we all got too exuberant. In the wake of the Arab Spring, we thought that social media would become the saviour of democracy, and instead, in five years, we found that it was a weapon targeting democracy. Let’s not go into this era with just unbridled exuberance, let’s identify the problems and start to manage them.
And then I’ll just say one last thing. If there’s one thing I’ve found over the years, I’ve been at Microsoft for 30 years now, and have been involved in so many negotiations with governments and companies around the world, every time we got something done, a new regulatory agreement or something else, we would sit down at the end and, you know, sort of on both sides, you look at each other and you’d say, “Yeah, what is it that we’re going to most regret? What’s the mistake we made? What is it that’s going to go wrong?” Five years later, we were always wrong. Something usually goes wrong, actually, that’s the way life works, but it is so hard to predict it in a fast-moving technology field that one needs agility and humility to keep adapting.
Bronwen Maddox
Thank you for that. Tony, do you have some other…?
The Rt Hon Tony Blair
Yeah, just – I’ll come to Britain as a global power on this, but on how government intervenes, I think this will be completely different. Because I think government’s first task will be to get the right skills into government, to be able to understand this in order to be able to do anything with it that is meaningful.
And secondly, the biggest problem, if we’re not careful, is that government tries to change everything without changing itself. And there’s no doubt, in my mind, that this is going to, and should, change the whole way the state operates, so I think that’s what I’d say. So I think this is really quite different, because of the nature of the revolution.
I think in respect of Britain, yeah, I agree with what Brad was saying completely about how Britain should position itself, and what we tried to set out – William Hague and myself – in the paper: the group of things that we need to do in order to give us the best opportunity, because we do have real strengths. We probably, after America and China, are, you know, number three, so that’s pretty good.
But he is 100% right about everyone’s on this, you know, you go to Paris, Macron’s talking about it, you go to Germany, Scholz is talking about it. Now, you go to Africa, they’re starting to talk about it, and here’s what’s going to be interesting. I think there are things that we can do, as this technology develops, where we can have relationships with countries where we’re helping them make their changes and reforms using this technology, and I think that could be very significant.
So, one of the things – I mean, we work now in almost 40 different countries round the world, most of them developing, and with the teams of people that are there on the ground, I’m constantly saying to the leaders, “Don’t try and repeat the legacy systems of the West in health and education, or even in basic things like your interaction with a citizen. You can do it completely differently through technology.” That’s why we try to get them to adopt proper data infrastructure, you know, move their data into the Cloud, have it so that they can – it can help them, for example, predict healthcare issues and so on, in a much, much better way.
But the other thing that’s going to be interesting is that because a lot of this will be open to people, a lot of countries, if they create – if they watch carefully this regulatory debate and create the right regulatory framework within their countries, there’s no reason why they shouldn’t also become players in this space. Because, in the end – for example, if you take the way the – I think the pharmaceutical industry is one of those industries that’s going to be hugely disrupted as a result of this, and the clinical research organisations I think even more so.
But I’m looking at some of the countries we’re working with today who are going to change completely the way that they collect the data, in respect of their citizens, the way they use it. Some of them are now going to the pharmaceutical companies for the first time and saying, “We can provide you a much better way of doing trials.” You know, you – there’s no point in doing trials in a whole lot of Western countries. Actually, Britain is a good place to do trials because of the ethnic diversity of our population, but, you know, some of these countries themselves are going to sit down and work out how they can play a role in this and then – and we should be helping them be partners in that.
So it requires an understanding that, I think – you know, you’re either going to get to grips with this or you’re going to get left behind, and one of the things we do in the Institute, I say to people today, “The difference between countries that succeed and fail today, ‘cause everything is in the end pretty open and mobile, the thing that’s not mobile is your government. Right, if your government’s useless, not much chance. Right, if your government takes some good decisions, you can put yourself on a track for the future,” and I think that will be a really interesting part of all this. And by the way, at the G20 this year, you’ll find what India decides to centre on is what its technology can do for the Global South. So, it’s a big new world geopolitically, too.
Bronwen Maddox
That is a hugely important last point. We are going to have to end there. I’m really sorry, there’s a forest of hands up, there are some terrific questions online. Let me just say online, there’s been a whole cluster around threats, which we’ve dealt with as much as we can in this short space of time. There was an interesting lot on “Who guards the guards?” Phillip Tomsley, thank you for your beautifully-phrased question on that. We’ve dealt with that briefly with the agency, but much, much more could be discussed on that.
And there is a final strand on, is this really an existential threat to humanity, going back to the first question I asked the trio about how big is that? And we’ve answered that, I think, as best we can in the time, so thank you all enormously for coming and thank the three of you very much, indeed, for setting out your ideas [applause].