John Thornhill
[Pause] Well, good evening and welcome to this discussion about “Who Gains from Artificial Intelligence.” Welcome to those in the room and also online. My name is John Thornhill. I’m the Innovation Editor at the Financial Times. I’m also a Tech Columnist and write a weekly column about tech, and for the past seven years or so, I’ve been writing about, and obsessing about, artificial intelligence, and now it seems the entire world is writing about and obsessing about artificial intelligence. It’s a great pleasure to moderate this discussion tonight.
And to introduce the speakers, on my immediate left, we have Julie Brill, who is the Corporate Vice President for Global Privacy and Regulatory Affairs and the Chief Privacy Officer at Microsoft. Julie was previously a Commissioner of the US Federal Trade Commission in the Obama administration, where she worked on all kinds of issues, including privacy, fair advertising practices, financial fraud and competition. In the middle we have Stephen Almond, who is the Director for Technology, Innovation and Enterprise at the Information Commissioner’s Office, here in London, and he has had a previous career working at the World Economic Forum and various government departments with non-existent acronyms, like B-E-I-S, BEIS. And then on the far left, we have Carly Kind, who is the Director of the Ada Lovelace Institute, which is a very fine institute focusing on the benefits and the dangers of artificial intelligence, and as a declaration of interest, I’m also a board member of Ada, so I think it’s a very fine institution.
So, we’re going to start with you, Julie. The discussion is very much focused on the risks and opportunities of AI, but I wondered if we could start by talking a bit about the opportunities and, kind of, reverse the flow of this, because I think everyone focuses and latches on to all of the problems that we have with AI, but I wondered if you could just lay out for us what the opportunities are, as well.
Julie Brill
Oh, absolutely, thank you. Thanks so much, John, and thanks so much to Chatham House for organising this great conversation. I’m really excited to be on a panel with these incredibly esteemed people. So, listen, when it comes to artificial intelligence, we have actually had, and have been using, artificial intelligence for quite some time. It’s everything from spellcheck to, you know, all sorts of other minor adjustments and enhancements to tools that you’ve been using for five to ten years. But what has happened over the past several years is that AI systems and AI technologies have become much more advanced and, arguably, much more disruptive, both in very positive ways, as well as in some ways that present challenges.
So, some of the AI systems that we’ve been thinking about as the most advanced, and, again, engaged in creative destruction, would be self-driving cars, facial recognition and natural language processing, which actually leads us to what is probably the AI system most talked about, certainly over the last few weeks, and that would be large language models. Large language models are a really interesting technology which, frankly, many, many people in the technology sector believe will profoundly change the way that we do work and the way we interact with each other, and really lead to incredible efficiencies across lots of different systems. We will talk about the challenges, ‘cause there definitely are challenges, as well, but just thinking about some of the opportunities.
I’d like to start with creative enablement. Through products and platforms like DALL-E, and various other visual as well as text platforms, you can ask for certain pictures, pictures that you want to see, pictures that you want to put in different presentations, or that you want to use in various other contexts. You can pull together introductions. You could have asked a chatbot, “I have to introduce Julie Brill, tell me a little bit about her,” and what it will do is pull together public information from all sorts of sources, just the way search works, and create – and I’ve done this before in various other contexts – a really good first draft of an introduction of someone that you don’t know and that you want to learn about.
But then there are much more impactful types of uses for large language models. Think of them as platforms upon which other apps are built, other types of products and services are going to be built, so time-saving applications. If you don’t want to write a first draft of a report that you have to hand in, you want to have the data put in and have some help in creating a first draft, this is a great tool. Some writers have described these large language models, with the apps built on top, as like having an assistant who is in their first or second year of college. They’re not going to produce something that you’d necessarily want to give to your boss or submit to the government, but what they will do is provide you with a guide that you will need to explore, that you will need to make sure is correct, that you will want to interact with, but it’s a way of coalescing a whole bunch of data in one place.
I’ll give a couple of other really interesting, and I think great, opportunities for this kind of technology. If you think about medicine, just as one of many areas that will be touched by large language models, think about a Doctor who has a patient with odd symptoms and wants to understand what are the possible diagnoses for a patient who has these odd symptoms. You put the odd symptoms and/or a more complete description, into an application and then an answer will come up. Now the Doctor would obviously not base their diagnosis just on that answer, but what I have heard, and what we have seen with respect to one medical professional that Microsoft has partnered with, is that actually when he put in some symptoms of a disease which he believed 99.5% of Doctors would misdiagnose, he put it into a large language model that we have provided and got what he believed was the right answer.
Now, again, you need to check it, you need to verify it, but think about, just in the field of medicine, what Doctors’ visits will be like if the Doctor has an app that can just translate what is happening. The Doctor doesn’t have to take a whole lot of notes. The Doctor is actually freed up so she can work with the patient, right? Or think about all of the other kinds of applications: summaries of meetings, summaries of symptoms, understanding quickly what may be happening with that individual patient. That’s just one of many different areas.
I’ll just mention two others, if I may, ‘cause I know that we wanted these to be short. Think about learning and education. Now, clearly, there’s been discussion about how Teachers would deal with applications that will give their students immediate answers to things that they might want to submit. Remarkably, what we are also seeing is apps being designed to identify plagiarism, or to identify when someone is pulling their answer directly from a large language model.
But think about this, think about a young woman at home, maybe not in the world’s greatest situation, and she has just started menstruating and she doesn’t know what it means, and she doesn’t even know what’s going to happen to her. You can type in, you know, “I think I just started menstruating. What does this mean?” I’ve actually tried this and saw what the response was – I actually knew the answer – but think about a child who doesn’t necessarily have great resources in terms of their home environment being able to ask questions and get, you know, an adult and calm answer about what is happening to her body and what she can expect in the coming years. Incredible, incredible opportunities.
And then, finally, I’ll just say, in addition to learning, there’s actually a way in which large language models are being married up, as I said, with other platforms. We happen to have GitHub, and now we have a product called Copilot, which is a large language model built into GitHub, which allows Developers, in a very easy way, to say what they need to do, say what kind of programme they want to develop, and Copilot will do the coding for them and show them what the code should look like.
This has accelerated the ability of Developers to be efficient and, frankly, to get things right where they might previously have made mistakes, had bugs or had cybersecurity issues. Now they’re able, through this open-source platform that GitHub is, to create the kinds of tools that we’re all going to be relying on in the future. So, just some incredible opportunities. I’ll pause, let others talk, and then I’ll be happy to talk about challenges, ‘cause there are challenges.
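To make that workflow concrete, here is a minimal, hypothetical sketch of the prompt-to-completion pattern described above – illustrative only, not actual Copilot output: the developer states the task in a comment, and the assistant proposes a draft function that the developer still has to review and test.

```python
# Developer's prompt, written as a comment describing the task:
# "Parse an ISO-8601 date string and return how many whole days ago it was."

from datetime import date, datetime


def days_since(date_string: str) -> int:
    """Return the number of whole days between date_string and today."""
    # A Copilot-style assistant might propose a completion like this;
    # the developer remains responsible for checking edge cases and bugs.
    then = datetime.fromisoformat(date_string).date()
    return (date.today() - then).days


print(days_since("2023-01-01"))
```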
John Thornhill
Well, just one thing, if I could pick up on it: I’d like to talk about the issue of over-trust, that there’s…
Julie Brill
Hmmm.
John Thornhill
…a, kind of, automation bias.
Julie Brill
Yes.
John Thornhill
We tend to believe the computer comes up with the right answer.
Julie Brill
Yes.
John Thornhill
Some people call this, kind of, “Death by GPS syndrome,” where people follow their GPS systems and they drive into the middle of Death Valley and have problems. That is a real problem…
Julie Brill
Yes.
John Thornhill
…with foundation models, or large language models, isn’t it? Because a lot of technology is deterministic: you tap a calculation into a calculator and you will always come out with the right answer. These large language models are probabilistic, therefore they cannot be wholly trusted, and so, for someone who’s seeking medical advice…
Julie Brill
Yes.
John Thornhill
…using Bing or ChatGPT…
Julie Brill
Yes.
John Thornhill
…at the moment, that is a really very bad idea, isn’t it?
Julie Brill
Yes, right, and I would not use that as an example, but what it can do, just as you can do on normal search now, is you can say, “Gee, my – you know, my arm is itching, what’s wrong?” and you’ll get answers. The really remarkable thing about Bing, the new Bing, which builds on our search protocols and our search systems, it now adds this chat function, which does create a very powerful response, because you can then query and have a conversation about that answer, about what’s happening to your arm. But the most important thing that I think really does help with that vulnerability of humans to just believe everything that pops up, the answers are grounded in terms of citations about where the information comes from.
So, what I do when I get a response: you can click on these citations and you can determine, well, that’s an article about an interesting issue, but it actually didn’t answer the question that I asked. And then you can say back to the chat, “Well, I see that you cited” – say, something from the ICO – “and actually it doesn’t talk about that issue, so can you do it again and find me some different pieces of information?” So, it really does require that kind of interrogating, and you have that opportunity when chat is built on top of search, making them both even more powerful. But I will say, you know, it’s also a learning system, so if you respond back with a thumbs up, “Hey, you got it right,” or a thumbs down, that then gets processed to help make the information better, as well.
John Thornhill
Okay, fantastic. We’ll come back to a lot of those issues, I’m sure. Stephen, though, I’d like to come on to you. With your regulatory hat on, what do you see as the risks and opportunities of AI?
Stephen Almond
Thank you. Well, I mean, look, I mean, from health wearables to new forms of mobility, assisted education, new forms of content, AI is making our lives more easy, more efficient and more fun. I confess, I definitely spent part of last night experimenting a little bit with how a very well-known large language model might give my opening remarks for the evening. I also asked it about its privacy policy, but I won’t comment on what it said in reply to that.
Above all, AI is here and that means that we need to have a serious conversation about how it interacts with individual rights. Now, just to lower the tone for a moment, last year, the ICO fined Clearview AI £7.5 million for using images of people in the UK, collected from the web and from social media, to create a global online database to train and develop facial recognition technology that it then sold to the Police and other parties. That processing wasn’t fair or transparent; people weren’t made aware and they didn’t really have any expectation that their personal data from their social media would be used to train AI models in that sort of way. In addition to the fine that we levied, we ordered the firm to delete the data of UK residents from its systems, and our counterparts in Australia, with whom we worked in this area, ordered them to do the same.
And it’s not just us that are leaning in. In the USA, Julie’s alma mater, the Federal Trade Commission, went one step further relatively recently and, in the case of the firm formerly known as WeightWatchers, required the firm to destroy the actual algorithms that it had developed with ill-gotten children’s health data, as well as deleting the data and paying a monetary penalty.
So, quite often I come to events like this and we have a very good and quite worthy debate about how regulation and law need to adapt and, you know, what the future gaps are that are coming up, and I confess I struggle a little bit with that sort of conversation. I mean, yes, of course AI is going to throw up novel challenges for regulation. The current debate that we’re having around the interplay between generative AI and intellectual property rights, for example, is a case in point. But for the most part, actually, we don’t really regulate technology. We regulate its applications, and so, actually, many of our regulatory frameworks should, in principle, remain sound.
So, take data protection law, for example, framed around a set of principles that are supposed to be adapted to new technological contexts, whether that’s avoiding excessive collection of people’s personal data, or making sure that the data’s processed lawfully, fairly and transparently. The technology will evolve – all of these fantastic, exciting developments that Julie was speaking to – but the principles of how you process people’s personal data will remain. And so, the conversation that I want to have is perhaps not just about how law and regulation will need to evolve in response to AI – and yes, it will – but also about how AI will need to evolve in response to regulation.
At the ICO, you know, we are definitely doing our best to try and make sure that it’s easy for AI Developers to comply, whether that’s providing them with comprehensive guidance, or making sure that where they’ve got questions about how the law might apply to their products, they’re able to work with us through things like our regulatory sandbox or our innovation advice service. And we’re partnering with the other regulators who’ve got a stake in this. Particularly in the UK, we have the Digital Regulation Co-operation Forum, where we work with our partners – Ofcom, the Competition and Markets Authority, the Financial Conduct Authority – to make sure that for businesses, for AI Developers, it’s easy to comply with our rules, and that there are clear lines. And right now, we’re in the process of piloting a multiagency advice service through which AI Developers can get joined-up advice from us on the implications of their ideas.
But it means that, as we’re doing that, we need to start talking about how we get the right levels of accountability for those who are developing new algorithms, new ways of exploring artificial intelligence, and for thinking through some of the opportunities and the challenges upfront. So, as I was saying, the conversation that I’m really keen we start to explore is this one of: how do we make sure that, as well as regulation responding to developments in AI, we get the right response from AI to regulation?
John Thornhill
Okay, thank you very much, Stephen. That’s a very clear explanation of where you stand on this. Carly, can I come to you? Tell us, what do you think the risks and opportunities of AI are?
Carly Kind
Thanks, John. I feel like I’m being set up to be the, kind of, doom-monger here, but I will step into that role if I’m asked. No, in all seriousness, the Ada Lovelace Institute thinks about the, kind of, societal implications of new technologies, and we are interested in ensuring that AI and data work for people and society and that the benefits are equitably distributed.
What we know about these new technologies is that they are changing and they are going to change power structures entirely, and they could change those for the benefit of people, or they could change them for the detriment of people. They could increase corporate power or state power, or they could democratise power in different ways. The outcome is not inevitable, and we have a chance to shape it, and that’s why we’re here and I think that’s why we’re having this discussion.
So, listening to Julie, I was listening to the, kind of, description of the various benefits of these new technologies, and I mean, for the sake of conversation, we’re inevitably going to focus on ChatGPT and Bing, but I think it is a mistake to get distracted by the, you know, latest and greatest innovation. This is a new era of, kind of, consumer AI that is penetrating all of our lives, and Microsoft has put its head above the parapet for this first iteration, but there will be others and we will be talking about other integrations very soon.
But when Julie was talking, I was thinking, you know, all of those things sound like what’s happening in a really interesting research lab. We’ve got this tool, it does these really cool things, but it also has these downsides: for example, you know, it’s wrong a lot of the time; it produces a, kind of, mediocre, first-year-university assistant; it has a huge climate impact – we’ve got to work on that – and, you know, here are some of the downsides, here are some of the benefits.
So, I see that description fitting really well for something that’s happening in a research lab, but actually, we’re talking about something that is now completely publicly available and rolled out in society. And I think that is where we need to focus the discussion, which is: what is the right process for guiding these technologies from research to deployment, and at what stage do people get a say in that and get to shape it, do regulators get a say, do companies take on responsibilities?
And so, I think one of the really interesting tensions at the heart of AI is this, kind of, permanent beta mode of rolling out these new technologies to, essentially, experiment on people and see how people use them, and to get that feedback mechanism in – that being the justification, you know, offered by OpenAI and others for why it’s important for the tech development. But is it good for society, is it safe, is it the way we want to see these new innovations rolled out? I think that is a really interesting set of discussions that we should come back to.
So, I mean, you will all have read the stories about, you know, ChatGPT trying to convince The New York Times Journalist to leave his wife and all of these, kind of, instances of, you know, abuse and manipulation, and others, but I’m not sure that focusing on exactly the failings of this technology is useful. I think what would be more interesting is for us to think about what the right process is to shape this. I mean, Stephen said AI is here, and I think that’s right, but I think we need to be careful not to be too nihilistic about our ability to shape that and to ask certain things of it.
If this is a technology that’s being built off, you know, the public good – essentially, data generated by all of us is building the infrastructure of this technology, in addition to, of course, a lot of investment by tech companies – you know, how should it serve the public good? These technologies can be deployed in a range of different ways. If it does have a climate impact, and it does, how should we think about that? We can’t just say, you know, carbon emissions are here, let’s deal with it. We have an opportunity to shape that.
What I think is holding that back at the moment is there’s a real challenge of putting a narrative around these tools. You know, in some conversations, it’s like a chatbot, you know, “What is this exciting new chatbot?” It’s like a fidget spinner, “What can I do with this thing in my spare time?” And then in other conversations, it’s this huge, kind of, conversation about LLMs and tech – massive models and how we’re going to, kind of, regulate those, and that is impenetrable for many people, including those in policy. How should we think about what this technology is, how should we think about this era?
One way of thinking about it is as a new era of the internet, in a way, in which we have a lot more intermediation by technology, and historically, that hasn’t necessarily been a good thing on the internet. It’s been something that’s given rise to lots of the challenges around disinformation, misinformation, online harms, etc. If we increase intermediation, so instead of going to Wikipedia to read about an issue, we just ask an intermediary to summarise Wikipedia and tell us about it, you know, how might that actually increase some of the challenges we’ve seen around the internet already? How might it concentrate power in new intermediaries and new actors, and is that going to be good for people or not?
So, I think now’s a good time to ask: what do we actually want out of the internet? What do we want out of the tools we use every day? And then where do we, kind of, deploy these massive efforts? And, you know, remembering that we have a say in that, as people, I think is really important. And when we talk to people at the Ada Lovelace Institute, which is what we do a lot, we do hear that lack of agency, that lack of power, this feeling that they’re, kind of, subject to technological change without being able to shape it. So, I would like to see a, kind of, shift in that regard.
John Thornhill
You have made a superb presentation about the risks involved. Can you say something about the opportunities, as well? How do you personally think that civil society and people generally can benefit from this amazing technology?
Carly Kind
So we definitely see people very keen to see new technologies integrated, particularly into services that are – you know, they interact with in day-to-day life. So, Julie mentioned the healthcare system, for example, and we saw it around COVID, you know, people were very keen to say how can AI or data-driven technologies improve the COVID response? We hear it less in education, I have to say. I mean, the research we’ve done so far says that people aren’t that interested in new technologies in certain education contexts, but we can come back to that.
So, I think people would like to see the efficiencies of AI realised, particularly in services like frontline services, health and, you know, the things that really matter to them in their day-to-day life. We don’t hear many people saying, “Oh, I wish I had another way to waste my time on the internet,” I have to say, but we also see real hesitancy. So, people say, “Yes, we want AI in various aspects of our life, provided it is, kind of, verifiably accurate.” I think there’s a real concern around the accuracy of these tools.
Some of that is shaped by the fact that some of the first public instantiations of AI for people have been facial recognition, and so we saw over the last few years facial recognition rolled out and then a huge range of stories around the inaccuracy of image recognition models, and that is getting better over time. But that has, I think, instilled in the public a concern around the accuracy of these tools, and the desire to see some kind of independent verification of the accuracy of these tools comes through very strongly. And then another point related to that is the, kind of, equitableness of how they operate. So, people are very aware that these tools work better for some people than others. I think that has very much penetrated public discourse, and, you know, these questions around bias and racism in large language models or other types of systems are prevalent. So, there’s this question around, like, does it work for everyone?
So, I don’t know if I quite answered your question, John, in terms of the benefits, but when we speak to people about this, I don’t think it’s fair to equate hesitation with, like, Neo-Luddism, that people don’t want technologies in their life at all. I think they do, but they want to make sure that they work well for everyone and that they’re, kind of, independently verified and accurate.
John Thornhill
Okay, thank you for that. I mean, I’d like to pick up on some of those issues that you’re talking about, in particular with Microsoft, and how and when we decide to release some of these models out into the public. Because at Microsoft, you had the experience a few years ago with the Tay bot, which wasn’t up long before it started spouting some rather unpleasant language. Meta had the same issue with Galactica, its large language model that it put out and then had to shut down very quickly. Carly, you’ve referred to the issue with, kind of, Kevin Roose at The New York Times, who was experimenting with Bing…
Carly Kind
Right.
John Thornhill
…and had some very hair-raising exchanges with it. And I see from the questions that are already coming in from the audience online, they’re also interested in that. So, can you tell us, how do you decide at what stage you can release this technology into the wild, as it were, and what are the, kind of, guardrails that you’re going to put in place?
Julie Brill
Absolutely, and it’s a really important question, because for companies that are developing these kinds of foundational models – models like GPT that are going to be used for building other products and services, and that really have a tremendous amount of power – we definitely take that responsibility very seriously. We have very strong internal governance systems, and those governance systems, of course, are then, you know, also supported by external regulations. There are some standards under development, but honestly, in terms of evaluating AI systems for things like safety and fairness and transparency and accountability, those are things that we have developed in terms of our own principles and our own standards.
So, we’ve actually – when we first started down this road, we had an initial standard that we created. We then revised it and sent it out publicly. There was a New York Times article about it. What we really wanted was to get feedback, feedback from the public, feedback from organisations, like Carly’s organisation and any others that wanted to comment, as well as feedback from regulators. And what this standard does is it requires internal governance, internal action, internal demonstration of compliance with the principles that we have, and then an auditing cycle, and it, sort of, goes around, rinse and repeat.
And this is a very robust process that we have, within the past year or two, built up tremendously. So, for instance, we do require impact assessments: if we’re dealing with an app, or if we’re dealing with a foundational model that apps are built upon, we will do internal impact assessments. And we also create transparency notes, which are available for the public to see. We have a transparency note that is available with respect to the new Bing. So, this is the type of work that our Engineering Teams have to align with, that our Office of Responsible AI oversees, and that our Executive Team is ultimately responsible to the board for, these kinds of internal governance structures, so it’s really quite robust.
Now, we actually had been working with GPT and getting it into Bing, to marry up search and chat, for quite some time, but when we launched this, back several weeks ago, it was not launched into the wild, just to be really clear. It was given to a very limited number of Technologists, external Technologists and external Reporters, who were looking at it and experimenting with it, and, may I say, challenging it. And that was exactly what we wanted.
Not unlike what you would do in a regulatory sandbox, and not unlike what one does after one internally assesses safety and appropriateness and benefit with a pharmaceutical, when you’re working with the Food and Drug Administration in the United States or another agency here in Europe. Often, you know, people assess the product, they say, “This looks good and it has huge benefit, and now we will launch it in a controlled way,” or, at phase IV with the FDA, it’s just launched to the public, and then you learn, you get feedback, you understand: are there risks, are there things that happen?
So, in this limited launch – again, not launched out into the wild – a New York Times Reporter, Kevin Roose, spent two hours experimenting. And through his questioning, and I do invite everyone, as we did, to study the actual two-hour transcript of what he developed with the tool, what he was able to do was separate the search function from the chat function, because he was asking about things like your shadow self, which is this theory, and the chat started answering as this shadow self. And it, sort of, became a replay of, as some observers have noted, kind of, a Black Mirror episode: what it would be like for a chatbot to, sort of, have an alternative self, and what she or it would then start to talk about. But through that separation, the chat just had an inability to come back to grounding, which is deeply important for what we do.
So, look, the entire company deeply appreciated what Kevin Roose found and what he publicised, and we made changes. We made some significant changes to the length of chat that’s allowed – it is now greatly reduced, and after about six queries, the interrogator has to start over again. That’s the type of mitigation. We put many mitigations, many metaprompts, into this system ahead of time, to deal with things like misinformation, to deal with things like bias and inappropriate racial discussions and things like that. Did we think about the shadow self and what might happen if you started talking about a Black Mirror episode? We missed that one.
John Thornhill
It reminds…
Julie Brill
The point – so, sorry, the…
John Thornhill
Yes.
Julie Brill
…point was we learned, and that’s what is most important: we do the best we can and then we talk to people. We talk to civil society, we talk at meetings like this, and we learn and we adapt. That’s one of the most important things that you can do when you’re dealing with this kind of technology.
John Thornhill
Let’s get to one of Carly’s earlier points: should it just be the technology companies themselves that are taking on the responsibility for, as Carly was saying, being the interface between us and technology? If this is the new computer interface, why is it only the technology companies that are determining that relationship?
Julie Brill
We have long called for regulation of AI. We called for the regulation of facial recognition back about four or five years ago, when it first started to become prominent. We think that it’s very important for companies, large tech companies or deployers of these systems, to work closely with governments, to work closely with regulators, to ensure that we are doing the right thing and, frankly, that there are appropriate guardrails in place so that we know what we’re supposed to be doing. And we can build in, whether it’s privacy by design, data minimisation – the types of guardrails that we talk about in the privacy context – or additional guardrails we start to develop specifically for AI.
Now, I will say that we’ve got some ideas about the way in which some of those regulations should be shaped in order to balance innovation and all of these benefits with helping companies understand what they need to do and how they need to do it, but we want to engage in a conversation about that. You know, again, we don’t have all the answers. We’ve got some ideas and we want to work with folks on developing them, but the important thing that I was trying to raise is that, until governments do create those systems and approaches, we’re not going to say we get to do whatever we want. We’re going to have an internal governance system that we think does a pretty good job, and we’re going to continue to do that – we have to do that.
John Thornhill
Alright, I want to open this up to the audience very soon, both online and in the room, but just before I do that, one final question to Stephen. To generalise heroically, which is what I do for a living, in the US, there is a, kind of, belief in, kind of, permissionless innovation, as it were, that things should be tested, they should be tried out and then they should be modified once we’ve got the benefit of experience. In Europe, the precautionary principle tends to apply more, as we’re seeing now with the EU AI Act. Where does Britain stand between these two stools?
Stephen Almond
I don’t – I think you’ve probably answered the question by saying “between these two stools.” Look, I mean, take privacy, for example. So, I mean, in the US, you do not have a federal privacy law that would provide the same degree of protections as you find in Europe or in the UK for how your data is handled. So actually, when you’re thinking in a US context about the levels of protections that would exist for people in terms of how their personal data is used, it’s just a very different thing.
But, actually, then take the conversation around regulation of AI itself, and you see in Europe the development of the EU’s AI Act, where, as you sort of allude to, there is a real emphasis on a slightly more precautionary approach, an approach which is more focused on actually identifying use cases and saying, “Look, there is a bar to these sorts of use cases” – so, you know, let’s take, sort of, live facial recognition technology. And actually, in the UK context, what you find is, you know, it’s the Goldilocks position, right? It’s actually saying, right, we need to take a risk-based approach here and we need to assess what are the use cases that present risks – for example, in the privacy context, to the rights and freedoms of individuals – and make sure that those risks are really mitigated. But, actually, if you can prove that those risks are being actively mitigated, and that compliance is upheld, then actually, you know, sometimes there’s a case.
And so, for example, in relation to live facial recognition technology, the Information Commissioner’s Office has not said, you know, there should be an outright ban on that technology. Actually, it has said there’s a high bar for when that sort of technology can be used. There are conditions that need to be met and there are safeguards that need to be in place, but hopefully, actually, it’s going to be by that sort of more risk-based approach that we get the right approach to regulation in the UK.
John Thornhill
Okay. I’m going to open this up to questions. So, could you wait for the microphone to come round to you and please state your name and where you come from? And if you could keep your question short, please.
Member
Yeah, [inaudible – 43:48], Chatham House member. My question is, when will AI have consciousness, when will it understand that – do we need to wait for quantum computing or is it possible with our processors? Thanks.
John Thornhill
Carly, do you have any views on the consciousness element and how AI…?
Carly Kind
This is definitely directed at Julie, I think, that one.
Julie Brill
Indeed, and I know, I saw that, but look, I think that the chat is very sophisticated and, I think, fun, but, you know, others might say, “Boy, this feels different.” I don’t think it approaches consciousness right now. It’s a large language model that is trained on how people talk and, therefore, it is predictive of the next word in a sentence. It’s like when you’re typing in Word and it says, “We think this is going to be your next word,” and about, you know, 80% of the time, or less, that’s right, but then sometimes it’s wrong. I really don’t think we’re in a world of consciousness anytime soon.
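As a purely illustrative aside on what “predictive of the next word” means, here is a toy sketch (the vocabulary and probabilities below are invented, not taken from any real model): an autocomplete of this kind samples the next word from a probability distribution conditioned on the preceding words, which is why its output is usually plausible but not guaranteed correct.

```python
import random

# Toy next-word model: for a two-word context, an invented probability
# distribution over candidate next words (illustrative numbers only).
NEXT_WORD = {
    ("I", "think"): {"this": 0.5, "that": 0.3, "so": 0.2},
    ("think", "this"): {"is": 0.7, "will": 0.2, "was": 0.1},
}


def predict_next(context):
    """Sample the next word from the distribution for this context."""
    dist = NEXT_WORD[context]
    words, weights = zip(*dist.items())
    return random.choices(words, weights=weights, k=1)[0]


# Usually returns the most likely word, but sometimes a less likely one --
# the prediction is probabilistic, not deterministic.
print(predict_next(("I", "think")))
```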
John Thornhill
Okay, now we have another question down here.
John Warren
Thank you. John Warren, Physician. If you were to type “medical information” in and it comes up with a diagnosis, that becomes a medical device under device regulation, and if it gets it wrong, presumably, it could be sued for negligence. And a more interesting question than menstruation, if the young lady typed in “Should I have an abortion?” what would you say?
Julie Brill
Okay, I’m super happy to answer all these questions, but actually, you can go find that out for yourself. You could go into Bing and ask that question. I don’t know what it would say, but I think, in situations like that which I have tested myself, it doesn’t answer deeply sensitive questions like the one that you posed, and it tries to deflect away from them, honestly. That’s what I have seen, and that’s the kind of metaprompt that will be in the system, to try to screen away from highly controversial topics.
John Thornhill
We also…
Julie Brill
The…
John Thornhill
…have a question online here about AI and healthcare ethics, as well: AI will get very good at, kind of, helping people reach what they think is the diagnosis, and then they go to the healthcare system and say, “I’d like you to treat me according to this,” and the healthcare system will be unable to deliver it. But, Carly, could I bring you in on this one? I mean, how do we use, kind of, AI in healthcare in a positive way?
Carly Kind
It’s a really good challenge, and I was thinking about Julie’s example before about a Doctor, kind of, seeking summaries of best evidence, and I mean, John, you can probably speak to this more than I can, but, you know, how does that challenge notions of, I don’t know, peer review and research in the medical space? And, you know, we use Doctors as experts; they make a judgment about the range of evidence available to them, including the best available recent research, so if ChatGPT, or another LLM integration, gives them a summary, is that sufficient, will that satisfy them that they’re across the research? And then how will that change their practice?
We’re seeing the same, I think, with, you know, using image recognition in medical imaging. You know, how do Doctors and Radiologists, kind of, integrate with the tool? I think that’s one of the bigger challenges, and why I would say that this is not only about what the tool is technically capable of, but about how it changes the system it’s being embedded in. And in the case of healthcare, a very complex sociotechnical system of experts and actors and trust and expertise, how does inserting a new technology into that system change it?
I think there’s a tendency to say it will be a net benefit, but actually, we’ve seen from studying similar systems that when you introduce a new technology into a complex system, it changes the system fundamentally. And, you know, Julie was talking about all the very good, responsible AI practices at Microsoft, and to be clear, Microsoft is absolutely one of the best actors in this, kind of, self-regulatory ethics space, but, you know, John’s question suggested, like, is the company the best place to think about the systemic or sectoral implications of these new technologies? The same in education, you know, how is it changing assessment, examination, marking, etc.?
It’s such a big societal or sectoral question, it just can’t be answered at the company level; it has to engage regulators. It’s not fair to ask companies to answer the question of how this is going to change the medical system, how it’s going to change what expertise means, what, you know, medical students should study now if ChatGPT is available. These are really big, kind of, societal questions. So, sorry, I know I’m, kind of, complexifying rather than simplifying a response to your question, but…
Julie Brill
Just while you’re picking the next person and they’re getting their mic to them – you know, we deeply agree with that, but what I was clearly trying to say is we welcome that. We are working with the Partnership on AI, we work with multistakeholder organisations, we bring in consumer groups to look at the things we’re doing and to get that kind of feedback, but there isn’t anything quite organised yet, and we can’t ignore our responsibility to do what we need to do.
And similarly, to your point about what the technology does to change the system: at the end of the day, the Doctor is responsible for the diagnosis. The Doctor can use tools right now to get summaries of research, right? But that doesn’t change the Doctor’s accountability for the ultimate diagnosis.
John Thornhill
Okay, another question here.
Member
Hi, thank you. I just want to say thank you for what you’ve said earlier, it’s been very interesting. I’m a Master’s student at the LSE, and I’m going to continue the grilling of Julie Brill a little bit. You were talking earlier about the empowering nature of large language models for people in poor education environments, you know, providing tools that aren’t necessarily available to people. My question here is, and it, sort of, relates to what’s been said about medical ethics: ethics generally vary depending upon culture, politics and belief, and we’ve seen with certain ChatGPT prompts that ethically dubious answers have been provided to certain questions. How do we construct ethical frameworks in a liberal democratic fashion? Is it something that the technology should wait for before it advances further? I mean, who should be involved? Should ethical frameworks be established at all? I was just wanting to see what you would have to say about it.
Julie Brill
Do you want me to respond? Okay. Well, there’s a lot of work that is happening, I believe, in all sorts of multistakeholder organisations to try to come up with ethical frameworks. Our principles are built on an ethical framework and, as I said, you can go to our website, you can look at it, we welcome comments. We engaged with The New York Times to talk about it. We want feedback on our approach. But I do think, to go back to Carly’s point, this needs to be a conversation that we are all having, and these conversations are taking place at the OECD level, at the UN level, within UNESCO. They’re taking place in, again, sort of, industry partnerships with consumer groups and with technology groups, as in the Partnership on AI.
I also think an interesting place where these conversations are taking place is in standards bodies themselves, so the IEEE has developed an AI ethics standard. The International Organization for Standardization, the ISO, has also developed an AI management system standard, dealing more with the governance side than with the substance of ethics. But I believe you can’t have ethics unless you have governance; the accountability has to marry up with the substance of the ethics. So, there’s a lot of work that is happening in this space. We are participating in all of those efforts, listening to all of them and addressing them all. It’s a really great question, and I think everybody here needs to be paying attention to that.
John Thornhill
I mean, it is a question of whose ethics should we be listening to, ‘cause, I mean, there are dozens of ethics codes, aren’t there?
Julie Brill
Hmmm.
John Thornhill
Carly, do you have a view on this? I mean, how do we inject ethics into this global technology in a way that is meaningful to people all round the world?
Carly Kind
Yeah, I mean, I think Julie got to some of the complexity of that challenge, but what we’ve seen in the last, kind of, let’s say seven years – I would put 2016 as a bit of a beginning of this conversation, because you had Brexit, you had Trump, you had this whole conversation about algorithms and the internet, and you also had some really big advancements at DeepMind at that stage, around AlphaGo and others, so it was kind of the start of the ethics conversation.
For a long time, we were talking about, you know, an ethical code that was going to be a, kind of, static thing that we could all agree on for AI, and I think what we’ve learnt in the last few years is that, to your exact point, ethics are contextual, they’re cultural, they have to be informed by the exact use, and they’re also informed by politics and culture and society. And I hope that the days of ethical codes are behind us, because I don’t think that they served us well in the last few years. And, you know, to Julie’s point, I think what we’re seeing is, like, the instantiation of ethics in practice. So, what does it mean to do a responsible product release strategy, and what does it mean to do a governance framework that takes into consideration various impacts, etc.? So really, it’s a maturation of the conversation into process, outcomes, governance and those types of things.
I think there’s still a role for ethics in thinking about, you know, the big-picture questions around what’s right and wrong in this space, and some of the big trade-offs. Again, I don’t think those are trade-offs that technology companies can be expected to make, because in a way they’re about politics – the politics of this technology – and politics is about trade-offs. So, at the end of the day, somebody has to make a choice about whether we need AI in the education sector to address budget shortfalls and improve education, or whether that is a bridge too far, not a trade-off we’re willing to make in order to improve the rollout of education to certain communities. And we can’t expect, I don’t think, tech companies to make that choice. I do think that some of that has to move into the space of politics.
John Thornhill
Okay. Let me just take another question down here.
Alan Raul
My name is Alan Raul, Lawyer with Sidley Austin. Each of you has called for, or recognised, the importance of regulation. Do you have a paradigm for the ideal regulator and the optimal legal regime under which that regulator should operate? And Mr Almond, if you think that the ICO is the ideal regulator, is there anything you would change about it, or about the UK Data Protection Act? So, what is the ideal regulator, do you have an existing model that you would point to, and what legal regime?
John Thornhill
Okay, that’s a great question, and I also want to add to that an online question, as well, which is, “Are regulators suitably equipped to properly understand how these tools function, and given the money involved, can regulators avoid regulatory catch-up?” So, I think that these are directed, in the first instance, to you, Stephen, so…
Stephen Almond
Feeling a bit quiet. Look, in terms of the ideal regulator, we have to reflect on the fact that AI and the regulation of AI are always going to be context-specific. It’s a general-purpose technology that is going to be coming up in all sorts of different contexts. So yes, I can wax lyrical about the perfect, sort of, risk-based, general-binding-rules type regime that might be, sort of, the mainstay of a privacy regulator, but is that appropriate in a medical context, where the risks to individuals mean that actually you would want something which is far more focused on authorising something ex ante? Because the risks that come from the technology in that particular context – you know, medical advice, legal advice, financial advice – actually mean that you’d want to scrutinise it upfront.
So, I’m going to give you a very, sort of, officialese answer about it being horses for courses in different contexts. But actually, I think when I’m talking about an ideal regulator – just to, sort of, link it back to your second question here – I’d be thinking about the, sort of, mindset and the skills that you want of an ideal regulator in this space, because my goodness, do you want a curious regulator. You want a regulator which is prepared to really dive in, to make sure that its skills are doing their level best to keep up with the Microsofts of the world, to be able to really probe and do what we can’t expect the public or civil society to do.
And that was the third point I was going to come on to here, which is just around the inherent power imbalance that comes from the development of systems that, fundamentally, nobody is really going to know more about than the organisation that develops them itself. You know, often when we’re talking about AI, people will talk about invisible processing and the fact that, ultimately, nobody necessarily has perfect insight. So yes, you’re going to need that curiosity to be able to get under the skin of it, but also, you’re going to need a regulator that’s fostering industry accountability here: the sorts of governance, the sorts of checks and balances that Julie was speaking to.
And actually, what I’d really like to see, if I have my wish list here, is not just, sort of, firm-level accountability, but real industry-level accountability. How do we move from the sort of industry that we have right now, where actually, you know, it can be almost acceptable for a firm to release a bad product into the wild, as it were, and everyone goes, “Oh, well, that’s terrible, you know, they shouldn’t have done that,” to the sort of thing that you see more in the medical sector, where it’s felt slightly more as the collective responsibility of industry, because they’re giving everybody a bad name, they’re reducing everybody’s trust in this? How do we get to that sort of paradigm around AI?
John Thornhill
And it does remind me quite a lot of the debate that we were having in the financial services industry before 2008, that, you know, we in the financial services industry understand these collateralised debt obligations really well and you needn’t worry about them. But, anyway, I’d like to take a few more questions, the gentleman at the back, and then can we take three questions, and then here, and just there, as well? So, shoot. Keep your questions as short as possible, please.
Member
I wanted to speak about content creators and the role of content creators right now, and what framework you have in mind, ‘cause you have these LLMs developed and trained on the content that is created. Commercially, how are they going to get paid?
John Thornhill
Okay, thank you. Second one, down here.
Curtis
Hi, everybody, I’ll preface this by saying, firstly, I’m Curtis. I’m with Deloitte’s AI Assurance and Audit Team in London, which is a hint to my question, but I’m also an Editor of the Journal of AI and Ethics. Question is what do you see the role of independent audit as being in ensuring that AI is trustworthy, both now and moving forward in the future?
John Thornhill
Okay, and final question just here.
Member
Thanks, this one’s for Stephen. I know that with the EU AI Act, there’s an idea of extraterritoriality. I’m wondering exactly how the UK is going to react to that, especially as industry has come a long way and would be more interested in what regulators want in certain areas, similar to how GDPR has been used.
John Thornhill
Okay, so, Julie, do you want to answer the one on content creation?
Julie Brill
It’s an important question. We do ground our responses, so we cite to the particular areas that we are looking at, and, you know, we think it’s just a very important thing to make clear where sources are coming from. And it’s obviously going to be a very important conversation going forward, in terms of the compensation aspect, and it’s something that we are thinking deeply about and partnering with others on.
Can I talk about the independent audit thing, because I do think that that’s also important? I think independent audit is also one of the reasons why it matters to have these standards out there, the ISO standard, the IEEE standard: you need to have something to audit against, you know this better than anybody, and you need to make sure that the company has controls in place. And there is so much goodness in what Stephen had to say about the regulatory approach. I mean, it’s also incredibly important to think about the use and the use case, and the risk that’s involved there. And so, I think there’s going to need to be an important marriage, if I may – you used horses or something like that, that…
Stephen Almond
Comparing it to marrying me to a horse in this, but having other…
Julie Brill
No, no, I promise I won’t do that, I promise I won’t do that, but I think there needs to be a really important marriage between thinking about safety approaches – whether it’s pharmaceuticals or financial regulators, which very much do engage in partnerships – and having a standard against which you are examining the various activities at the use level and, to the extent necessary, at a foundational level.
John Thornhill
Okay. Stephen, there’s one question directed at you specifically, but if you want to answer anything else, as well.
Stephen Almond
Yeah, certainly. First, I’d just love to build on the point that Julie was making there around audit, because actually, I think that audit and, kind of, those forms of independent assurance are part of how we grapple with some of the power dynamic here. Trust me, you don’t want regulators everywhere; what you actually want are mechanisms whereby both industry and regulators can have assurance that the right things are being done, and audit plays a really important role in that landscape.
And actually, one of the big conversations that we’re having with our counterparts in the Digital Regulation Co-operation Forum in the UK is how do we stimulate the development of an audit and assurance market that is really complementary to the roles of governance within the firm and governance by the regulators? And there’s definitely a really interesting, sort of, gap to explore there.
John Thornhill
Okay.
Stephen Almond
Talking…
John Thornhill
Go on.
Stephen Almond
Oh, sorry, just talking of interesting gaps, I mean, extraterritoriality could probably take a good hour itself, but, you know, you point at the really interesting example of UK GDPR in this space, which does have those provisions around extraterritoriality. It is something that everybody is thinking really, really hard about, because ultimately, the supply chains that we’re seeing here are cutting across borders. We need to think about this as a global conversation.
John Thornhill
And Carly.
Carly Kind
I don’t have anything novel to add to those comments, I don’t think.
John Thornhill
Alright, now I think we are very rapidly running out of time, if we haven’t already done so, so I’m just going to ask one final question to each of the members of the panel, which is that we’re talking about the risks and the opportunities of AI. What is the one thing that you would do to maximise the opportunities and minimise the risks? So Julie, I’m going to start with you.
Julie Brill
The one thing to maximise opportunities and minimise the risks is, in my view, really learning and engaging in conversations just like this, whether they’re structured in the context of working with consumer groups or working with AI experts in the external world. We definitely have, as Carly said, very strong internal governance. That’s how we are so successful in the enterprise space. That’s where the audits come in, right? It’s in that enterprise context, and the public sector context. But we need to continue to have the humility that we absolutely need to have as we’re moving forward in this incredible world of opportunity: to learn, and not only to hear and listen, but then to make changes and adapt. And I think that will lead to the kind of trust that this world of AI and very advanced AI systems is going to require in order to minimise risks and maximise opportunities.
John Thornhill
Okay, thank you. Stephen?
Stephen Almond
I mean, three words: design, design, design. You know, really thinking through how to tackle some of the fundamental issues, whether it’s making sure that privacy is built in upfront, safety is designed upfront, fairness is designed upfront. Thinking through those issues at the design stage, rather than as some glib, kind of, regulatory thing that you need to wrap around at the end of product development, would help us realise so much more by way of opportunity and address a heck of a lot of the challenge.
John Thornhill
Okay, and Carly.
Carly Kind
Okay, in the interests of saying something provocative, I guess I would say that I would like to see us think about a new paradigm for AI and AI governance. You know, people in this space, we all struggle to think of analogies, but, you know, for today, let’s try. Let’s think about it like a new natural resource that we’ve just unearthed. What would be the best way to govern that in the public interest, across borders – not just as a, kind of, shiny new technology, but as a fundamental public good that could be used in different ways? And how might that open up new ways of thinking about governance, regulation, public participation, taxation, redistribution of benefits, you know, a whole range of different things? So, that would be my suggestion.
John Thornhill
Alright, so if I’m to sum all of that up, I would say that we need to talk, talk, talk; design, design, design; and redesign, redesign, redesign. So, thank you very much to all of our panellists for a wonderfully rich conversation, and thank you to Chatham House.