Gillian Tett
Hello, good afternoon, everybody, and it is an absolute delight to be here today. My name’s Gillian Tett. I’m both a Columnist with the Financial Times and Provost of King’s College in Cambridge. And the reason I’m so thrilled to be here today is that we’re not only talking about a topic which is of interest to absolutely everybody in the room, and which is consuming a lot of attention and focus from people like me who work in the media, as well as anyone engaged in academia, and I’ll talk about that in a moment, but we’re also talking to somebody who is perhaps one of the most interesting Commentators, or Cultural Translators, on this topic, James Manyika.
I first met James when he was running the, sort of, brains trust for McKinsey, which is a job all about joining up the dots between different parts of the world, which is something that most AI Scientists, who have been deeply in the weeds, are not trained to do. They often speak in a language which sounds like gobbledygook to everybody else. But James has been trying to look at the world in its glorious complexity and confusing fragmentation and join it up, and he also has a PhD in AI, so he knows what he’s talking about. He’s not an amateur, like me. And he’s now in a role overseeing AI research at Alphabet – is that correct? Alphabet has such a complicated structure that I never quite know who does what there – but roughly speaking, you’re overseeing the AI research, and you’re also the interface with government relations and the wider world, including Academics and grubby Journalists, in trying to make sense of what this all means. Is that fair?
James Manyika
Close. There is a colleague who leads public affairs, so…
Gillian Tett
Right.
James Manyika
…you know, I only dabble, so to speak, in the government affairs part.
Gillian Tett
Right, well, okay, that’s with typical modesty, but anyway – so, he is in a position – brilliant position to tell us what on earth is going on and what it actually means in practical terms, and where we’re heading for both good and bad.
And I should say a few quick things before we go into the conversation. Firstly, this conversation is on the record. Secondly, when we go into the lunch portion of the event, it will be off the record, under the Chatham House Rule, so please note that. Thirdly, for anybody who wants to see any of the technologies we’re going to be talking about, they will be released in the wild, or, basically, available for you to actually play with, upstairs, between noon and 1 o’clock, and there will be lunch at the same time. But please don’t rush the stage when we finish, this is a request from the Chatham House organisers, because you’ll get a chance to grab James over lunch. So, those are the quick housekeeping rules.
But let me start by asking this question: it has been one hell of a week for Google – extraordinary timing by the extremely prescient Chatham House organisers doing this event this week. Tell us what exactly happened this week at Google that makes this event so well-timed, in terms of both some very important AI breakthroughs and some quantum computing breakthroughs.
James Manyika
Ah, well, thank you, it’s a pleasure to be back at Chatham House and doing this conversation with you, Gillian. It’s always a pleasure. Well, it’s been quite a week, actually. So, on Monday, we announced some important breakthroughs in quantum computing. One of the teams that I oversee is our Quantum AI Team, and that’s been fascinating, because we’ve been on this journey – and I’m sure we’ll talk about it – to build a fully fault-tolerant, error-corrected quantum computer for quite some time. But this week we introduced our Willow chip, and two things about it are notable.
One is that there’s a benchmark computation called ‘RCS’, random circuit sampling, which is a way to compare emerging quantum systems and their performance with classical supercomputers. And what our Willow chip was able to do, in less than five minutes, was an RCS computation that would take the world’s leading frontier supercomputer ten to the power of 25 years. That’s ten septillion years – that’s, kind of, a one with 25 zeros next to it. That’s, you know, much, much, much longer than the age of the universe, many times over.
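[A minimal back-of-envelope sketch, in Python, putting those quoted figures side by side; the 10^25-year classical estimate and the five-minute runtime are the announced numbers, and the age of the universe is assumed here to be roughly 1.38 × 10^10 years.]

```python
# Rough comparison of the quoted Willow RCS benchmark figures.
classical_years = 1e25          # quoted classical-supercomputer estimate
universe_age_years = 1.38e10    # assumed approximate age of the universe
willow_minutes = 5              # quoted Willow runtime (upper bound)

willow_years = willow_minutes / (60 * 24 * 365.25)  # minutes -> years
print(f"Classical estimate / age of universe: {classical_years / universe_age_years:.1e}")
print(f"Implied speedup factor: {classical_years / willow_years:.1e}")
```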
Gillian Tett
Yeah.
James Manyika
So, that’s a big deal.
Gillian Tett
I think you can hear from the gasps from the audience that they’re pretty impressed by that.
James Manyika
It’s – yeah, so…
Gillian Tett
This is a hard crowd to impress, so…
James Manyika
So, it’s able to do that. And then, also, more importantly, though, it was able to get what’s called ‘below threshold’, which is to show that as we increase the number of qubits, I’m sure we’ll talk about this, we can actually reduce the errors, exponentially, which is a big deal on the way to building a fully error-corrected quantum computer. So…
Gillian Tett
And can I stop you there? Because people may not understand why the issue of errors is so important. Maybe we should ask, who in the audience feels they understand quantum computing? Okay, we have two of you, two and a half of you. Okay, we have a suitably modest or half-asleep audience here. So, can you quickly explain why errors matter so much with quantum computing?
James Manyika
Yeah, so, the way to think about it: I mean, classical computing, obviously, is ones and zeros, Boolean logic, you’re either in one state or the other. In quantum computing, you can be in both states at the same time, and these are highly unstable systems that are also very, very noisy. They work on this idea called ‘superposition’, where it’s actually quite hard to figure out what state you’re in, but if you can figure that out, there are computations you can do in multiple states at the same time. So, one of the big hurdles to getting to a fully error-corrected quantum computer is whether you can actually reduce the errors in these systems.
And so one of the techniques we’ve come up with is to do a bit of an abstraction from the physical qubit. So, qubits, as opposed to bits, are quantum bits; we abstract away from the physical qubits to what are called ‘logical qubits’, and we use these, kind of, surface codes as a way to reduce the errors. So, we were able to show, for the first time – this has been one of the challenges for the last 30 or 40 years – that as you increase the number of qubits, you can actually reduce the errors. This is the first time anybody’s actually shown that. So, it’s a really…
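[A minimal sketch of the ‘below threshold’ idea just described: assuming, purely for illustration, that each step up in surface-code distance suppresses the logical error rate by a constant factor (the announcement reported a factor of roughly two), the error rate falls exponentially even as the number of physical qubits grows. The starting error rate and suppression factor below are illustrative placeholders, not measured values.]

```python
# Illustrative "below threshold" scaling for a distance-d surface code.
LAMBDA = 2.0     # assumed error-suppression factor per distance step (d -> d + 2)
EPS_D3 = 3e-3    # assumed logical error rate at distance d = 3

for d in (3, 5, 7, 9):
    steps = (d - 3) // 2
    eps = EPS_D3 / LAMBDA ** steps        # error shrinks as distance grows
    physical_qubits = 2 * d * d - 1       # data + measure qubits in one patch
    print(f"d={d}: ~{physical_qubits:>3} physical qubits, logical error ~ {eps:.1e}")
```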
Gillian Tett
Right.
James Manyika
…big deal. In fact, in some ways that’s actually more impressive than the ten to the…
Gillian Tett
Well, that’s one reason I wanted…
James Manyika
…power of 25 years point.
Gillian Tett
Right, well, that’s one of the reasons I wanted to, basically, quickly pull that out, because it sounds a bit like Doctor Who, frankly, or, I sometimes think, Schrödinger’s Cat. And essentially you’re trying to make sure that if you’ve got Schrödinger’s Cat, it’s basically consistent, as to what kind of…
James Manyika
It is very consistent.
Gillian Tett
…what kind of cat you’ve got hanging around in that vision, or cats, you know.
James Manyika
Yes.
Gillian Tett
They all match up.
James Manyika
They all match up and so, it’s…
Gillian Tett
Yeah.
James Manyika
…a big deal. But we’re still a long way away, by the way, from a fully, you know, fault-tolerant, error-corrected quantum computer, which is what everybody worries about because – you know, I’m sure, this is Chatham House, we’ll talk about encryption and codebreaking…
Gillian Tett
Well, of course, the reason…
James Manyika
…and all these kinds of things.
Gillian Tett
…why we’re worried about it is because if we do get a fully fault-tolerant quantum computer, it will then be able to hack into everything we’ve ever done and encrypted, potentially, if we carry on using the old-fashioned RSA technologies. So…
James Manyika
Absolutely, but I think…
Gillian Tett
I get a lot of Bankers very worried about this, understandably.
James Manyika
They should be, I mean, ‘cause, you know – but we’ll…
Gillian Tett
Never mind the rest of us who actually have bank accounts, but, you know – but, yes.
James Manyika
Well, part of the reason, as people in the audience hopefully know, is that a lot of our encryption systems largely rely on the fact that the computations to do what’s called ‘prime number factorization’ take an inordinately long time on classical computers. So, we rely on the complexity of the computation, basically, to encrypt our systems. So, if you…
Gillian Tett
Yes.
James Manyika
…did have a quantum computer, you could crack those pretty easily. This is what Shor proved in 1994, with Shor’s algorithm: that you can actually factorize these numbers efficiently. So, anyway, that’s why that matters, but even before you get to that, we’re already in this intermediate stage where we’re going to start to see useful things you can do with these…
Gillian Tett
Yeah.
James Manyika
…still noisy systems, whether it’s in chemistry – you know, quantum chemistry – or in a bunch of other areas. So, that’s the next exciting stage we’re going to see, even before we break all the codes.
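[A toy sketch of the encryption point: RSA-style schemes publish the product of two secret primes and rely on factoring that product being infeasible on classical computers; Shor’s algorithm is what would make the factoring step efficient on a fault-tolerant quantum computer. The numbers below are tiny and purely illustrative; real keys use primes hundreds of digits long.]

```python
# Toy RSA: why factoring hardness underpins the encryption described above.
p, q = 61, 53                  # secret primes (illustrative only)
n = p * q                      # public modulus: multiplying is easy
e = 17                         # public exponent
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)            # private exponent: needs p and q to compute

message = 42
ciphertext = pow(message, e, n)            # anyone can encrypt with (n, e)
assert pow(ciphertext, d, n) == message    # only the key holder decrypts

# Breaking this classically means recovering p and q from n, which is
# infeasible at real key sizes; Shor's algorithm would do it efficiently.
```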
Gillian Tett
Absolutely, I mean, someone like Jack Hidary, who is one of the Quantum Entrepreneurs putting his, you know, savings and energies into creating a new round of start-ups around this, he would say, in fact, the future is not LLMs but LQMs, with quantum instead, and that’s one way to frame it, which I think is fascinating. But just before we switch gear, ‘cause I know we are supposed to talk about mere AI…
James Manyika
Well, I want to – so…
Gillian Tett
Yeah.
James Manyika
…to come back to your original question, I got carried away with the quantum stuff, it’s pretty fascinating, ‘cause it’s actually one…
Gillian Tett
I have one more question about quantum before we switch, but, yes, go on.
James Manyika
…one of my teams. So, that was Monday, which was exciting, and then later in the week was an extraordinarily proud moment for us, because my colleagues Demis Hassabis and John Jumper went to Stockholm to collect their Nobels, earlier this week, for the extraordinary work that they’ve done in understanding protein structures and protein folding with AlphaFold. But even that work, in some ways, is emblematic of the many exciting things that AI is starting to help us make progress on in science. Proteins are just one thing: there’s a lot in structural biology that we’re now able to do, a lot in material science, a lot in understanding climate science. So, the contributions from AI in science – I mean, we co-hosted a science event with the Royal Society a few weeks ago – there’s an extraordinary amount that’s now happening in AI in science. So, that was Tuesday and Wednesday.
And then yesterday, the other exciting thing that happened is we also announced and released our next generation of AI foundation models, Gemini 2.0, which, in addition to doing really well on multiple benchmarks, now also have these agentic capabilities. Which I think in some ways is the next era of large language model-based, or foundation model-based, systems. ‘Cause I think we’ve come to think of generative AI as: you type a prompt and you get some output and some content back, and you can debate how good or bad that is and so forth. But agentic capabilities are the next step.
Gillian Tett
Can I stop there for a moment? I mean, just for those, again, who may not be familiar with the phrase ‘agentic capabilities’, does everyone know what that means, agentic AI? Some of you do, yeah. I think it’s one of the most important concepts to grasp in making sense of what’s happening right now, ‘cause agentic means basically having agency – having the ability not to be controlled by AI, but basically, to control it, in the sense that you programme what you actually want and it does what you want, a bit like a, sort of, digital elf at the end of our fingertips. Is that fair?
James Manyika
It is, but – and more.
Gillian Tett
Are you building a flock of digital elves?
James Manyika
Well, actually, and more, because what you can also do is, in addition to just generating outputs, it can actually take actions for you. So, you can imagine simple actions, as in: fill out this spreadsheet for me, go research this topic for me, or go find out about something. In fact, one of the things we showed yesterday is that in Gemini you can now do what’s called ‘deep research’, where you can type in an extraordinary query and then have the system go research that for you, and then come back with a long report. So, it’ll go look at various reports, do research, check various things, and come back with a report for you. So, this idea of actually taking actions on your behalf, on the things you’re interested in, grounded in things you care about, I think is the next exciting era. So, that was the other thing, and it enables…
Gillian Tett
Absolutely.
James Manyika
…things like – sorry, go ahead.
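[A minimal sketch of what ‘agentic’ means in code terms: instead of returning one block of generated text, the system loops, choosing actions (search, read, write) until the task is done. The model and tool interfaces below are hypothetical placeholders for illustration, not a real Gemini API.]

```python
# Illustrative agent loop: the model picks actions, tools execute them.
def run_agent(task: str, model, tools: dict, max_steps: int = 10) -> str:
    history = [f"Task: {task}"]
    for _ in range(max_steps):
        # The model proposes the next action and its argument, given history.
        action, arg = model.next_action(history)   # hypothetical interface
        if action == "finish":
            return arg                  # the final report for the user
        result = tools[action](arg)     # e.g. tools["search"]("flood data")
        history.append(f"{action}({arg}) -> {result}")
    return "Stopped after max_steps without finishing."
```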
Gillian Tett
No, I was going to say, absolutely. I must say, sitting in Cambridge, you know, I had a conversation with someone the other day who was busy putting together a book in the humanities, and had used, in fact, a rival version of this to simply go out and collect all the research into the materials they needed, just like that. Conversely, I speak to Life Scientists every single day whose work is being transformed by AlphaFold, and I often say, “If I ever feel depressed about the state of the world, I go hug a Life Scientist and hear what they’re doing.” Because it is just astonishing how the speed of research into potentially vital issues around, say, proteins, has been accelerated from a decade to literally a day, as one of them said to me the other day.
James Manyika
Well, I mean, take AlphaFold. I think most people think of AlphaFold as primarily about proteins, but even that’s developed further. So, in AlphaFold 3, for example, we can now understand all of life’s biomolecules, so not just proteins, but DNA, RNA, and also ligands, and also their interactions, which is extraordinary. You can take that to areas like, even, neuroscience. One of the extraordinary things that one of the teams I oversee did last year was in the field of connectomics, where we actually created the first synapse-level mapping of a piece of the human cortex, which had never been done before, using AI systems – where we actually discovered new structures in the brain that Researchers didn’t know existed before. Which opens up all sorts of possibilities in terms of understanding neurobiology and neurological diseases, and so forth, and even how the brain works.
And you can take that to material science, where we’ve discovered, for example, more than 2.2 million new crystals that we didn’t know existed before, of which something like 380,000, in the first instance, are stable enough to be synthesised. So, think about the possibilities. In fact, just the other day, my colleagues at Google DeepMind announced a new AI weather forecasting system…
Gillian Tett
Yes.
James Manyika
…that’s actually – so, the…
Gillian Tett
Which is supposed to be, you know, dramatically more accurate than anything we’ve had so far.
James Manyika
Oh, it’s way more accurate, so between that system, other systems…
Gillian Tett
So, it really can tell us it’s going to rain in London every single day.
James Manyika
It’s going to tell us that, but also, more importantly, you take another system, like NeuralGCM, which can also do these long longitudinal analyses of weather in a way that we couldn’t with the physics-based models we had before. So, I think what’s happening in science is really quite striking.
But what I like about some of the science stuff, Gillian, is it’s not only things that we will see later, but we’re already seeing some of the benefits today, in real-time. So, I think about some of the scientific breakthroughs we’ve had in things like flood forecasting, for example, or understanding wildfire boundaries, people are already benefiting now. I think as of the most recent count, which is about a month ago, we’re now doing, for example, flood forecasting in over 100 countries, covering something like 700 million people, which is striking, right, in terms of just already, kind of, beneficial impacts like that.
Gillian Tett
So, can you tell us all where not to buy a house?
James Manyika
Actually probably, right? Because one of the things that’s been interesting about the flood…
Gillian Tett
So, that enchanting Tuscan villa suddenly doesn’t look so enchanting after all when you put it through the Google AI machine.
James Manyika
Well, one of the things that was always tough, especially with climate change, where extreme weather events like floods are becoming much more frequent – we’ve known, you know, the world has known for a long time, that if you can give people five to seven days’ advance notice, you can actually save lives. But that’s been incredibly difficult to do, and so AI tools are now helping us do that. And so we’ve gone from one little pilot in Bangladesh, 18 months ago, to 100 countries where this is actually working now – it’s extraordinary.
Gillian Tett
Right. So, those are all the extraordinarily beneficial, exciting, good things, and there’s plenty more we could talk about. Like the fact that, you know, AI translation tools can potentially mean that we no longer have the Tower of Babel problem in the world. It’s possible to even imagine a world where AI translation tools could actually translate between different political points of view and stop everyone being so polarised, because something would come in between and try and translate what we all really mean to each other. You know, the AI can act a bit like a Family Therapist, if you like, but on the political stage.
But obviously, there are big, big downsides as well, and in fact, I should say, for those of you watching online, do feel free to ping over questions. Also, those of you in the audience, I’ll come to you for questions in a moment. But one of the questions I’ve got already, for example, is from Paul Mack: “Are you not worried about keeping AI under control? Last week’s headline, quote, ‘OpenAI’s new model tried to avoid being shut down,’ springs to mind. Are we not heading towards 2001’s HAL?” Does that worry you? And you can’t just blame that on your competitor going mad.
James Manyika
Well, first of all, I mean, as excited as I am, and I think many of us who work in the field are, about AI and its potential benefits for people, the economy – we haven’t talked about the economy – science and society, there are some important complexities and risks to think about. You could put those in a few categories. I think on the one hand, we have to worry about and pay attention to what I think of as, kind of, performance risks. This is when these systems either give you inaccurate outputs, or they’re unsafe in some way, or they make up stuff – and those things are getting better, by the way, but that’s an important area of risk. Then I think we have to think about risks to do with, kind of, misapplication and misuse. So, think about deepfakes, think about any of these things where, you know, bad actors could misuse these systems – and these are bad actors of a wide variety. It could be, you know, two people in a garage, it could be terrorist organisations, it could be companies, it could be governments. So, the risk of misapplication and misuse is quite an important one to pay attention to.
Then I also think it’s important to think about things that are much more, kind of, complex changes to society, and these, you can imagine them going in all kinds of directions. Think about how these technologies might change how we think about education and how we think about work. How we think about the way we’ve, kind of, arranged things to work in the world, which could get, you know, changed or adjusted, for better and for worse, with these technologies – we have to think about that.
I think, to the question that was asked about those, kind of, you know, existential or control risks: I think we should always think about that, because – you know, I don’t think we’re anywhere close to anything like that. But we should think about it, because if there’s any tiny chance, you know, 0.0001%, that that could happen, we should be working on that problem right now – you know, take the precautionary principle and be working on those things. So, all of these are important things, and I think, as with any powerful technology, on the one hand, we have these extraordinary transformational benefits, and then we have these complexities and risks, and we have to work on both of those things.
Gillian Tett
Right. By the way, I’ve got a little message from what may be the Chatham House organisers, or may be an AI bot, saying, “Please remind everyone that because it’s an open session, it’s being recorded, so feel free to post on socials, including the hashtag, #CHE_Events.” So, there you go, that may be an AI recommendation, but who knows.
In terms of what can be done to try and control these risks, you were Deputy Chair of a UN Commission that was trying to do that, with input from both the US and China. And I think at that point, a lot of people would do an eye roll and say the chance of the UN doing anything is pretty low, and the chance of China and the US coming together and agreeing on anything, least of all with Europe as well, is also pretty low. So, can you tell us, you know, do you think the UN is the right place to be trying to bring in any sense of global responsibility around this, or is it actually the case that we need to create a new institute, or a new body, to try and police AI and stop the systems going mad?
James Manyika
Well, let me first describe what that was. So, the Secretary-General put together this UN high-level body, there were 39 members appointed to the body. I was one of them, and I happened to be the Co-Chair, together with…
Gillian Tett
Sorry, Co-Chair, I didn’t mean to…
James Manyika
…Carme Artigas…
Gillian Tett
Right.
James Manyika
…who was, at the time, the Spanish Digital Minister. She actually helped lead the trilogue negotiations for the EU AI Act – you know, an incredible leader. So, she and I were co-chairing this body. There were members from 33 different countries. We had to engage with all member states, so it was an extraordinarily interesting experience. I think, you know, one thing that I came to appreciate about that structure and that process is that the UN is, in fact, the one place where all the member states are at, right? There are 193 member states who get together. There’s no other place that brings the whole world together. We can debate whether the UN agencies are effective or not, but as a convening mechanism for the entire world to hear what everybody thinks, I think it’s an extraordinarily effective place.
So, what we were able to get to within that work – well, you know, first of all, I thought, gosh, what have I gotten myself into? What did I sign up for? You can imagine, you know, 39 members, all from different countries and different backgrounds, civil society, academia, you know, all of the above, but I think we got…
Gillian Tett
It would make an amazing reality TV show.
James Manyika
Oh, it was extraordinary, but I think what was remarkable, Gillian, is that, you know, we got to some basic principles pretty quickly, and these were principles that seemed to have the support of all the member states. Before I get to those principles, let me describe what we first agreed on. We agreed on the extraordinary opportunities that AI represents, as well as some of the risks and complexities. We also realised that, in fact, different regions feel differently about the mix of those things. We can come back to that topic, which is fascinating.
But we then quickly agreed that the governance of AI globally should be based on things like fundamental human rights, for example – there was agreement on that – and that it should be based on international law. That this should be AI in the public interest, for the public benefit. That, in fact, if it does anything, it should help advance the Sustainable Development Goals, because the Sustainable Development Goals, you know, whatever one thinks about them, are at least one thing: they’re the only expression that the world has for what it wants to improve about itself, right? They talk about addressing, you know, poverty, health crises, climate, gender equity. There’s a whole set of things within that which I think we would all agree are the things that the world needs to improve about itself.
So, all of those things were broadly agreed to. Then we got to a discussion: “Okay, so how do we govern AI? What are some principles?” And we made, in the end, you know, several recommendations, which ended up being part of the package that was voted on in the UN General Assembly in September, as part of the Global Digital Compact. And, you know, basically, all the countries voted for the package, except for seven who voted against, so the majority voted for it and the package passed.
But what was remarkable about that, to your original question, Gillian, is that it was actually quite interesting to see different countries agree on those things. We would get a lot of feedback from member states, ‘cause we’d occasionally present our work to all the representatives of the member states, all of them, and there was general agreement on those principles. So, I have enormous hope in that way. To the extent that there were any debates, they tended to be debates about, “Okay, so do we need a new treaty or a new agency, and how do we do that?” But at least as a set of principles and norms, there was remarkable agreement on those things.
Gillian Tett
So, basically, broad agreement about principles; execution, though, was more contentious. Which is not surprising, because the other question I want to ask you, before I turn to the audience for questions, is that the question of who’s actually inventing this stuff and controlling its dissemination, in terms of the basic science, is very contentious. Because essentially, what you have is not only the basic science now emerging from the private sector, as opposed to the university sector, as it did in the past – I mean, basically, companies like yours are ahead of where Academics are, because you have the resources – but it’s also emerging in the hands of a very tiny, concentrated group of companies. And we can argue whether you have the lead, or OpenAI, or Microsoft, or Elon Musk’s new vehicle, which obviously is now, you know, the focus of intense interest given Trump’s victory. But does the level of concentration that’s now emerging around who’s actually driving this worry you? And does it worry you, given that some people look at Trump’s victory as, essentially, the triumph of Silicon Valley in terms of capturing the political processes in America?
James Manyika
Well, I think, first, just to – it’s worth thinking through how we got to where…
Gillian Tett
Or, rather, Elon Musk capturing the political processes with the help of people like Marc Andreessen and David Sacks and others.
James Manyika
Well, I think it’s important to understand how we, kind of, got to where we are. I remember when I did my PhD in AI and robotics at Oxford many years ago: if you were looking for where the best science and research in AI was going on, you’d look at the key academic institutions. You’d look at Oxford, Stanford, Carnegie Mellon, Toronto.
Gillian Tett
Maybe even Cambridge.
James Manyika
Maybe even Cambridge, although Cambridge not so much, but there was a handful of universities – and MIT – where a lot of the research was going on. If you look at where we are now, it doesn’t quite look like that, and the reason is that a lot of the most successful techniques, like the transformer architecture that is the foundation of these foundation models, are very computationally intensive – extraordinarily computationally intensive. And so, you find that the leading research is going on where there’s, you know, sufficient compute capacity and the talent to work on those things. That’s why it’s ended up being, quite frankly, you know, a few companies in a few countries. Remember, it’s also a few countries. This isn’t happening everywhere.
So, I think you’re going to see that start to change, actually, because a lot of effort right now is going into how we develop new architectures and models that are not the big, complex, computationally intensive ones. Partly because, a), it’s expensive to do this in the computationally intensive way, but also, b), because of concerns about energy use and so forth. So, I think you’re starting to see other architectures, and I don’t think this picture will always look like this, at least at the research end of it.
But also, keep in mind that, you know, when you say a few companies, you’re looking at the research end of it. If you look at the building of applications, there’s a very large ecosystem of start-ups and many others who are building applications, including others outside of the handful that are doing the frontier research.
Gillian Tett
Right.
James Manyika
So, I think the field itself is pretty diverse. And, by the way, the few that are leading the research are not just the big entities, ‘cause there are quite a few Big Tech companies who are not at the frontier of the research. So, it’s really a research frontier question, but the application side and the ecosystems are quite large. And I think you’re now starting to see open-source models that are much smaller, which allows start-ups and entrepreneurs to work in this space. So, that’ll change quite a bit.
But I do think that, you know, the more we can have very vibrant AI ecosystems around the world, the better. I mean, to go back to the UN, one of the things that was striking about that work was that while many in what’s often characterised as the Global South tended to be much more optimistic about the potential for AI, they did have two big concerns. One of those concerns was the fact that many in the Global South are not able to participate in the development of these technologies in ways that reflect their interests and the things they care about – their datasets, their languages and all of these things. And, also, the fact that they’re often being left out of the governance conversations, when this is a global technology. So, that was one of the key gaps we highlighted, as well as the gaps in capacity. Many of those countries don’t have the capacity to even participate.
So, I think that’s one of the things we’re going to need to change. And I fully expect a very vibrant, competitive ecosystem. If you look at the leaderboards these days, you often see, you know, models from companies like the one that I work for, but also a whole bunch of open-source models. That’s becoming…
Gillian Tett
Well, you’re always fighting…
James Manyika
…very competitive.
Gillian Tett
…yeah, scrabbling.
James Manyika
Absolutely.
Gillian Tett
Scrambling right now. Just one very last quick question before I go to the audience: you say that, you know, emerging-market countries feel at a disadvantage. Frankly, I’d put the UK and Europe in that category too, vis-à-vis Silicon Valley right now. Because, you know, it turns out, as it happens, that Cambridge, unbeknown to most British people, has an astonishing strength in quantum computing. We have the father of quantum computing sitting inside the college I oversee. He gets totally ignored by the British public, but we do have astonishing strength in quantum computing.
James Manyika
Yes.
Gillian Tett
And yet, if you look at the AI race in general, the UK and Europe is nowhere compared to the Silicon Valley tech giants.
James Manyika
Well, I think – you know, there are countries and regions that have invested a lot. I mean, even in the US, you know, it isn’t every university or every company that’s leading in the space. So, I think the question is what level of investment each country and region is going to make. I mean, I took a lot of comfort in – some of you might have seen – Former Prime Minister Mario Draghi’s report on competitiveness, and I think, you know, a lot of that, quite frankly, spoke to this idea that it’s going to be quite important to focus on investments and competitiveness. When I sit in Silicon Valley, Gillian, and I look at the people who work at our company, there are Europeans, British people, Researchers, you know, so it’s not a national thing. There’s something in the context and in the countries that perhaps…
Gillian Tett
That’s also called…
James Manyika
…could…
Gillian Tett
…the export of intellectual capital.
James Manyika
Absolutely, that should encourage investment in innovation and competitiveness. I mean, Mario…
Gillian Tett
I hope the British Government’s listening…
James Manyika
No, I mean…
Gillian Tett
…speaking of my own book from Cambridge…
James Manyika
…I mean…
Gillian Tett
…we need a lot more support.
James Manyika
Mario Draghi’s report – I think one of the things that report highlighted is that if you look at, for example, the productivity comparisons between Europe and the United States – and I may be misquoting this, but something like 70 to 80% of the difference in productivity could be attributed to technology innovation investment…
Gillian Tett
Yeah.
James Manyika
…is one of the things that he actually called out. I think that there’s an opportunity there, because there’s such extraordinary talent at Cambridge, at Oxford, at Imperial College or UCL – all these incredible Researchers, many of whom end up working in our company, who are doing extraordinary, extraordinary work…
Gillian Tett
Absolutely.
James Manyika
…which is encouraging.
Gillian Tett
Yes, well, listen, we have a huge number of questions. I’m going to group together two or three here on screen, and then turn to the audience. Firstly, someone called Mark Robertson asks a question which I think is echoed by a lot of others: “Big Tech moguls are repeatedly saying that they would welcome more international governance and regulation, but can we actually believe them? Do they actually want to have more regulation?” And linked to that, we also have questions asking, “Will AI ever be considered or classified as, quote, ‘a weapon of mass destruction’ by the UN? And what does it actually mean to create ethical AI? Is that even possible?”
James Manyika
You know, obviously, I can’t speak for the entire industry, but I can tell you at least in our – from our point of view, we’ve said quite a few times that we think this technology is too important not to regulate, and, also, too important not to regulate well. And at least in my mind, what that means is, you know, approaches to regulation should do two things. On the one hand, they should, obviously, address the risks and complexities and things that we don’t want, that we worry about, but they should also enable the things that we want, the beneficial impacts and the innovation that comes from that.
I’m reminded, by the way, of the work at the UN. One of the things that many of the colleagues from the Global South emphasised, when we got to these discussions about the risks of AI – you know, I think many of us from ‘the West’ would talk about misapplications and misuses – was to insist that we add a third ‘mis’, which was missed use, as a risk.
Gillian Tett
Right.
James Manyika
Because they’re describing how…
Gillian Tett
So, they’re being excluded from the technology altogether, yes.
James Manyika
Being excluded, or the missed opportunities where you could have applied it. I’ll give you actually a very recent example. It turns out, for example, that in the Global South, in most of the world, something like 30 to 40% of people who have tuberculosis go undiagnosed, because they live in communities and countries where there are no resources to diagnose them properly. We’ve just worked, for example, in Zambia, and now in India, on an AI-enabled technology that can actually do tuberculosis diagnosis.
Gillian Tett
Yes.
James Manyika
So, imagine if you missed the opportunity to do that in places where people don’t have alternatives. That’s what they meant by this idea of missed use as its own risk. And that’s why I come back to the idea that regulation should do both things, should…
Gillian Tett
Right.
James Manyika
…both address the things we worry about, and also enable the beneficial impacts that we want from this technology.
Gillian Tett
Well, that leads to a very nice couple of questions here. By the way, I should say that there are a number of people, like Robert Sulemani, who are watching online – Robert’s from Zimbabwe – and saying, “Thank you so much for being here. You are an inspiration to people like me.” So, there you go.
James Manyika
Hmmm hmm.
Gillian Tett
One of the questions that emerges, though, in relation to the emerging markets, is from Christina DeCoursey, who is writing in from Astana, in Kazakhstan, saying, “Powerful AI affordances are in tension with proprietary medical and pharmaceutical patents. In the next pandemic, how can we handle this to ensure fair vaccine distribution to all nations?” That picks up on your idea about missed use, if you like.
James Manyika
It does, and I think that’s why it’s important, I mean, we believe it’s important to make the benefits of this as widely accessible as possible. I mean, we were talking earlier about AlphaFold…
Gillian Tett
Yeah.
James Manyika
…which my colleagues got a Nobel Prize for earlier this week. As of now, I think the number is something like 2.3 million Biologists are now using this, in over 190 countries. We made those datasets freely available. So, you’re starting to see Researchers, in places where they otherwise wouldn’t be able to, working on drug discovery or therapies, or accessing these datasets. I was quite struck, earlier this year – I happened to be in Brazil, where, all of a sudden, something like over 18,000 Biologists were starting to access AlphaFold to do research on neglected diseases. So, I think part of it is, the more we can make the benefits of these technologies widely accessible, the more it helps with the kind of question that’s being asked.
Gillian Tett
Right, and Robert’s come back here from Zimbabwe and said, “Well, that’s great, but what do the breakthroughs actually mean for low-resource countries?” And he’s noticed that programmes like the “trusted tester model”, which essentially allows users to test how these things are used, “are being almost exclusively limited to people from the US and UK.” That’s what he says – I don’t know whether it’s true – but, anyway, he asks, “How can Africa be included?”
James Manyika
Well, I think Africa is being included. I mean, I can tell you some of the things we’re doing. So, for example, we do a lot of our research in AI, we actually have a research centre in Ghana, a Google Research Centre in Ghana. It’s actually one of the first research centres in Africa doing foundational research…
Gillian Tett
Right.
James Manyika
…in these technologies. We’re doing the same thing in Kenya, and in that we actually involve African Researchers working on this. So, a lot of the work – for example, I mentioned the work on tuberculosis, that was initially piloted in Zambia – was actually done by Africa-based Researchers. So, I think the more we can have these tools be made available to Researchers around the world, the easier it is…
Gillian Tett
Right.
James Manyika
…and I think we just need to do more – there’s clearly more to be done, but there’s – we should do more of that.
Gillian Tett
Okay, so I have a question from Lucy Blythe, asking where she can “find the AI flood forecasting model – is it open source?” I’m guessing that Lucy owns some property somewhere. And linked to that – well, you can tell us – but also, “How concerned are you” – this is Trevor Clarke asking – “that resources like water and electricity and land will be essentially depleted by the companies in the name of AI – or their AI plants?”
James Manyika
Well, first of all, on flood forecasting, there’s actually a site called Flood Hub…
Gillian Tett
Flood Hub, okay, Lucy…
James Manyika
…if you actually go there…
Gillian Tett
…wherever you are, look up Flood Hub.
James Manyika
…you’ll see some of the countries that are now covered. In fact, of the 100 countries I mentioned, the last round of about 20 we added were actually African countries, so we keep adding more countries as we do more research and work in this. And we do the same thing with things like wildfire boundaries, etc., so all of these things are widely available.
I think on the question of resources, that’s an important question. As I said, one of the challenges with the latest round of advances in foundation models is that they’re very computationally intensive and, consequently, resource and energy intensive. To put that in context, by the way, I think the annual demand for electricity in the world is something like 35,000 terawatt hours. That’s just electricity – for everything, lights and all the things we use electricity for. Of that, roughly 1.2 to 1.3% is used by data centres. And of the data-centre energy use, something like 10% is used for AI – because, remember, data centres are used for lots of other things besides AI, but AI is already about 10% of that, and it’s actually growing. So, we should be worried about that, because if it keeps growing at the same rate, it’s going to get very, very, very large.
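[Working through the quoted figures as stated, all approximate:]

```python
# Arithmetic behind the electricity figures quoted above (all approximate).
world_twh = 35_000           # quoted annual world electricity demand, TWh
dc_share = 0.0125            # data centres: quoted ~1.2-1.3% of demand
ai_share_of_dc = 0.10        # AI: quoted ~10% of data-centre use

dc_twh = world_twh * dc_share
ai_twh = dc_twh * ai_share_of_dc
print(f"Data centres: ~{dc_twh:.0f} TWh/yr; AI: ~{ai_twh:.0f} TWh/yr")
# -> roughly 440 TWh/yr for data centres, ~44 TWh/yr for AI
```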
So, I think we should make sure we address that, and we’re starting to do that by trying to build more efficient models, more efficient compute, smaller models that are less resource intensive, and that’s starting to make a difference. In fact, in the last year alone, we’ve had more than a tenfold improvement in model efficiency. But that’s something we’re going to need to keep working on…
Gillian Tett
Right.
James Manyika
…to get that right, so we don’t deplete resources, at all.
Gillian Tett
Right, okay. We have time for about two questions from the room, I’m afraid. So, the woman in the middle, there, and then let’s take the man over here. So, Robert, yes – sorry, I didn’t have my glasses on. I’ll put my glasses on, so, there you go. So, a question from you, and then a question from Robert Peston, who…
Lella Halloum
Hello.
Gillian Tett
…of course I do recognise.
Lella Halloum
Hi, I’m Lella Halloum, I lead Global Student Outreach for IBM Z. As a student myself, I see AI really being pushed out of the classroom, and considering my generation are all of your future leaders and bosses, I wanted to know how we can ensure my generation is at the heart of the digital transformation and not cast aside.
Gillian Tett
Okay, and while you’re thinking about that, let’s hear Robert’s question, as well. I want to know how we stop students using AI to cheat. A very real…
Robert Peston
So, I…
Gillian Tett
…issue right now.
Robert Peston
…I hate these occasions. There’s always way too much I want to ask. It’s a couple of things really, though, particularly on my mind at the moment. One is, you were talking a little bit about medical applications, and I think it’s pretty clear that we are in a position to design AI chatbots, agents, whatever you want to call them, that are going to be better at diagnosing than humans. We’re not rolling them out at the moment, partly because there’s just no debate about what we do with all those Doctors, for starters. But this is a, sort of, micro-version of a much bigger problem or challenge, which is…
Gillian Tett
Okay, well, let’s…
Robert Peston
…just the extent to which, you know, we are already in a position to replace a lot of very important human activity. And I guess it’s just – where is the public debate about all of this? You could, sort of, you know, have a government that says, we’ve got a massive overspending problem in the NHS at the moment, and you could, sort of, solve it at a stroke by effectively – I mean, this is not something I’m proposing – but effectively sacking every GP and replacing them with apps. And we’re not that far from being able to do that.
Gillian Tett
Right.
Robert Peston
So, how do we have a public debate about what we want AI to do, and then what we do with all these very talented, highly trained people…
Gillian Tett
Who could be replaced…
Robert Peston
…who are going to feel pretty redundant?
Gillian Tett
Right, including, of course, Journalists. We might yet be replaced by AI bots, too.
James Manyika
Well, to take those questions. First of all, you’re fortunate that you’re sitting next to one of the great AI Educators, right next to you, Sir Nigel Shadbolt, who I’m sure will have a deeper, more thoughtful answer to this. But I think one of the ways is, in fact, to involve and engage young people and learners themselves. One of the things that’s been extraordinary to me in the last few years is to see how young people, when they use this technology, often use it very differently than other people do. It’s been quite striking. The questions that they pose, the way they use it to draft things and to research things, are actually quite different. In fact, we’ve actually ended up involving quite a lot of young people – as well as even Journalists and Artists – to actually help us design.
Gillian Tett
Even Journalists, yes.
James Manyika
Yeah, to help us actually design these tools, and often, where we end up is quite different. So, one example: there’s an experimental thing we put out called Learn About – it’s an experimental tool on Google Labs – that came out of seeing how young people prefer to interact with these tools. So, I think part of it is involving young people in the development of these tools and their use. I think that’s something we need to do more of.
I think, on the question you’re asking about Doctors, you know, our experience so far has been that, in fact, people assisted by these technologies – whether they’re Doctors, Radiologists, or others – are actually much better. And I think in the case of healthcare, I don’t think the world suffers from too many Doctors; in fact, it’s the opposite. If you go around the world, we actually have a shortage of Doctors and health practitioners. So – if there are Economists in the room – I think there are many occupations and fields where the demand elasticity is such that, in fact, we’ll end up doing more of the activity if these people are assisted with these tools, as opposed to the opposite. So, I can’t imagine a world in which we’ll say, “We have too many Doctors.” Maybe when those waitlists in the NHS are at zero, as far as I understand it, maybe that’s the time to have that problem.
So, I think the assistive use of these tools – if you look at some of the benchmarking we’ve done with some of our innovations, like Med-Gemini, which is focused on health in particular, all those results (and we’ve actually just published something quite recently on this) show that when medical practitioners are assisted by these tools, they actually do better than medical practitioners not assisted by these tools. So, I think…
Gillian Tett
So, it’s augmented intelligence, not artificial intelligence?
James Manyika
Oh, absolutely, all…
Gillian Tett
I often say it’s basically accelerated intelligence, augmented intelligence. We have the wrong A in AI, basically – it’s not artificial.
James Manyika
Yeah, and, also, in many cases – by the way, there’s a wonderful paper written by Erik Brynjolfsson, an Economist at Stanford, called “The Turing Trap” – and I think even Daron Acemoglu, who recently got a Nobel Prize, has written about this – this idea of, what kind of AI do we want? Quite often, we should be doing more to think about that. And often it’s the assistive AI, able to do the things that we can’t do, that’s actually most helpful, because there are so many limitations on what we can do, and often there are things that these technologies can do that we can’t, and so the combination is incredibly powerful.
Gillian Tett
Right. Okay, I’ve been told by the organisers that once you’ve answered that, I’m allowed one more question. So, let’s take – if you want to ask anything more about the student thing, or is that done? Because if not, let’s have one more, over there in the corner, the gentleman over there, and this will have to be the last question, I’m afraid.
Kyo Diadine
Oh, thank you very much. My name is [Kyo Diadine – 50:02], from the London School of Economics and Political Science. I’m from Nigeria, and since 2021, I’ve been using Google Earth Engine to model flood risk in Nigeria, Kenya and Ghana, and I want to say thank you for that. It’s been free and open source, and I really appreciate that. I’m a student studying digital innovation right now, and I just want to ask: if you were in my shoes, what question would you be asking?
Gillian Tett
Brilliant. Do you want a future in journalism, by any chance? ‘Cause there are probably too many Journalists in the world too, given the AI bots, but there you go.
James Manyika
Well, first of all, thank you for the question. I’m glad you found those open-source tools helpful. By the way, we’ve also added to that what are called, you know, open city datasets – and maybe you’re using these already – which are proving to be very, very useful around the world.
I think the question that I would be asking – and I would encourage you to ask, and quite frankly, challenge all of us with – is one that I ask myself all the time. You know, on the one hand, I’m absolutely convinced that this technology is incredibly beneficial, in all the ways we’ve talked about, to people, the economy, science and society. The question that I don’t think is automatic is: will everybody benefit from it? I think that’s the part that we just have to work on, and you should be asking – we should all be asking ourselves – that same question. ‘Cause I’ve seen enough examples – I mean, I grew up in Zimbabwe, and I’ve seen enough examples – COVID was one of them, where, you know, the world invented these extraordinary vaccines and not everybody got them. Some places got them, other places didn’t. Some people got them, some didn’t. So, the question of, “How do we make sure everybody benefits from these technologies?” is one that I think you should be asking and we should all be asking ourselves.
Gillian Tett
Well, thank you. Well, it’s been an astonishing debate, and I guess I’d take away three key points. Firstly, that what is happening in AI is absolutely astonishing in terms of the speed of innovation right now. We really are living in an extraordinary time – a bit like when Darwin stumbled on the Theory of Evolution, times 20. And I think most of us do not understand that these little announcements, or these big announcements, that come out almost every other day, and get maybe one paragraph in the newspapers, basically represent some extraordinary leaps forward.
Secondly, that as we try and make sense of this, we probably are using the wrong A in AI, and it’s not so much about artificial intelligence as augmented intelligence, accelerated intelligence, aspirational intelligence or agentic intelligence. I would argue, with my own very human biases, that we should stick in another A, which is anthropological intelligence, because that’s what I did my PhD in, but that’s just my bias. But we are talking about the wrong A in AI right now, in my view.
And thirdly, as we look at the world through that lens, it’s very clear, as you stressed and as Robert said, that it throws up extraordinarily big questions about how we’re going to organise society going forward, and who is going to control these technologies. And perhaps most importantly, who is going to think hard about who is going to get left out, or hurt, and who is going to be able to deploy these technologies for good – in the wider sense of good, not just the benefit of a few Silicon Valley bros. As Robert says, that is going to require a very, very big, thoughtful public debate, not just by Politicians, but voters too.
So, thank you for coming along today and starting this debate. Thank you to all of you for listening and contributing, and the brilliant questions online, and let’s go forward and try to debate with each other, if not with an AI bot. Thank you [applause].