Sir Simon Fraser
A very exciting session we've got ahead of us. I'm Simon Fraser. I'm Managing Partner of Flint Global, and former Deputy Chair of Chatham House. It's my pleasure to chair this discussion today, and thank you very much to Microsoft for making this possible for us.
A couple of very quick announcements before we go ahead. This event is on the record and it's being recorded. I think you've been invited to tweet, so please, feel free to do that. If you don't have the Wi-Fi password, it is Welcome210, with a capital W. I was told to tell you that 'cause it's been changed, Welcome210. We're on quite a tight timeline. We have to end at 20 to. There will be time for some questions, both in the room and, I hope, online. If you want to ask a question, please raise your hand, a microphone will be brought to you, and if you could make your questions brief and to the point, and say just briefly who you are, that would be great. Online, if you submit your questions using the Q&A, I'll be able to see them on the screen and I will read them out. So, that's the way we'll manage this.
Without further ado, this is a session to discuss the impact of AI in global affairs, and we are hugely privileged to have with us Satya Nadella, who needs very little introduction. He is the Chair and CEO of Microsoft, and has been since 2014, but he's been with Microsoft much longer than that. I think he joined in 1992, and Satya has held a number of important roles at Microsoft over the years, for example leading the cloud and enterprise activities and also the Online Services Division. So, he really knows this business.
Satya was born in Hyderabad and pursued his studies at the Universities of Mangalore, Wisconsin and Chicago. So that's a very brief background. I'm going to invite Satya to speak for about 20 minutes in response to a few questions, to get us going. And the first of those questions from me, Satya, if I may, is just this: last year was a huge year for AI. It sort of came into all our consciousness, and it was a big year, of course, for Microsoft and your industry. Could you just catch us up on all that, tell us what happened and where things stand, and what the opportunities are, therefore, that we're facing?
Satya Nadella
Well, first of all, thank you so much, Simon, for the opportunity to be here, and it's right, I mean, the last year, maybe since November of '22, that's when ChatGPT first came out. It's interesting, when I think back, it reminds me a bit of when I joined Microsoft. I joined in 1992, and November of 1993 is when Mosaic first came out, and so, if you, sort of, think about ChatGPT, there are some parallels to how the web came about, and I think we are there. So, to me, the last year has been all about, what is this technology? The broad contours of it are becoming a lot clearer to us. After all, we've been talking about AI for decades, but the realisation of AI in a form that all of us can relate to is probably the biggest thing that happened in the last 12 months.
In that context, there are two big breakthroughs, at least in my mind. One is a breakthrough where, in some sense, for the last 70 years of computing, we've been trying to build that most natural of user interfaces, right? You can, sort of, say from Licklider to Engelbart to now, it has always been about, can computers understand us versus us trying to understand computers? And it turns out that this natural language breakthrough we have, being able to have these multi-turn, multi-domain and multimodal conversations, is a massive, massive breakthrough in computers finally understanding us, versus us trying to understand computers.
In parallel, the other big thing that's happened, which has also been a 70-year computing history arc, is that if you think about computing, it's all about digitising people, places and things so that you can reason about them, and we now have a new reasoning engine, a neural reasoning engine. So, you put together this combination of a natural user interface and the reasoning engine, and you pretty much can reimagine every layer of the computing stack, every software category, and have broad impact.
And so, for us, one of the other things that I feel we got right was to design this as something that works with humans. The Copilot design choice, the design metaphor – when I first saw GitHub Copilot is when that design metaphor was really etched in me. And so, really, you know, it's been fantastic to see it scale, and if '23 was the year we built a lot of it, '24 I think will be the year where it scales to even greater heights in terms of everyday usage.
Sir Simon Fraser
Well, that's very exciting. I guess, for some people in the room – those of us who don't really feel that we understand it – there's a mixture of opportunity, but also some concerns, which we'll come back to in a minute, about what these applications might mean for our lives. But maybe one of the areas that we should explore rapidly is that of science and the innovative opportunity that AI offers in that sector. I mean, I think that you've already talked about this and the opportunities there are, for example, in cancer diagnosis and other areas. So, what are the most promising areas for AI to support scientific development, innovation and discovery?
Satya Nadella
Yeah, I mean, I think that's right, because at some level, there are two broad areas where I think we can capture the benefits of this generation of AI. One is around core productivity, whether it's for knowledge work or frontline work, and I'm sure we'll talk more about that as well, but the other one is, how can we accelerate science? If you take something like even the energy transition, I think the core challenge in any energy transition path is that you have got to take 250 years of development of chemistry and compress it into the next 25 years, right, in terms of discovering new materials, new molecules. And so, that's where I think this generative AI regime can be super helpful.
In fact, just last week, it was very exciting to see one of the models we have, called MatterGen, used to essentially come up with a new material that reduces the lithium content in batteries by 70%. We worked with one of the national labs in the United States to develop that and, you know, do the complete round trip. So, it's not just conceptually something that can be done, but getting it done and then, of course, scaling it.
Same thing around, you mentioned, cancer research. We're working with, you know, a variety of folks, including the Broad and others, to say, how can we, you know, even grow cancer ex vivo, such that we can get better protocols for treating cancer, as one example? And so, I think healthcare is one place. If I think about it, from core understanding of physics, to materials science, to chemistry, to biology, which are all very, very hard problems, this generation of technology, I think, can be very useful to accelerate our knowledge acquisition in these fields.
Sir Simon Fraser
And just a quick one on that, do people in those fields have the necessary skills and training to help them? I mean, obviously the people at the very top end do, but what about – how does that deploy through the system?
Satya Nadella
I think one of the ways the technology here is getting diffused is, it's kind of like computers. So, let's take the analogy of when PCs first became popular in the 90s – like, take office work, right? We all know how office work was done pre-email, word processors and spreadsheets, and then, essentially, how they spread inside of the workplace. And pretty much, it doesn't matter what kind of a company you are, you could be a law firm, or you could be a chemical company or what have you, you'd still, you know, do your work around forecasting, planning and communication, using these tools.
So, to some degree, our hope is – and we're seeing it – that the rate of diffusion of these technologies into different domains works in a similar way, right? These are just tools that you would use, like you used spreadsheets in the past. So, it's not that you need to be an expert in the tech. You want to be an expert in using this tech in your domain. That's when, really, technology is a democratising force, because if this is about everybody becoming a PhD in AI, it's not going to be that impactful. It's like all of us knowing our number sense pre-spreadsheets versus after spreadsheets. That's, I think, what these tools are all about.
Sir Simon Fraser
So, people have to be intelligent users of it, don't they?
Satya Nadella
That’s right, and in fact, the intense usage of it, and then to be able to apply it in different contexts.
Sir Simon Fraser
I think we had better, if you don't mind, look at the other side of the story, which is the concerns that people have about this technology and the extent to which it might escape human control or pose, sort of, ethical and other issues in society. And the capacity of both corporations and governments actually to manage that, to regulate it in a way which gives us the assurance we need, but doesn't actually inhibit the benefits from it. Where are we, do you think, in that process of striking that balance?
Satya Nadella
Yeah, I mean, one of the things that I feel very, very good about is the fact that there is such, I'll call it, scrutiny, dialogue, conversation around this topic, right? Because if I reflect on our industry, what has happened in the past is technology moved fast and we built it, and then, essentially, perhaps had to deal with the unintended consequences. And even regulators and civil society or what have you, maybe they were even excited about the technology versus thinking about the unintended consequences from the get-go.
Whereas now, everyone's learnt, and the good news is everyone's concerned and, you know, at the same time, also optimistic about what the technology can do, right? I mean, for the eight billion people, we can truly now think about, wow, there can be medical advice or a personalised tutor. So, the opportunities are clear as day, but, you know, this can even accelerate disinformation. It can, sort of, cause issues in our democracies, or it can, you know, lead to bioterrorism. So, these are real risks. There's also the existential risk of the runaway AI, and the fact that we can have an open conversation about risks and opportunities simultaneously should, first, be celebrated by all constituents.
Having said that, I also feel more optimistic that there seems to be a general convergence – whether it's in the UK, with the Safety Summit that you all held here, or what's happening with the Executive Order in the United States, or in other capitals, you know, in the EU and elsewhere – that we should deal with the here-and-now risks, right? Bias, election interference, bioterrorism: what should be the ways to think about those risks in terms of the engineering process and standards? There are a lot of things that we have voluntarily agreed to do as creators of foundation models.
So that's, I think, the approach we have taken today, and then, in the intermediate timeframe, I think there's this risk-based approach, right? The fundamental approach is that if you're going to deploy AI in healthcare, the healthcare regulations can apply to the AI. If you're going to deploy it in financial services, the regulations in financial services can apply. I think it's a reasonable place to start. There is no reason why you can't take all of the hard regulatory work that has been done and apply it to, sort of, the decision-making that AI gets deployed in. And then, we have to, sort of, solve for the long-term existential risk, like, you know, the runaway AI.
So, if we think about it in these three stages, I feel the right dialogue is happening, the right frameworks are getting put in place, and the right consensus is emerging, which I think will balance maximising the opportunity while mitigating the unintended consequences.
Sir Simon Fraser
Well, that's very interesting, because you are now on your way to Davos and no doubt you'll be talking to a lot of political and corporate leaders there about this. And what you are saying is that you feel that there is a, sort of, convergence in approach here, which I find interesting because one often reads in the media here that there's a divergence, for example, between the European Union's approach, the British approach and the US approach. But you are relatively sanguine on that, are you?
Satya Nadella
I am a lot more sanguine. I mean, in today's day and age, I feel that everything becomes extremely charged and political and, you know, you have these threads in social media. But if I look behind all of that, I think there's real consensus that this technology is very important. It's probably the most important technology to drive economic productivity and benefits for people in society, broadly. But we need to be careful, and we should learn from what has happened before in digital tech and in other technologies, and really take safety as a core, first-class feature, versus something that we deal with later. That's a pretty mature reaction, and I think, in spite of all of, let's say, the handling around it, the right things are happening.
Sir Simon Fraser
That's very good to hear and, of course, there's a particular concern in 2024 about the potential impact on election procedures, which I'm sure you're focused on and thinking about.
Satya Nadella
That’s right.
Sir Simon Fraser
It’ll be a test.
Satya Nadella
Yeah, and I think, even there, there's no question that with generative AI and things like deepfakes or what have you, election interference can be accelerated. So, therefore, some of the things that everyone's doing, that we are doing with content ID – how do you really ensure the veracity of content out there in the media landscape – I think those are all important things. And there are some, you know, decent technical solutions, and there should be real control over distribution. I always say, it's like anybody can write anything in a word processor, and then the only control human society has is over how that information gets disseminated. And that's where I think we have to exercise more control on the distribution side of it.
But I do feel – by the way, election interference or disinformation existed before generative AI, too, so it is not as if the last election cycle didn't have these issues. So, we should learn everything from that and then apply that learning to this new technology, realising that this technology does put some powerful tools in adversaries' hands.
Sir Simon Fraser
Yeah. Before we go to the audience to open it up, the last set of issues I would like to put to you is about the broader contribution to economic growth, productivity and, therefore, jobs and the structure of industries, and what people can expect in that area, which is obviously an issue of both opportunity and concern. How do you see that?
Satya Nadella
Yeah, you know, I start, fundamentally, from the point of view that I think we have a real challenge in economic productivity around the world. If you, sort of, adjust for inflation, we may be growing or we may have negative growth on a worldwide basis. Definitely in the developed world, economic growth is more challenged, and that's on top of all of the demographic challenges we have ahead of us.
So, I think we need a new factor of production. I mean, I simply, sort of, think we need some technological breakthrough that allows us to do more things, and to do that in such a way that it is, in fact, aligned with our energy transition goals and aligned with our demographic challenges, so that it's more equitable in terms of even the shape of the growth. It's not just about one sector of the economy. It's broad sectoral growth, where small and large businesses can all grow, and so on. So, those are all things that I think we care about.
So, I feel AI is probably the most promising of those technologies that can drive that, especially if you think about our conversation, even. We talked about what it can do for education, what it can do for healthcare, what it can do in manufacturing, what it can do for broad knowledge work. So, it's a general purpose technology, and those are very few and far between. Like, from the steam engine to AI, that's the type of stuff that can really move economies at scale.
Then you ask about what it does in terms of displacement. You know, let's face it, anytime there's a big technological revolution, we should be clear about displacement. But I also think that we should not fall for what economists call the lump of labour fallacy, right? There will be jobs; the question is the shape of these jobs. If anything, these tools can be very helpful in equipping us with the skills for the new set of tasks, right? The learning curves, right?
For example, one of the fundamental challenges is mid-career transitions, so to the degree these tools can be helpful in bringing the expertise needed. In fact, you know, Bill Gates used to talk about "information at your fingertips." I think he gave this famous Comdex speech in '93, where he talked about "information at your fingertips." I think this is the age of expertise at your fingertips. So, if anything, anyone can become an expert in anything because you have the AI assistant helping you.
One of the places where I'll mention this: the last time I was in the UK, I had a chance to meet with someone who was working in law enforcement, and, you know, he, sort of, proudly said to me he had built an app that he showed his grandson. The app was built using one of our tools called Power Apps, and Power Apps, basically, has a natural language interface where anyone can approach it to build a digital artefact, like an application, right? That's about re-skilling.
So, you can think about the wage support for someone who’s in the frontline, who is now able to create, essentially, applications. That means IT level wages can go to the frontline. So, I also think that there will be interesting changes in the labour market in terms of wage support for jobs in the frontline, which will be more valuable because they’ll drive productivity.
Sir Simon Fraser
That's very interesting, thank you. The other side of that is the infrastructure side. So, just very briefly on that, what are the implications for the infrastructure requirements, for example, the grid and the other sorts of network and supply requirements we need to support this technology? Are governments up to speed on that?
Satya Nadella
I mean, I think the fundamental driver of all of this is the compute engine, right? So, the first thing is, if you just take the prices of AI compute minutes or hours or tokens, whichever way you want to look at it, Moore's Law is very much alive when it comes to AI. In other words, every 18-plus months you have a halving, or even more: you know, increases in capacity and thereby reductions in prices. So, going back to my productivity argument, it is fantastic that we have this compute engine. Then, as you brought out, these compute engines need, you know, power. They need land, they need water and they need power.
The other thing is that, since it's all new infrastructure, we can build it using renewable sources. In fact, we're one of the largest buyers of renewable sources everywhere, and obviously these renewable sources are, you know, being fed into the grid and the grid gets better. We use the power on the grid, and we can even do things behind the meter that are going to be breakthrough technologies. So, we are doing a lot upstream of us in order to help.
But the other thing is, we should be grounded. This is what, 2%, 3% at most, of total energy consumption? So, it's not the largest, sort of, draw today, but as time goes on, this will be a large draw of grid power. I think, to me, the logic train has to work. That's why I'm so obsessed about making sure that this is not some narrow technology that's benefiting a narrow segment of users. It has to be a general purpose technology that's broadly uplifting the economy and spreading the benefits of that economic growth. If that is true, then we will be able to make sure that the resources that are needed for it, whether it's energy, land or water, are being provided more sustainably, and that will probably be the best societal decision we could have made.
Sir Simon Fraser
Very good. I’m going to open up to questions. I’m going to start with two online, if I may, while people in the room prepare themselves, and can I remind you, when you do get to ask your questions, as I said, to keep them brief? But there are two, sort of, linked ones that I just wanted to raise that have come in online, one of which is, “How will our interactions with AI change how we speak and think?” And if I could link that to another one, which is, “How do we manage to avoid embedding historical racial biases, misogyny, etc., in the algorithms?” Those are quite interesting societal questions.
Satya Nadella
Yeah, let me take the second part. I like the way the question is asked, because the fundamental thing is that bias exists in the real world, and the question is, how do you unbias the AI from the biases of humans? That is where alignment research needs to be done, and that's where, for example, when we launched Bing, a large chunk of the work that we – OpenAI and Microsoft, all of us – did was to ensure that it was not just the base model, but a model trained to be aligned with our interests. What are the ways we can ensure, for example, that if there's a reference to a Doctor, it doesn't, sort of, automatically think it's just a male Doctor, but it does think about gender diversity, because that bias is there today in just the internet corpus? These are things that, you know, we can do from, sort of, an engineering perspective.
But I do think this is where, when we talk about safety, societies have to make decisions on where the boundary lies between what is free speech, what alignment is really required, and what is censorship. One person's free speech is another person's censorship, and that's a place where I think societies have to also come together to decide. But we take that responsibility, like we do in any other social product of ours, and we apply that.
The other question around, how do we interact with this? Is that the question?
Sir Simon Fraser
How is it going to change the way we speak and think? Yes, I suppose the question is, what are, sort of, the meta implications for us?
Satya Nadella
The place where I'm, sort of, very excited is, take education. One of the things which will perhaps be the breakthrough, one of the dreams in education, was that a Teacher could, in fact, have additional staff really able to do personal tutoring, in the classroom and after the classroom, for every student in the class, right? That's the ultimate breakthrough in educational outcomes that we've all dreamed of, and now it's possible.
So, when I say, what's the way it can help us think? It can help us understand, for example, if I'm trying to solve a calculus problem, where I make conceptual mistakes. This is like that personalised Tutor that I always needed. Like, you know, I studied electrical engineering and never understood Maxwell's equations, but finally I can, because there will be that Tutor who has all the patience to teach me, to visualise. In fact, the first time I actually got it was when it was not in a textbook, but a visualisation. So, those are the kinds of things that I think can be real breakthroughs in how we relate to AI.
Sir Simon Fraser
So, improving access to knowledge, in effect?
Satya Nadella
Absolutely.
Sir Simon Fraser
Right, in the room. Gosh, lots of hands. Okay, I’m going to do these in batches of two if I can, and I’ll go to both sides of the room. So, lady here and then, a lady here in the second row.
Latika Bourke
Hi, thanks for your interesting talk. Latika Bourke, Freelance Journalist. What's your human commitment to news and editorial content, because we've already seen some issues with MSN promoting fake news and junk news as the algorithms have replaced humans? So, do you have a strategy and an ethical framework around what your safeguarding role in that is in the future?
Satya Nadella
Yeah, no, in fact, it's interesting. One of the things I start with is to think of these tools as really tools, even, to create drafts for people who are in the news business. So, to your point, that's, to me, the fundamental premise – when we talk about empowering people and organisations on the planet to achieve more, that's the Microsoft mission. So, we start with, how do we put tools, even, in the Journalist's hands, where, by the way, this is just a draft, it has to be fact-checked. And even in our own distribution – you brought up MSN – our goal would be to be able to say, "Who generated it, which news organisation is behind it?" and to make sure that we are not falling victim to anything that is not news created by people who are really in that business.
So, for sure, our framework would be to start with creating tools that improve journalism and journalism quality, both on the creation side and on the distribution side.
Sir Simon Fraser
Okay, can we try and take two in the interest of speed? So, lady here and then let's go to someone near the back, gentleman there, towards the back.
Member
Yes, thank you very much. I've been working on AI for many years. When ChatGPT was launched, Bill Gates declared, "We made Google dance." Do you think that GenAI could provide more diversity and freedom to citizens in information delivery, versus the old monopolies of information and data delivery? And do you think GenAI could mean the end of the post-truth era, where we have been limited in the freedom to receive information by the old tech?
Satya Nadella
The old – yeah.
Sir Simon Fraser
Can I take the other gentleman, as well, please, yeah, in the middle of the row? Yeah, that’s right.
Henry Yates
Hi. Henry Yates, Retrace Software. How important is it for Microsoft to work with startups to drive innovation?
Satya Nadella
Yeah, I’ll take the second one. You know, it is very important. That’s why we are here.
Sir Simon Fraser
You still have to answer the first one.
Satya Nadella
I will after. Because, I mean, OpenAI was a startup. At some level, I think, if anything, our commitment is to working with companies all around the world. In fact, just this morning, I had a chance to meet the CEO of Wayve, which is an autonomous driving company right here in the UK, and take a drive in his car. And it was wonderful to see what they're doing in terms of building, really, a new class of models for autonomous driving that, you know, take a very different approach.
So, to us, it is about building platforms that allow for broad innovation. You know, to me, there's that concept that, as a platform company, the value created around the platform needs to be more than what the platform captures, for the platform to be stable. That has been the company that I joined in '92. I subscribe to that view: that's how we built our PC ecosystem, that's how we built our server business and then, subsequently, the cloud, and now AI. So, that'll be the approach we will bring. We are not about creating a marketplace that tries to arbitrage between supply and demand and extract all the rent. Our entire goal is to create economic surplus in every geography and community, and you can only do that if you support startups, too.
To your point about what happens to the distribution of information, I think this generative AI technology will reset a lot of how we seek data, how we seek answers, how we seek information. It even goes back to how information gets created and distributed. So, let's face it, today there's real aggregation power in a few places, right? Search is one, newsfeeds are another. Both of these things could be up for disruption, and that's where every startup also has a shot. So, I would say, if you're a Publisher, if you're a Journalist or what have you, you should welcome this, because what happens to the information ecosystem when there's high concentration is that things deteriorate. You know, look, web search is a fantastic thing, except SEO itself has also had unintended consequences for being able to get at information.
So, therefore, I think us having a reset is the classic cycle: as technology evolves, it even cleans up some of the unintended issues of the previous generation. So, from that perspective, I think this is disruptive to the existing power base, if you will.
Sir Simon Fraser
Okay, I'm going to take two more in the room. Somebody towards the back of this side. So, gentleman there and then, lady over there, these two in the room. If neither of you asks about climate, I'm going to come in with an online one about climate. So, over here.
Dominic Hurndall
Dominic Hurndall, from Oaklin Consulting. Given the ubiquity of Microsoft and the rate at which the technology is changing, who do you feel accountable to for how the technology is developing, or do you feel accountable to anyone? Is there any single nation state or organisation? You know, you touched on the importance of us having the conversation, but actually, are tech firms like Microsoft running so far ahead that there isn't really anything effective being done?
Satya Nadella
So, it…
Sir Simon Fraser
Can I take the other one? Sorry.
Satya Nadella
Yeah, please, yeah.
Sir Simon Fraser
Lady there.
Olivia O’Sullivan
Hi there. Olivia O’Sullivan, I direct our UK in the World Programme at Chatham House. I’m very optimistic about the way AI could improve public services and the functioning of the state. Equally, the biggest story in this country right now is about a disastrous digitisation programme in our Post Office, for example. So, it really underlines that there are risks when government deploys technologies it doesn’t fully understand to make decisions about people’s lives. What would you like to see states and governments do to improve their capacity, their skills, to use AI well? What do you think is missing in the capacity of the state?
Satya Nadella
That’s great. Yeah, it’s…
Sir Simon Fraser
Two, sort of, linked questions in a way, so accountability and delivery.
Satya Nadella
Yeah, so, I think, on the first one, on accountability: at a firm level, you know, I love this definition of a corporation that Colin Mayer of Oxford came up with, which is that "the social purpose of any company is to find profitable solutions to the challenges of people and planet," right? And there are two keywords there. One is 'profitable': after all, in our, sort of, formal markets, you have to drive businesses that are generating profit by allocating resources to innovation and competition and what have you.
But the other is 'solutions to the challenges of people and planet', and that's where I believe we get a license to operate, whether it's in the UK or in the United States. So, that's why I think our shareholders should care about this, I care about this and the company needs to care about it: we can't go and do things that, you know, break things around us. If that happens, we'll lose the license to operate, our shareholders will suffer, and therefore they should care about us doing the right thing. So that's the self-governing mechanism.
That said, let's also be grounded in the fact that we are a company based out of the United States, so we are subject to US law and regulation. We can't outstrip, quite frankly, the United States' own credibility around the world. I always, sort of, say that, you know, we are a US company, and so what the UK thinks about the US will matter to us. And as a multinational company, I don't take the license to operate anywhere for granted. Unless and until we are creating that local surplus – I can look at the local startups here that we've partnered with, the small businesses that we are making more productive, the public sector that we are making more efficient, health outcomes, education outcomes – unless and until I can come and see the Mayor of London and, sort of, talk about this in credible ways, why should you allow us to operate? So, I'd say that's where the accountability comes from, I think, in the long run, along with our understanding that we will be subject to the regulations, laws and norms that everybody expects us to follow.
On the public sector, this is one of the most exciting things for me, right? The idea that this technology can improve it, right? When you take, as a percentage of GDP, healthcare and education added up, that's probably one of the largest pieces of government spend anywhere in the world, and in both of these you can make a real material difference using this technology.
Then on the second part, on the diffusion and deployment of it: I'm not particularly familiar with the issue of the Post Office, but take the cost of deploying any complex IT project – one of the fundamental things this revolution has changed is software development, right? When I look at GitHub Copilot – as I mentioned, my belief that this technology can actually be real came about when I first saw GitHub Copilot, back in early 2022, and what it was doing for Software Developers and their productivity. That's perhaps one of the greatest democratising forces. You've taken the most elite knowledge work and made it simpler, easier, more fun; people can stay in flow.
So, even here, there is not, I would say, an equitable distribution of software engineering talent today across sectors, whereas we can have that with something like this, right? Someone who is not trained, perhaps, as a Software Engineer can pick up more of the skills on the job, and so on. So, therefore, I do believe that the public sector and public sector efficiency – taking education, healthcare and the general deployment of digital systems, and taking the complexity out – can make a huge difference.
Sir Simon Fraser
Okay, we're coming, I'm afraid, to the end. We've got just about a minute left. I did want to ask you about climate, because there's a question that's come in online saying, "How will generative AI help us to develop the new technologies we need in order, really, to deliver progress on the climate agenda?" I think that is an important one, so maybe we could end on that one?
Satya Nadella
A hundred percent. I mean, to me, that point you asked earlier about, "How can it accelerate science?" – perhaps that's the place. We definitely, as we build this technology out, should be the ones driving even more of the renewable sources of energy that help us power the compute that then creates this technology. But the most promising thing is to use generative AI to take 250 years of chemistry and accelerate it so that we can find the new molecules, the new materials, that allow us to make the energy transition more successful, because I think that's what we need. I mean, you know, the value chain here is so ingrained that unless and until we can accelerate science, it's going to be tough for us to achieve some of the climate goals that we have set for ourselves.
Sir Simon Fraser
Thank you very much, and it's good to end on an up note about the prospects that this new technology holds for us all. I think, very often, we are focused on our concerns about it, and that, of course, has been true of new technologies throughout history and the human reaction to them, but it's been a great privilege for us. Thank you very much for joining us today to explain all the many issues that you have explained to us, and we're pleased that you're going on to Davos to have those conversations. Thank you for choosing Chatham House for your stop here in London. Thank you to Microsoft for organising that. A round of applause [applause] [pause].
There is a lunch, and there is a demo, a Microsoft demo of some of their technologies, which I think will be very interesting for all of us. Thank you all for coming. It's been a great pleasure to host this event [applause].