Bella Pollen
Good afternoon, everybody. On behalf of Northeastern University and Chatham House, I’m delighted to welcome you all to today’s event: Artificial Intelligence and the Public. We’re going to start with a conversation with Joseph Aoun, President of Northeastern University, and learn a little more about a recent survey from Northeastern and Gallup on the public perception of AI and its impact in the UK and North America. Following that, we’ll move into a panel, with leaders across several key industries, to discuss the implications of AI and strategies for the future. Please be aware that this event is on the record and will be livestreamed. Those joining us via livestream are welcome to comment on Twitter, using the hashtag #CHEvents. For everyone here with us in the room, please take a moment to silence your mobile phones. Thank you.
We’re in the throes of a digital and robotic revolution. Robots will one day be an extension of us and also our replacement. But fear it, resist it or embrace it, AI is our future. It will change our lives in a myriad of fundamental ways. Today’s conversation is about broadly assessing the impact of AI on the global economy and how higher education should evolve into the best possible training tool to meet the challenge of tomorrow. It’s my great pleasure, therefore, to introduce Joseph E Aoun, President of Northeastern University.
President Aoun is a well-known higher education thought leader, a renowned scholar in linguistics and an internationally respected voice on the value of global and experiential education. Under his leadership, Northeastern has globalised its signature co-op programme, nearly quadrupled external research funding and established a network of campuses, with six locations in the US, Canada and, as of last fall, here in London. He’s the author of numerous articles and books, including Robot-Proof: Higher Education in the Age of Artificial Intelligence, available here, and it’s an illuminating read. Please join me in welcoming President Joseph E Aoun.
Kenneth Cukier
There you go [applause].
Joseph Aoun
Thanks for doing that.
Bella Pollen
And nice to see you again [applause].
Joseph Aoun
That’s great. No, of course, thank you.
Bella Pollen
So, President Aoun, obviously, you are somebody who follows AI trends very closely. Do you still believe that AI will eliminate more jobs than it creates?
Joseph Aoun
Yes, for the foreseeable future. There are various projections, and you have seen them, read them and heard discussions about them: we are projecting that up to 50% of the jobs we know are going to disappear in the next 20 years in the advanced economies. In the emerging economies, it will be up to 70%, according to the WHO. Now, whether, as The Economist discusses as well, it’s going to be 50 or 40, it’s going to be substantial. Also, there are new jobs being created as we speak, but the point is that we don’t expect that the jobs created in the short term are going to compensate for these losses.
But let’s go one step further: what is the implication for us, as humans? What is the sweet spot for humans? Clearly, we have to start thinking about human-machine interaction, which everybody’s talking about, but we also have to start thinking about what we humans do that machines cannot do and don’t do at all, at least for the foreseeable future. By foreseeable future, I mean our lifetimes, maybe 50 or 60 years. Essentially, that subject is going to be central to society and central to higher education, because higher education is going to play a role in feeding this discussion. So, I’ll be happy to expand on that, Bella. And call me Joseph. I call you Bella; call me Joseph.
Bella Pollen
Okay, it’s…
Joseph Aoun
Okay.
Bella Pollen
…there’s a lot of Presidents. Well, in Robot-Proof, which I’ve just read, you said that 21st-century universities should, and I quote, “liberate students from outdated career models, to give them ownership of their futures.” Can you maybe give us a bit of an idea of what that liberation might look like?
Joseph Aoun
Yeah, I mean, look, given the framing we just discussed, what is the mission of higher education? The mission of higher education is to help people become robot-proof. How do you help people become robot-proof? Becoming robot-proof is a journey, a lifelong journey. It starts by looking at what we do, at what kind of learning we are dispensing. And the learning we’re dispensing should be based on what we call humanics. What is humanics? Humanics is the integration of three literacies. Tech literacy: understanding machines and how to interact with them. Data literacy: understanding the sea of information generated by machines. And the third is human literacy: what is it that we do that machines cannot do, or don’t do at all, at least for the foreseeable future? And what is human literacy? It is the ability to be creative, to be entrepreneurial, to be culturally agile, to bring people together, to understand body language, when I look at you and see whether you are agreeing or disagreeing with me, the ability to work in teams, the ability to be global, etc., etc.
So, what I am saying is, yes, focus on the human-machine interface, but go beyond that and start thinking about what we do well that machines cannot do. The idea is that if we start competing with machines on their turf, we’re not going to succeed. If we look at machines and at what we can do that they cannot do, then that’s where our sweet spot is as human beings.
Bella Pollen
And is teaching that very different from what you do at the moment in higher education?
Joseph Aoun
You know, in higher education, first of all, we like to be siloed, whereas here we are talking about the integration of the three literacies. Second, there is something beyond that. We defined our universe as being the campus, per se, the magnificent setting that the students are living in. But, in fact, if we talk about creativity, entrepreneurship, teamwork, cultural agility, etc., you have to live it, and this is the experiential component: every learner, every student, has the opportunity, the ability, to go work and test that in a real-world situation. Because when you are in a real-world situation, with long internships, where you are embedded in an institution, whether it’s for-profit or not-for-profit, you are trying to understand yourself, what you’re good at, what you’re not good at; you understand what others are good at; you understand how to interact with others and how people are looking at things from different horizons, that’s the cultural agility part; and you see gaps. You say, look, this is an opportunity no one is covering, I can go and launch it. So, the idea is that we need to focus on humanics, but humanics cannot only be studied, it has to be lived. In other words, you have to integrate the classroom experience with the world experience.
Bella Pollen
And are employers broadly in agreement with your new ethos for higher education?
Joseph Aoun
We’re going to ask Bill here, because Bill is going to enlighten us about that. But let me mention that, in fact, we survey employers constantly, and they say the type of talent they’re looking for is not somebody who is only proficient in tech literacy or data literacy or whatever silo, but somebody who can go beyond that. And, in fact, when you start asking them, they use the term ‘soft skills’, but it is, in fact, the human literacy. They want the integration of that, because, ultimately, they’re investing in talent and investing in leadership. Somebody who is narrow in one field is not going to be able to lead, and is not going to allow others to flourish.
Bella Pollen
Creativity and imagination are quite hard things to teach; they’re quite hard things to actually get a handle on. Will you be changing the way that you teach creative subjects?
Joseph Aoun
I mean, the best way to teach creative subjects is, first of all, to stumble, to fail, and that’s an experiential component. Yes, people will study the great creators in our history, or the current ones, but if you don’t do it yourself, you’re not going to move forward. And the best way to do it is when you are in an environment that is relatively safe, as a student. So, this is why we have to foster creativity in action, and there are many ways: student-run accelerators, new not-for-profits being launched, for-profits being launched. But also, creativity is not only in the new start-ups that you think about. You can be part of any organisation, large, medium-sized, etc., and start thinking differently about it. That’s entrepreneurship.
Bella Pollen
Absolutely. Tell us, I know that you did a Gallup poll, which was released yesterday and showed that most people in the US, UK and Canada expect employers to provide skills training. Do you think that’s the optimum place for today’s workers and learners to access…
Joseph Aoun
No.
Bella Pollen
…or do you think this has to begin and end at university?
Joseph Aoun
Look, we’re all becoming obsolete. My colleagues, and Bill especially, are going to discuss that we’re all becoming obsolete. We all need to retrain, to re-educate ourselves, to reskill ourselves, to reinvent ourselves, and what’s interesting is: who is going to provide it? This is where the Northeastern-Gallup survey of UK, US and Canadian citizens is very illuminating, because it shows commonality. People want lifelong learning, but people are looking at different providers. What is interesting, for instance, and this will be discussed further, is that in the US, people expect the employer to provide it. In other places, and you’re going to hear that in the UK, and in Canada, the questions are: what’s the role of Government, what’s the role of universities? What’s interesting, from this perspective, is that this is worrisome, if you look at it, because employers are providing less and less lifelong learning in the US. Why? Because the average tenure of an employee is less than five years. In Silicon Valley, it’s two years, and they say, “Why should I invest in an employee, if the employee is not staying?” So, that’s the worrisome part.
The other part is that, in the US, for instance, and I’m focusing on the US because my colleagues are more versed on a global scale, the gig economy is playing a major role. A third of people who are employed are employed through the gig economy, so they don’t have employers. So, who’s going to help you retool, redefine yourself and reskill yourself? In fact, if I may quote Bill, I was discussing this aspect and he said, “We employers are good at helping people reskill or upskill themselves within the confines of our company, but if somebody wants to reinvent herself totally, and move, for instance, from finance into a completely different field,” like become a writer like you…
Bella Pollen
Hmmm, gosh, okay.
Joseph Aoun
…“we will not be able to help her, so somebody needs to go beyond the companies to do that.” And unfortunately, universities have not embraced lifelong learning as part of their mission, and frankly, that’s the opportunity. If you think about it, we are an ageing population in the US, in the UK, in Canada, and therefore focusing only on 18-to-22-year-olds, which is essential, is not going to be enough, because we are all going to need lifelong learning. That’s the opportunity, and that’s why I believe that, from this perspective, higher education is facing its golden age, but it doesn’t realise it yet.
Bella Pollen
No, and governments, are they doing enough to upskill their citizens? I mean, do they listen to your advice, do policymakers…
Joseph Aoun
Look…
Bella Pollen
…actually…
Joseph Aoun
…I think it’s a great point…
Bella Pollen
…seek your advice?
Joseph Aoun
…for instance, because the attitudes here also shift. In the United States, for instance, people don’t want Government to play any role. They want the employers, and we talked about the limitation there. In the UK, it’s different, there is more of an acceptance of a role for Government, and in Canada, too. But let’s go beyond that, on a global scale, and look at the intervention of governments throughout the world. In Singapore, for instance, there is a notion of a lifelong learning account that every citizen has. This is something that hasn’t been tested in the three countries. The Scandinavian countries have that. In Canada, in some pockets, they’re working on this. But that’s one of the aspects of intervention: Singapore and some Scandinavian countries are saying, we, as Government, can play a role there.
Similarly, if you look at the UK, per se, there is work being done now to ask every company of a certain size to put half a percentage point of its payroll towards lifelong learning, and this is a step in the right direction, but it’s not enough, as we said, because of the gig economy. So, we are in a transition here. People are waking up and saying AI is going to affect us all, but the answers are not there, are not uniform. And frankly, the panellists are going to discuss how we can build a human-centred AI, not only from an ethics perspective, but also from a governance perspective and other perspectives that they will discuss, and I will be happy to elaborate.
Bella Pollen
I have so many more questions, but I’m going to leave you in the hands…
Joseph Aoun
Absolutely.
Bella Pollen
…of the panel. Thank you.
Joseph Aoun
Thank you [applause].
Kenneth Cukier
Thank you, Bella, that was absolutely fantastic. Yeah, good. I’m now going to bring on the panel, who are already coming on stage, I’m sure. Fantastic. So, let me explain to you who’s here. On my right, on your left, is Oli Buckley, who’s the Executive Director of the Centre for Data Ethics and Innovation. Next to him is Bill Winters, who is the Chief Executive of Standard Chartered Bank. Welcome, Bill. And Kriti Sharma is the Founder of AI for Good and is also a Board Director of Oli’s Centre.
What we’re going to do now is listen to their opening statements, for several minutes, then we’re going to have a moderated discussion on stage, and then we’re going to give you a chance to ask questions. However, before we do that, I need to get a show of hands, because this is all about the public perceptions of AI. I’m interested in getting a sense of the room: who here is nervous, versus who here is very optimistic? All those who are a little bit worried about the AI future ahead of us, raise your hand. Okay, so, that’s like 9.6% of people. Yeah, can you do both? No, actually, you can’t. And the reason why is, that’s just too wishy-washy. We have to crystallise these debates.
Joseph Aoun
Quite right, yeah.
Kenneth Cukier
Right, you can’t just sit on the fulcrum, exactly. So, that was about 10% to 15%. You can say undecided? No, you can’t; you have to come to a conclusion. I’m going to force the issue. Who here is very optimistic about the future of AI? I think it says more about your personal constitutions than the issues. Okay, that’s great. Good, it’s almost half, and that sounds pretty good. Before I get started by inviting Oli to be the first speaker, there are a couple of housekeeping notes. First, to remind you that although we’re at Chatham House, this is not under the Chatham House Rule; in fact, it’s on the record. Secondly, this is being co-hosted with Northeastern, thank you, and is part of a Digital Society initiative that Chatham House is forging forward, in which we are taking these big issues that combine the policy world and the technology world and looking for a common understanding about them, for the purpose of maximising the potential of these technologies. Having said that, Oli?
Oliver Buckley
Thank you. So, in a moment, I’m just going to tell you a little bit more about the organisation I work for, so that you can put that in context. But I guess my opening reflection is to say I always think there’s a danger in these conversations that we end up emphasising the ways in which society needs to respond to technology and don’t think enough about the way that technology needs to respond to society.
Now, the Centre for Data Ethics and Innovation, the CDEI, is a new organisation set up to advise the UK Government on how to maximise the benefits of AI for society and the economy, across the board. We’re led by an independent board of experts, of which Kriti is one. This is a multidisciplinary group, drawing on people from business, from academia, from regulation and policy environments, and from faith and community backgrounds, too. Our focus is on developing recommendations for how AI and data-driven technologies should be governed, and we think about governance in its broadest sense, right from national policy, laws and regulations through to voluntary codes and organisational culture. It’s our belief that if we get the governance right, and, to be clear, I think that we’re already getting quite a lot right, so we’re not starting with a blank sheet of paper, that’s how we can ensure that AI is working in the service of our values as a society, that innovators have the clarity that they need to innovate and, really crucially, that these technologies are developed and operated in a system that is both trustworthy and trusted. So, ultimately, without public trust, we won’t get the sustainable rewards that AI can bring. We might not get to benefit from AI-driven improvements in cancer diagnosis, for example, if there’s a public backlash that makes it more difficult to progress.
So, for us in the Centre, ensuring that the public voice is accounted for is absolutely central to our work. In some of that early work, and we’re just getting started, one of the things that has struck me as a theme is a sense of powerlessness in the face of technological progress, the idea that this is somehow beyond our control, not just individually, but collectively, too. People tend to speak about these developments as inevitable, as if it’s for Government, citizens and most businesses simply to sit back and wonder how to respond to the impacts that these technologies are bringing. And, to an extent, that has been people’s experience of the development of the internet: lightning-fast change, huge disruption, the emergence of a new status quo, followed by a general sense of bewilderment about how we got here and who exactly it was who decided that that was what we wanted.
So, I think we have an opportunity here to do better than that in the development of AI, that we can shape the future, not just respond to it, that technology should serve society and not the other way round. Now, in order for this to happen, we need to start with a vision for how we want the world to be. What new trade-offs do we want to make in a world where AI makes things possible that simply weren’t possible before? So, you know, for example, if banks today are able to use AI to identify vulnerable customers from their transaction data, you know, to spot someone with a gambling problem from the patterns of spend they’re making online, should they do that and should they take steps to protect those people? If the capability is there, do they have a responsibility to use it, or is this a gross infringement of personal privacy? You know, if better predictive power enables some people to get insurance much more cheaply, but for others it becomes prohibitively expensive, are we okay with that and in what circumstances?
I think there’s another set of questions: what constitutes the good life in a world where human brainpower can increasingly be replaced by machines? Are there some jobs that we might want to reserve for humans, no matter how good the machines become? Perhaps in matters of justice or the caring professions. And what about the opportunities: how should we share the productivity gains that might come from AI? Does it provide an opportunity for us to think less about the quantity of work and more about the quality of work, to understand not only what jobs people are good at, relative to machines, but also what jobs are good for people, jobs that enhance wellbeing, resilience and social cohesion? And in a world where it’s now becoming quite respectable to advocate for a universal basic income, a world where we’re potentially paying people regardless of what they do, is it so farfetched to think that we might pay people to do jobs that benefit them and society, but might not meet the kind of economic, commercial criteria that we emphasise today?
So, these are big societal questions that require societal responses, and that includes agreeing on who should decide what. What are we happy to leave to the market, to Company Executives responding to market incentives? What are the decisions that we think need to be debated through democratic institutions? And then, once we’ve decided that, we have to think about what good governance looks like in this world where machines are complicating things.
So, I’m an optimist. I think that we can answer these questions, but, of course, we won’t get there overnight. There’ll be winners and losers. There will be people who are disadvantaged through the period of disruption that inevitably follows, and we have a responsibility to look after those people and to prepare them for the new age. So, it’s going to be a collective endeavour, and I think that this conversation today is vital. I look forward to hearing what my fellow panellists have to say.
Kenneth Cukier
Great, thank you, Oli, fantastic. Kriti.
Kriti Sharma
Hi everyone, I’m Kriti. Just to give you a little bit of background on myself: I am a Computer Scientist by training, and there were moments in my career, in my life, when I had access to hundreds of millions of transactions and profiles and data of users, of people, for my research and for my work, and I went through that without ever having to think of the word ‘ethics’. So, for me, living in a world, in London, where we have a data ethics or AI ethics event every week, usually with Oli, Ken and I and ten of our peers on the panel, it’s positive, and it has some challenges as well. First, the positive: this is becoming more mainstream as an issue in how we create AI, what the design patterns are, what we use it for. But the challenge is, I think we are doing a bit too much talking, and we need to translate this into action faster and sooner, and this is where most of my effort at the moment is focused.
The way I look at the challenges of AI is, first, who creates this technology? I genuinely believe that the future of our society should not be designed just by geeks like myself. This is why I proactively bring together people from different perspectives, people who ask questions. A big call to action to all of you, very powerful people making influential decisions in your organisations: look at the structure of your teams, not just gender or race or background, but what skills they have. One of the hottest jobs right now in the field of AI is not necessarily Data Scientist; it’s Anthropologist, or Conversation Designer: people who can understand the interface of humans and machines and how to build that trust, and those are the skills that we’re really struggling to hire. I’ve hired, over the last five years, anywhere between 70 and 100 Data Scientists of all kinds, and often the challenge is that you’ll find certain skills and not others. You will get great people who do incredible maths and computing, but who don’t necessarily understand the human process. Hence the human-centric design principles, and the human-rights-driven approach to AI that even some of the European Commission’s approaches are recommending: when we build these systems, think about the person. So, bringing together a diverse group of people, and taking a human-rights and human-centric design-based approach, is absolutely going to translate the fear, and the balance or imbalance you saw this morning, into more optimism, for sure.
The other area I spend a lot of my time on these days is what we use it for. We hear a lot about AI and its potential and various applications, but in reality, in the business world, a lot of applications of AI in production today are limited to making people click more ads, recommending more products to buy online, or financial systems. Some of the projects I’ve worked on used AI to make decisions on who gets access to a product or not, and to make you spend more time on your screens, and I think, as a society, we can do better than that. We should be investing more in tackling some of the most difficult challenges that society is facing today. So, a lot of the projects I’m working on at the moment involve creating a network of Data Scientists, of people who care about social challenges, and bringing them together to address issues in healthcare, in women’s rights, in climate change. And there was a comment made in the previous session with yourself, around how much time employees spend in these companies today and how we train them. A big way, especially in data and AI, is to bring in experiences from different fields. So, if you have colleagues, if you’re cultivating this talent, give them a bit of time to also work on AI projects in something completely different, and they’ll bring a very different and more interesting perspective back into this world.
And lastly, before we go into more comments: think beyond your role today, and that’s something I’ve had to work quite hard on. My ideal path would be to just immerse myself in AI applications and work with data all day, but I’ve also started to zoom out and think about policy issues. It should not just be Oli and his team in the Civil Service who have to deal with that; we all have to take more accountability and bring ourselves to the table. Show up, start to try to influence those decisions and provide our critical input into these projects. I think all of us have a lot to contribute, and do not think that this field of AI is left only to people who understand it or those who are investing in it. There’s a wider opportunity for everyone to get involved. Thank you.
Kenneth Cukier
That’s great, thank you very much. Bill.
Bill Winters
Yeah, well, it’s a pleasure to be here. Thanks very much for having me. So, I run Standard Chartered, and Standard Chartered, if you don’t know, is a global bank. We operate in about 70 countries. We’re headquartered in the UK, but we have operations in all three of the countries that the Northeastern-Gallup research covered: the US, UK and Canada. The bulk of our business is in Asia, the Middle East and Africa, which, obviously, were outside the survey’s scope. And President Aoun and I discussed before how interesting it would be to see how different the perceptions are coming out of China, for example, where AI is not only extremely advanced, but also used in some very different ways, at least for the time being, relative to the way that it’s used in the West today. But we can imagine convergence over a period of time.
But we were having a quick chat before we came down this afternoon, and part of the question was, you know, “Why are you here?” The answer is, I don’t have any of the expertise of my colleagues on the panel. I’m delighted to hear that some of the questions that Oli framed are being addressed in a serious way from the perspective of Government, and we also frame those questions, but I can tell you some of the answers from a purely commercial perspective as well. Will we make loans to somebody who has an AI-indicated history of gambling? No, we won’t, and we’re very happy to have that information. Is that ethically correct? Yeah, I would argue that it is, but I can absolutely see the other side of the argument, which is that that’s unfair.
Not too long ago, we had debates about whether it was okay to ask somebody applying for life insurance to have a physical check-up. That was viewed as unfair, because the person who smoked or drank heavily, or who had a hereditary disease about which they could do nothing, would be disadvantaged relative to those who were able to demonstrate that they led a healthy lifestyle. Nothing AI about that; that’s good old-fashioned I. But we debated, at the time, whether that was an acceptable practice or whether insurance should be mutualised.
You can tell by my accent that I come from the United States. I’ve spent half my life, the second half of my life to date, here, and there are quite interesting perception differences, which President Aoun called out, between the US and the UK as it relates to AI. There are also very, very different perceptions as to the degree to which health benefits should be mutualised, right? In the US, you’ve got private insurance, which, to a degree, discriminates between healthy and unhealthy people. In the UK, you have the NHS, which doesn’t discriminate between healthy and unhealthy people. You could argue both sides of that, given the way each system works. Now, my point is that the ethical challenges that AI presents are, I think, very familiar ethical challenges to all of us; it’s just happening faster. And it’s going to happen in ways that, as Oli, I think, quite directly pointed out, mean we may only realise after the fact how far down a particular road we’ve gotten, before we realise we don’t quite like that road, and it’s quite right to ask that question upfront.
So, I won’t bore you with what banks do with AI. I’m also on the board of a Swiss drug company; I happened to be there for a board meeting this week. The way that Novartis is using AI to improve the efficacy of clinical trials is mind-boggling, and it’d be hard to argue that that’s not for the good, right? It’s making the discovery of drugs, and the safety of the associated drugs, much, much better as a result of AI. I’m on the board of the International Rescue Committee, a fabulous charity operating out of the US and the UK that deals with frontline refugee situations, overwhelmingly in and around Syria now, but with a history all over the world. There is extensive use of AI to target the aid programmes and to prevent fraud. When a group of Mafiosi set up on the Syrian border with Turkey to inflate the prices of all the goods and services being delivered into Syria, you needed some really good data tools to understand the patterns that could trace that back to the source of the fraud.
So, AI is everywhere. There are some fantastic applications. In banking, it's allowing us to get the appropriate advice to customers at the appropriate time, customers that are showing signs of distress. There are two responses to that: one is to help them through the distress. By the way, if we've already lent money to them, we have a really big incentive to help them through their distress, to spot it early and get them onto a plan. It won't be obvious. They usually don't come and say, I'm in distress. There are indicators in their payment patterns. There are obvious indicators; in the old world, it would be their salary stopping coming into the deposit account. That was an indicator that distress was likely to come. There are many more sophisticated tools that we can use today. But we also use it for marketing, we use it for differentiating between one customer and another and we use it for credit decision-making. Will this insinuate itself into every aspect of our financial lives? Absolutely, right? Should it be checked or, at least, should we understand how it's being used? Absolutely, and hence the debate. So, I'll stop here and let the discussion take itself where it goes.
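[Editor's note: a toy sketch, not anything Standard Chartered has described. It illustrates only the simplest "old world" indicator Bill Winters mentions, a regular salary deposit ceasing; the function name and the threshold are invented for illustration, and real distress models are far more sophisticated.]

```python
def distress_indicator(monthly_deposits, window=3):
    """Flag an account when the recent average deposit has collapsed
    relative to the longer-term average, e.g. a salary stopping."""
    if len(monthly_deposits) < 2 * window:
        return False  # not enough history to judge
    history = monthly_deposits[:-window]
    baseline = sum(history) / len(history)
    recent = sum(monthly_deposits[-window:]) / window
    # A recent average below half the baseline suggests income has stopped
    return baseline > 0 and recent < 0.5 * baseline

# A steady salary that suddenly stops arriving
print(distress_indicator([2500, 2500, 2600, 2500, 2550, 2500, 0, 0, 0]))  # True
print(distress_indicator([2500] * 9))  # False: income is stable
```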
Kenneth Cukier
Absolutely, that's great, how interesting. Let me start here. Hearing all four speakers discuss AI and how it's going to change society, I'm thrilled by the promise, but I'm nervous that it seems a lot to get one's mind around. And it's going to be about, first, what is the role of humans, and secondly, how will humans work with AI to see that we keep our values? And when I look outside rooms like this, into the wider world, I'm not quite so certain all people are going to be able to make that leap into working with that, sort of, cognitive plasticity with the machine. And I think society's going to cleave into two parts: the people who work with AI and the people who are off the grid, who are, sort of, victims of AI, to whom AI is done, and they're going to be simply taking their soma, like in Brave New World, while the Alphas and the Betas do all the work. That's a question.
Bill Winters
Yeah, I guess that would be very consistent with technological change through human history, that at every juncture, and we know the stories, going back to shifting from being hunter gatherers to farmers, and through the Industrial Revolution, etc., etc., people have been displaced. And I think the statistics that Joseph mentioned are very familiar to all of us and very scary, 'cause it's a higher proportion happening over a shorter period of time. So, the societal challenges will be enormous, that's even…
Joseph Aoun
And if I may, with respect, Ken, to your point, you cannot predetermine who can redefine herself and who can flourish. So, what we do is provide the opportunities. Our systems should be based on the idea that you are providing equality of opportunities, not necessarily equality of outcomes. But if we don't provide equality of opportunities, namely, if we don't provide specifically opportunities for people to redefine themselves, to reskill themselves, to upskill themselves, then we are in trouble. So, UBI, Universal Basic Income, is not going to be a panacea, because you have to provide the opportunity for people to be able to say, I am going to move in a different direction, and what kind of support is society going to provide me to allow me to do that? And we talked about what Singapore has been doing and others.
But your point is the – is well taken. We know that some people will not be able to do it, but we cannot predetermine who or not. Let the system provide opportunities and give incentives for people to do it and we know, by definition, that not everybody is going to be able to do it, but at least, if we don’t provide those opportunities, we’re all doomed.
Kriti Sharma
And so, I was just going to add, yes, those opportunities are very important, but how do we make it easy for everyone to think that this is for them, too? Even if you look within the field of computer science or data science, about ten years ago, even five, you had to do a lot of low level coding to be able to get to the answers that you need, and now we are at a point where AI is even starting to write its own code, so even my job will be automated, right? So, it's not necessarily just following the set paths, where you go and do these degrees and study these courses and that's the outcome; it's available to a lot more people. The threshold is dropping massively and I think it's more of a marketing and positioning and perception challenge at the moment than anything else.
I'll give you an example. One of the most prominent ways people are reskilling themselves in data science and machine learning is online courses, like Coursera and similar MOOC platforms. It's fascinating, fantastic; millions of people have graduated from those programmes as Data Scientists and beginner level Data Scientists. But if you look at who these Data Scientists are and where they come from, a big chunk of them were Software Engineers who are upgrading to data science, and this is where there's a big positioning, marketing and perception barrier. If we could start to bring in more examples of people who have come from different domains and are starting to create AI, or in some cases, to apply AI to some of those use cases you were describing earlier, that would be great progress.
Joseph Aoun
So, let me be – go ahead, go ahead.
Oliver Buckley
Oh, sorry, I was just going to build on Kriti’s point. You know, these tools also offer the opportunity to make more effective learning opportunities than we have been able to, to date, to understand much better than we do now the modalities of adult learning, versus learning for young people. And, you know, yes, some of that may be to upskill people in data science, but also, we could envisage a world where we’re using AI driven approaches to develop skills in a whole range of domains and at much lower cost than is possible today.
Kenneth Cukier
Joseph.
Joseph Aoun
I want to pick on you a little bit and you told us, when you looked at – you know, you discussed your journey, that you started as a geek and you became a human being.
Kriti Sharma
I’m trying.
Joseph Aoun
Yeah, and so, essentially, the – you know, that is – it has to go both ways.
Kriti Sharma
Yes.
Joseph Aoun
And so, you said, "Give us examples." Let me give you a concrete example of how access to tech and AI is not as difficult as we think. We have a campus, or campuses, in Silicon Valley and in Seattle, etc., and we were challenged by the tech industry. They told us, we need more people going into the tech world, and so they challenged us and we devised a programme, a curriculum, where we take students nationwide who finished a BA or BS in history, in English, in economics, in physics or chemistry, and then we give them those opportunities, long-term internships of 12 to 16 months, and they end up becoming Computer Scientists, because we give them the training and the degree. What did we discover? We discovered a number of things: that this allows women to go back into the workforce. We discovered that under-represented minorities were able to go back into the workforce and then move into the tech world. But in addition, the employers were telling us, Bill, that, in fact, those people are much better than the geeks, because they are coming at it from a human perspective.
So, essentially, I don't believe that the Courseras of this world are going to be the solution. In fact, when you look at the results of what they have done, there are millions and millions of people, but there are limitations. You mentioned one of them. We have in the audience a colleague who was the Chief Academic Officer of Coursera; she will tell you all the great things and the minuses, too. But it is our failing, in the sense of 'we', educational institutions, including the Courseras of this world, that we have not demystified the tech world and provided those opportunities. So, I agree with you. Your journey went in this direction and we want people to go in all directions.
Kenneth Cukier
Let me build on this and pose a question. At Harvard Business School, at the beginning of World War II, the faculty tried to avoid the whole idea that the madness of war was taking place, and pretty quickly they realised that the war was not going to the liking of the US Government and they needed statistics. And so, they sent the entire faculty of Harvard Business School and all of the Statisticians to the Pentagon, McNamara among others, we'll leave it there, to work on the war effort. But what was interesting is that after World War II they didn't take the previous courses and start teaching them again. They completely rethought their curriculum and taught completely new things. All the courses were different. With AI, it seems like we're, sort of, carpetbaggers, bringing yesteryear's institutions into the modern world, and I wonder, what does a university look like for tomorrow that's actually sui generis for a world of AI and isn't, sort of, bringing its old legacy, practices, institutions and curriculum to it? In terms of the skills of tomorrow, what will be needed, and how will you run an institution to train people?
Joseph Aoun
Are you asking me?
Kenneth Cukier
I’m asking all of you.
Joseph Aoun
All of us? So, why don't you start, since you're not Presidents of universities, and I'll tell you how we're looking at it.
Kenneth Cukier
Yeah, what do you need? What are you going to do? Do you believe it, and how does Government play a role? And that’s all, yes.
Bill Winters
We need, obviously, a set of technical skills, and they're not so difficult to deliver, and we need people that can translate technical skills into real world applications, and that's much harder to deliver. I went to a liberal arts university, so I have a bias. I happen to think that that's going to be a really useful skill in the AI world, but you have to be able to have a foot in both camps. Employers need to be part of that, and so we work with universities like Northeastern. You mentioned Singapore. Singapore is a small country, but a very advanced country, and they've got a very, very advanced approach to retraining workers. Joseph mentioned lifetime career allowances, and there's a flipside, or another side to that, which is that the corporations are pressured, required, to contribute to that lifetime learning. And we do it willingly, we do it happily, because it's a chance for us to retrain workers, in some cases for applications inside our company, in other cases to go someplace else, but partially funded by the taxpayer in Singapore, partially funded by us. It's not required, but it's, kind of, required. It's not a bad system, and the result is a continuous upgrading and a sense of liberation for the individuals. So, I think there's a role for governments, there's a role for the academic institutions, for employers and, obviously, individuals themselves have to take control of their lives.
Kenneth Cukier
Describe – in 15 years, describe the sort of people you’re going to be employing, what are they going to have learned? What do they need to know to be good at what they need to do?
Bill Winters
Yeah, there's a simple, but not simple, philosophy in banking, take individual banking: transactions are going to be done by the equivalent of Amazon or Facebook, and things that require some investment of time, emotion and trust are going to be done with technology and human beings. So, people are going to manage their retirement, their savings, the education of their children, their health, with a heavy dose of human intervention. Well after the point of singularity, when the machines are smarter than the humans, people will still be relying on humans, because we're human. Now, maybe not forever, but I think for a long time, and what we're looking for is people that can develop those technical capabilities, but who can provide the human interface as well.
Kenneth Cukier
Okay, so, people are interfaces?
Bill Winters
We’re always an interface, yeah.
Kenneth Cukier
Doesn’t sound like they’re going to earn a lot of money as an interface.
Bill Winters
I don’t know. I think the people that are able to bridge technologies today, the people that are able to bridge technologies and an element of trust in the future, will be highly valuable.
Kenneth Cukier
Good, are you going to train these guys?
Joseph Aoun
Me?
Kenneth Cukier
This is for Joseph, before we get to…
Joseph Aoun
Oh.
Kenneth Cukier
…Kriti, ‘cause, Kriti is saving the world as well through…
Bill Winters
Yeah, for good.
Joseph Aoun
Okay, let me answer the question in two ways. One is, I already alluded to the fact that maybe we have to rethink completely what we are providing in terms of education. The opportunities have to be based, as I mentioned, on the idea of humanics, namely, understand machines, understand the human machine interface, but go beyond that and understand what we humans do that machines cannot duplicate: the creativity, the entrepreneurship, the cultural agility, the communications, all these aspects that we discussed. So, that's the first thing. If you had a tabula rasa, that's how you should start.
Second, you cannot divorce yourself from reality. That’s where you have to bring together, you know, the experiential education, where people have to test their knowledge, refine it, change it.
The third aspect, as I mentioned, is that there is an enormous demand. In the United States, 74% of learners are already lifelong learners. No-one is providing the opportunities for these learners, and we need to provide lifelong learning opportunities. It's a humbling experience for universities, because universities were built on the model of, we are going to design the curricula and they will come. Whereas here you have, as Bill said, to bring together the employers, the Government, what you mentioned, Oli, and then look at what we need. What does society need? What do humans need? And design curricula that are customised, personalised to the needs of the individual, that will allow the individual to flourish, will allow the company to look at it and say, that's a great investment, and look at society as a whole.
I would not have picked the Harvard Business School example, because that's a microcosm that is so artificial, historically. I would have picked something else. I would have picked what happened during the Second World War with the confluence of Scientists that allowed the United States to build the nuclear bomb. If you look at it, what happened there was a sense of urgency, and that's the point that you were raising, namely, not Harvard per se, but the urgency, and this allowed universities not only to attract people, but also to rethink completely what they were doing.
Kenneth Cukier
So, let me ask, who's your competitor? Well, let me actually reveal my hand. If the traits that we need are people who interact really well with other people, and they can get the knowledge from either an AI or from Coursera micro courses, maybe your competitor is country clubs. Because students should just interact with each other, have fun, learn teamwork and collaboration and co-operation and communications, and they can do it at a country club, and a lot of American universities are a lot like country clubs; visit Amherst and Williams and you'll see. And then, in the evenings, on the beach, you know, with someone playing the ukulele, they can learn machine learning from a Coursera course, and they get together and solve problems and talk together…
Joseph Aoun
You see…
Kenneth Cukier
…during the day.
Joseph Aoun
…first of all, there is nothing bad in what you're describing. But is it enough? And the answer is no. Why? Because the question is not to be versed in technology. The question is not to be versed in data science. The question is not to practice your human skills in a closed environment, like the country club. It's how you integrate all that in a real world setting, and that's the difference between doing something in a siloed way, because what you have recreated in your example, and I know that you're trying to be provocative here, is another siloed approach, and we don't believe in that; in fact, learning is about real world context. So, you have to integrate the classroom experience with the world experience. The world is too interesting to ignore. If I put somebody in a country club experience, she is going to remain in this country club and doesn't understand the world, doesn't understand the inequalities being created, doesn't understand the differences that globalisation is creating, etc. That's the beauty of bringing all this together, and what you are describing is the fact that higher education has mostly been built on a country club model and it's time now to change it. So, our competitors are ourselves.
Kenneth Cukier
Okay, I’m going to leave that [applause].
Joseph Aoun
No.
Kenneth Cukier
Yes.
Joseph Aoun
No.
Kenneth Cukier
I’m going to start asking questions of the audience, why are you clapping? Like, what was this…?
Joseph Aoun
Yeah.
Kenneth Cukier
Okay, I’m very interested to hear what Government has to say about it and all the eyes, sort of, look to you, to Government, and what the practice of AI has to say about it, but I’m going to hold my fire in asking those questions, to get some questions from the audience. So, what I think I’ll do – I’m sure everyone has lots of things to say and lots of things to contribute. It doesn’t have to be a question. It can be a comment, as well. Make it as concise as possible. We’ll get several views from the room and we’ll bring it back to the panellists. So, I see a hand here, one there, and one there. Why don’t we start here, please?
Tanji Morgan
Thank you. Tanji Morgan, a Member of Chatham House. My question is, whenever I come to conversations such as this, on AI, ethics and technology, there's very little conversation with regards to the data itself and where the data is coming from. Because, as you all know, the datasets are what goes into the algos, whatever methodology you use, before you even start to talk about ethics, governance and things of that nature. My point is, how diverse, quite frankly, are the coders, how diverse, really, are the people that are determining the AI? We are already seeing negative effects of AI on society, and I'm in financial services and I can see where certain parts of the population will be disenfranchised, if you will, from banking and insurance. So, just curious about that, thank you.
Member
So, the educational system teaches us what to think, instead of how to think, and I think this is what kills creativity in the early stages of life, hence the dissonance throughout history worldwide. We are taught to use AI as a resource of consumption, instead of a resource of production. The narrative of us and them, winners and losers, right and wrong, is what creates the fear of failure. But as we all learnt to walk, our parents told us, "Keep on trying." They didn't say, "Oh, no, no, no, you fell, let me pick you up and you're going to stop walking. I'm going to hold you for the rest of your life." They told us to try again. So, the integrated life of human flourishing is the result of failing towards success. The whole point of science is to experiment through trial and error, to find a truth, not the truth, a truth. The question here is, how do you envision early stage education harnessing our natural human curiosity?
Kenneth Cukier
Good. Let’s take one from up there, as well.
John Gilani
John Gilani, Member of Chatham House. The first thing is, I never liked the title artificial intelligence, and I prefer the title autonomous intelligence, but that's my view. I think the problem is that we look at different tools and skills, whereas we don't look at it from the point of view of the consumer, and the consumer is usually unwilling to give as much information as it turns out he is giving. If we look at the way that humans have progressed, books are how we shared intelligence, because we felt we had to transcend the lifespan that we had. Now, that intelligence is not being transferred willingly from a person to the mass, to humanity. It's being transferred, in the eyes of many, unwillingly. Second, most people believe that this transfer of knowledge and sifting through that information is getting us towards a Minority Report type situation, where predictive ability, as you mentioned, is so powerful that it has the reverse effect. And in banking and finance you have, for example, the derivatives industry, which is not based on a real product, starting to drive the physical side, whether commodities or FX or anything. So, I think these are the real problems that we are facing and might not be addressing. Thank you.
Kenneth Cukier
Yeah, what I’m going to do, ‘cause we’re going to be – I can see we’re going to run out of time if we go back and forth, back and forth, is we’re going to take more questions from the audience and then I’m going to try to make a list and hear the summations from people here. So, there’s a woman here.
Staff
Yeah, it’s this side.
Kenneth Cukier
Yeah.
Silvia Cambie
Thank you. Hi, my name is Silvia Cambie, I'm with IBM. I wanted to ask a question about gender bias. So, the public discourse at the moment is very focused on AI taking away jobs from humans, but the emphasis on bias is not strong enough. I work in AI and I can see the risks of gender bias, and often when I'm asking people, "How are you going to deal with those?" they tell me, "Well, just take another dataset that is bigger and more representative," and I don't think that's the right answer. I think there aren't enough women in AI, there aren't enough women developing AI and working on its architecture. There was a statistic released by McKinsey two weeks ago that in the UK only 15%, one-five, of people working in IT are women. So, you can imagine the risks going forward; for the kind of society we want to build, they are enormous. So, my question is really for Joseph, what kind of role is higher education playing in this scenario? Thank you.
Kenneth Cukier
There’s a gentleman there and we’re going to bring the mic all the way here. Everyone will get a chance to – please.
Member
Hi, I'd like to hear more about Government regulation with regards to AI, specifically campaigning and elections. I think maybe some of the public knows about Brexit, for example, and the use of AI by Cambridge Analytica in the elections, as well as in the United States with Donald Trump. And I think that's really concerning for me, that not enough people are talking about this point, that we have big, elitist groups who are using AI to manipulate public opinion.
Kenneth Cukier
Great, thanks. Let’s hear from this side of the room now. There’s a gentleman there and a woman there and then a gentleman there.
John Warren
Thank you. John Warren, Chatham House Member. I have lots of machines that beep at me to tell me things I don't want to know. My life is dealings with software, and I'm trying to stop it doing what it wants to do, and I was desperately hoping for some simple programme, which I'd pay a lot of money for, that would actually just do what I want it to do. I look around me and see lots of people with screen addiction, and the political consequences of AI are absolutely petrifying. Just look to China and see where it's going. So, what I want to know is, what is the size of the economy, and what's the future for universities in developing the rebel economy, people who really have had enough of AI already?
Kenneth Cukier
Great, okay, yeah, yeah.
Member
Hello, I work for the Office for Students, which is the UK higher education regulator. But separate from that, I wanted to ask about what some of the other questions are alluding to, in terms of the ethical and moral compass of human beings themselves. I feel like sometimes we jump in to try and rectify AI and data, but we don't look at, actually, how much we should commit to perfecting ourselves before we then apply it to whichever instrument. So, in the process of this, instead of assuming that we are people who have a good moral conscience and we're going to try and impart it into technology, how should we actually commit to changing our ideas? I studied economics and if you look at how it's taught, they call it economics, but we're learning capitalism, we're learning productivity, we're working in neoliberal markets, and we don't actually neutralise the things we learn. So, if you see how much disgust communism is treated with, if we are going to diversify the ideas that we do have, what proof do you have that we can, when we don't even teach things neutrally to begin with?
Kenneth Cukier
Great, thanks. I see a woman there and a gentleman there.
Marina Mossgots
Hi, Marina Mossgots, I'm a student at Northeastern University. So, I think you've all given a wonderful perspective on why we need diverse thinkers and humans to create the future of AI, but none of you have talked about the importance of security applied to AI and what would happen if a malicious adversary figures out the features that these algorithms are training on, in which case they can supply data to throw off these algorithms and train them the way they want to. And I was wondering, how are you going to go about mitigating that? Because it's not going to matter who's creating these algorithms when someone else can manipulate them.
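[Editor's note: a toy illustration, not from the event, of the data poisoning attack the questioner describes: an adversary who knows which features a model trains on can inject crafted points to move its decision boundary. The nearest-centroid "model" below is invented for illustration.]

```python
def centroid(points):
    """Average of a set of one-dimensional feature values."""
    return sum(points) / len(points)

def classify(x, centroid_a, centroid_b):
    """Label x by whichever class centroid is nearer."""
    return "A" if abs(x - centroid_a) <= abs(x - centroid_b) else "B"

# Clean training data: class A clusters near 0, class B near 10
class_a = [0.0, 1.0, 2.0]
class_b = [9.0, 10.0, 11.0]
print(classify(4.0, centroid(class_a), centroid(class_b)))  # "A"

# The adversary injects extreme points labelled "A", dragging that
# centroid away so the very same input now reads as "B"
poisoned_a = class_a + [-40.0, -40.0]
print(classify(4.0, centroid(poisoned_a), centroid(class_b)))  # "B"
```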
Kenneth Cukier
That’s great. Gentleman right – on the same row?
Member
And you mentioned inequality. AI is going to make it worse, not better, isn’t it?
Kenneth Cukier
Okay, great. I see one other gentleman and one woman, then we’re going to call it over, good.
Duke
My name is Duke, a Member of Chatham House. I’m concerned more about inequality, as well, especially in terms of the structure of the economy as it is at the moment, which is, basically, you know, shared between labour and capital, but with AI coming into prominence, we’re going to see more and more capital accumulated in intellectual property. Is there a way in which that can be stopped from ending up in a few hands?
Kenneth Cukier
Okay and there?
Sara Burch Khairallah
Hi, my name is Sara Burch Khairallah. I'm with the US Programme at Chatham House and my question, I think, is for Bill Winters. You admirably referenced your experience and exposure to AI across your work, both as a Non-Executive board member and professionally. On the point of inequality and the barriers to entry for people that want to adopt and use these wonderful tools, how broadly available are they in the NGO and not for profit sectors in the US and, potentially, here in Europe as well? Thank you.
Kenneth Cukier
Okay, and this is a cornucopia of questions, and we're not going to have time to answer them all, unless we go to 3 o'clock, but we do have to end at two. So, what I'm going to do is invite the panellists, who can probably suss out the questions that are most pertinent to them, Joseph and Bill, Kriti and Oli. Why don't we start, Bill, with you, and I'll set an order: then Joseph, then Kriti, then Oli?
Bill Winters
Yeah, a few of the questions are about bias, and it's an extremely well understood risk, but not a well understood prescription for change. And, of course, I'm not the expert, but I'll make an observation about the way, in my professional life, the business approach to environmental, social and sustainable development goals has evolved. The gist of it is, 20 years ago nobody paid attention to any of it and bad things happened. There were people that were educating us, there were people that were advocating for different things, but it took a while before there was an accumulation of focus to get companies to actually change the way they approached ESG matters. But we have, and some will say not enough, but I can tell you that the change is dramatic. I mean, my bank, which operates in emerging markets, is the leading sustainable finance bank in the global markets, right? That's what we do. We do that because we've identified some ways to do it commercially, but also because we were incentivised by our own colleagues, right? I mean, we can't get great young people to work for us, or some older people to work for us, if we're not socially responsible. Our clients won't deal with us, our regulators aren't happy with us, our shareholders won't buy our stock.
So, we changed our utility function from make more money, to make more money responsibly. We need to change our utility function further, to make more money socially responsibly, with due care to the bias that will be, not could be, built into our systems if we don't take steps to prevent it. Because we all know that the algorithms can be adjusted to identify different social outcomes, and they will be adjusted to identify and pursue different social outcomes, exactly as has happened with ESG.
And so, I think that the usefulness of this kind of forum is to educate, to inform, to start to embed this into the psyche of organisations and then mitigate at least the worst of the effects of bias and, ideally, go the opposite way and actually use these tools to eliminate bias.
Kenneth Cukier
Great, thank you. Can I ask you, 'cause we are running against time, to choose just one question that you think is the most important?
Joseph Aoun
Okay, I'm going to focus on your question and actually integrate your question. I'm going to say, very briefly, if the input is biased, the output will be biased. If the input is not diverse, the output will not be diverse. Therefore, what is our responsibility? Our responsibility is, as I mentioned, to build with the biggest players in society, from Government to industry to universities, programmes that will bring that diversity back. And I mentioned to you an example where we worked with the tech industry in order to bring people who were not in tech to move into tech. I gave you this specific example. We're also working with IBM on the badges that you have, the 5,000, precisely along these lines. So, as I mentioned, those programmes allowed women and under-represented minorities to move into the tech world.
Yesterday, at midnight, I received a call saying that a foundation is going to give us a lot of money. I am not allowed to say which foundation, stay tuned. But the purpose was the following. They said, "You have succeeded in bringing in women and under-represented minorities and letting them enter the tech world. We want you to help other universities understand that and scale up." So, they gave us the carrots to bring other universities to think differently. We stumbled on that. Therefore, the answer to your question is that if you feel that your responsibility as an educator is only to educate your students, that's great, that's essential, but that's not enough. If you feel that your responsibility is also towards the field in general, to bring the field to understand that things can be done differently, that's our opportunity. That's what we have done.
Kenneth Cukier
Great, thank you. Kriti?
Kriti Sharma
I’ll just quickly touch on the bias and diversity problem and also the non-profit issue that you raised. Data bias, in reality, is actually a lot better understood by the AI and tech community today. There are emerging processes and frameworks to make sure a facial recognition system does not discriminate based on gender or race. In fact, it’s becoming, surprisingly, a lot more acceptable to ask, is this algorithm racist or sexist, when you’re buying or procuring a product. Is that embedded into processes everywhere, in all organisations? No, absolutely not, and that’s where we need a lot more systemic change, rather than individual teams and companies raising their hands and trying to do it.
With regard to diversity, the numbers are pretty bad in AI, worse than in tech generally: it’s about 12% women. But having said that, there are a lot of new roles opening up in the field of AI, at the interface of humans, design and machines, and that’s a big opportunity that we really need to build on and invest in, whether at Government, university or business level.
Lastly, on making these tools available to non-profits and charities: that’s exactly what I started to do a couple of years ago, and we are building AI tools and sharing them across various organisations. A lot of our work is in predicting domestic violence before it happens and bringing reproductive health and mental health solutions to frontline organisations. AI is not going to solve these problems, but it can play a big role, and there’s also a lot of work on predicting the impact of climate change, those kinds of solutions. The more we can share these resources, the more open source we can make some of these solutions, not just the technology but also the data, the faster we will progress.
Oliver Buckley
So, I just wanted to reassure you all that, when it comes to a number of the questions that were raised, Kriti and I are on this; we’re going to have it sorted by the end of the year. At the Centre for Data Ethics and Innovation, we’re doing two major reviews. One is looking specifically at the issues around algorithmic bias, thinking about the full supply chain, if you like: starting with the data, then the tools that can be applied to it and the governance that needs to surround it. Equally, there’s a question around targeting, particularly of political advertising. That is another of our major reviews. It’s looking more broadly than political advertising, but we’re thinking in detail about the role that targeting and personalisation play and the impact they have at an individual and at a societal level. We’re also thinking about the data infrastructure that underpins all of this, so we do need to investigate new models for how data is taken from individuals and used by others, and to think about the role of consent, but also other mechanisms. So, you know, we are part of a much wider set of conversations, but if it is any reassurance at all, do know that these things are very much on our radar.
Kenneth Cukier
Good, thank you. Now, before we end the panel, I want to thank our speakers, but before we thank them together, I have some sad news. Peter Montagnon was a dear friend of Chatham House and also served on the Council of Chatham House; I served with him. Recently, he published a paper on AI and ethics and spoke here at the house, and sadly, he passed away several days ago. His contributions to these issues of AI and ethics were formidable, and his contribution to Chatham House was, of course, greatly appreciated by everyone. So, I think it’s fitting, at this moment, to recognise him and his contributions as we recognise today’s panel. Thank you very much.
Joseph Aoun
Thank you [applause].