Marjorie Buchser
Right, good evening, welcome, thank you for joining this conversation on Artificial Intelligence Applications and Misapplication. I’m Marjorie Buchser. I’m your moderator and I’m really delighted to be in person in this room with you. I don’t know about you, but it has been a really long time since I was in this charismatic venue with actually a lot of people. So, thanks again for being here. So, this event is on the record. We have an in-person audience in this room, but also an audience online. For the people that are joining us through the webinar, you can submit your questions through the Q&A. For you guys, you can just raise your hands when we get to the Q&A part of this meeting.
So now, back to the substance. So, in 2015, one of the leading investors in tech was sitting on this panel, and was saying that he was investing in AI, AI and AI, and essentially he was telling us it was the only type of investment that they were pursuing at that time. And while that’s still very much the case, you can still see a lot of investment in the field, the enthusiasm and also the interest in the promise of AI has somehow dampened. And one of the reasons is obviously the limitations of AI as it stands now, but also that the risks associated with the technology have become very apparent, and so now it’s not only a theoretical issue, it’s really a practical one.
Almost on a daily basis you see in the press the really concrete harms that AI may have on its users. Over Christmas it was Alexa telling a ten-year-old to put a penny in a live plug socket. Amazon, a few years ago, had to decommission their AI-powered recruitment tool, because it was heavily favouring male candidates over female candidates. Uber, you may have seen, is also in the press because they’re using a software system to verify their drivers, and this system has been shown to unfairly dismiss some of those drivers because it failed to identify them, their skin tones being darker.
So, basically, what we’ve seen again and again is that AI systems that have been demonstrated to be biased and discriminatory are still being funded, still being developed and still being deployed. So, tonight, with my panel, we’re basically going to have a pragmatic look at this technology, at what has worked and what has not worked so well, and the challenges that AI still poses to society and businesses. And I’m also interested to hear their views on AI predictions, where they see it going, and to what extent they think we are in an AI winter, or whether there is actually a continuous interest in this field.
So, tonight I have a very diverse, but also very knowledgeable panel, with Caroline Gorski, who’s the Global Director at R2 Data Labs with Rolls-Royce, and Marie Oldfield from Oldfield Consultancy. Toju Duke, who is responsible for managing a responsible AI programme with Google, and online, Professor Gill, with the University of Brighton, and also Editor-in-Chief of AI & Society.
So, Caroline, I’m going to give you the difficult task of starting this conversation, and you’re very much at, I would say, the heart of industrial transformation. Your lab really is in charge of improving efficiency and driving productivity for Rolls-Royce. So, it’s really interesting to hear your perspective, in terms of what are the key concerns when you consider industrial transformation through AI in a, sort of, big industry like yours, but also the ethical considerations you’re weighing when you’re developing those processes.
Caroline Gorski
Yeah, thank you, Marjorie, and hello, everybody. It’s very nice to be here, and thank you for the opportunity to join this conversation with this panel, so it’s great to be here in person. Yeah, so, I’m Caroline Gorski. As Marjorie said, I’m the Group Director for R2 Data Labs, which is Rolls-Royce’s advanced data analytics and artificial intelligence division. We don’t – I should say, we’re not responsible for all of the productivity improvements in Rolls-Royce. My colleagues in the operational teams would be very cross if I tried to take that on, but we are responsible for thinking about how those technology areas in advanced data analytics and artificial intelligence can help to change the direction of travel and fortunes of what is a very marquee brand, from a UK perspective, and has gone through a significant degree of digital transformation over the last ten years or so.
My team really works both with Rolls-Royce and indeed with other industrial organisations, really to unlock the power of AI, by augmenting human intelligence with artificial intelligence, and that’s very important, because I work, and we work with our clients, with profoundly technically and mechanically qualified individuals. These are very, very advanced individuals, you know, the people who have skills that are extremely important. And what we’re focussed on and interested in is how we augment their capability with what artificial intelligence can bring.
We work on problems that come from mission critical, asset intensive and often physically process-driven contexts, and Rolls-Royce has more than 30 years’ experience in developing and applying data analytics and AI in areas such as engine health monitoring and predictive maintenance, which is a fact that many people don’t necessarily know. When we first started talking about our role in the AI space, many people were like, “Rolls-Royce, what on Earth do you do?” So, we’ve tried quite hard over the last three or four years to set some of that record a bit straighter.
Essentially, I’d like to talk about the fact that I can see three critical areas in the effective application of AI to industrial contexts, and those – when I talk about industrial contexts, I’m talking about the places where the models that you’re developing and the insights that those models provide can – are being deployed into contexts that have a real physical manifestation in the world. So, they are showing up in physical processes and physical products that ultimately could result in, you know, catastrophic outcomes, loss of life potentially, if we get it wrong.
So, firstly, the first area for focus is how well engineered is your training data? Is it accessible, is it trapped in mainframes or even in paper, which is true across a lot of the industrial sector? Is it machine readable? Is it of the right quality? Does it suffer from bias? Is it sparse or poorly labelled? Does it include sensitive, classified or highly regulated data? How well designed and parsed and processed is it for the purposes of model training?
The second question is, how ready is your organisation to deploy artificial intelligence as an augmentation? Have you understood the social skills and employment implications of what you’re intending to do? Does your business, from the CEO all the way down to the shop floor and up again, have the right mindset and digital culture and capability to embed AI into the organisation? And what impact will that deployment have on your employees and on your supply chains and on your customers?
And then thirdly, the last point is, how trustworthy and safe and ethically appropriate are the applications that you build and the end to which you use them? Can you validate that your data is free from bias? Can you assure yourself and your users that your models don’t suffer from algorithmic drift? Does your AI behave in the way you expected it to when you built it? Do you have clear, transparent, auditable safety controls, governance and processes for validating your cycles? And can you show that the ethical ramifications of what you’re building have been considered, both at the start of the process, but also continuously through your development cycles? And those questions really are why Rolls-Royce released the Aletheia Framework at the end of 2020, under Creative Commons licence, and that is our contribution to the debate and our participation in the conversation. It’s our framework for developing trustworthy and ethical AIs in that industrial context. So, thank you for the opportunity to…
Marjorie Buchser
Fantastic.
Caroline Gorski
…start by explaining some of that.
Marjorie Buchser
And, Caroline, because you were brief and concise, let me follow up maybe with a question, and I think it’s really interesting, your emphasis on augmentation, not replacement. But I guess, because you’re talking about physical manifestations of the technology and the concrete supply chain, there’s always a question of automation and workers losing some of their jobs. I guess that’s a key concern…
Caroline Gorski
Yeah.
Marjorie Buchser
…in the workplace. How do you answer those?
Caroline Gorski
It’s absolutely a key concern and I think we have to be realistic about understanding that our responsibility as a thoughtful employer is partly about where we have an opportunity to reskill our population. It’s also about being very clear with our population about where we believe the business is going, because we need to help our organisation to understand the transitions that need to go on, and we need to support them with the kind of training programmes and the kind of mindset, ‘cause some of it is mindset, programmes that can help them to work with us to go on that journey.
One of the first stakeholders that we got involved in the development of the Aletheia Framework was our trade union, so, they were involved right from the very beginning. They were involved in assessing with us and helping us to think about, you know, what – how would we sit down and assess and be very clear about the possible ramifications of AI deployments in our context, on our people. And that was a very important stakeholder conversation to start with, to have at the beginning of that process, and not to leave to the end of that process, and to be involved in those discussions going forward.
Marjorie Buchser
Thank you, Caroline. Marie, let me move the conversation on, and I think we had a really good overview of the industrial process, but obviously AI is also now applied to the public sector, and some of the work you’ve done involves the UK public sector, specifically cybersecurity and the defence sector. So, are there different ethical considerations for those applications, and, you know, what’s your key advice to government, essentially?
Marie Oldfield
So, I think that the ethical considerations for cyberspace and defence are very similar, just because when you’re using things like social media and you’re using cyber, you know, agents, you’ve always got the risk of exploitation of users and society. And what can happen is, we can pave any road with good intentions and we can say, well, actually, we’re doing this for societal freedom. But how easy is it to use social media to manipulate a society? It’s very easy, it’s very easy to oppress.
So, we need to make sure that when we’re deploying things like AI, we are moving right back to the beginning and saying, “Well, hang on, are we teaching users about what AI can do, the risks, the inappropriate emotional attachments, the, you know, the interaction with these different agents, are we teaching them about exploitation?” And that goes far beyond things like, do you need a password to get onto your bank? It goes to, what interactions are you having with things that seem human-like? What interactions are you having with things that seem technological-like? Is there a difference between these things?
And it’s beginning to be proven now in research that there is, and I think that, you know, it’s quite concerning seeing the new mandates that have come out from the UK Government and from NATO, in terms of how we can use social media to keep society safe, when that is a knife edge that you can fall over very quickly, and start to make, you know, manipulations, so that people behave in certain ways. You know, it’s easy to say, “Well, we don’t want this type of speech,” but what is freedom of speech then? What are we actually employing AI to do? We can’t just then say, “Well, actually, we don’t want that speech.” You have to have freedom of speech fully if you’re going to have it. It’s an ideology, it’s a principle.
So we need to consider, when we’re implementing AI, how we are actually doing it, and are we doing it responsibly and ethically? We can become very hypocritical very quickly, because we can run round saying ethics all day long, but what does that mean, and what does that mean to society, and what does that mean for how we design things? We can reverse-engineer, you know, implementations, we can say that there’s bias, but what were we doing in the design phase when this happened? What were we doing in the conceptual phase?
And if we look back to educational systems, we’ve actually got very little ethics in the educational system, so then people are coming out of that educational system, developing AI, developing data science, doing what I would call machine learning, because I think AI, philosophically, you know, a general intelligence, is very far away. So to actually have an AI is, to me, at this point, not particularly possible. And Philosophers say, you know, “The machines are coming to get us,” because that’s what they maybe think AI is. And there’s a fundamental misunderstanding, both in terms of language and of knowledge and of education, of what AI is.
If we can’t get that straight between ourselves, and we can’t use the correct language to explain what AI is, and we’re more bothered about how we can get funding, how we can sell it, how we can get the next, you know, big thing out there, than about what our responsibility to society is, how are we – you know, we’ve seen the Boeing 737 MAX crashes, we’ve seen disadvantaged people. We’ve seen people left with no money on the streets, we’ve seen loss of life. How far are we willing to go before we’re willing to say, actually, we’re not doing ethics right? We need to actually look at what ethics is and implement that correctly at the beginning.
Marjorie Buchser
And you’re also very plugged into the research side of things. We were just talking in the briefing about this new concept that you’re, sort of, exploring, of dehumanisation of AI and anthropomorphism, and I guess that when you talk about AI with the general public, there’s this notion it’s magic or it’s just, you know, this overseeing human-like intelligence. So, can you tell us a little bit about your research in that space?
Marie Oldfield
Yeah, so dehumanisation is quite a new concept, and there’s not enough research as yet, and some of the research that has been done is not thorough enough. However, what we’re starting to see is, fundamental belief systems are being altered, in terms of how people interact with AI and how they use AI. They’re not particularly aware of the risks. The way that they’re sold AI is as this magical thing, and then they form inappropriate emotional attachments to things like Siri or chatbots. And then what can happen there is, their belief system can be fundamentally altered, which then impacts the individual and society.
And at this level of philosophical, kind of, concept that’s not something that you’re going to consider as a technical person necessarily, but it is actually what you’re doing, you’re implementing this onto people. When you do that, you alter what people do in certain ways, and you start to cause things like out-groups and in-groups where, we can say out-groups and in-groups, but it’s perceived, it’s whoever’s perceiving the person that is not in their group or is outside of their, kind of – there’s a thing called Fady in Madagascar, where you have to follow all of these certain social norms, in order to be part of the group. It’s easy to then detect people that have come in after a few years away, or that are actual interlopers.
That is the same kind of principle as what we can see on social media, so where you have groups and you’ve got in-groups and you’ve got personalisation, where you’re constantly clicking the same kind of material and being shown the same kind of material, in this echo chamber you then perceive yourselves to be the in-group, which is superior to the out-group. We’ve seen this type of division again and again over the centuries, and it’s exceptionally harmful, but this is something again that, when we’re implementing technology, we’re not necessarily thinking about the actual effect that it’s having on the people at the other end, and they might not actually know what the effects are that it’s having on them either, until it’s too late.
Marjorie Buchser
So an effect back on how society operates, essentially.
Marie Oldfield
Yeah.
Marjorie Buchser
Let me move now to Toju. Thank you for joining us, and building unbiased and transparent AI is, sort of, what you do on a daily basis. Especially, I think, a topic close to your heart is women in tech, women’s representation, but let me put maybe a controversial question out there, which is, is there such a thing as unbiased AI or bias-free AI, considering that, you know, our society, we may argue, is still quite biased?
Toju Duke
Yeah, it’s a very, very good question, and I’ve asked myself that question. Can we actually reach 100% statistical bias-free AI? It’s not possible, and there are many reasons why, but the first reason is AI’s trained on data, volumes of data, and guess where we’re getting this data from. The internet, and who puts the data on the internet? Society, and is society perfect? No. Is it ever going to be perfect? No. Excuse me.
So, no matter how much work we do, in terms of improvement of human values and human norms and reducing discrimination and injustice and all of that, there’s only so much we can do, the data is still going to be biased. AI uses old data, historical data, it doesn’t conform to the new social norms or social revolution, the data is, like, ten years old because we need volumes of data to train an AI system. And you can’t build a robust AI system that is working properly within two years. Sorry, excuse me. So, I’ll just take some water [pause]. I’m also having a cough. It’s not COVID. Now I always have to justify myself, any time I have something in my throat. I was going to say that it’s not COVID either. I was just thirsty.
So, yeah, so, based on the way AI systems are built, it’s impossible to have 100% bias-free AI, but we can actually improve the bias in AI, right? We can move the needle a little bit more and just aim for at least a 20% improvement and less bias in your AI systems, looking at different subgroups in the world, like gender and abilities, which we know as disabilities, or age, you know, or income, or socioeconomic background, or race or ethnicity.
So, there are so many different subgroups. We need to look at the AI systems and, you know, depending on what the problem is and the amount of knowledge that you have, if you’re building an AI system, we should always think about the impact it will have on different users in society. And there are lots of tools out there, like fairness tools and transparency tools. And fairness is basically just what I’m talking about: thinking about the different uses of AI on different people in society, and the impact it actually has. Many times it is negative for a small percentage of people, and many times it’s people that fit within the subgroups.
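To make the subgroup idea concrete, here is a minimal sketch, in Python, of what a per-subgroup fairness check can look like: it compares a selection rate and a true positive rate across two groups. The group names, records and metrics are invented purely for illustration; this is not any particular company’s tooling.

```python
# A minimal sketch: comparing simple fairness metrics across subgroups
# for a binary classifier's outputs. The data below is invented.
from collections import defaultdict

# (subgroup, true_label, predicted_label) triples -- hypothetical records
records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 1, 0),
    ("group_b", 1, 0), ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 1, 1),
]

stats = defaultdict(lambda: {"positives": 0, "true_positives": 0, "selected": 0, "total": 0})
for group, y_true, y_pred in records:
    s = stats[group]
    s["total"] += 1
    s["selected"] += y_pred
    s["positives"] += y_true
    s["true_positives"] += int(y_true == 1 and y_pred == 1)

for group, s in stats.items():
    selection_rate = s["selected"] / s["total"]  # demographic-parity style view
    tpr = s["true_positives"] / s["positives"] if s["positives"] else float("nan")  # equal-opportunity style view
    print(f"{group}: selection rate={selection_rate:.2f}, true positive rate={tpr:.2f}")
```

Large gaps between groups on either metric would be one signal, under these assumptions, that the system deserves a closer look.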
Now, the next reason why AI cannot be bias-free is, even if you do all of your fairness evaluations, and you fix the problems, sometimes the methods in which we build these AI algorithms lead to some bias as well. So, you could actually have data that is quite clean, you know, your data curation is quite clean, but depending on the method that is being used, there’s a possibility that there could be some bias seeded back into the AI.
Thirdly, we have lots of tensions between fairness and privacy. Privacy is another segment of ethical AI. It’s quite important, everyone wants their data to be anonymised or protected, and the privacy laws many times talk about anonymising data, right? You don’t want your address to be shown in your AI system, because you don’t know where it’s going to go. But on fairness, if we want to evaluate the AI system to know what impact it’s having on these different subgroups, we need to know who they are. We need to know their names and where they live, and there will always be that tension. So, if you’re a privacy expert and your focus is on privacy, then the subgroups are going to be affected, and that means there could be some bias in the work that you do. So I think that’s a full wrap of, no, we’re not ever going to make it there, but we could actually move the needle and improve the work that we do in AI.
Marjorie Buchser
And maybe just a quick follow-up on facial recognition, because I think that’s one of the systems that is, you know, front and centre in terms of clear bias. Do you think this is a type of bias that could be fixed?
Toju Duke
I’m not going to say it can be fixed. It can be improved.
Marjorie Buchser
It could be improved.
Toju Duke
It can be improved, because I think the thing is, again it’s still the same thing, the training data, right? So there’s something called the Fitzpatrick scale, which is similar to the Pantone colours that we have in graphic design, and it looks like different skin tones. Apparently, the Fitzpatrick scale wasn’t perfect, it was missing a lot of skin tones, and that scale was used over the years for AI, for computer vision, and it’s only recently that different companies are coming up with new scales and saying, “Hey, we have so many skin types in the world.” You know, but the problem is the previous Fitzpatrick scale that has been used by literally 100% of companies doing computer vision has already been deployed. We already have CCTV cameras using it, right? We have so much technology, iPhones and everything else, using this technology, so even if you make changes, it still takes a long time to be adopted by the different companies, or even for the awareness to spread, and by then there’ll be another problem that we’ll find that we’ll need to fix, and it takes dozens of years.
I know I’m sounding very pessimistic, but it is possible, it’s just, the work needs to be done holistically, the knowledge needs to be shared across the communities. That’s really missing. A lot of people just do the work in silos, publish a paper, publish it up there and that’s it. But there needs to be more collaboration, and the more collaboration we have across different communities, I think, the more progress we’ll make in these areas.
Marjorie Buchser
Thank you, and I think it’s a good segue, to some extent, to broader collaboration, and not only work in, you know, a specific company or a specific continent, but actually, how does AI tech translate overall, you know, at a planetary or, sort of, you know, different countries, international level? So, Dr Gill, I’m going to turn to you. Could you hear us well? I’m sorry, you’re online, which creates, you know, an additional layer of interaction, so, I’m just checking if, could you hear us? That’s the first question. It doesn’t seem so. Alright. Can I have our colleagues in technology help me there? So, I’ll try again. Dr Gill, are you with us? It doesn’t seem that he is, unfortunately. Well, yes, I can definitely type, but I don’t think – I’m not sure he sees that.
Right, so, what we can do is – alright, let’s continue the conversation with the lovely ladies, and then, Dr Gill, if there’s a delay, if you hear me, just say something, interrupt us, we’ll be happy to hear your views. My question to him was basically that I was interested to hear the ethical perspective in different countries, ‘cause he’s been involved with Indian colleagues, Japanese colleagues, and whether ethics is the same in different regions.
But I can still see that he doesn’t hear us, so, I’m going to go back to our panel, and, Toju, I actually had a follow-up question for you, and we discussed it a little bit, and I’m sorry to make you the spokesperson of all big tech. You’re not, but I’m still going to ask you this question, which is, of course, big tech has a lot of resources and puts a lot of investment into AI, and I think one concern is to say, well, are they excessively shaping the space in the type of applications they develop, especially ‘cause it’s probably more commercial applications than beneficial applications, let’s say, applications to solve big sustainability and geosocietal challenges. What’s your take on the excessive, sort of, influence of big tech on AI?
Toju Duke
It’s a huge influence. That said, I’m happy to say that I’ve seen some increased responsibility coming from big tech towards these applications, and some increase in the knowledge of the impact it’s having on users, especially when it comes to ethical AI. And I’m talking across, you know, the different organisations, ‘cause if you go online you actually see, they, you know, they publish white papers, research papers, there are lots of tools there. So, you can see that they do have a lot of teams that are working towards operationalising, you know, the principles around ethical AI, which is so important.
There’s lots of research going on in the space, new research, but we don’t want it to get lost in the research communities. We want to make sure that the knowledge that is being discovered is being implemented by these companies and scaled across the organisations, especially when we’re talking about big tech. These are big, humongous companies, right, with so much red tape and bureaucracy and, you know, sometimes knowledge just stays within the one team and it doesn’t get scaled across the organisation.
So, well, that said, I am seeing improvement on the responsible front, and again, back to the conversation around collaboration, if the big tech companies, you know, just talk to one another, maybe there’ll actually be speed and progress. And I am hoping that that will translate to the SMEs, you know, the SMEs that are actually using the AI applications. Hopefully they’ll learn the different principles around AI ethics and the frameworks, ‘cause there are so many out there, which still all have a similar theme. A lot of people don’t know that.
The question is, you know, you talk about AI ethics, you talk about all of this. How do I do it? And there’s not enough knowledge out there to teach an SME, or even a big tech company, how to operationalise and implement what we’re talking about, and I think that’s the next thing, the next level to understand.
Dr Karamjit S. Gill
Katarina said you can’t hear. Katherine.
Marjorie Buchser
My technical team say that they’re emailing Dr Gill, so, it seems that he can’t hear us, but he’s just online. I’m going to follow up. I think that it may not sound like it from my question, but I’m actually quite a tech optimist, and while I see the misapplications, I’m also quite interested to hear from our panel their views on what they think are the most successful and beneficial applications so far. So, Marie, you’ve been across many different sectors. What are, for you, the highlights of AI deployment today?
Marie Oldfield
So, I think that the way in which we’re able to automate, and the way in which we’re able to predict and use data science and analytics to be able to do that, is definitely improving, and I think it’s a step change from what used to be statistics; in 2014 that became data science, then AI. I think that is giving us more freedom, and a lot more control and ability to understand what it is that we’re dealing with, in terms of systems and data.
I just think that where that step change has happened, a gap has been introduced: we make models as statisticians, and we understand the context and we understand the data, but we’re not necessarily getting what people are saying about bias, because to us that’s completely different. And I think that reverse engineering bias in itself is not something that would ever happen, in terms of the fairness and transparency of the statistics. And where that step change is actually happening, I think we need to take the lessons with us rather than just try and invent new ways of doing things, because when we start to reverse engineer, that’s where I think we get a bit lost. And there is an element of, you need to understand that not everybody wants to be part of a digital environment. But where we do have a digital environment, it is, you know, becoming easier and easier to do things.
However, where I try and look at the balanced view is, it’s easier for us, we’re getting more out of it, but what are the effects of that, and where we are – sorry, he’s saying something there.
Marjorie Buchser
Dr Gill?
Marie Oldfield
Sorry, I’m getting distracted, I thought – I think he keeps talking.
Marjorie Buchser
I’ve seen Dr Gill moving around, but not actually hearing us, which is okay.
Marie Oldfield
Yes, so just to finish the point, I was just going to say that I think that we need to just – sometimes where we think we’re making advancements, we may not necessarily be making advancements, and we just need to be careful that actually when we’re implementing those, we’re looking at both sides, rather than thinking, “This is really great, I can do predictive maintenance with this new algorithm.” That’s fantastic, but then what happens when you remove the operator? And it’s the end-user questions and the ultimate questions that aren’t being asked.
What does fairness look like, what does that mean? And when we get to that point where it’s completely fair, what are we looking at, is everybody just the same, judged the same, or are there different principles, or how are we doing that? So for me, philosophically, what is the end state that we’re actually trying to get to and why? Do we actually need to do it? Is it something that we should be doing, or is it just something that we can be doing? And that kind of disconnect between the technical people that are doing the research and then the final implementation in society, there’s a gap there as well, where we can invent things, and we can do things, but when we implement them, there are a lot more considerations to be thought about before we actually take that step. So, while I am optimistic about the movement forwards in technology, I think there’s just a lot of caution needed there, in terms of the interdisciplinary work and collaboration that needs to happen to do it safely.
Marjorie Buchser
Thank you, Marie, and I’m going to try again to see if we have a connection with Dr Gill. Dr Gill, can you hear us? Hello.
Dr Karamjit S. Gill
Yes, yes.
Marjorie Buchser
Oh, yes, fantastic, success. Technology has not failed us…
Dr Karamjit S. Gill
Yes, I guess.
Marjorie Buchser
…on this technology-related stage. So, Dr Gill, I was just saying, as a way of introduction, that you’ve been involved with expert networks in various countries, in India, in Japan, in the EU, and so my question to you: we were talking about AI ethics, but specifically in a Western context, so are AI ethics or AI considerations universal, or do you actually see significant cultural differences when it comes to AI applications and the considerations related to those?
Dr Karamjit S. Gill
Well, my experiences really come from the AI & Society journal and the people who send papers that I’m obliged to read. So, basically, most of the discussion was around the aspects which have already been discussed by the participants: trust, fairness, social responsibility, privacy, governance, transparency and so on, so forth. Now, that is basically seen to be common to the European as well as the North American Researchers.
Now, when you go beyond that, then I think you have, for example, Japan, where basically one could say that the distance between the object and the human is not much. So basically it focusses on enhancing human life, appropriateness, it is highly collective, and, very interestingly, the collective is tacit, so it’s a tacit practice. So basically, maybe that is why, when you look at these social robotics, it’s connecting with a machine: I am not afraid of it, I am open, embracing my companions in a more social way, a practical way of being and behaving.
Now, when you come to China and India – I’m just giving a reflection of my experience of the papers people have sent, not making any specific, general statements – it becomes more functional. It’s functional rather than social, so ethics is not really high on the agenda; the focus is more on applications. And the Chinese, maybe, have more of a community focus, Confucian values of harmony, avoiding conflicts.
Now, when we come to Europe – and I think I have to go back. The reason the AI & Society journal was established was a concern about Silicon Valley. The concern here was that there was too much focus on individualism, and on the idea that technology can solve all the problems. And as a consequence, it was discussed and decided to launch a journal based on the European philosophical and intellectual traditions. So, we wanted to have diverse opinions about AI and, of course, related ones on ethics. So, basically, we can see that the European Researchers, when we look at papers, were interested in the Uncanny Valley, questions of purpose, justice, human rights, fairness, and also implicitness, the collective and consensus. Very interesting is the utility of the machine, a focus on utilitarianism. Of course, that means transparency, justice and human rights, which are not raised in China and Japan.
Now, the US papers are very interesting. They certainly focus on defence and military applications, certainty, seeking certainty, explicit, manageable justice, conflict resolution, transparency and object reliability, meaning implicitly safe and secure, reliable and vulnerable, judgement and care, minimising unintended bias, safety and security, managing unintended behaviour, the ability to disengage. So, these are the journal perspectives coming from there.
Now, when you say global ethics, I suppose there is no such thing as global ethics. It has to be some sort of – like, we can’t have technology without society and society without technology. We can’t have objective knowledge without the tacit, and the tacit without the objective. So we can say a global ethic can’t exist without a diversity of cultural ethics, so it has to be good [inaudible – 38:33].
So, basically, I suppose what comes across is different cultural traditions of AI, not in the sense of conceptualisation, but mostly in application areas and the focus. For example, we get papers from India and China, from very brilliant Researchers. Most of the papers are about algorithms, comparisons of algorithms for solving a social problem, but they focus on comparative algorithms, and – something very interesting – many times, in the European context, Researchers deal with the problems within their own context and collect data and argue on the data they collected.
In most of the papers that come from India, the data is collected from Google, and that is worrying, and therefore the comparison is always about American data or European data, and I suppose the reason is that there is a, sort of, focus on publishing papers, and the European or American journals will most probably accept the papers whose relevance they can see. And that causes a problem for our journal, because we then have to reject those papers and suggest, “Please, could you talk to the healthcare workers, talk to the people who are going to use it; even if you’re not able to engage with them, at least talk to them and see what they say about your research?” And it’s very difficult, so that’s the perspective, because it’s a very functional way of doing technology. And could it be that India is a social collective, implicit and tacit, and there’s no need to worry about ethical issues of justice and fairness in the same tradition as the Europeans do? Maybe, and the same with China: explicit, collective, and seeking harmony, a collective harmony.
So, when you look at Japan, one could say, in general, tacit collective. There is a difference between the social collective and the tacit collective and harmony, and of course Europe is the social collective and implicitness. That is the broad brush of my way of looking at the various cultural perspectives of the scene, but of course, the main issues remain common. Privacy may be contextual, seen from a cultural perspective. Trust could again be seen from a different perspective, trustworthiness, but these are the, sort of, common themes, which are seen in different contexts and from different cultural perspectives.
So that’s my take on – and I can, sort of – I’d say that’s my take so far.
Marjorie Buchser
Thank you, Dr Gill, it’s a good world tour of AI research, and I should say as well – otherwise my international colleagues will not be happy with me – that, of course, there are human rights and they should be universal, to some extent. I’m going to maybe have one last question for our panel, and maybe ask Caroline, and that was my question in the introduction: if you had a crystal ball and you were to define or tell us already the 2022 trajectory of AI uptake, do you think that the investment and the interest is continuously going to increase, or, you know, I think that, more and more, there’s this notion of a new AI winter, maybe it’s a little inflated, but what’s your prediction?
Caroline Gorski
So, okay, I just want to start this off by saying, I’m not an investment specialist, and anything I say should not be taken as any form of advice about what is going to happen in the future. Okay? I think what we’re seeing is not the dawning of an AI winter. I think what we’re seeing is actually the percolation of machine learning techniques – because you were absolutely right, we are nowhere near a general AI – into processes and areas of our lives, in a way which is making them imperceptible, but they are not going away.
So, I think the honest truth is – and let me give you an example. So, we work in Rolls-Royce very passionately, and have been doing for at least the last 20 years, but with a real focus for the last five, on the question of how do we contribute to the progression towards more sustainable power generation? How can we be part of the answer to how the world gets to a net-zero position, in terms of power generation across all of the sectors?
And bear in mind, of course, that we, alongside GE, are one of two businesses who essentially power wide-body planes as they fly on long-haul flights, right, so it’s a really big question. The honest truth is, we will not get to net zero without artificial intelligence. The scope and range of the data we need to understand, the degree to which we need to use that data predictively, and the degree to which we need to be able to respond to changes in energy demand with models that can actually pool them across properly integrated grids that have real choices for sustainable energy production in them, is going to need machine learning to make it work.
So, what’s the single biggest investment area that you have seen in the last 18 months? It’s sustainability, sustainable tech. Almost all of that sustainable tech has artificial intelligence, machine learning specifically, embedded in its heart. So, it isn’t that we have gone into an AI winter, it’s that AI has simply ceased to be at that point on the hype curve where it’s the buzzword that everybody puts on their PowerPoint slides when they want to go to investors to ask for money, it’s just become part of how we do things.
Now, that raises some really interesting and difficult questions, because if AI is becoming invisible, and machine learning is becoming invisible to consumers, then, you know, to the points you were making, if we haven’t educated the marketplace about the fact that it’s there and what it’s doing, it’s becoming less and less easy to see. So, that is a really important question.
I also do just want to take this opportunity to leave you with a thought, which is, a lot of the time we talk in the context of AI ethics about fairness, and that’s really, really important. It’s really important that we have fair and unbiased artificial intelligence and machine learning. But I also want us to start talking about safety, because if artificial intelligences, if machine learning is becoming embedded in that invisible way in the things that actually show up physically in our lives, our cars, for example, then we need to not only talk about fairness. We also need to talk about safety, and so far, I’m not seeing enough discussion with the kind of industrial players who have that safety management built into their DNA, to help us challenge ourselves to think about, are we actually taking these steps in a way which thinks about artificial intelligence safety, as well as artificial intelligence fairness?
Marjorie Buchser
Excellent way to end this direct panel discussion, and I can now open it up to the audience. I’ll take the first question from our in-person audience, so if you have one, please raise your hand so I can see it. The gentleman in the front, please state your question. Oh, yes, sorry.
Member
Thank you.
Marjorie Buchser
The mic is coming your way.
Member
Thank you. Fascinating discussion. I was quite intrigued on a number of points, but let’s pick up on bias, and I’m a Statistician by training, and you mentioned, sort of, statistical bias. The word ‘bias’ is used in a very pejorative manner. We say, “Oh, if it’s a facial recognition system, if it can’t recognise ethnic minorities,” I’m an ethnic minority myself, I’m Anglo-Indian, “then it’s inherently bad,” and that all facial recognition systems are actually racist. I disagree with that. They can be poorly trained. We might actually use bias to correct that, simply by saying, “I’m just going to overrepresent the underrepresented examples in my training data.” And there is a conflict in terms of the context the word is used in: the, sort of, general language context, “Bias, bad”, and the statistical context, which isn’t necessarily bad, which actually helps us to solve a lot of problems. In [inaudible – 48:10] regression, for example, I’ve deliberately introduced bias in order to stabilise my parameters. Don’t want to get too technical here. But I think that more education is needed for the general public. What do the panellists think on this?
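As a concrete illustration of the questioner’s point that statistical bias can be deliberate and useful, here is a minimal sketch of ridge regression in Python, where bias is added to coefficient estimates in exchange for lower variance on an ill-conditioned problem. The toy data and the penalty value are invented for illustration and are not drawn from the discussion.

```python
# A minimal sketch: deliberately introducing bias (ridge penalty) to stabilise
# regression coefficients when predictors are nearly collinear. Toy data only.
import numpy as np

rng = np.random.default_rng(0)
n, p = 30, 5
X = rng.normal(size=(n, p))
X[:, 1] = X[:, 0] + 0.01 * rng.normal(size=n)   # near-collinear columns -> unstable OLS
true_beta = np.array([1.0, 0.0, 0.5, -0.5, 0.2])
y = X @ true_beta + 0.1 * rng.normal(size=n)

# Ordinary least squares: unbiased, but unstable when X is ill-conditioned.
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]

# Ridge: solve (X'X + lambda*I) beta = X'y, trading a little bias for stability.
lam = 1.0
beta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

print("OLS coefficients:  ", np.round(beta_ols, 2))
print("Ridge coefficients:", np.round(beta_ridge, 2))
```

Under these assumptions, the ridge estimates are biased towards zero but far less sensitive to the near-collinearity, which is exactly the trade-off the questioner describes.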
Marjorie Buchser
Toju, I’m going to – I think that’s a good question for you, as you’re obviously a bias expert, if we can say so.
Toju Duke
Yes, I 100% agree. I agree that we need to do a lot of education across different people in society and across different groups, and not just users and consumers, but even people within different communities, like research and industry. That said, I’m, kind of, happy that we’re even talking about bias, because five years ago we were not, right, and AI was slowly creeping into technologies, and shooting out, spitting out this bias [inaudible – 49:09]. No-one knew about it until people started getting falsely arrested for crimes they did not commit, due to computer vision programmes not being trained properly on data, and people’s lives were getting wrecked. And then Researchers started looking into these subject matters a bit more and started making some noise about it and saying, “This is absolutely wrong, this should not be happening.” So, I think with time, there’ll be a slow trajectory and change and shift in the mindset and all these different terms that we’re talking about.
And Caroline brought up a good point. Fairness is not the only problem affecting AI ethics; fairness is just one thing that a lot of people know about now, because research has, kind of, advanced in that area. And I think, you know, the next point you made was around bias and actually using bias to fix the problems. I do agree with that, but I think the problem we’ll have is the training data: how many representative samples can we have of these underrepresented groups to feed into the training data to actually fix these problems?
Again, remembering that training data is built on internet data, and how much of the internet is actually representing society? It’s quite mixed and biased as well, right? And for example with India, you know, with the data that they have, 70% of the women are not using the internet, so if you’re actually building an AI system or a product from India, it’s not representative of the full society and, you know, that’s not going to fix bias, per se, right? But, yeah, it’s definitely a very good point and…
Member
It doesn’t have to be just Indian data. It’s convenient. Sorry.
Toju Duke
Shouldn’t be.
Member
Absolutely.
Toju Duke
It’s the easiest and quickest way to get it.
Caroline Gorski
Can I offer an example from a different domain, which I think might throw some colour on the other side of this question, right? So, industrial data for training MLs suffers significantly from a scarcity problem, an event scarcity problem, and we are all hugely grateful for that fact, okay, because the data scarcity problem is essentially linked to failure events. And because industrial datasets have a scarcity of failure events, that means power stations don’t commonly explode, aeroplanes don’t commonly fall out of the sky, driverless vehicles actually don’t commonly crash. I know when they do they get into the newspapers, but honestly, I promise you, they crash less than cars driven by human beings. So, that lack of failure data in an industrial dataset is because industrial organisations are actually really good at safety management processes, for the most part, not all of them, of course, but for the most part they are very good at it. But it creates a huge problem.
In one particular use case in my world, I have 200 million lines of data and seven failure events. I can’t train an ML to find a failure event if I’ve only got seven of them in 200 million lines of data. I have to think about bias, statistical bias, as a way of interpolating that dataset, in order to be able to train the ML. I have no other choice. Either I generate synthetic data, which I can use to train the ML because it’s replicating a scenario, or I simply re-interpolate the data I have, but both of those are bias techniques. I have to use bias to do it. I’ve no other way of actually producing a meaningful model that will help me to do what I need to do. So, your point is absolutely right.
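For illustration only, and not Rolls-Royce’s actual method, here is a minimal sketch of the kind of interpolation-based oversampling Caroline describes: creating synthetic minority samples from a handful of real failure events so that a model has something to learn from. All numbers, features and thresholds are invented.

```python
# A minimal sketch: interpolating between a few real failure examples to create
# synthetic minority samples before training on a severely imbalanced dataset.
import numpy as np

rng = np.random.default_rng(42)

normal_ops = rng.normal(loc=0.0, scale=1.0, size=(100_000, 4))  # stand-in for healthy operating data
failures = rng.normal(loc=3.0, scale=0.5, size=(7, 4))          # only seven failure events

def interpolate_minority(samples: np.ndarray, n_new: int, rng) -> np.ndarray:
    """Create synthetic points on line segments between random pairs of real failures."""
    i = rng.integers(0, len(samples), size=n_new)
    j = rng.integers(0, len(samples), size=n_new)
    t = rng.random(size=(n_new, 1))
    return samples[i] + t * (samples[j] - samples[i])

synthetic_failures = interpolate_minority(failures, n_new=1_000, rng=rng)

X = np.vstack([normal_ops, failures, synthetic_failures])
y = np.concatenate([np.zeros(len(normal_ops)), np.ones(len(failures) + len(synthetic_failures))])
print(X.shape, y.mean())  # the minority class is now a trainable fraction of the data
```

This is deliberately a bias technique, in the statistical sense: the synthetic points assume failures live between observed failures, which is exactly the kind of assumption that then has to be validated against real data.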
Marjorie Buchser
Any other questions from the in-person audience? Come on, don’t be shy. I know it’s difficult, social interaction again, but – well, if you have more, don’t hesitate, I’ll take them. I’ll take some questions online. I’ll start with Samiha Zaman. Maybe, Samiha, you want to ask your question live, if my team can unmute you.
Marie Oldfield
Can I just ask a quick question in the meantime? Sorry.
Marjorie Buchser
Yeah.
Marie Oldfield
If you’ve only got seven failure events, and you’re interpolating and maybe simulating, how do you know that your algorithm actually works?
Caroline Gorski
Oh, we can get into the technical discussion about it, but I’m not sure it’s…
Marie Oldfield
But that’s – it’s a context thing, so how could you test something where you’ve not got enough failures, or you’re simulating what you think to be failures, but then how can you apply that to understand if that works in the real world?
Caroline Gorski
So, if you look at the detail in the Aletheia Framework around how – which has actually been derived from similar examples – in that particular case we didn’t progress with that, because we simply couldn’t, right? So, you know, that was too scarce for us to work on. But when you look at how we’ve worked with other cases where we have challenges around the scarcity of data and we need to put that synthetic data in, we are usually running a completely controlled experiment between the synthetic data and the real data. So, we will have a real data category running, we will have a synthetic data category running, and we will match to see whether the synthetic data performs at the same outcome as the real data. An example would be our EHM system, which monitors 11,000 engines in flight 365 days a year. We pump an entire airline’s worth of synthetic data through that system every 16 minutes, in order to check that the synthetic data we use to model some of the activities is not suffering from algorithmic drift and is accurately representing the real dataset. So, that’s how we do it.
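As a hedged sketch of the kind of parallel check Caroline describes, and not the actual EHM pipeline, the following compares model outputs on a real stream and a synthetic stream and flags drift with a simple two-sample statistic. The data, threshold and scoring assumptions are all invented.

```python
# A minimal sketch: run the same model over a real stream and a synthetic stream,
# then flag drift when the two output distributions diverge beyond a tolerance.
import numpy as np

def ks_statistic(a: np.ndarray, b: np.ndarray) -> float:
    """Two-sample Kolmogorov-Smirnov statistic: max gap between empirical CDFs."""
    grid = np.sort(np.concatenate([a, b]))
    cdf_a = np.searchsorted(np.sort(a), grid, side="right") / len(a)
    cdf_b = np.searchsorted(np.sort(b), grid, side="right") / len(b)
    return float(np.max(np.abs(cdf_a - cdf_b)))

rng = np.random.default_rng(7)
real_scores = rng.normal(0.0, 1.0, size=5_000)        # model outputs on the real data stream
synthetic_scores = rng.normal(0.05, 1.0, size=5_000)  # model outputs on the synthetic data stream

DRIFT_TOLERANCE = 0.05  # hypothetical threshold, chosen purely for illustration
stat = ks_statistic(real_scores, synthetic_scores)
print(f"KS statistic = {stat:.3f}", "-> investigate drift" if stat > DRIFT_TOLERANCE else "-> within tolerance")
```

In practice such a check would run continuously, with the tolerance set from historical agreement between the two streams rather than a fixed number like this.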
Marie Oldfield
Do you worry that maybe it’s a closed ecosystem, in terms of, you’re testing against something that’s already happening, and that’s being validated and verified, but then there is no – nothing outside the problem space that you’re actually then testing there?
Caroline Gorski
The problem space in the context we’re dealing with is not perfectly controlled, but a lot of it is known. We’re dealing with physics and therefore known physical reactions. So, for example, the stress rates on metal, or what happens when you put thrust through a particular engine, the impacts of heat. There are, of course, occasionally unpredictable events, like when volcanoes explode and we see gas clouds. We have done a lot of work actually modelling after the unpronounceable, and forgive me, Iceland volcano. I can’t remember how to pronounce it, but our AI, one of our, you know, some of our R&D team did some ground-breaking work actually that’s been adopted across the industry, modelling ash cloud formation, that came from that experience, and of course, we can now feed that back into the system. So, now we’ve got a base of data to use, but of course, before that we didn’t have that. So, what tends to happen is those unpredictable events are rare, and as soon as we have one, we will work to then reintegrate that into the system. But they are rare enough that most of what we’re doing is within the grounds of the rules of physics, and therefore somewhat more predictable.
Marjorie Buchser
I want to reassure everyone that I’m texting the technical team, I’m not just doing anything. It’s a bit weird in a hybrid setting, but I want to go back to maybe Samiha. Are you with us?
Samiha Zaman
Yes, hello, can you hear me now?
Marjorie Buchser
Fantastic, good. Samiha, do you want to state your question?
Samiha Zaman
Yes, thank you. Thank you, firstly, for a very interesting discussion. I was very interested and I’m concerned particularly about the questions of ethics in the incorporation of AI technologies, and I wondered, going to your question, your concern, rather, that safety was not being focused on as much as it should, whether in fact questions of ethics do form part of the discussion when safety features are being incorporated into AI technology?
Caroline Gorski
In our case, certainly. So, the Aletheia Framework has 32 principles in it and they cover both questions of ethics: is this the right thing to do? You know, does Rolls-Royce? I mean, it’s Rolls-Royce’s ethical framework, so it starts from the position that Rolls-Royce considers itself in the world, thinking about its own ethical position, but it includes questions that are about, you know, is this the right thing to do? Is this for good?
And we count, by the way, because we are a commercial entity, we count economic growth as part of good, so we don’t consider it bad to be generating a positive economic impact on the world. But we also ask questions of, you know, does – is this going to have a detrimental impact on our employees, is this going to have a detrimental impact on our supply chain and their employees? And so, absolutely, those 32 principles are a blend of trustworthiness principles, which are about some of the things I was just explaining, the methods by which we demonstrate that our algorithms are not drifting away from our expected outcome ranges, but it also includes ethical principles. So, I would absolutely put questions of ethics into questions of safety, and vice versa. So, I would also say that questions of ethics should include debates about safety. And let me again give you an example, because it’s – it can sometimes feel a bit abstracted in the world of, you know, Toju’s world, which is, you know, not about things that, you know, might blow up in your face in quite the same way. But if we are living in a world where artificial intelligences can have an impact on our democratic processes, that for me is a question of safety. I don’t think that’s just a question of ethics.
Toju Duke
Yeah, and I’ll just add to that: safety is considered part of AI ethics, you know, thinking about user harm, you know, if you’re interacting with a chatbot, how far can the chatbot go? What sort of recommendations is it going to give? Is it going to give you relationship advice? For example, you know, if you consider yourself as gay, but you’re from a family that doesn’t believe in that, a chatbot could tell you, “Oh, go and tell your dad, he’s going to love it.” You know, that’s not reality, and that could lead to further harm, so safety is really key, it’s important.
But to Caroline’s point, I do think it’s not been concentrated enough, you know, people haven’t done as much work on it, but I think safety in an industrial context is quite similar to safety online, online safety, and it’s all about user harm, interaction, you know, what’s the impact on kids, digital mindfulness, and all of that stuff. It still falls under safety.
Marjorie Buchser
Yeah, and I think that generally there are new principles and ideas, ‘cause we talk about fairness, we talk about bias, but as Toju said, we didn’t talk about those before, and I imagine that new concepts may appear, as well as environmental safety, or elements that haven’t been included in the principal frameworks but may be added over time, as we consider the technology.
Caroline Gorski
And even the fact that – I haven’t seen the latest draft, but the draft regulations from the EU around AI call out high-risk use cases for additional levels of governance and oversight. And I think for me, that’s a reflection of the fact that the safety question is beginning to come forward, right, because they’re recognising that there are certain higher-risk contexts in which the standard, the burden of proof that you’re thinking about these things, needs to be higher.
Marjorie Buchser
Well, it proves that the conversation is far from finished, but I want to thank my panellists for this great discussion. Also you, Dr Gill, for joining us, and sorry that the hybrid format is still a work in progress, and thank you for all of your participation in this in-person event, as well as online. Thank you very much [applause].