Marjorie Buchser
Good afternoon. Welcome to this Chatham House webinar on Artificial Intelligence and Democracy. My name is Marjorie Buchser. I lead the Institute's Digital Society Initiative, and it's my pleasure to be your virtual host for the next hour or so.
So, today, we would like to explore an issue that has certainly become one of the frontiers of the AI debate, looking not only at how AI applications have transformed and improved our economic and industrial processes, but also at how they may impact our societal and political systems and, to some extent, the very fabric of democracy. When I was preparing for this discussion, I was reminded of some of the historical quotes about democracy, and I'm not going to recite Churchill or Roosevelt here, but all of those thinkers very much highlight the importance of well-informed citizens, of deliberation, of argumentation, of mental autonomy, as well as human [inaudible – 03:07].
And of course, all these propositions are very much challenged by the very notion of automation and algorithmic curation. While it is widely acknowledged that artificial intelligence may improve a lot of processes and increase the efficiency of policy tools, and potentially the responsiveness to citizens' needs, there are also the perils of artificial intelligence deployed in democratic contexts. It could increase polarisation, it could also amplify questions of bias and prejudice in society and, sort of, undermine the very preconditions of democracy.
So, the goal of this panel and this discussion is to provide an overview of how AI may impact political and democratic institutions and principles, and I'm very pleased to say that today we have a very international panel to discuss this issue, starting with Rebecca Finlay, who is the Acting Executive Director of the Partnership on AI, at the moment joining us from Toronto; Professor Philip Howard, with the Oxford Internet Institute; Cornelia Kutterer, joining us from Brussels, Senior Director with Microsoft's Rule of Law and Responsible Tech Team; and finally, Matthias Spielkamp, joining us from Berlin, Co-Founder and Executive Director of AlgorithmWatch.
But before I turn to them, I just need to remind you all that this session is not held under the Chatham House Rule and is on the record. The recording of the discussion will be available on the Chatham House YouTube channel shortly after the event, so you can tweet and comment as much as you want on social media. We would also like to hear from you during this discussion, and if you want to address the panel, please submit your question through the 'Q&A' function that you have on your screen.
That being said, Philip, I want to turn to you first and give you the difficult task of kickstarting this conversation. How to save democracy from the machines is essentially your life's work, or at least the title of one of your books, but I think, from a general public perspective, there is still a lot of uncertainty and unknowns in terms of what types of AI applications have been deployed in democratic contexts and what their implications could be. So, could you kickstart this discussion and, sort of, provide us with an overview of the space?
Professor Philip Howard
Certainly. Certainly, and thank you so much for inviting me and including me in today's panel. I wanted to offer some examples of how AI is already shaping politics, and I've got, sort of, four bullet points here, four examples, four ways. I'm going to refer mostly to machine learning, for the specific reason that I think AI comes with a lot of baggage, and AI, in some ways, is a metaphor for something that might be. Machine learning more specifically refers to the set of fairly smart and sophisticated tools that make use of very large amounts of data and produce usefully political predictive analytics that are already shaping democracy.
So, I think there are four things that we need to be aware of. The first is that machine learning is already being used for electioneering, right? There are already multiple examples of how AI is being built with political bias in mind, right, with the project of trying to identify very conservative voters or very liberal voters, with the project of trying to get certain kinds of content to those voters, and perhaps even to customise the content in sophisticated ways. So, machine learning is already being used to identify supporters, identify donors, identify voters.
Depending on which country you're in, different kinds of data are used for this kind of work. If you're in the US and you've ever bought contraceptives on your credit cards, the National Organization for Women wants that data because you're clearly not pro-life, you must be pro-choice. If you have never bought contraceptives on your credit cards, then conceivably you're pro-life, and the pro-life movement wants that data. Anything a lobbyist can do to marry data from which we can make political inferences with your social media feed, anything they can do to play with that kind of data, is worth the investment, and the big money investments in machine learning and democracy happen in the US during a Presidential year, and that's when innovation happens in this space.
One of my great fears on this point is actually not so much what Data Miners or Political Consultants do, but the prospect of collaboration between device manufacturers and social media content firms. So, a Sony-Facebook content alliance that would put the behavioural data that gets collected from the devices we all have in our homes to work with social media content, and profile us on the basis of race, to produce the faces that we respond to. There is all sorts of behavioural research suggesting that male voters respond well to prompts from female electioneering contacts, and that women respond well to men with deep voices. This kind of knowledge exists in the behavioural research and could be put to work much more for electioneering in our democracies.
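To make the data-joining mechanism Philip describes concrete, here is a deliberately crude sketch in Python. Every field name, email address and category-to-inference mapping is invented for illustration; it is not drawn from any real campaign tool or dataset, and is only meant to show why merging consumer records with a contact list is attractive to lobbyists.

```python
import pandas as pd

# Hypothetical consumer purchase records keyed by email address.
purchases = pd.DataFrame({
    "email": ["a@example.org", "b@example.org"],
    "category": ["contraceptives", "hunting_supplies"],
})

# Hypothetical contact list a campaign already holds.
contacts = pd.DataFrame({
    "email": ["a@example.org", "b@example.org", "c@example.org"],
    "name": ["Voter A", "Voter B", "Voter C"],
})

# Invented mapping from purchase category to an inferred issue position.
CATEGORY_TO_INFERENCE = {
    "contraceptives": "likely pro-choice",
    "hunting_supplies": "likely gun-rights supporter",
}

# Join the two sources and attach the crude inference to each contact.
profile = contacts.merge(purchases, on="email", how="left")
profile["inferred_position"] = profile["category"].map(CATEGORY_TO_INFERENCE)
print(profile[["name", "inferred_position"]])
```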
AI, machine learning, is already being built for political argumentation, and at the moment these are mostly publicity stunts. The best example of this at work is IBM Watson's debate with a couple of Israeli high school student debaters. They were given an inherently political question, right, should we be spending more public money on space exploration? It's not unlike having Watson play Jeopardy, or, you know, play chess; it is a publicity move, but it's also illustrative that machine learning is already being imagined as a mode of producing content and interaction that would convince somebody of something.
So, machine learning is already being built for electioneering, and it's already being built for political argumentation. The third point is that it's already being built for manipulation, and we've seen several kinds of news stories about GANs and deepfake videos. I think we're at a lucky moment in technology design, in that most of the generative deepfake videos can get caught once they're compressed for YouTube. And for the most part, my own lab studies disinformation, and we have not found examples of deepfake political videos out in the wild, right? The ones we've seen have been purpose-built, again, as stunts.
But it's not so much about the deepfake video, it's about what lobbyists might be doing with text and sentiment analysis. I have a colleague here in the business school who says that, using the Gmail content that it collects and analyses, Google can predict public sentiment three months out based on today's email traffic. Say a Politician or a Prime Minister has to pick an election date, what Politician wouldn't want to know public sentiment three months out when it comes to setting an election day? So AI is already built for electioneering, it's already built for political argumentation, and it's already being built for manipulation.
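As a rough illustration of the sentiment-forecasting idea Philip attributes to a colleague, here is a toy sketch: score messages with a tiny hand-made lexicon, average by day, and extrapolate the trend about three months ahead. The lexicon, the messages and the linear extrapolation are all assumptions for illustration; a real system would use far richer models and data.

```python
import numpy as np

POSITIVE = {"good", "great", "hopeful", "support"}
NEGATIVE = {"bad", "angry", "worried", "oppose"}

def message_sentiment(text: str) -> int:
    """Crude lexicon score: +1 per positive word, -1 per negative word."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

# Hypothetical batches of messages for day 0, 1, 2, ...
daily_messages = [
    ["great budget news", "i support this plan"],
    ["worried about jobs", "this is bad"],
    ["angry and worried", "bad plan so we oppose it"],
]
daily_scores = [np.mean([message_sentiment(m) for m in day]) for day in daily_messages]

# Fit a straight line to the daily averages and project it ~90 days ahead.
days = np.arange(len(daily_scores))
slope, intercept = np.polyfit(days, daily_scores, deg=1)
forecast = slope * (len(daily_scores) + 90) + intercept
print(f"Projected average sentiment in ~3 months: {forecast:.2f}")
```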
The fourth point is the positive one, I would say, the upbeat one. Machine learning is already being built to solve public policy problems, and there are several great examples of this. My favourite involves a significant reanalysis of datasets about global poverty, which had relied on measures of light generation at night. So, if a town generates a lot of light, Economists assumed that it was a wealthy town, and so you could look globally to see where light was being generated in evening periods around the world.
Reanalysis of the data found that this was a poor predictor of wealth, because golf courses, for example, are significant dedications of land that don't generate a lot of light, and there are plenty of neighbourhoods that generate light but are extremely poor. The reanalysis found that the same survey, the same satellite data, was capturing information about roofing materials, and it was actually the roofing materials that were the best indicator of poverty, global poverty, something that our Economists did not catch, did not think of.
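The shape of that reanalysis can be sketched in a few lines: fit the same simple model to two candidate satellite-derived features and compare how well each predicts a poverty measure. The data below is synthetic, with the stronger roofing-material relationship built in by construction, so it only illustrates the comparison, not the actual study.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n = 500

# Synthetic ground truth: a poverty index per town (0 = wealthy, 1 = very poor).
poverty = rng.uniform(0, 1, n)

# Roofing-material quality tracks poverty closely (built in by construction).
roof_quality = 1 - poverty + rng.normal(0, 0.05, n)

# Night-time light is only loosely related (golf courses, poor but lit areas, ...).
night_light = 1 - poverty + rng.normal(0, 0.4, n)

for name, feature in [("night-time light", night_light), ("roofing material", roof_quality)]:
    X = feature.reshape(-1, 1)
    model = LinearRegression().fit(X, poverty)
    print(f"{name}: R^2 = {r2_score(poverty, model.predict(X)):.2f}")
```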
So, AI is being built for electioneering, it's being built for political argumentation, it's being built for manipulation, and it is also being built to solve public policy problems. And I think the key question for us, for those of us living in democracies, is how to take advantage of that fourth category, to put AI to work for us in public service in ways that fit with our democratic values.
Thanks, Marjorie, again, for the invitation to participate. I’m looking forward to the discussion.
Marjorie Buchser
Thank you. Thank you very much, Phil, and if I may, since we still have some time: you highlighted very well those four categories, three of them being not only potentially but quite directly negative for democratic processes. Do you think that enough effort is being put into the fourth one? You gave one example, and I'm wondering what you think, because it's also a question of investment in research…
Professor Philip Howard
Yeah.
Marjorie Buchser
…and do you think there's enough of that in the space?
Professor Philip Howard
This is a good question, and a great question, because of course I want to advocate for more research. The answer is yes, the European Commission, and the ERC in particular, is investing in smart applications, and the UK is making its own research investments in this space, but they tend to be much more closely linked to industry, whereas Europe tends to think of it as a public service issue. In the US, the innovation happens much more within industry.
I guess my last thought on this would be that one of the biggest things we know the least about is what applications are being developed in China, and what our own militaries do. So, a lot of the cutting-edge work in machine learning actually involves national security issues and is done by the military itself, in ways that are somewhat out of bounds from what we would normally talk about in a public policy conversation, and what the military does with artificial intelligence will have a significant impact on what our democracies look like ten years from now.
Marjorie Buchser
Thank you very much, Phil. I want to remind our audience, as well, that they can pose a question at any time during this conversation and exercise their freedom of expression also virtually. Cornelia, I want to turn to you now, so obviously Microsoft has been very active in that space, you have a Defending Democracy programme, you have a set of, sort of, self-defined AI principles. So, what is the role of the private sector in, sort of, establishing some boundaries for beneficial AI applications, but also in terms of investment, as Philip mentioned?
Cornelia Kutterer
Thanks, Marjorie, and thanks to Chatham House as well for inviting me. I'm really super excited about this panel, and I'm pretty sure I will learn a lot from the other panellists. So, let me frame this in the context of policy, because when we think about Microsoft's engagement, we are developing our own policies and standards, operationalising those ethical principles that you were just mentioning, and I'll come back to that point a little bit later.
We also drive escalation models where we look specifically at sensitive uses, so that they can be reviewed against those principles again, and in this context, of course, we are also engaged in the broader policy discussion. I am sitting in Brussels, I am European, and so my world revolves around European legislation, but also, and this is something I want to specifically mention, the Council of Europe, which I think is very relevant for the discussion on AI and democracy.
So, the first thing that I want to say is really that, as Philip mentioned as well, AI can improve democratic processes. Of course, we have seen many occasions where there was disagreement and a real challenge, and moreover democracies, unfortunately, are in decline, so we all need to take this into consideration; we need to make democracies more resilient against abuse. And there are two points that I think are important: one is how we address the sociotechnical dimension of AI in this context, and the second is how we actually engineer for public good, how we maintain good intentions throughout the AI life cycle.
In this context, back to the Council of Europe, there are 12 principles of good democratic governance, and I think we can go back to them. Some of them have already been mentioned, such as participation and representation, access to information, etc.; I will not list them all here. And of course, in this context, the debate around a convention on AI at the Council of Europe level is really important, because they really focus on human rights, democracy and the rule of law, and that is ultimately the beauty of living in Europe and having this rule of law framework to work with.
Now, when we think specifically about AI and democracy, among the potentially really problematic issues, as listed by Philip, synthetic media systems that generate misinformation and political propaganda are of course a key concern here, and content moderation systems that might recommend the removal of controversial yet lawful political or social content on online platforms are another of the debates that we are having. Both of those examples are challenging because it is so hard to regulate spaces where the values are freedom of expression and access to information. In the European Union's regulatory world, we are of course working on the Digital Services Act, which relates to that, but also on the forthcoming AI regulation. So, one of the questions is, okay, what will it regulate? What is high risk in the minds of the European Commission? Would synthetic media systems or content moderation systems fall into this category or not?
Now, as you mentioned yourself, Microsoft, as a technology company, is engaged in the policy discussions, but we are also, of course, looking at technology ourselves and have, in the Defending Democracy programme, developed a number of technologies that try to identify deepfakes, but also partner with media to authenticate content. One of those projects, Project Origin, is also important, and there are a number of others.
The last point that I want to raise is that the laws are not in place yet, but, as mentioned by Philip, the systems are already in place, so it is the responsibility of the companies to actually put something in place to counter this. And so, for Microsoft, this path towards operationalising these principles, which are fairly common with those of other companies, but also with the OECD principles or those developed by the European Commission's High-Level Expert Group on AI, is a daunting exercise. It's actually fairly difficult, and so, as a last point, I just want to say that I'm struggling with the high-risk definition quite a bit.
In our current work, and in the absence of existing legal definitions, we have, sort of, three triggers in this context: consequential impact on legal position or life opportunities, and most public sector automated decision-making tools would fall in there; then risk of physical or psychological injury; and threats to human rights. Threats to human rights, of course, overlap with the first trigger, but the synthetic media or content moderation tools would definitely fall in there. So, trying to already do our part in this process is really important, but of course we have always said that we need to have those guardrails in place to help make the risks more manageable, let's put it that way, and with this, I'll stop, and I'm looking forward to the broader discussion.
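A minimal sketch of how those three triggers could be encoded as a triage step is below. The field names and the escalation logic are assumptions for illustration; they are not Microsoft's actual review process or tooling.

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    affects_legal_position_or_life_opportunities: bool  # e.g. public sector decision-making
    risk_of_physical_or_psychological_injury: bool
    threat_to_human_rights: bool  # e.g. synthetic media, content moderation

def needs_sensitive_use_review(case: UseCase) -> bool:
    """Any one of the three triggers is enough to escalate the use case for review."""
    return (
        case.affects_legal_position_or_life_opportunities
        or case.risk_of_physical_or_psychological_injury
        or case.threat_to_human_rights
    )

# Hypothetical example: a benefits-eligibility scoring tool trips two triggers.
benefits_scoring = UseCase("benefits eligibility scoring", True, False, True)
print(needs_sensitive_use_review(benefits_scoring))  # True -> escalate for review
```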
Marjorie Buchser
Thank you, Cornelia, and maybe just a follow-up question, if I may. So you have this, sort of, set of principles; are they principles about where you shouldn't deploy a technology, or has Microsoft decided not to develop certain technology at all and said, "This is our red line and we're not going to engage in research or development for that form of AI application"?
Cornelia Kutterer
It can be either. I mean, the triggers will basically trigger a review, and this can lead to the conclusion that a technology is not right to be used, and then it will not be. It might be that in the design of the technology there will be mitigation strategies, or it could be that, in the context of a customer, we will, either contractually or otherwise, restrict the use, maybe to only specific customers, or not at all, or with specific restrictions on how it is used. So, how we then mitigate this is context-related, but there is a broad variety of mechanisms, and it really starts at the development process with an impact assessment and then mitigation efforts, through tools, or through the marketing process, etc.
Marjorie Buchser
Thank you, and I think that's a great segue, in terms of impact assessment, to turn to Matthias, because, Matthias, your organisation specifically looks at algorithmic decision-making processes and their social impact, and, sort of, aims to watch and explain those algorithms to the public. Where do you see the main gaps today, in terms of transparency and explainability, and which are the applications that are maybe most likely to affect the political process, but are perhaps less known by the public in general?
Matthias Spielkamp
Well, what we’ve found out is that there’s actually a lot going on in Europe already, with regard to automated decision-making. We use that framework because we think it’s a little better, it’s not perfect but it’s a little better than artificial intelligence. Phil already said, you know, you can also frame it as machine learning, but then, there may be other technologies coming up that we, in two years from now, regard as artificial intelligence. So, what we mean is what Cornelia already alluded to, the decision-making that is delegated to machines, that actually affects people in a meaningful fashion.
Now, this is pretty broad, you know, life chances, human rights, such things, and we began our work on basically mapping the use of this in 2018. In 2019, we published our first Automating Society report, and at the end of last year, we published another edition of it, in which we looked at 16 European countries and how they are using these automated decision-making processes, and we think that we've found quite a lot.
So, basically, in each country, systems like those are used. Most of what we found was usage by the public sector. There is, sort of, a bias in that result in itself, in the sense that, for example, we specifically looked at applications that were developed, sold and deployed by European companies and European public services; in public services that is clear, but also European companies.
So, for example, we did not map the consequences of large platforms such as Facebook, Amazon, Google and whatnot, because they are active everywhere, and if we want to map how this is done in Europe, it wouldn't have made much sense to look at those companies, because then we would come to the result that in each and every country, all of them are influencing decision-making and whatnot.
So the result is that the systems are being used everywhere, in different, let's say, modes or different intensities. For example, the Nordic countries or the UK are countries where there has been a lot of digitisation in the public sector, over the last decades even, and then, surprisingly, I guess, to many, I would say that Germany itself is lagging behind by at least ten years when it comes to digitisation of public sector entities. That has an advantage, because we don't see so many high-risk or problematic applications of this in the public sector.
Now, having done that mapping, we argue that one of the major problems for holding the institutions who are using these systems accountable is the lack of transparency. Now, transparency is very ambivalent; many people immediately criticise the use of this term and say, "Yeah, but transparency for whom, and transparency to what end?" You know, it's not an end in itself, it can only be a means to an end. I completely agree with that, and we as an organisation also agree with that, but it's a basis, it's a prerequisite for citizens to know what kinds of systems are used, especially by public entities, to be able to hold the stakeholders accountable. So, what we proposed, and I'll post some material in the chat after I'm done with my input statement here, is a couple of recommendations for how to at least alleviate that a little bit.
First of all, we would like to see impact assessments being made for such systems used by public sector entities, and I'm focusing on the public sector now; we can discuss the private sector maybe a little later. And when these impact assessments are done, they also need to be published in a public register, because at the moment, you know, it's almost impossible to master the challenge of finding out what is going on.
Different organisations, Journalists and civil society organisations have done this in different countries, and they all came across huge obstacles, because basically, if this information is not made available proactively, it's very, very hard to come by. So, this is a recommendation, or rather a policy demand, that we are making: to increase that level of transparency by using impact assessments and making them public. And then the next step would be to think about auditing these systems.
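To illustrate what a single entry in such a public register might contain, here is a small sketch. The fields are assumptions, loosely inspired by existing municipal algorithm registers, not a proposed standard from AlgorithmWatch.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class ADMRegisterEntry:
    system_name: str
    deploying_body: str           # which public sector entity uses it
    purpose: str                  # what decisions it informs or makes
    vendor: str                   # who developed or sold it
    data_sources: list[str]
    human_oversight: str          # how a human can review or override outcomes
    impact_assessment_url: str    # link to the published impact assessment
    contact_for_appeals: str

entry = ADMRegisterEntry(
    system_name="Benefit fraud risk scoring (hypothetical)",
    deploying_body="Municipal social services department",
    purpose="Prioritise case files for manual fraud review",
    vendor="Example Analytics GmbH (hypothetical)",
    data_sources=["benefits history", "housing records"],
    human_oversight="A caseworker reviews every flagged file before any action",
    impact_assessment_url="https://example.org/impact-assessments/fraud-scoring.pdf",
    contact_for_appeals="adm-register@example.org",
)
print(json.dumps(asdict(entry), indent=2))
```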
Now, that is a huge challenge, because at the moment, you could argue that we don't know a lot about how to audit these systems, because they can be very, very different. Auditing, for example, the system of a self-driving car, as opposed to what the European Union is actually trying to do, auditing Google's search algorithm and how it ranks, for example, Google's own services in contrast to competitors' services, is an enormous task.
So, we need to think about this hard, and this is also part of the Digital Services Act, the newly proposed regulation, which is in addition to the upcoming European AI regulation, and this is something that will, let's say, keep us busy for a long time. To some, and this is an impression I get, this demand for transparency seems weak, because we are not asking to, for example, prohibit the use of a certain system. But it's not weak at all, in the sense that we have to have mandatory requirements for transparency to be able to discuss these things in a meaningful way as a society, with all the different stakeholders who are involved in this process. I'll leave it at that, and, as I said, I'll put some things in the chat, and we also have time to discuss.
Marjorie Buchser
Thank you very much, Matthias. On your point that you've noticed quite broad deployment of algorithmic decision-making in the public sector, are there areas that you thought were particularly problematic? I'm thinking, for example, of the fact that Canada has deployed, sort of, a decision-making process for refugee claims, and there it also concerns non-citizens, and not necessarily people who will have the capacity to assess the transparency of the system. Have you seen particularly problematic applications?
Matthias Spielkamp
I think we did. We see those problematic applications everywhere. One of the most prominent examples in Europe is the SyRI system in the Netherlands, which was supposed to be used to identify welfare fraud. In the end, it was struck down by a court in the Netherlands, which said that it infringes on people's human rights, with specific reference to the European Convention on Human Rights. And then this first instance court basically prohibited its use, and the government didn't challenge this, and everyone was quite surprised, including the activists who had brought the challenge and said, "Wow, this is a great success." But then they found out the government didn't challenge it because there was a new law in the making that would, sort of, provide the basis for the use of SyRI 2.0, you know, an even more intrusive system.
So, we have seen examples of this in very many countries, and I would like to remind the audience and everyone here that we are not, in general, against the use of these automated decision-making, or decision-augmenting, systems by the public sector. We think there is a lot to be gained from their use, but the way they are implemented at the moment is basically not appropriate, because there is not enough expertise in the public sector itself. They don't even have the capacity to assess what they're buying from, for example, commercial vendors, and also, there is just not enough oversight and discussion about this before a decision is made on, for example, what purposes to use these for.
Marjorie Buchser
Thank you very much, Matthias, and I want to turn now, last but not least, to Rebecca. We have talked a lot about the deployment of AI in national public sectors, but I think there's also a recognition that this is, obviously, a transnational technology, and, in some respects, in terms of capacity building or knowledge-sharing, there's a need to have a global perspective on the issue, and to some extent, this is what the Partnership on AI is attempting to do. So, what's your sense of how we could mitigate the negative impact of AI on democracy through a global lens?
Rebecca Finlay
Thank you. Thank you, Marjorie. It’s a real pleasure to be with you virtually today and to join all of the experts on this important topic. And I have to say, I’ve already started taking notes, listening to these opening remarks, and I’m looking forward to the discussion, and thank you all for being with us.
So, yeah, when I think of AI and democracy in the global context, Marjorie, as you noted, the real question for me is how we ensure that AI is beneficial for people and society and, at the same time, as Cornelia noted, prevents real and potential harms to people. And when we think about people, we have to think particularly of those who are the most vulnerable, and when we think about societies, we have to be aware of those that are also vulnerable and are often not at the international decision-making table when it comes to so many of the norms and approaches which will be put into place.
And because of the prevalence and the remarkable potential, as others have noted, of AI predictive systems across sectors, it's clear that we need to have, within democratic, national and international ecosystems, strong and responsible individual sectors. So, we've talked about the role of regulation, we've talked about the role of the private sector and its need for voluntary best practices, but it's interesting that when I think about this, in terms of my current role as the Acting Executive Director of the Partnership on AI and coming from CIFAR, I'm really trying to puzzle out, and I'd really welcome others' thoughts on this, the unique opportunity and role for multistakeholder initiatives in this space: those initiatives that bring together both interdisciplinary and cross-sectoral experts to work on the questions posed by AI in society.
So, for example, at the Partnership on AI we have about 100 partners from several countries around the world. I'm really happy to note that Chatham House is one of our partners, as is Microsoft, and also the Oxford Internet Institute, so it's great to be with you all. And our role is really to think about how we create those spaces for our partners across sectors to come together, to invite diverse perspectives and voices into the process of technical governance, design and deployment of AI technologies. The hard problem, of course, is how you do that effectively and how you make real change.
And I think it starts in some way in thinking about what are those topics that are most impactful, in terms of how we deploy the model and how we measure success. So, one of the topics that we’ve already talked about is the importance of a well-informed citizenry, with strong and clear freedoms and moral agency, and one of the concerns of course is misinformation and disinformation and how that is proliferating within our democratic systems.
So, you know, as everyone’s aware, in the months leading up to last year’s US Presidential election, Facebook labelled more than 180 million posts as misinformation, Twitter flagged about 300,000 more posts, including several, as we know, by President Trump himself and his family. And while AI has allowed for synthetic media and new media to really advance work around art and creative expression, privacy, and even, in some cases, the capacity to identify problematic content, we also know that it has created this space in which one can create and promote content that misinforms, that manipulates, that harasses.
And so, one of the areas that the Partnership on AI has laid out is work specifically focused on media integrity. This is work led by my colleague, Claire Leibowicz, and it's an interesting case study because of the unique group of international organisations that have come together to work on it, because they are all, in their own way, on the frontlines of information integrity. So while we have Facebook and Microsoft there, and Adobe participating, there is also the BBC and The New York Times, as they think about the role of news agencies in this environment, and also civil society leaders, organisations like Witness and First Draft, that are very effectively advocating in this space.
So, it's really because of the alignment of interest and expertise that they've come together to tackle several areas of mutual imperative for all of them, both individually and jointly, around this question of misinformation and the authentication of content. And what, of course, they have found is that these are deeply complicated sociotechnical challenges within our democratic systems. Some recent work looks specifically at that question of labels, the labels that have been deployed by some of the tech platforms around misinformation; this is work done by Claire and by Emily Saltz at First Draft, and they've been able to identify that these labels sit within a system where there are already deep public divisions around the role of platforms and the broader communication and information ecosystem.
So even the public's ability to trust the social media platforms to determine what's fake, without error or bias in the very first place, is complicating the effectiveness of these labels. These are deeply complicated questions for regulators, for the private sector and for broader civil society organisations to grapple with. And so, one of the areas that we'll be pursuing moving forward is looking at whether there are ways to benchmark, to actually measure interventions across platforms, in a way that really, sort of, builds more public understanding and awareness of what is working and what might not be working as well.
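A toy sketch of that kind of cross-platform benchmarking: compute the same simple metric, here a hypothetical drop in reshare rate after labelling, for each platform so the interventions can be compared. The metric and the numbers are invented; they are not real platform data or the Partnership's actual methodology.

```python
def reshare_drop(before: float, after: float) -> float:
    """Relative reduction in reshares per 1,000 views after a label is applied."""
    return (before - after) / before

# Hypothetical reshares per 1,000 views before and after labelling, per platform.
platform_data = {
    "Platform A": (12.0, 7.5),
    "Platform B": (9.0, 8.1),
}

for platform, (before, after) in platform_data.items():
    print(f"{platform}: {reshare_drop(before, after):.0%} reduction after labelling")
```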
So really, when I think about that, there’s, sort of, four pieces that strike me around thinking about the role of multistakeholder initiatives within democracy. So, first is, what are the right questions, what are the places where these multisector actors really have an alignment of interests to come together, and then how do you have organisations that sustain that dialogue?
So I think about Chatham House exactly in this space, as well as the Partnership on AI: whose day job is it to make sure that we're creating these mechanisms where the public interest can be centred in a conversation about how to tackle some of these really intractable challenges? And I do think it's time to focus on solutions. I think principles and frameworks and approaches have been helpful in setting, sort of, the broader context for this conversation, but how do we iterate solutions, how do we think about measuring, monitoring and evaluating solutions as we move forward, and then how do we create a space where the public can be truly informed about what is actually taking place?
So, you know, this work that we'll be doing around benchmarking labelling practices is just one example of really trying to make transparent what practices are in place and how citizens, and regulators, can better understand that environment. So, I'm looking forward to the discussion today. Thanks for the opportunity to touch on that one example of some of the work we're doing.
Marjorie Buchser
Thank you, Rebecca, and we do have a lot of questions from the audience. Maybe just as a final thought, I think that all of you really flagged the importance of socialising these questions and this discussion with the broader public, and not only in your own country, but across the globe. So I wonder, Rebecca, whether the Partnership on AI is also trying to engage not only the technical and expert community, but actually citizens in general. That could be a question for all of you, but I'll ask you, Rebecca, and then I'll turn to the audience.
Rebecca Finlay
Yeah, I think this is a critically important question, and I think it really speaks to understanding how best to communicate and inform, for engagement and action moving forward, as well. And so, a lot of the work that is happening within the Partnership is thinking about those translational pieces of research that support better understanding of the broader communication context within which so much of this activity is taking place. And so, some of the work that we'll be doing moving forward will be digging even deeper into understanding how some of this is received.
We talked about explainability as being this core piece of, you know, what regulators want and need, in terms of the capacity for responsible deployment of algorithms, but even how the public understands explainability, what is explainable, what is the context within it, so really engaging the public in the conversation about what they need to better understand. How do we take transparency, as you say, Matthias, from just being transparent about what the application is to a real understanding of its role? That is such a critical piece of work that we need to do.
Marjorie Buchser
Thank you. Yeah, what is explainability? We need a Philosopher here. We have quite a few questions, so I'm going to take two, because I want to give as many of you as possible an opportunity to answer live. I'll start with Salma Abbasi. Salma, would you please unmute yourself and ask your question?
Salma Abbasi
Thank you so much for this opportunity. I think it's a wonderful discussion coming from very, very different dimensions, but all, at the core, looking at humanity. We are working on developing a series of lectures for new developers, students, to focus on AI for the SDGs, and the core is building human resilience. I feel it's important to influence the young minds, as they drive future machine learning and other aspects for the corporate world, which may be driven by other factors. We want to focus on the ethical context and the thinking. I would like to understand what the panellists think: is this the right direction, and how can we do more to engage on this? And, Rebecca, to your point, recently the IEEE has just launched a new conversation about measurement, and I was on this panel, again looking at explainability and how you measure the impact of AI holistically. Thank you so much.
Marjorie Buchser
And I was muted, that's typical, I still do this. I'll take another question, and then I'll offer the panellists a chance to, you know, answer one or the other. I think the next one we had was from Pauline Otty. Pauline, could you unmute yourself and ask your question?
Pauline Otty
Thank you very much, can you hear me?
Marjorie Buchser
Yes.
Pauline Otty
Okay, well, the issues have been topical and have been touched upon by different panel members, the issues being transparency, public trust, accountability, and at the end of the day, who owns AI? When you co-ordinate all the data from different areas, who owns or controls AI centrally? I think Dr Howard and Ms Finlay, in fact all of you, can touch on this, please. And this has been worrying everyone in your collection of data, Microsoft and all. Thank you very much. Sorry, my voice is a little bit off, I've got a cold at the moment. I hope you understood what I said, otherwise…
Marjorie Buchser
Yes.
Pauline Otty
…Host, can you read it out to them?
Marjorie Buchser
No, I think that – I think it was clear, Pauline.
Pauline Otty
Yeah, okay.
Marjorie Buchser
Thank you very much.
Pauline Otty
Thank you.
Marjorie Buchser
And I hope you get better, but it was…
Pauline Otty
Thank you.
Marjorie Buchser
…loud and clear.
Pauline Otty
Thank you.
Marjorie Buchser
Two questions: one specifically on the accountability of such systems, and a second one on how you make sure that future generations understand these systems and build, sort of, resilience around AI, and also on AI for the SDGs. So, I'm going to start with you, Phil, I think a few of the questions were addressed to you, and then I'll go to the rest of the panellists.
Professor Philip Howard
Certainly, I'm happy to offer a quick answer, because I'm confident Rebecca and Matthias and Cornelia know more about the ownership structures for AI. My sense is that most of the sophisticated machine learning systems are leased to governments or public agencies; the ownership does not transfer. IP often resides in private firms, and they're concentrated in Silicon Valley and Seattle, with a good handful in Germany and another batch in China, so I think ownership is a critical question. I'd have to defer to the other experts on the panel on what it looks like.
But I think the ownership systems for AI are important to understand if we want to advance the Sustainable Development Goals, because I think the research shows that AI systems perform best with fresh data that has been purposefully collected in clean and ethical ways. Those systems tend to generate outcomes that we like, or can be more confident of, and are more likely to achieve a sustainable development goal than AI systems that are trained on data that doesn't have a clear provenance, that is patchy and incomplete, that didn't have an ethics review while it was collected. And so, in a sense, these are two great questions with linked answers.
Marjorie Buchser
Thank you, Phil. Yeah, Cornelia, does Microsoft own AI? That was the question, essentially.
Cornelia Kutterer
Let me first answer the other one, which I find extremely relevant, which is how to influence the minds of future drivers and developers of machine learning. Just from the experience of the last two years within Microsoft, this human centricity is actually at the core of what we consider requires a mindset shift among developers.
So the way we have done this is starting with what we call AI champs: in each engineering group we have dedicated people whose task is now to actually train their colleagues and to foster this mindset shift in how you think about developing AI. And so our standard really starts with the idea that you have to first think about why you're actually developing it, for whom you develop it, who is impacted positively and who might be impacted negatively, and that's a different way of thinking about developing AI.
So, once you have done this, you start to think about the values that might eventually be in tension, and then how you can actually advance both values; inclusion versus privacy, for example, is one of the value tensions in the training that we now deliver to the Engineers. So doing this internally, yes, but starting to do this right at university level would be even better. I think that's a really, really great and significant question, so that should actually now be part of the curriculum for people in this field.
Now, the data ownership question is something I'm struggling with. First of all, what has changed, in the context of technology, is that we are often co-developing projects with our customers, so there's more co-ownership than anything else. IP has also significantly changed, and it's not my field of expertise, but it's an area where I think there is a misconception around who owns something. Data, in my mind, in my legal mind, cannot be owned per se, and that's the first thing, but the entire process of actually developing technologies in this space is very often a long-term and very co-operative adventure, and so the structures and the contractual licensing agreements have significantly changed in this space as well. And just to say, I entirely agree with Philip on the data.
We have an open data campaign where we have, first of all, made data that we hold available to the public, but where we also really try to advocate for the sharing of data for the public good, so there are a lot of great projects that we're working on in the health sector or in other areas, AI for sustainability, and in particular biodiversity, so there are many, many areas where you can actually do really wonderful things with open data.
Marjorie Buchser
I see Salma replying "exactly" and, sort of, forcefully agreeing with you, which is a very nice interaction. I'm going to direct one more question to Matthias and Rebecca; [inaudible – 53:30] asked me to read her question out loud, and it is specifically on transparency and accountability: "Referring to the fact that Facebook is not planning to inform the people that have been affected by the latest data breach, how can accountability and transparency be enhanced to ensure users are made aware and given control over their data?" So, a little bit the reverse: it's about ownership and whether, sort of, users and citizens could, you know, help gain greater transparency and make sure that the accountability system is in their favour. I'm going to turn now to Matthias and Rebecca, Matthias first.
Matthias Spielkamp
Okay, thanks for these great questions, and they are all really connected to each other because – I’ll start with the last one. Basically, you know, what we need to understand is that it’s – as in any other domain, it’s a power struggle. If Facebook just says, you know, “We are not going to be transparent about it,” and no-one’s going to challenge them on it, then nothing will happen, right? So, this is why we are asking for these much stricter regulations on transparency, accountability, also, auditability. It is necessary.
It’s basically – you know, it’s a travesty that a company like Facebook can lose 550 million datasets and then say, “Yeah, well, we’re not going to talk about it. We addressed it, like, ten months ago,” and ten months ago they said, “We are not going to comment on it.” It’s just ridiculous. So, is that a problem about data? No, this is a problem about power, you know, that’s all that can be said about this, and we need to have aggressive governments who say, “This is not something that we will accept.”
And, you know, I could go on about this for an hour, as you can imagine, but I’ll leave it at that, and of course this is deeply connected to the question of, who owns this stuff? And I think this is a – just a very spot-on question because I would frame this as control because, as Phil and Cornelia both said, the idea of ownership is not really adequate here. So who controls this? And the control of it depends on the – basically on the contracts that are made, and I think there are a lot of bad contracts that have been made in the past and that lead to a situation where, for example, public services are not actually controlled by the public sector anymore but they are controlled by private companies.
Right at the moment, we have a debate in Germany because there is a large cloud infrastructure that is offered by Microsoft to the German Federal Government, and there's a lot of criticism about this, because it raises, again, the question: what does this mean for the future sustainability of public services? Because if we hand over too much control to private companies, depending, as I said, on the contracts, then this will harm us in the future, and we have seen many examples of this happening in the past, where we lose control over this.
And then just briefly, I think the question about making people more aware of this is also a core issue. It's one of the core issues, not the core issue, and this is also why we are arguing that there needs to be enhanced expertise and literacy on these things, because, first of all, we need to demystify the claims that especially private companies make about the use of AI, but also the public sector, saying, you know, there is some magical silver bullet that we can use to solve societal problems with technology. That is usually not the case, and, you know, it ends in smaller or larger disasters, and we need many programmes to address this, and of course, developing curricula for the people who are developing this is a very, very important step. So, I can basically just encourage everyone who is in this field to try this.
As a matter of fact, one thing is that the Mozilla Foundation is offering, at least so far in the United States, some grants for scholars and academics to include this in their curricula. I don't think they have broadened it to Europe yet, but they're thinking about it, so if you're interested, get in touch with them and basically push them in the right direction.
Marjorie Buchser
Thank you, Matthias. We have two minutes to go, but Rebecca, there's a list of questions I'm sure you are interested in replying to, and I just want to highlight two more from the Q&A. One is, "How can we make sure that algorithms are free from bias, especially when used by the public sector?" And, "To some extent, how could tech firms use sandboxes and, sort of, protected environments to test AI technology, to limit the effect it may have on real life and democratic processes?" So, those are a couple of additional questions; if you want to comment on them, it's up to you.
Rebecca Finlay
Thank you. Well, thinking about all of the questions that have been asked, and about some things that cut across those questions, I do think there's a recurring message that all the panellists are touching on, in terms of documentation: ensuring that we have clear documentation practices, even standards and norms, in place across industry, but also in terms of government, across the lifecycle of machine learning applications, all the way from the provenance of the data, the genesis of the data, and, as Phil mentioned, the mixing of data that can happen, in terms of the datasets and their evolution over time. So, really understanding and documenting the core elements of both the training data and the data that's being used, and monitoring and managing that moving forward, as well as the models themselves, and coming up, through the lifecycle of an application of a machine learning system, with a way to clearly document and communicate what is there. To me it's almost, sort of, the first step in thinking about how we create understanding within the public, but also within the regulatory sphere and elsewhere, of the way in which some of these machine learning systems are being used.
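As a sketch of what that lifecycle documentation could look like in practice, here is a minimal record structure, loosely in the spirit of model cards and datasheets for datasets. The fields and example values are assumptions for illustration, not a published standard.

```python
from dataclasses import dataclass, field

@dataclass
class DatasetRecord:
    name: str
    provenance: str                # where the data came from and how it was collected
    collection_basis: str          # legal or ethical basis for collection
    known_gaps_or_biases: list[str] = field(default_factory=list)

@dataclass
class ModelRecord:
    name: str
    intended_use: str
    training_data: DatasetRecord
    evaluation_metrics: dict[str, float] = field(default_factory=dict)
    monitoring_plan: str = ""      # how drift and harms are watched after deployment

card = ModelRecord(
    name="Content label ranker (hypothetical)",
    intended_use="Prioritise posts for human fact-checking review",
    training_data=DatasetRecord(
        name="Labelled posts 2020 (hypothetical)",
        provenance="Posts flagged by in-house reviewers",
        collection_basis="Platform terms of service; reviewed by an ethics board",
        known_gaps_or_biases=["English-language posts over-represented"],
    ),
    evaluation_metrics={"precision": 0.81, "recall": 0.64},
    monitoring_plan="Quarterly audit of false positives by an external reviewer",
)
print(card.name, "->", card.intended_use)
```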
And then the other, I think, is trying to come up with those mechanisms, by civil society and others, for shining a light on incidents such as the ones we've talked about. So we're piloting something that we call an AI Incident Database, which allows for the reporting of incidents where harm has occurred. We're really just getting up and running, but I'd encourage participants here to take a look at it, to think about how it could evolve and how it could be better. One of the pieces of work that we're looking at is how we create a taxonomy that will better inform both the collection of those reports and the reporting that could emerge out of it as well.
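And a companion sketch of what a single report with a small taxonomy might look like; the categories and fields here are assumptions for this discussion, not the actual schema of the AI Incident Database.

```python
from dataclasses import dataclass
from enum import Enum

class HarmCategory(Enum):
    MISINFORMATION = "misinformation"
    DISCRIMINATION = "discrimination"
    PRIVACY = "privacy"
    PHYSICAL_SAFETY = "physical_safety"

@dataclass
class IncidentReport:
    title: str
    system_description: str
    harm_category: HarmCategory
    who_was_harmed: str
    source_url: str                # public reporting that documents the incident

report = IncidentReport(
    title="Automated fraud scoring flags low-income households (hypothetical)",
    system_description="Risk model used to prioritise benefit fraud investigations",
    harm_category=HarmCategory.DISCRIMINATION,
    who_was_harmed="Benefit recipients in specific neighbourhoods",
    source_url="https://example.org/news/fraud-scoring-report",
)
print(report.harm_category.value)
```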
And I just want to say that, you know, I talked about multistakeholder initiatives, and I think the Sustainable Development Goals are the ultimate multistakeholder initiative, and the more we can encourage the young minds that are working on AI to put AI to good use and to think about how to solve some of these complex global challenges, the better. And I think it's a great way to introduce all sorts of questions around ethics, the impact of research, and real influence, in terms of making change, so I strongly encourage that. Thank you.
Marjorie Buchser
Rebecca, that was a great conclusion. Rebecca, Matthias, Cornelia and Phil, thank you so much for your contributions and your insights. As I say, this is very much a first discussion, and there's still much to discuss and explore in this space, so we hope to host many more conversations on this topic. Thanks again for your contributions, and thank you also for your questions from the audience. It was a lovely Thursday afternoon, at least here in The Hague, where I'm based, and hopefully also in Berlin, Toronto, Oxford and Brussels. Thank you all, and have a lovely afternoon or day.
Matthias Spielkamp
Thank you; you too.