Harriet Moynihan
Well, welcome everybody to this event on Who Should Regulate Free Speech Online? A veritable hot topic at the moment. I’m Harriet Moynihan. I’m an Associate Fellow in the International Law Programme here at Chatham House, and this event is part of Chatham House’s Digital Society Initiative, which aims to bring together policymakers and technology communities to try and tackle some of the challenges that the rapid spread of technology poses for our society today.
So, in recent years, Big Tech has come under intense criticism for the ways in which digital platforms have enabled the spread of harmful online content, such as extreme hate speech, trolling and abusive content. And social media companies in particular have struggled to develop and enforce appropriate standards to try and address these issues. Against this backdrop, our panellists today are going to look at potential solutions to the problems of hate speech, disinformation and other issues with online content, such as incitement to violence, and we’re going to be analysing questions such as: who should define what standards govern these issues? Should it be the private companies themselves, or should there be a role for external actors? Can the private companies themselves take into account the need to safeguard democracy as well?
So, I have a fantastic panel here today, from a range of constituencies. I’m going to start by introducing Brent Harris, who’s the Director of Global Affairs and Governance at Facebook. Brent also worked at Redstone, which is a strategy consultancy, and before that, he worked at ClimateWorks. I also have here today Rasmus Kleis Nielsen, who’s Director of the Reuters Institute for the Study of Journalism in Oxford. Rasmus is also Editor-in-Chief of The International Journal of Press/Politics, and his work focuses on changes in the news media, political communications, and the role that digital technologies have in those. And finally, it’s my pleasure to introduce Dhruv Ghulati, who is CEO and Co-Founder of Factmata, an AI start-up developing community-driven explainable algorithms, and I like the sound of an explainable algorithm, it’s very useful, to solve the problem of online misinformation and to build a quality media ecosystem. Previously, Dhruv worked at Weave.ai, developing technology to provide context to information passing through mobile applications.
So that’s our panel. This event is on the record, and it’s also being livestreamed, so we’re creating online content ourselves here today. Before we start, a few housekeeping points. Do join in the conversation on Twitter, if you can, using the hashtag #CHEvents. We’re going to have about half an hour of panel discussion, and then I’m going to open it up to you, the audience, ’cause we’d love to hear your views in the Q&A. I’d be grateful if you could put your phones on silent. At the very end, we’re all going to retire upstairs; there’s a reception up in the Neill Malcolm Room and you’re very welcome to continue the discussions up there later on.
So I’d like to start the panel presentations by talking to Brent about a new initiative that Facebook have come up with called an Oversight Board. Some of you may have heard of it, and it’s very exciting in the sense that there’s a report actually being released on it today; there are copies on the table there. The Facebook Oversight Board has been open to consultation over the last few months, and Facebook has consulted quite extensively with a number of workshops around the world and town halls, as well as having an online consultation, and the report, available there and online today, summarises the feedback that Facebook have had about this Oversight Board. The final workshop was actually in Berlin earlier this week, which I attended, and the Oversight Board is basically aiming to provide a layer of independence and accountability for Facebook’s decisions about content moderation. So when Facebook makes a decision about whether to keep something up or take it down, there may be an option to refer it up to this Oversight Board, made up of independent experts, I believe around 40, who will then make a binding and transparent decision, which Facebook will follow. So it’s quite an innovative idea, and it would be really useful to hear a bit more. Brent, I know this is hot off the press and you don’t have the final details pinned down, but it would be really good to hear more about the Board from you.
Brent Harris
Perfect. Well, thank you for coming here today to hear about it, and we’re excited to hear from the panellists about what we’re building and how we should build it, and we’re also excited to hear from all of you and get your ideas on how we can get this idea of an Oversight Board for Facebook right.
So, Facebook today holds a tremendous responsibility. Every day about two billion people use our products to communicate, to share, to create, to learn, to discover information. And in the course of doing so, the company makes decisions about what’s allowed on those platforms, on those products, and who’s allowed on those products. And in order to do that, in order to exercise that responsibility, we’ve set out standards, we’ve also hired thousands of Moderators around the world, and we’ve built products, not only that help people to share, but also that help to keep people safe. But we’re a company, and we’re a private company, and we don’t feel that we should hold that responsibility alone, and so, that’s why we’ve called for greater regulation, and it’s also why we proposed building this board. And the idea behind the board is to compose a group of independent experts, who will come together, who will deliberate and make reasoned decisions on some of the hardest questions about what’s allowed on this platform, and those decisions will be binding and Facebook will have to follow what the board recommends.
In the course of making those decisions, the board will also look at the system of content moderation and it will look at our principles and our policies, and it will make recommendations on whether and how best the company can live up to those standards. And in the same way that we felt we shouldn’t exercise this responsibility just on our own, we also felt in building this board that we shouldn’t just build it on our own, and so that’s why we’ve gone out, over the last six months, and we’ve heard from over 650 people around the world in about 30 workshops and roundtables, people from 88 countries, and over 1,200 people who’ve actually submitted ideas on the right way to build the board and what it should look like. And later today, I think right at the end of this session, we’ll actually fully and publicly release the report that shares their feedback, and the feedback so far has been that folks want us to build this. People want us to build this board, however, they’re not sure we’re going to build it right, and they have a lot of ideas on exactly the right way to do it. And so my hope is that what we’ll hear today, from some of the panellists and from all of you, is your feedback on how we’re going to get the trade-offs right on this and also, how do we as a company, not only in the course of the board, but alongside governments and civil society and alongside press and alongside all of you, how do we actually start building a digital world and technology that is truly more just and more fair? So we’re excited to be here and really excited to hear some of the feedback.
Harriet Moynihan
Yeah, thanks, Brent. I can’t resist asking, because I’m an International Human Rights Lawyer, about the extent to which the board would have recourse to international human rights law when it’s looking at these decisions. I imagine that some of the board are going to be Lawyers, and I was just wondering if there’s any, kind of, way in which this framework that we have, this universal framework of rights, would be taken into account as part of that board.
Brent Harris
So I am a Lawyer by training, and one of the things that we have heard, and I’ll get to your question in a second, but it’s been striking as we’ve gone around the world, is that people don’t want the board only to be Lawyers, so…
Harriet Moynihan
I agree.
Brent Harris
…alas. However, I love this question, because it’s something we’ve really powerfully heard as we’ve gone out around the world, and we have heard a call that we should draw on and be informed by human rights and by international law, that a century of work has gone into figuring out how to shape these norms and shape these laws and shape some of the international institutions, and we as a company have signed onto the Global Network Initiative and signed onto the set of principles that they stand for. The board itself draws on some of the human rights norms and ideas, including fairness, including expression, including elements of process and equal access, and the principles that will guide this board almost certainly will be rooted in, informed by and guided by notions that come from that longstanding set of work in the human rights sphere. And so, the very product of Facebook is actually grounded in free expression, Article 19 of the Universal Declaration of Human Rights, and the broader set of principles that we stand for as a company includes safety, and we anticipate, based on the widespread feedback that we’ve gotten, that the charter that will guide this board will actually incorporate a broader set of some of these norms and ideas directly.
Harriet Moynihan
That’s encouraging, and certainly the transparency, I have to say the transparency in the idea of a report that provides feedback and the amount of consultation that has been done is also encouraging, and I gather the board itself will be going live in maybe six months or so. You’re going to take time to actually put the board into place, so it remains to be seen what the board will look like, but I gather you’re drawing on the feedback in order to put the board together.
Brent Harris
So that’s exactly why we went out. So that’s why we wanted to hear from people around the world. When this idea was first proposed inside Facebook, my reaction, and I’ll be candid, was: I have no idea exactly what this should be and what it should look like, and we’ve tried to live that in the course of actually going out around the world. We’ve tried to go out and actually hear from folks: well, what exactly would an Oversight Board for Facebook look like? How would you want to design it? What would you want to see out of it? And that’s because we truly don’t have all the answers in Silicon Valley and certainly, we don’t have all the answers just inside Facebook. And so that’s what this report is about, that’s what the consultation is about, and we’ve really tried to document, in great detail, the wide array of feedback that people have provided, including conflicting opinions and conflicting ideas on how to structure this, so that we get it right.
Harriet Moynihan
Well, I’m sure we’ll have lots of burning questions about that, as we come onto Q&A. But, Rasmus, in the meantime, I wanted to talk a bit about disinformation. I’ve mentioned hate speech and abusive content, but disinformation is a real problem as well, as we’ve seen, in particular in the context of elections in the UK and the US, and I think it’s led to a sense of scepticism about what we see in the news; indeed, we’re often encouraged to think before we share now. I’d be really interested in your views on what impact that’s having, in terms of legacy media and online media, and whether Journalists can play a role in trying to come up with solutions to combat that.
Dr Rasmus Kleis Nielsen
Sure. So, I mean, I think the baseline here is that digital technology has made it easier to share and publish any kind of information, whether it’s in good faith or in bad faith, and whether it’s factually accurate or not, or something else entirely. We should remember that most human communication is not actually assertions of fact, which is one of the reasons it is very hard to govern this space purely on the question of factual accuracy, if you will. And we’ve seen the construction of very large, relatively open network structures that operate for profit and enable all sorts of different things, and we have now seen that they also enable various forms of abuse, and things that some of us might think of as abuse but that are not clearly defined as such by law. But I guess we need to remember, of course, that free speech sounds very nice, and it is very nice, but it’s not uncomplicatedly nice. Free speech protects also the right to impart or receive information that may be shocking or offensive or disturbing, and we see a lot of that too, on large commercially operated networks, as well as on the wider open web.
It’s clear that both the experience of coming across this, as well as the very active public debate around it, is undermining trust in political institutions and their integrity, in the news media as a whole and their integrity, though some individual brands may stand out from the noise, as well as, of course, in the platforms that enable a lot of how we engage with digital media. So confidence in social media, in search and the like, is also suffering as a consequence, both of the reality of disinformation and of the debate around disinformation. And in that sense, I think it is one of those things where, even if responsibility and blame are not equally apportioned, it really is a collective problem, in which a lot of the different institutions that we rely on to be able to engage in our societies, politics, the media and technology companies, are all facing a crisis of confidence that ties them all together, and I think it’s hard to imagine a way out of this that doesn’t involve some element of collaboration, even if the responsibility is not necessarily equally distributed.
So how do we go about that? I suppose I’m an academic, so I naïvely try to answer the essay question that was put to me: who should regulate free speech online? I mean, I suppose my starting point would be to say, that’s not for me to decide. I’m an expert. I have professional expertise that can help assess the consequences of different choices that we may make as a society or that individual companies might make, and I want to make that expertise and the expertise of my colleagues available for that, and I have personal opinions, but my personal opinions are no more important than anybody else’s. So, in that sense, my answer to the question of who should regulate free speech online is that we should, right? And that’s why I’m glad that we’re so many people in the room today and that it’s a more diverse crowd than many of the discussions that exist in this space.
I think we can say some things about what it could look like, where I think David Kaye, the UN Special Rapporteur on Freedom of Expression, has been a leader in this space: the idea that, as Harriet intimated, we start from the principles of international human rights law, which impose a duty on states to protect our right to free expression, both to impart and receive ideas, a responsibility on private companies, and then the idea of opportunities for remedial action or recourse if our rights are being violated. And that, I think, provides a framework for thinking about what protecting and promoting free speech looks like, by recognising that there are situations in which we may want legal provisions for restrictions to protect fundamental rights other than free expression. But the legal provision here, ‘as provided by law’, as I think the frame is, is quite important, because fundamentally, I suppose my personal view would be that it needs to be elected officials who make the fundamental decisions about what the framework looks like, even as individual companies can make further decisions about how they want to operate in this space. And in that sense, I hope we can see some legal movement in this space, and I think we can also think of some examples of what good practice might look like, that are broadly similar across both legal regulatory interventions and what individual private companies might do.
One is precision. ‘Someone must do something about bad people doing bad things’ is not a policy, neither from the point of view of Politicians nor of private companies. We need greater precision about what exactly it is that we are trying to contain, and that precision is sorely lacking right now, and I think the absence of it is indicative of the complexity of the problem and also, perhaps, that we may need to face a choice, a choice on which reasonable people can disagree, about what is the shitty price we’re willing to pay for freedom? What are the bounds we want as a society in this space? So, some degree of precision that will force some hard choices.
I think we need greater intelligibility about, you know, what are the decisions that are being made and why? The legal system has a tradition of caselaw and jurisprudence that provides some insight into that decision-making, and technology companies, I think, have grown somewhat better at this. But we don’t have a similar tradition of trying to explain, proactively and publicly, and give reasons for how complicated decisions are made and why, whether those are automated, in the various machine-learning classifiers trying to screen things, or alternatively human decisions, though of course the ultimate responsibility needs to be a human one.
And then I think we need independent oversight. This can take the form of individual companies setting things up, but more fundamentally, I suppose I’m personally partial to the idea of industry-wide independent self-regulatory mechanisms, as proposed by Article 19, or potentially, in some societies, though I wouldn’t want to see this in every society, the idea of independent regulators backed by statute, as proposed by Damian Collins and the DCMS Select Committee in this country. And finally, in addition to intelligibility, transparency. No-one should get to mark their own homework in this area, and I think it’s clear that, while there are many reasonable obstacles to sharing data, privacy being one, given the demonstrable record of abuse of data that has been shared, we need greater access for independent third parties like civil society organisations, news media and researchers to actually assess the state of play, as well as the efficiency of the interventions that are being made.
So that may all sound, sort of, obvious to you and, you know, it sounds very convincing, these guys sitting there and laying out what should be done, so why are we not there yet? I mean, I would suppose that we can say, short term, there are some very clear obstacles, which are political complexity and then, you know, for-profit companies that are busy making money and doing other things, even as they are waking up to the fact that this arguably is an existential threat to their business model, and I think we can see several companies realising that and making some moves.
But I would say, more broadly, I think we have fundamentally the problem that, because this is about free speech, and because free speech is in part in the eye of the beholder, what potentially harmful looks like, what misleading looks like, neither private companies nor Politicians are particularly delighted to be the ones put in the position of unilaterally making decisions about what’s acceptable and what is not, and paying the political price when the inevitable controversy arrives about individual decisions. So in that sense, I think we are in a situation where there are some tough choices ahead of us, and the role of those of us in the room who don’t work for those companies or aren’t Politicians is to put pressure on those companies and those Politicians to make those goddam choices.
Harriet Moynihan
Thank you, Rasmus.
That was a whistle-stop tour through a whole range of other options, including the role that states might play. I mean, the UK at the moment, as you may know, is consulting on an online harms whitepaper, with the idea of setting up an independent regulator. Of course, many states have come out with their own laws on disinformation, some of which have come in for criticism for being too draconian, while others have been seen as a, sort of, more positive contribution. And there are international institutions, the UN, the EU’s Code of Conduct on hate speech, so there’s lots of ideas out there. Article 19’s Social Media Council, which you referred to as well, is a kind of multi-stakeholder model, and my question is, I suppose, to what extent can the Oversight Board, Article 19, state regulation, be complementary in this space, or is there a risk of duplication, or even of undermining standards if there are different standards? But that’s perhaps something for later.
Let’s pass on to Dhruv, who has a very interesting, exciting, relatively new company, Factmata, and I understand that it basically provides a quality, security and credibility score for online content. So it looks at content and if it’s, sort of, obscene or racist or potentially of that nature, then you would use algorithms to try and give it a score. It would be really useful if you could explain it better than I have, and perhaps also give us a flavour of the extent to which you think it’s scalable, given the huge amount of content that’s online.
Dhruv Ghulati
I was asked to do five minutes of, sort of, session notes, but I wrote a speech, but I think it’s probably not the right thing, so I’ll just talk a bit more about, kind of, what we’re trying to do. So, my background is, I used to work in finance and I was working on a trading floor, and one of the things that I got super interested in, and it’s a very different background to, obviously, you know, what we’re talking about here, is how information is framed, how it drives real decisions in the space of minutes, of someone clicking a button and, you know, spending millions and millions of dollars to buy or sell a Government bond or a stock. And it might look like, you know, a lot of rich people making lots of money, but that has real effects on our economy, and so I just, sort of, saw the power of information framing and I wanted to look into that problem.
In 2016, I quit my job and basically went to learn how to write machine learning algorithms. I got very interested in AI, and in 2016, I wrote a thesis proposal, Automated Political Fact Checking: building an algorithm that could essentially take any statement that someone was saying and basically propose a counterargument to that statement. Not necessarily fact checking it, but providing other evidence on a claim, and I wanted to do that automatically.
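[For illustration, a minimal sketch of the counter-evidence retrieval idea Dhruv describes, assuming a tiny in-memory evidence store and TF-IDF similarity; the corpus, names and cut-off are hypothetical, not Factmata’s actual system.]

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical evidence store; a real system would query large databases
# of statistics and reporting (e.g., ONS releases).
EVIDENCE = [
    "ONS figures show the unemployment rate fell from 5.2% to 3.9% over three years.",
    "Treasury data shows public debt rose by 12% over the same period.",
    "NHS England statistics show hospital waiting lists grew in 2016.",
]

def retrieve_counter_evidence(claim, corpus, top_k=2):
    # Rank corpus passages by lexical similarity to the claim; the most
    # similar passages are candidate evidence for or against it.
    vectorizer = TfidfVectorizer().fit(corpus + [claim])
    scores = cosine_similarity(vectorizer.transform([claim]),
                               vectorizer.transform(corpus))[0]
    ranked = sorted(zip(scores, corpus), reverse=True)
    return [passage for score, passage in ranked[:top_k] if score > 0]

print(retrieve_counter_evidence(
    "The unemployment rate has tripled over the last three years", EVIDENCE))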
Another personal story in this is that, you know, my family is Indian, and one of my earliest memories was, sort of, some of the very, very toxic statements that some of my grandparents would make. Things like, you know, being told what I should or should not do, so, sort of, you know, “Don’t go playing in that area. A black person shot three women there last week,” or “Don’t travel there, this country’s got poor hygiene standards. You need to stay away from Muslims, they’re radicals, have you seen how they treat their women?” These are all statements that my granddad would make, and it, sort of, shocked me, and they aren’t actually as uncommon as we think, there are lots and lots of granddads and grandparents making such statements. I wanted to build a system where this is not the future that we’re looking at.
So, fast forward from that thesis that I wrote in 2016, we’ve been building this company that I’ve formed, Factmata, not really thinking about, kind of, you know, the next one or two years, about detecting fake news or, you know, helping Facebook detect hate speech; we want to build an entirely new system. It’s very, very bold. It’s very, very ambitious. We need lots and lots of funding, potentially, but I think that it’s the perfect time, right now, to rethink how this entire media ecosystem works. One of our backers founded Twitter, and one of the things that’s really interesting is that we’ve built our media ecosystem based on advertising, where essentially, if you look at the, sort of, grassroots effect, content is now produced not to inform people, but essentially to serve ads in an ad-driven world, to then make companies sell more products. So content is basically made to sell more products.
And there is no system right now, fundamentally in the DNA of a social media platform, that rewards quality content and demotes poor quality content in a very automated, scalable way. So the first thing that we’re trying to build is an explainable, open system where we can start to monitor and actually measure the quality of content. That means defining what quality is. That means creating ways that we can evaluate our algorithms. That means training our algorithms with people who might be as unbiased as possible, and when you were making the introduction about community-driven algorithms, our vision is that in the future there could be a new media platform that doesn’t monetise using advertising, doesn’t monetise by selling products, and simply has this quality score that up-ranks quality content, demotes poor quality content, and does it in a way where the system in which that is done is explainable and fair, and open to everyone.
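[Again purely for illustration, a minimal sketch of the kind of quality-score ranking described here; the signals, weights and names are assumptions, not Factmata’s published scoring.]

from dataclasses import dataclass

@dataclass
class Item:
    url: str
    hate_score: float       # 0..1, from a hate-speech classifier
    clickbait_score: float  # 0..1, from a clickbait classifier
    sourcing_score: float   # 0..1, how well the piece attributes its claims

# Weights would be published openly, so the ranking is explainable and contestable.
WEIGHTS = {"hate": -0.5, "clickbait": -0.3, "sourcing": 0.4}

def quality_score(item):
    return (WEIGHTS["hate"] * item.hate_score
            + WEIGHTS["clickbait"] * item.clickbait_score
            + WEIGHTS["sourcing"] * item.sourcing_score)

def rank_feed(items):
    # Up-rank quality content and demote poor quality content; no engagement
    # or advertising signal appears anywhere in the objective.
    return sorted(items, key=quality_score, reverse=True)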
So there’s a few ideas that we are thinking about, but I think now is the perfect time to be rethinking this whole thing, instead of propping up a system that potentially is fundamentally flawed and fundamentally rewards toxic content, no matter how much regulation, no matter how much oversight, no matter how many boards you create, so that’s our vision. And maybe it will work and maybe it won’t.
Harriet Moynihan
It certainly sounds innovative, Dhruv. To what extent is there a human element to it? Is it entirely machine learning, machine based algorithms, or is there a human element involved too?
Dhruv Ghulati
Yeah, so I think, just to talk about ranking algorithms. The way that we talk about this is, when Google came along, PageRank was obviously their system to rank content, where it was essentially saying, well, this content’s really good, because lots of other pieces of content link to it in some way. And that’s great, you know, they’ve built that up and they’ve added, I think Google now has about a thousand signals, to rank content up and down. Facebook came along and created a new system, which said, that’s not a great way of ranking content, let’s look at what your friends say, and if your friends recommend it, or they like it, that’s a good piece of content. These ranking systems work really, really well, you know, 98 or 99% of the time, but what we argue is missing in these algorithms is an ability to actually read critically what the content’s actually saying. What claims it’s making, how it’s been written, how it’s been framed, and how it might exclude certain information, all these things that we’re really good at as humans, as critical thinkers, and that is the extra layer that we’re trying to build. And to your question, is it human related? It’s completely built by communities of people who we think are the best placed to, you know, actually analyse that content.
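[A purely illustrative sketch of the ‘extra layer’ just described: blending a link-based signal and a social signal with a score from models that read the text itself; the three-way split and weights are assumptions.]

def combined_rank_score(link_authority, social_endorsement, critical_reading):
    # link_authority: PageRank-style signal (what links to this content).
    # social_endorsement: Facebook-style signal (who liked or shared it).
    # critical_reading: output of models that look at the text itself,
    #   what claims it makes, how it is framed, what it leaves out.
    return 0.4 * link_authority + 0.3 * social_endorsement + 0.3 * critical_reading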
Harriet Moynihan
Okay, well thank you. I think what’s bound all these presentations together is the kind of solutions focus on problems that we talk about a lot, and we know what the problems are; the question is, what are we going to do about it? And I think that all of you have come up with some really helpful ideas for that, and it would be really useful to open it up now to discussion and to take some questions.
When I mentioned these state-based initiatives and also the initiatives by NGOs, such as Article 19, I wanted to say we don’t, as you can see, have those interests represented on the panel, but we do, I know, have some of you in the audience, so do feel free to pipe up and put your perspective, ‘cause it would be really interesting to hear those potential solutions as well. If you’d like to ask a question, please raise your hand, give your name and affiliation, and if you just wait for the roving microphone to come to you, so that we can capture you as part of the livestream, that would be great. I’m going to start with the gentleman over there on the left in the jacket.
Dr Alexander Brown
Hello, my name is Dr Alexander Brown. I’m a Professor of Law, Politics and Philosophy at the University of East Anglia. I’m currently doing a research project for the Council of Europe DG for Democracy into innovation in internet regulation and co-regulation, and we’re particularly interested in the Oversight Board model and in how it’s working in Germany currently, because in the deep recesses of the NetzDG law in Germany, it is allowed for a German court to recognise an Oversight Board or an independent organisation as having almost the status, or the right, to operate as a regulator of self-regulation, so that Facebook can upscale some of its grey-area problem cases to that Oversight Board, and my understanding is that’s going ahead in Germany. And my question’s about liability: do you feel that, once the upscaling has been done and that hard case has been sent over to this Oversight Board, which has been recognised by a court in Germany, it would then be impossible for another German court to still hold Facebook liable for having made the wrong choice, having outsourced that choice to an Oversight Board?
Harriet Moynihan
Wow, that’s a good legal question, and I know the Oversight Board is, you know, it’s in the planning rather than actually in fruition, so Brent, I’ll put that to you, with that caveat.
Brent Harris
It’s a great question and, while I’m a Lawyer, I don’t know the legal answer to it, and I’m not a specialist in German law. However, we don’t intend the Board at all as a substitute for local law, and so Facebook itself can extend and delegate no more authority to the Board than we hold ourselves, and the Board itself, then, is helping us to figure out how to exercise part of our responsibility. So, in our view, it is in no way a substitute for, as I understand it, either the liability side, nor would it be a substitute for local laws. It’s truly a way for us to figure out how we can make some of the hardest decisions and provide people a mechanism to go forward and appeal and say that some of the decisions we made didn’t fully live up to our standards or our principles.
Harriet Moynihan
I don’t know whether you want to comment on that anymore? Susie.
Susie Alegre
Thanks. I’m Susie Alegre. I’m a Human Rights Lawyer at Doughty Street Chambers, and I’ve got a question, I suppose, about the title, which is about regulating free speech online. But Article 19 includes the right to freedom of opinion, as well as freedom of expression, and I think the way our opinions are formed is more about the delivery than the content, and I think what you’re talking about is extremely interesting. But I was wondering how much thought the panellists have given to regulating freedom of speech, I mean, freedom of opinion and freedom of thought online?
Harriet Moynihan
Yeah, I guess it’s, sort of, interesting, it’s about the technique of manipulation of opinion, as much as whether the information is true or not, and it’s interesting to think about other rights that are relevant beyond just freedom of expression. Dhruv?
Dhruv Ghulati
Yeah, so, I worked on automated fact checking in 2016, and that’s basically saying whether something is absolutely incorrect or not. You’ve said something like, the unemployment rate has tripled over the last three years, and you can take a database from the Office for National Statistics and you can feasibly compare those things.
I think that – so I believe in automation for this. I’ve seen, in the last few years, these techniques of, kind of, taking things to oversight boards and having a very one-off process, where you have to take a decision on a case-by-case basis. The internet moves very, very fast and these rumours and these cases spread extremely fast. We need to be flagging disinformation, or at least the bounds in which the information is not okay, at the source. The way that we’ve been thinking about building this system is that we don’t think that something is absolutely right or wrong, but we think that we need to have very efficient flagging mechanisms to flag things that might be highly opinionated. So, for example, we actually have an algorithm that detects highly opinionated content. It’s not saying that it’s fake, but that algorithm, combined with one that detects content that might be very controversial, flags things that help a human team, potentially a trust and safety team, to take things down faster. And I think that, you know, some of those phrases I was talking about that my granddad used, there’s a grey area as to whether these are inciting hatred or violence in a massive community, but they are highly opinionated phrases that I think should at least be looked into.
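[A sketch, under assumed thresholds, of the flagging mechanism described here: two classifier scores combined to prioritise content for human review; the numbers and examples are illustrative, not Factmata’s production values.]

OPINION_THRESHOLD = 0.8      # assumed cut-off for the opinionation classifier
CONTROVERSY_THRESHOLD = 0.7  # assumed cut-off for the controversy classifier

def flag_for_review(opinion, controversy):
    # Neither score alone says the content is false; firing together, they
    # move it up a human trust-and-safety queue for faster review at the source.
    return opinion >= OPINION_THRESHOLD and controversy >= CONTROVERSY_THRESHOLD

posts = [  # (text, opinion score, controversy score) from upstream models
    ("You need to stay away from them, they're radicals", 0.92, 0.81),
    ("The committee meets on Thursday", 0.05, 0.02),
]
review_queue = [text for text, op, con in posts if flag_for_review(op, con)]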
Dr Rasmus Kleis Nielsen
Let me just very quickly add to that, and thanks for the question. I mean, I suppose one premise that seems to me to underlie some of the public and policy debate around this is a view of the way in which communication influences opinion formation that has been thoroughly debunked by more than half a century of scientific research in this area, and it’s a theory that is so debunked that it has, sort of, almost a pet name amongst communication and media researchers, which is the magic bullet theory, right? The idea that a statement communicated will hit the receiver like a bullet in the head and change their views, right? And everything we know about how communication influences opinion formation is that communication is at the end of a process of causality that really starts with very fundamental social, economic and political factors that are about identity, who you are, community, who you are with, and interests, you know, what you have at stake, both ideal and more material ones, and that the situations in which communication will change or evolve people’s views are situations in which they don’t know very much, don’t care very much, and don’t have very many people around them who know or care very much about something. Or situations in which they are exposed for a sustained period of time to overwhelmingly one-sided messaging on something, and those two situations, I think we need to recognise, are relatively rare.
That said, nothing I’ve said so far and nothing I will say, for the rest of today or the rest of my life, I think, takes anything away from how urgent and serious the issue of disinformation is. But I think it’s really, really important that we don’t fall into, sort of, the moral panic trap of thinking that someone somewhere, to make some advertising money, posting that the Pope has endorsed Donald Trump will magically lead to tens of millions of people believing that the Pope has endorsed Donald Trump. I think, as a society, we have almost the opposite problem, which Harriet alluded to in her question to me, which is that nobody believes anything, right? And it’s very, very hard to change people’s opinions, and if we don’t help each other develop affirmative skills, in addition to critical skills, to make informed choices about all the imperfect options available to us, which ones of them are slightly better or slightly worse than others, we’re left in this, sort of, very paralysing situation of generalised scepticism and no-one ever listening to anybody on anything, which I think is probably a darker scenario, actually.
Dhruv Ghulati
Can I just say a final…?
Harriet Moynihan
Sure.
Dhruv Ghulati
I totally agree with what Rasmus was saying: that statement that the Pope endorsed Donald Trump, it’s not that everyone’s going to believe that thing. I think the issue that we have on the internet right now is that we have no idea what the hell’s going on. So we have no way of actually monitoring how opinionated discourse is changing across these platforms. We don’t know how to monitor, and we’re trying to do this as an independent third party, how much hate speech is on these systems. I can tell you, you know, I can’t publicise who we’ve been working with, but we’ve found some really, really horrendous stuff on these platforms that’s not been taken down, and I think for us the vision is that, if we can have this as an independent force, and you were talking about this independent regulating force that DCMS has, sort of, talked about, I think there needs to be something outside the platforms. So maybe it’s us and maybe there are six other people, whatever it is, to actually give you an understanding of how opinionated discourse is changing.
Harriet Moynihan
Thank you. I’m going to take a question here, on the right, thank you.
John Thornhill
I’m John Thornhill from the Financial Times. There is a very simple, if somewhat radical, solution to this, which is to repeal Section 230 of the 1996 Communications Decency Act, which does give legal immunity, basically, to all of the social media giants, as far as I understand it, for all the content that they publish. And after the Christchurch massacre, Jacinda Ardern, the New Zealand Prime Minister, was talking about the possibility of treating Facebook as a Publisher, rather than a Postman, in other words, making them as liable for the content that is published on Facebook as an Editor of the New York Times is responsible for what they publish. Why is that a bad thing?
Harriet Moynihan
Thank you. So I guess at the moment we’re saying digital platforms are third party intermediaries and, as such, they’re not liable, so the nuclear option is to change them to Publishers, so they have liability. Brent, do you have any thoughts, off the top of your head? Or shall we go to Rasmus first? Brent.
Brent Harris
It’s truly a tough one, and one that I think is playing out in the dialogue today, and folks are trying to figure out what the right answer is to this set of questions. In the case of Facebook, and Instagram as well, on our platforms we are a mechanism of distribution and we are not a source of original content creation, and so my concern would be that we have to make sure that the means we have for communication are ones that are not unduly burdened, and that we’re thoughtful in the way that we set out regulation and ensure that speech can take place and that news and information can be shared. But I would love to hear from Rasmus and from others on, sort of, what the right path forward is.
Dr Rasmus Kleis Nielsen
I mean, I’m happy to take a stab at that, John, and thanks for raising the question. I’d recommend, for those who are interested in it to read Daphne Keller from Stanford, who has written very thoughtfully and with great nuance about this, and I’ve tweeted some links, for those who are interested in it.
I guess my personal view is that that move would have catastrophic consequences for Publishers and for individuals who want to express potentially controversial points of view. For Publishers, because if a search engine or a social media company was held legally liable for the potentially libellous nature of something written in the Financial Times, they would necessarily take a very conservative stance on whether they would surface that content. And I think that would be incredibly destructive to the ability of news media, who are really struggling to build direct connections with their audiences, and particularly young audiences online, to reach people where they actually are on the platforms. And with a risk that the platforms would essentially have to behave as if they were, sort of, Singapore shopping malls, extremely tightly governed, if you will, in ways that would hurt Publishers.
It would also, I think, hurt a lot of individuals who have potentially controversial views. Think of movements like Black Lives Matter or #MeToo and the ways in which they have been expressed, sometimes for extremely understandable reasons, in very emotional and forceful language, language that I think sometimes could be seen as legally, potentially, problematic if people felt they were subject to allegations or the like. I’m thinking of #MeToo in India, for example, and the allegations made against a named individual male Journalist; some of those allegations turned out to have been perhaps not entirely substantiated, but much of it was substantiated. If everything was regulated as if the platform was liable for that potential expression, none of it would have come to the surface, and I think that would be terrible for public debate.
So – but the problem, I think, with the discussions is that we’re still mostly at the stage where people say it’s one or the other, and I think the obvious answer is, we need something else. What that something is, I don’t know. But it’s not merely dumb tubes, you know, transporting a bit from one place to another, it’s not Verizon, AT&T or BT, and it’s not the FT either. It’s something else, and I think we’re still trying collectively to figure out what that something else is and what responsibilities and liabilities come with that something else.
Dhruv Ghulati
I think a parallel to that, and something we think about, is your editorial policy at the FT. So, imagine if you were held to account for not following that editorial policy, or if someone else imposed another editorial policy on you that you had to abide by. I think the good thing that you have at the FT is that you actually publish your editorial policy, and I think that, for me, is the key thing that I would like to see from the platforms: that whatever their editorial policy is, which is essentially a ranking algorithm, is made as explainable as possible.
One of the challenges when we were building our hate speech algorithm, for example, is that we haven’t built that ourselves. We’ve been working with lots and lots of advocacy groups to define how we would train that system: what are the different types of hate speech? What is the case, what isn’t the case? The difficulty’s not building the algorithm, it’s actually building those frameworks and rules and putting them out there, and so all of the ways that we define hate speech, you can literally see them on our website, and you can argue with them. But I think that’s what I’d like to see with Facebook, not what happened a few months ago, where the New York Times had to leak their content moderation policy, and it was about 160 pages and kept very, very secret.
Harriet Moynihan
So having greater transparency, yeah, yeah.
Brent Harris
A quick point on that is that we have a set of community standards. The standards are public. The standards are created through a forum called the Content Standards Forum. That forum includes a set of experts, it includes a dialogue, and we now publish the minutes of that forum, and the standards are also what help inform the algorithm and how we think about our products, and so what you’re calling for is something that actually exists today.
Dhruv Ghulati
Yeah. No, I think there’s been a lot of amazing news in the last two years, and I think those developments are really positive. I just think that, for the public, it’s very, very important to push this as hard as possible in making those explainable, so that people actually read them and understand them and see how these decisions are being made. Which is not just a, sort of, policy-writing problem, it’s a problem actually in technology: how do you actually explain algorithms? How do you explain how they’re ranking content up and down in your system?
Harriet Moynihan
Yeah. I understand the Oversight Board is likely to be looking at how community standards are applied in a particular case and possibly issuing guidance to Facebook on its policies, which will be really interesting, you know, to see how those community standards are being applied in practice, out there in public. We’ll go to this side of the room, to the gentleman in pink.
Alex Folkes
Thank you. I’m Alex Folkes and I work as an Election Observer for the OSCE. At every election that I ever go to around the world, social media is increasingly a big part of the campaigning methodology; that’s entirely valid, that’s great. It helps to spread information, and sometimes, unfortunately, false information, but you’ve covered that already. However, every single country has its own electoral laws. They may be right, they may be wrong, but they are those countries’ electoral laws. Facebook has taken some very good steps in the right direction to regulate its content on elections, but they don’t bear any relation to the electoral laws of any country except the US. They don’t cover outside campaigning and they don’t really reflect the financial regulations of each individual country. I realise it’s a huge ask, but when you’re talking about something as vital as elections in countries, my question basically is, what more could Facebook and other platforms do to ensure that the content they’re putting out there complies with the individual electoral laws of the countries it’s being seen in?
Harriet Moynihan
Thanks for pointing out the benefits of tech, because I don’t think we’ve managed to do that so far, and obviously, in elections, there are huge benefits in terms of greater plurality and greater opportunities for participation. But it does raise the question that you posed. I don’t know if any of you have particular views on that? Brent, there’s a Facebook angle to it?
Brent Harris
So it’s not my area of expertise, which then means it’s probably dangerous ground, but there are a handful of things that we are doing in this regard. One is we created something called the Election Research Commission, also called Social Science One, and it’s a partnership with Election Researchers around the world, as a way to start to share, in a privacy-protecting way, more data and more information about elections and about democracies and about what’s going on. And it’s something that we’d love to do more on and, candidly, that I think we just need to figure out ways to do more of as an industry, because as so much information sits in a number of technology companies, it’s incumbent that we find ways to share those insights and bring greater transparency to what we’re doing. And a second effort that relates to this, and I think this was actually just announced yesterday by Mark, is that we are taking our political ads [inaudible – 46:24] and making that global, and so there will also be increased transparency available around the world into what’s happening in elections and the political ads that are being run.
Harriet Moynihan
Thank you, and we’ve got about ooh, eight minutes left, so I’m going to group some questions now, and I’m going to start with the lady here in the white, and if you could try and keep them reasonably concise, given the time. Thank you.
Member
So I represent the common person.
Harriet Moynihan
Great, that’s what we want.
Member
I’m a Facebook user.
Harriet Moynihan
You’re a user?
Member
Yeah, and a couple of things that came to my mind. One is that Facebook is developing this Oversight Board, and I was told they have spoken to 650 people or more, and I’m thinking about the billions that we are, and the millions of Facebook users, and getting only 650 people’s views to build an Oversight Board made up of 40 members, and how far is Facebook going to go? Are they going to act before something is published and just stop that? Because, you know, with the speed, it goes completely viral and it can cause severe damage, and like I said, I’m not amongst the esteemed crowd, so I’m literally just the common person. And I was very impressed with Rasmus, because I think he really summed it up; the key words that I got from there were independent oversight and collaboration, and that’s where Factmata comes into place, where it would attempt to give us, the individuals, the right to choose, because we’re looking at different people in different parts of the world, with different legalities, with different cultural ways of identifying right or wrong.
So if we had some way to give that right of free speech, but then we choose whether something is right or not, and that becomes a collaboration between the whole world as to, you know, what is right or what is wrong, making it a safer place, where things just don’t go out of bounds and we can control them. That’s what I understand. So I haven’t really asked a question, but…
Harriet Moynihan
Yeah, no. I’m going to give Brent a chance to come back on that and the others, if they’d like. I’m just going to take one or two more questions. I’m going to – the lady over there on the right-hand side and with the glasses. Yeah, thanks.
Roxana Raileanu
Hi there. I’m Roxy and I work for The World Today magazine here at Chatham House, and actually, our current issue is about who you can trust in your mind sphere in the future. So my question to the panel, extrapolating from that, is: in the future, what does good digital governance look like? Thank you.
Harriet Moynihan
That’s a good question, thank you, and we’ll take one more from this side, the gentleman on the second row there, and thank you.
Member
I’m [inaudible – 49:29], a Member and Behavioural Economist, and my question is to Facebook. Do Facebook consider the psychological impact of monetising user data, and its impact on future Facebook usage?
Harriet Moynihan
Okay, thank you. So we’ve got three questions there. Dhruv, do you want to start?
Dhruv Ghulati
Yeah, so I think the key thing that we did at Factmata at the beginning, setting this up, and I think it gives us a really good advantage, is to admit that we are going to be really biased. Our bias is the sum of all the people who have labelled the data within our algorithm that makes those automated decisions. And before people actually label data to make our algorithms, they have to put in who they are and where they come from, and what their political affiliations are. And it’s that sense of, kind of, admitting that things might go wrong and putting that out there. And I think, and obviously this is my angle on things as a third party, that we need to encourage more experiments on different ways of ranking content, allowing Researchers access to data. There might be a different way of ranking content, but I think the key thing is making it very transparent, so that the communities that are involved actually shape these decisions, and it’s open and transparent.
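[A sketch of how declared annotator background might travel with every label, so the aggregate bias of the training data is auditable; the schema and field names are assumptions, not Factmata’s.]

from dataclasses import dataclass
from collections import Counter

@dataclass(frozen=True)
class Annotator:
    annotator_id: str
    country: str
    political_affiliation: str  # self-declared before any labelling

@dataclass(frozen=True)
class Label:
    item_id: str
    is_hate_speech: bool
    annotator: Annotator

def affiliation_breakdown(labels):
    # A publishable summary of who labelled the data: the 'admitted bias'
    # that makes the resulting algorithm's decisions open to challenge.
    return Counter(label.annotator.political_affiliation for label in labels)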
There was a question about data, and on the political ads database, great move by [inaudible – 51:01] Rob Norman, you know, to actually put that out there; I’d like to see more and more data released. As for the move in elections to WhatsApp encryption, I can tell you, trying to get access to that data is nigh on impossible, yeah.
Harriet Moynihan
The pivot to private is problematic. Rasmus, did you have some thoughts on…?
Dr Rasmus Kleis Nielsen
I mean, first of all, I think that question you ask, about what good governance looks like, is a critical one for, sort of, our generation, if I can be a bit melodramatic about it, and I just want to say, I think it’s big and complicated and I don’t know. But I think there are some people who are doing thought leadership in this space, and I would just encourage people to think about the work of, say, Jillian York or Danah Boyd and David Kaye, Rebecca MacKinnon, and others who are very active in this space, and people in the Global South, who face many of these issues too, Maria Ressa, or Ritu Kapur and others. And in some ways, I think sometimes we should maybe think about this almost like a, sort of, constitutional moment, if you will, and an opportunity to try to think about what do we want good to look like in the future? And again, as I said, my opinions on that are no more interesting or important than anybody else’s, but I have some, and I’m happy to share some of them. One is, I suppose, that I personally think we should start from the recognition that we live in irreducibly diverse societies, so we will not agree on what good looks like, and my personal view, therefore, is that we need to think about what a framework looks like in which we can have robust disagreement about that, without making life impossible for minorities, or for majorities who are sometimes treated as if they are minorities, who are subject to amazing amounts of abuse and whatnot, in public life and online. So how do we create the framework for disagreement, if you will, that can be very robust and very uncomfortable, but one in which we can argue this out?
How do we do this in a way where we are respectful of interests and identities, without treating this discussion as a question of how do we get back to a romanticised view of the past? You know, I have great respect for the 20th Century, if one can say such a thing, but I think we need to be very, very clear, the 20th Century was a shit century, in many ways, for many people, and this is not about getting back to that. It’s about thinking about what do we want for the future, right? And not about some romanticised view of the past. So that would be the point where I would start, and where I see a lot of really constructive people thinking about this.
Dhruv Ghulati
To turn it just to Facebook: you know, when the Rohingya crisis was going on, one of the key things that people were pulling out was that they only had two Burmese Moderators, right? And I see that constantly in the news, you know, how many have they got in this country? How many women have they got in there? And it’s an impossible, impossible task to do, and I think they’re putting a lot of effort into that, but I think we need to almost take a step back and say, it might be impossible to reach a perfectly fair system for all of this.
Dr Rasmus Kleis Nielsen
And really importantly, and I will shut up after this but, you know, for those of you who are popular culture fans, this is like the final episode of the final season of Game of Thrones, right? You know, who gets to decide what’s right? And I have to say, Facebook as a company, you know, I think you guys are struggling with this all the time, and one thing I see sometimes, though it’s not the majority view that comes from the company, but sometimes I see it and it frightens me, is when Mark Zuckerberg or someone else stands in front of a slide at F8, or some other big event, with text saying something like, our responsibility is that people use our tools for good, and I’m like, that is not your decision. That is not your decision, what good looks like. That is our decision, and I think there has to be something about respecting rights and protecting users, but also ensuring that it’s not your responsibility to engineer a less polarised America. That’s a political phenomenon. You have a responsibility to protect users, but not to define what good looks like, and it would be, from my point of view, catastrophic if you tried to do it, because you would undoubtedly get it wrong, because everyone gets that wrong. And I think it would be just an even worse place than you are in already.
Harriet Moynihan
On that note, I’m going to give Brent a final word, a few minutes just to respond. You’ve had a few questions, Brent, and a few comments.
Brent Harris
This is great, and so we asked for feedback, we asked for ideas. There’s a lot of feedback, and I think there are a lot of ideas, and just a handful of points. So, on one, and Rasmus knows this, ’cause we’ve talked at length, I’m in profound agreement with the statements that he’s making about where we’re at, and I really do think we’re at this pivotal moment, this inflection point in where we are with society. What we’re seeing is that the world is going increasingly digital, and there are consequences to that. There are pros and there are cons, and as the world goes digital, we’re in a moment where we need to figure out how the institutions we have today relate to that digital sphere and what we need. And so, that is why we’re out calling for more sets of rules and norms and why we’re trying to think about how you build something like this Oversight Board, and the fundamental premise of the Oversight Board is precisely in line with at least how I hear some of this feedback, which is that we shouldn’t be making these decisions on our own and in isolation, and that we don’t have all of the answers inside of Facebook as a company. And so what we need to do is start to build mechanisms and start to build institutions, and the Oversight Board will be only one of them, but start to do that in a way that allows for greater debate and disagreement, and allows for deliberation behind how you trade off on sets of principles and sets of values, and how you think about what good is, and what free expression means, and when it crosses the line.
And so, that is why we’re out there and why we’re trying to do this, and my hope is that, while we’ve talked to 650 people and we set up a public submission and 1,200 people responded, the idea is that that’s a beginning. I, for one, have not seen a company in Silicon Valley that’s stepped back and said, before we just launch something and before we build something, we would go out and we would hear from audiences like this and we would take feedback, and while it’s not enough, and I think we’d agree we haven’t heard from enough people, and we hope we hear from more, that’s the point, in part, of building this board. It’s actually to create a mechanism that formalises this and allows people over time to participate, and to have thousands of people and tens of thousands of people engage and say that a particular decision isn’t right, or the sets of values aren’t being traded off the right way, and we’re not going to be able to do it alone, and we really shouldn’t do it alone, and that’s why we’re calling for regulation. And it’s also why I think, in this moment, we need to think about how there can be more research, and that was the idea behind the Election Research Commission, and how there can be more civil society activity. How do people actually become increasingly sophisticated at what accountability looks like, and how is there greater and stronger press attention, so that the right types of scrutiny come in and provide legitimacy behind where we’re at? So, we’re delighted with the feedback, and if we actually succeed in building this institution, I actually think that it will formalise things so that there are more debates like this and more dialogue like this.
Harriet Moynihan
Thanks, Brent. So, to sum up, I think we’ve heard a lot about accountability of platforms, which is a really innovative and interesting idea. We’ve heard about the importance of promoting good journalism, reliable information and fact checking, and about ways to combat disinformation and hate speech, but I think what I take away from this, above all, is that this stuff’s really, really difficult. And basically, regulation is, as ever, trying to catch up with the technology, which is moving really, really fast, and even the regulation we’re talking about today may not be quick enough to catch some of the new technologies that are already on the move, Deepfakes, etc.
So I think what is positive is that we’ve heard lots of innovative ideas, and we have also heard about ways in which the different stakeholders involved in this debate are sometimes working together and talking together, as we see on this platform, and this is what Chatham House is about, it’s about bringing different constituencies together. Facebook has had its consultation, David Kaye and others, very importantly, have been mentioned today, and they’re all part of that debate, and so I think, while we haven’t had any easy answers today, what we have seen is people working really hard to try and find solutions, and I guess it’s just watch this space.
I’d like to ask you to join me in thanking the panel for their very valuable contributions today [applause].