Kate Jones
Good evening, everyone – or good morning to those of you across the Atlantic – and welcome to this Chatham House members’ event on Freedom of Thought and Opinion in the Digital Age. My name is Kate Jones. I’m an Associate Fellow at Chatham House, where I focus on human rights in the digital age. Before I turn to our topic this evening, first a few housekeeping notes. This webinar is taking place on the record and is being recorded. Audience members are encouraged to tweet, using the hashtag #CHEvents. You may submit questions throughout the event using the ‘Q&A’ function. You may also upvote questions from others that you would like to hear answered. During the Q&A session, if selected, I will invite you to turn on your microphone and repeat the question you have submitted in the ‘Q&A’ box. Please make a note, alongside your question, if you’re unable to ask the question on mic, so that I can present it to the room for you. And so, to the substance of this evening.
The next hour gives us a really exciting opportunity to explore the impact of technology on our minds. This is something that lots of us are aware of. You might have noticed, for example, that if you catch sight of your mobile phone, you have an urge to pick it up. You might have found that once you’ve picked it up, you spend longer looking at it than you’d intended. You might be concerned about the polarisation of political debate and growing intolerance online, or worried about what technology is doing to our children’s mental development.
The ultimate risk is perhaps that our minds are so conditioned by the technology in the palms of our hands that we may no longer be free enough to resist its influence, or to be innovative in our thinking. As a Human Rights Lawyer, one of my concerns is whether we’re doing enough, in the face of these tech advances, to safeguard the right to freedom of thought and opinion. As a matter of international law, this is one of our fundamental rights. Legally, what goes on in our heads is private and we have the right to make up our own minds, subject to persuasion of course, but not at the mercy of hidden influence.
The aim of the next hour, therefore, is both to shed light on the impact of technology on our minds, and to discuss whether we need governance structures and rules to help define and safeguard our right to freedom of thought. We’re privileged to have four most distinguished panellists with us to discuss freedom of thought and opinion in the digital age. I will briefly introduce them all in a second. They will each then have a few minutes to discuss their perspectives. There’ll then be a panel discussion and then an opportunity for them to discuss your questions.
So, our first panellist this evening is Dr Anna Lembke, Professor in the Department of Psychiatry and Behavioural Sciences at Stanford University School of Medicine. Her speciality is addiction, a speciality in which she has focused on opioids and controlled drugs and which she is now also applying to technology. Some of you may have seen her discussing these issues in the recent Netflix documentary, The Social Dilemma. Our second speaker this evening is Dr James Williams, formerly of Google, and now at the University of Oxford, where he focuses on the philosophy and ethics of technology. His excellent book, Stand Out of Our Light: Freedom and Resistance in the Attention Economy, was one that first unveiled clearly for me how technology is casting a shadow over our light – our ability to think for ourselves.
Third, Tim Kendall, CEO of Moment, an app that helps adults and children to use their phones in healthier ways. Tim has a deep understanding of how big tech works, as he is a former President of Pinterest and, before that, was Director of Monetization at Facebook. Again, you may have seen him in The Social Dilemma documentary. And our fourth panellist, last but by no means least, Professor David Kaye, Clinical Professor of Law at the University of California, Irvine, and former United Nations Special Rapporteur on the Promotion and Protection of the Right to Freedom of Opinion and Expression. More than any other individual, David has sought to alert the human rights community to the risks of technology for our opinions and our speech. His voice led the argument, now largely accepted, for internet platforms to moderate content online by reference to freedom of expression, and he also argues for protection of freedom of opinion online. So, I’m looking forward to a fantastic conversation this evening and, first, Anna, over to you: how are social media and other technology impacting our mental autonomy?
Dr Anna Lembke
First, let me say thank you for inviting me. I’m honoured to be here on this very esteemed panel, and I will be speaking very briefly, from my perspective informed by more than 20 years as a Psychiatrist treating patients, including young people, and also by my knowledge of the neuroscience. So, very briefly, I want to talk about three major concerns that I have regarding the use of this technology, especially by young people.
The first is the way in which these virtual worlds that we’ve created for ourselves are so reinforcing and rewarding that the real world ultimately pales in comparison, and just like we can get addicted to drugs and alcohol, we can in fact get addicted to our participation online. And the reason for that is that anything that leads to a large release of dopamine, our pleasure neurotransmitter, in our brain’s reward pathway is potentially addictive, and we know that engagement with various activities, from social media to pornography to internet videogaming, is incredibly reinforcing, releases lots of dopamine, and ultimately leads to what has been called a dopamine deficit state. And the reason for that is that our brains naturally want to adjust to this huge influx of highly rewarding behaviours or substances, and they do that by down-regulating our own dopamine production and our own dopamine receptors, such that when we’re not engaging in this highly reinforcing activity, we’re actually in a dopamine deficit state, which is like an induced clinical depression. And just like people who become addicted to drugs and alcohol need to use more and more of that substance in more potent forms, over time the same thing happens on the internet: what was initially reinforcing becomes less so with ongoing heavy use, and then we need more potent forms. And that’s how a young person goes from Snapchat, which is a relatively benign social media platform, to something like Tinder, which has become, in many areas of the United States, a place where people essentially hook up for sex.
The second major issue that really concerns me about our engagement with this technology is the narcissistic preoccupation that it engenders. And by that, I mean that what we have now is young people in particular comparing themselves and their accomplishments not just to their siblings or their classmates or their neighbours, like we might have done 30 or 40 years ago, but to the whole world, and they’re comparing themselves to profiles on the internet that, in many instances, are not even real; they’re a kind of what the psychoanalyst Winnicott called “the false self”. And so, what happens is that young people in particular, but not just young people, all of us, feel ‘less than’ because of our participation on the internet, and then there’s this constant striving to make ourselves more than we really are, creating a kind of false self to get more likes, or to get more followers.
The last thing that really concerns me, that I want to touch on today, about what’s happening to young people who are spending a lot of time in this virtual world, is the way that deviant behaviour has become normalised and plain old bad ideas are reinforced and validated. And the reason for that is that we’re no longer hypothesis-testing our frameworks in the real world; what’s happening is that our ideas and our behaviours are validated based on the number of likes that we’re getting, or the number of followers. So, some very bad ideas and some very deviant, unhealthy behaviour are really reinforced in the echo chamber of these anonymous online voices. Another way of thinking about that is that the internet has no moral compass.
And then, finally, the reason that young people are so incredibly vulnerable to this problem is that they have very rapidly developing brains. At two or three years old, we have more neurons than we’re going to have at any later point in our lives, and as people go through adolescence, up until about age 25, those neurons are slowly pruned back, leaving just the neurons that they will use most often in their adult life. So, if we have adolescents who are engaging in this behaviour through most of their teenage years, they are creating a mental scaffolding that is attuned and primed for this kind of engagement online, which also makes them much more vulnerable to the dark side of this technology.
Thank you so much for listening.
Kate Jones
Thank you, Anna, those are fascinating insights, and if we weren’t concerned before, I think we’re all certainly concerned now. And your first point in particular does, to me, beg the question: in other areas, we have a toolbox for tackling these issues, for tackling alcohol abuse risks and gambling abuse risks and so on, so do we need a toolbox here? Perhaps something we can come back to later. But first, straight across to James, over to you.
Dr James Williams
Great, thank you. Hello, everyone. Thanks for having me, it’s great to be speaking with you today. So, I’m going to assume that many of the folks watching are generally familiar with the overarching narrative to all of this, and one way I’m fond of framing the issue broadly is the media ecology narrative: humans are primarily oral in nature, and then we had this enormous revolution of print media, which structured our lives, our societies, our legal systems and so on, in various ways. And now we’re coming out of that into what Walter Ong called ‘secondary orality’, so all of the psychodynamics, the behavioural effects, the societal effects that come with an oral, as opposed to a literate, mode are coming to the fore, now within the digital world, in a very rapid way. And there’s one quote that, to me, captures a lot of what we’re dealing with in this very rapid shift that we’re undergoing.
It was actually a quote from a man named Erastus Wiman, who was the head of the Canadian telegraph system in the late 1800s, and he was talking about the media that he thought were the best. Of course, they were coming out of a period of information scarcity, of latency, and he says, “The media that are best are the ones that are most instant,” and, “There’s no competition against instantaneousness.” I think that’s a great line, and I think it gets at so many of the responses and effects that we identify when it comes to this space. Because it is a fairly large and complex area, all these issues we deal with, and in my mind I piece it out in three levels.
At the first level, there’s Herbert Simon’s point about the sheer information abundance that the digital world has given us, such that we move from an information-selection mode to essentially a pattern-recognition mode. That information abundance makes attention the scarce resource. And so, in my own work, I’ve tried to interpret some of these issues around freedom of thought, freedom of action and so on in this frame of attention, and found it reasonably valuable to do that. Basically, it’s the idea that there’s a weird misfit between our built-in, evolved capacities and proclivities and this environment of sheer instantaneousness, of information abundance, of instant access to the most successful people in the world to compare ourselves against, and so on. So, there’s this information abundance and attention scarcity as the first level.
And then, on top of that, you have the set of economic incentives that structure the platforms, that give reason to their design and their metrics – this notion of the attention economy is one way of talking about it – where the societal phenomena and the behaviours in our own lives that get selected for are those that are most valuable according to whatever those higher-level metrics and reasons are. And that, again, pushes all of these existing systems to a limit and almost makes them a parody of themselves.
And then the third level I think about is the tactics, the practices, the actions that can be taken by designers, by the deployers of these systems, to hook us, to hyper-target and exploit our reward systems – whether it’s called persuasive design, behavioural design, or whatever. These can obviously be used for positive ends but, given the higher incentives of these systems as we receive them, they tend to be used for things that are not necessarily aligned with our aims.
A lot of this stuff we’ve come at, in our own lives, initially from the experience of distraction – the sense of something not being quite right, of discourse in society being a little more heated up, more polarised, than normal. And so, in my own work, what I’ve tried to do is help deepen this narrative and show what’s at stake in it all. Basically, to use Harry Frankfurt’s conception of autonomy and the will, these systems make it hard not just to do what we want to do, but to be the people we want to be and to live by the values that we want to live by. And then, at an even deeper level, to retain the ability to want to be free, as Aldous Huxley wrote.
He said the danger isn’t that people will burn books, it’s that nobody would care – there would be nobody to read the books. So, when you get to that level of wanting what we want to want, of our capacities for reflection, intelligence, reason, all those kinds of things, it’s those procedural pieces where it seems to me that, if we start to slip in terms of the effect of all these systems on those things, then we’re really in trouble. But hopefully, through conversations such as this one, we can stave off those kinds of effects before they happen. So, anyway, I’ll stop there, thank you.
Kate Jones
Yes, thank you very much, James, for setting that out in such an erudite way. I mean, two quick observations: one, I notice that three of you panellists are sitting in front of walls of books, almost as a statement on the importance of slowly gained information, as opposed to that which is often in the palm of our hands.
Dr Anna Lembke
What does that say about me, though, I don’t have any books behind me?
Kate Jones
No comment, Anna, I’m sure you have walls and walls of books. But, secondly, the point that you make about the disconnect between the commercial motivations of the attention economy and the human needs of those who use the technology is an interesting one. It’s one I hope we’re going to come back to and, indeed, one that I hope Tim is going to touch on in his remarks as well. So, Tim, over to you: where is the tech industry in all of this?
Tim Kendall
Well, the way that I think about it is that we have entered an era of big social, and I think it’s distinct from big tech, and I think it’s as pernicious as prior eras of big oil and big tobacco. We’re still in the era of big sugar and big food, and having a reckoning around the degree to which food is making us sick – one in three Americans, 100 million of them, are diabetic or pre-diabetic today. And now we’re in this relatively recently identified era of big social. And what does that era mean? I think it’s this era where we’re realising that there’s this combination of an attention extraction business model paired with an all-knowing artificial intelligence algorithm that really knows us better than we know ourselves, and the natural extension of that combination is already wreaking havoc, in terms of individual harms, but also, as the prior panellists referred to and as the film The Social Dilemma comments on, there are huge societal harms as well.
And one of the points that I’ll make, because I don’t know that it’s broadly understood, is that the division that we feel at a societal level today is actually the result of the algorithm figuring out that sowing division is good business. So, Kate, if you’re on the left, and I’m on the right, and we know each other, the algorithm has figured out that if we can each get walked slowly, imperceptibly, to a more extreme position, we will each, in our relationship and in our interaction, have a reinforcing effect on getting further and further to the extremes, which preys on all these emotional triggers in our brains and engages us even further with the platform. It emboldens us in our positions even further, and so that wedge is a very real thing that the AI has figured out will lead to more time spent. So, we really are pawns in that wedge, unfortunately, and we’re seeing that play out.
I mean, I remember when – obviously, Anna and I both participated in the film – I got a call in the spring, it was actually in June, saying that it was going to come out in September, and I remember thinking, what terrible timing. No-one’s going to want to pay attention to this film because there’s so much going on in the world: we’ve got quarantine happening and now we’ve got Black Lives Matter and racial division like we haven’t seen, certainly in my lifetime. How the hell is anyone going to pay attention to this documentary about something that seems maybe not as serious? But I think what I missed there, and I think the film does a nice job of showing us, is that part of the reason that the pandemic has been so bad in so many ways, and racial division has led to a degree of angst, at least in the United States, that we haven’t seen in my lifetime, is this underlying root cause of big social. And the film is very specific about what it’s done, in terms of dividing us around COVID and really distorting the notion of truth.
And so, where are we now? Well, part of the reason that it’s clear to me that we’re in the era of big social is that you’re starting to see, at an accelerated pace, more and more former and current leaders being pulled up in front of the House and the Senate. Which I think is a good thing: it’s the beginning of this era where, hopefully, there is some reckoning happening, at least the very beginning, and hopefully it gets us to more of a place of shared truth. And I think what is critical is that there is shared truth across us as consumers, the leaders of these companies and the government, about all of our roles, quite frankly, in the place that we see ourselves in now.
I’m simultaneously pessimistic and optimistic, but one of the things that I am optimistic about, and this goes to the most recent testimony a few weeks back with Mark Zuckerberg and Jack Dorsey, is that one of the dimensions on which there is not shared truth across those constituents is whether or not these services are even addictive. And what happened in both of those testimonies was, one, Jack Dorsey admitted that it can be addictive, which was progress, I thought. And then Mark Zuckerberg said that he’s not sure if it’s addictive – not a lot of progress there – but he did say that they certainly don’t intend for it to be addictive. And I think that statement, him going on the record about the underlying intention at Facebook, in terms of what they’re trying to build, will be, as we look back on this years from now, an important moment. Once we’re able to show, with consensus, that this is in fact addictive, and pair that with his comment that he certainly doesn’t want it to be, maybe that combination will be a catalyst for more movement on this topic.
Kate Jones
Right, thank you, Tim. Fascinating comments, and a fascinating point to end with, in terms of where the tech companies are on these issues, which is something I want to explore. It does strike me that we are talking about a couple of intertwined issues at the same time. One of them is about the risk of addiction, or fact of addiction, as Anna, in particular, focused on. A second issue is about what all of this attention does to us, in terms of our views and what we are thinking, as you just really eloquently described with the nudges towards political division. And then perhaps another issue is the commercial side of surveillance capitalism and the way that companies are trying to know more and more about what’s happening inside our heads, for commercial purposes, in order to nudge our buying practices and potentially introduce discrimination, and so on, along the way. So, a number of issues I think we’ve got in play here.
But, last but not least, I would like to turn to David to start to bring in a human rights perspective here. Please.
David Kaye
Sure. Kate, thank you, and thanks for having me, and I really want to thank my co-panellists for some really thought-provoking presentations and a thought-provoking start to this discussion. I guess what I want to suggest is a couple of things. First off, I think what we’ve heard so far, if we think about this in human rights terms, is the way in which the technologies that we’re talking about – big social, so not just technologies, but also companies and social structures – are interfering with some fundamental aspects of human autonomy: interfering with the ability of individuals to develop opinions, to speak, to be heard, to reach out to audiences, and for audiences to reach out and hear from different kinds of speakers. So, I think what’s been described is, as Anna was suggesting, deeply embedded in the interaction of the technology and psychology, but also, as James and Tim were suggesting, very real social manipulation.
And so, what I want to suggest is that one way to think about the next step – where do we go from here? – is to ask, first, what is the nature of the rights that are at stake? And if there are rights at stake, what’s the mechanism for regulating the behaviour of the companies? If we think about the rights that are at stake – the right to freedom of opinion, the right to expression, freedom of thought, freedom of association, freedom from discrimination, Kate, as you just suggested – all of those rights are clearly implicated by the technologies that we’ve been talking about, and by the companies that we’re talking about.
So, I don’t want to belabour that so much. Instead, what I want to ask is, who should be responsible for regulating that behaviour by the companies? And there are at least two or three different ways of thinking about that question, the who-decides-what-the-rules-are kind of question. One is that governments should, right? I’m thinking particularly about the way Anna described this environment as almost a kind of public health issue. There’s clearly a public health element here – and I don’t mean either to minimise or maximise by saying it’s public health – but there’s clearly an element of concern about the way in which the technologies and the companies have an impact on children and adults, their mental health, their ability to learn, to be educated and to educate others, right?
So, in that sense, I think government clearly has a role to play. I mean, government regulates all manner of impacts on health, on the information environment, on the economy, and one thing that’s pretty clear is that governments have largely not been involved up ‘til now. Tim was describing some of these hearings on Capitol Hill, where every so often you get this parade of tech leaders coming before Congress. And I say parade in part because it’s a bit of a circus in the United States; we should be honest about it. The questions are highly politicised, and the responses have been, like, murder-boarded to death, basically. And you could compare that to what’s going on in Europe, where, actually, in Brussels just today, one of the Commissioners of the European Commission introduced a new approach, a new communication on democracy and technology, and that’s, I think, a much more thoughtful and, honestly, professional and democratic approach that we’re seeing happen in Europe. So, I think one possibility is that we see government regulation of this space, and we could talk about what that might mean.
The other possibility is that the companies regulate themselves. Just yesterday, Facebook – in a much narrower way than we’re talking about – introduced the first, I think, five or six cases of its Facebook Oversight Board, which is focused specifically on content, not the kind of broad issues that we’re talking about here. But one possibility is that the companies, exercising their responsibilities to ensure that they’re not having a detrimental impact on human rights, actually do the due diligence, in a transparent way, to ensure that the products they’re developing don’t have a negative impact on rights or on the public. And I’ll close off here: my view is that technology is not going to be the solution. It will be a part of the solution, but we’re at a stage where both public regulation and private, maybe co-regulation – that is, public/private mechanisms of regulation – are going to have to be part of the conversation, so that we address the really problematic issues that my co-panellists have already identified. So, I’ll stop there.
Kate Jones
Great, thank you very much, David. And just on the European document that you mentioned, the European Democracy Action Plan that has come out today, it’s interesting to see it targeting manipulative techniques in the political context, which perhaps is a step towards what we are talking about, and, indeed, its call for increased due diligence and risk assessments, exactly as you were saying. So, perhaps I could throw David’s question out to the rest of you: who ought to be regulating in this space? Should it be governments, should it be the companies, should we be thinking about business models, or how should we be approaching this? Open to any of you, if you would like to comment on that question.
Dr Anna Lembke
Well, I absolutely agree that the companies and the government need to step in, and I love the analogy that multiple panellists have used to things like big food, big pharma and big tobacco. It very much is an appropriate analogy: these are ginormous public health problems, and we can’t just expect individuals themselves to be responsible for solving them. We do need top-down regulations, just like we banned certain kinds of tobacco advertising aimed at adolescents. It’s that same sort of thing. Although I would be really curious to hear from my co-panellists about the practical feasibility of some of that, given the sheer volume of YouTube posts that go up daily, for example. And then I also want to emphasise that, although I’m a big proponent of these kinds of top-down regulations, I do think that, at a smaller community level, the schools, for example, have really reneged on their responsibilities vis-à-vis technology, particularly the public schools where I send my children, where the computer has effectively become a babysitter and we’ve abdicated our responsibility for teaching young people. And that’s really, really upsetting for me, because I try to create an environment at home which is essentially undermined by the schools. I hate to put it that bluntly, but that’s how it is.
I also think that individuals themselves are responsible and must take responsibility for the way that they consume media. We can’t just say, “It’s their fault, and if they made it less addictive, I wouldn’t be addicted to it.” We are also responsible, and we collectively, as a society, have to come up with new etiquettes and social norms around these behaviours, which we then teach to our children, holding each other accountable through culture rather than through regulations.
Kate Jones
Tim or James, would you like to come in?
Tim Kendall
Yeah, I mean, I think regulation is essential, but I think there is an open question of how we get there. And look, there are some optimistic assumptions in the path that I’ll describe. But we have been on this journey of getting, for example, auto manufacturers to slowly segue from making gas-guzzling cars to electric cars, and that segue from fossil fuels to green technology and clean energy is happening by virtue of the government providing incentives and penalties, along a timeline, to eventually get to a point of zero emissions. And I think that big social is going to have to go through the same metamorphosis if we’re going to make headway against these individual harms and societal harms that the panellists have talked about. I think that would be most effective if it’s co-created by the leaders of the companies, some representatives of consumers, and governments all over the world. That’s probably a little bit Pollyannaish, but I do think top-down only will likely be slower, and I think the economic destruction that would happen from top-down only may not be tenable.
You know, these companies have created tremendous economic value – across the big ones we’re talking about, seven or eight trillion dollars in value – that we can’t write off. We have to figure out a way to sustain that value, but segue the companies from this extractive model to a different model. So, the question is, can government and the companies and the right consumer leaders figure out what the solar model is for big social? What is clean technology for social?
The most obvious is that people pay for it, and people do pay for a lot of digital services, so there are some challenges with that, obviously, but I think that is one way we could get there – combined with, you could imagine, governments providing fairly large tax incentives for these companies to move. I mean, that’s what’s so hard about self-regulation for these companies: you’re talking about tremendous revenue streams whose growth, sadly, is predicated on preying on human weakness. If Facebook were, for instance – this is hypothetical – to just stomp out all conspiracy theories on Facebook, that would have a cost to it, and they could absolutely do it. But they’re in this tricky jam, in terms of the economic value that they’ve created: this flywheel of people coming to work at Facebook to get paid by virtue of the value of the company, which is predicated on the revenue, and the revenue is predicated on the ascendancy of this AI. And so, they really are trapped, in terms of being able to back out of this in a way that doesn’t destroy tremendous economic value. So, that’s where I get to this co-created model, whereby governments work with the companies to figure out the segue path in a way that mitigates true economic destruction, but also gets us away from the societal and individual harms that the extraction-based business model is creating.
Kate Jones
Right, thank you. Can I just remind our audience that if you would like to ask a question, please type it into the chat, and I may call on you to turn on your mic in order to repeat the question to our panellists. First, just one brief comment from me, which is that that makes me feel as though we are standing a little on the brink of a precipice. Because what none of you has done so far is to outline what kind of changes we would actually need to see, and even if there are economic incentives for them, it strikes me, particularly from what Anna, James and Tim have said, that these are potentially quite radical changes. But this is perhaps something that you may wish to come back to.
But I will take a question from the chat. Aidan Cross, could I ask you to unmute yourself and ask your question, please [pause]? Do we have Aidan Cross [pause]? Okay, I will ask the question, which was, “Who should be most responsible for ensuring that concerning activities taking place online are addressed, governments or companies?” I think we have largely answered that question just now, in hearing about a multi-stakeholder model.
Could I turn then, to Trisha de Borchgrave for your question?
Trisha de Borchgrave
Hello, can you hear me?
Kate Jones
Yes, please go ahead.
Trisha de Borchgrave
No – which question? I’m so sorry, I’ve put two questions in there, but maybe one was more of a statement. My first question, I suppose, was: I think the good thing is that most of us now understand the precariousness of our lives lived through these technologies. But is there evidence – do the speakers have evidence – that we’re actually adapting almost positively to digital platforms? What I mean by that is that I have seen a huge change, in the last ten to 15 years, in how I use it as a tool, and my kids, who I was very concerned about when they were young, are now in their 20s and use digital platforms in a very different way. For them, it is not about self-validation and it’s not about their lifestyles, it’s very much a tool that they use for work. They acknowledge that the likes sometimes are important, but not for who they are. It’s transactional, for work purposes, like you would use any other tool. So, is there evidence that we are adapting in a positive way, despite the fact that we do need a huge amount of regulation, etc.? Thank you.
Kate Jones
Anna, perhaps?
Dr Anna Lembke
So, yeah, I really appreciate the question, and I think it’s so important to validate all of the wonderful, positive things that are possible because of this technology, including something like the forum that we’re having right now. In terms of my evidence – because I think the question really was geared towards what the evidence is – I see a select segment of the population that’s especially vulnerable to the negative consequences. In my professional life, I see the people who are really bearing the brunt of all the really bad things about technology, and these are people who are often vulnerable in other ways. But I can tell you that, as a parent, you’ve given me a lot of hope, because I’m really hopeful that my teenagers, when they come out the other side into young adulthood like yours, will have figured out how to use these devices more in moderation.
I guess I would also throw out to my co-panellists: I do sometimes wonder if there is a way in which we will evolve as a species to be cybernetically enhanced, in a way that I may perceive as not good, but which in fact will be just totally normal and positive – a way of being interconnected positively with all the people that we care about, 24/7, without these kinds of rifts that we experience now when we leave home and go to work and then leave work and go home. So, I’m certainly hopeful that there will be all kinds of ways in which, especially, younger people will adapt positively, but I have no evidence.
Kate Jones
Right, yeah, James, would you like to comment on that point?
James Williams
Yeah, sure. I agree that it’s really important to keep in mind the positive things that technologies do. I enjoy playing computer games from time to time, and there’s research lately that I was reading about how different types of games can enhance our wellbeing, and this kind of thing. So, certainly, I think it’s important to keep that in mind.
I guess one thing I would do is press a little bit on the assumption of the question – the idea that if we adapt to something, that’s, in the general sense, a positive thing. Humans are very good at adapting to bad things and making the best of a bad situation. And part of the challenge with this kind of discussion is that we don’t really know what the counterfactuals would be. So, if here is the technology maximally designed in alignment with human values and autonomy, and here is the worst case, and here is where we are – okay, well, it could have been worse, but it could also have been a lot better, in a lot of ways.
So, I do think that, in part because of the very attention economy incentives and design incentives that we’re talking about, sometimes we can skew toward the negative mode, the moralism, this kind of thing. That’s not to say there’s nothing to moralise about, or nothing to point out. But I do think it’s challenging, because it’s easy to say, “Well, look, we thought this was bad in the past, we adapted to it, and now people grow up thinking that it’s fine” – but if they don’t know anything else, then of course they’re going to think that. So, I think sometimes it’s hard to find the right benchmark for comparison in situations like this. But I do think it is important to keep in mind the positive benefits of all this, and then to say specifically why something is positive: is it because it’s aligned with autonomy in X way?
One of the things I was going to say earlier, as a meta-point about a lot of this, is that I think we’re at a point now where linguistic precision is very important – really drawing the lines clearly between why, in this case, something is a positive or a negative design or outcome, in view of these higher interests, like human rights and values. So, I think the more precision of language we can bring to these things, the better it will serve us, regardless of where the conversations take us.
Kate Jones
Absolutely, yeah. Thank you, Trisha, for your question. There’s an excellent question here from William Crawley. William, could I invite you to ask your question?
William Crawley
Yeah, and my question really is – can you hear me?
Kate Jones
We can just about hear you.
William Crawley
Can you hear me now?
Kate Jones
Again, just about. Please try and if necessary, I’ll repeat your question.
William Crawley
Okay, thank you. My question really is about whether anonymity is a cloak for irresponsibility. People will often say things, or write things, if they think that nobody is going to know that they’re the ones expressing those views. Now, the traditional media, the print media and broadcast media, had safeguards against irresponsible opinions, in that people who wrote letters to them at least had to provide their names and addresses, at least to the Editor, if not for publication. So, people could know who they were. That sort of safeguard is totally absent, it seems, from social media, which parades itself under the most absurd pseudonyms.
Kate Jones
Yes, thank you, William. David, would you like to come in on that point?
David Kaye
Sure. That’s a great question, William. It’s clear that anonymity is one of the ways in which bad actors can hide, behind either a pseudonym or just a fake account, in order to harass and abuse others, to share disinformation, whatever it might be. But the problem is that, historically, anonymity has also been a really important tool for a lot of reasons. It’s been a tool for human rights activists around the world to converse and to share ideas in the face of authoritarian regimes. It’s been a real tool for, for example, ethnic minorities, or those who are attacked for their sexual or gender identity, to maintain some distance from the pressures that they face in their community. So, the anonymity issue cuts both ways, and I think the question gets at a broader point, which, in a way, your question rests on, which is: what do we expect the companies and also governments to be doing about harmful content – content that actually causes harm to individuals, or democratic harms to society?
So, my own view is that anonymity itself is not the answer; it’s something that we need to protect, for some pretty core human rights reasons. But on how we address that kind of content – and we should be clear that we’re not talking just about English-language content, or content that concerns Americans or Europeans; close to 90% of Facebook’s userbase is outside the United States, India being its largest market, and much of the conversation that we’re having in the United States and in Europe rather forgets about that mass of users who are out there – the question is really: what should the rules be that the companies adopt in order to deal with that problematic content? What should the due process look like, in order to ensure that when people have complaints, those are answered? What kind of transparency should we have, so that we understand the rules that are being made, or the AI that’s being developed to amplify voices? I think all of those things are part of the discussion, and anonymity is just a small part of it.
Kate Jones
Yeah, thank you. There’s another great question here from Susie Billings. Susie, would you like to come in?
Susie Billings
Hi, yes, thank you very much. My main question was around how we actually regulate. We talk, and I think we all agree, that there’s a great deal of common ground here. But the few times that I’ve managed to watch the hearings in the United States where, you know, Mark Zuckerberg sat there, it’s obvious that the vast majority of our elected officials are clueless as to what these platforms are, how they work and how they can be misused. And with all regulation, there needs to be a balance between stifling things and actually being helpful. So, I was curious about who the actual organisations are out there, currently, that are lobbying for change and able to clearly articulate what would be actually useful regulatory steps?
Kate Jones
Right, really good question. Tim, perhaps, would you like to take that one?
Tim Kendall
I can try. I mean, I think that probably the best example of regulation that we’ve seen so far applied to big social has been around privacy, and the model there was GDPR, which originated out of the European Union. GDPR is this compliance rubric that basically forces companies to give people the right to opt in before their data can be used in certain ways to target them, both with advertising and with other sorts of algorithms. And it’s a great piece of regulation, certainly a great start on the privacy aspect of regulation. And so, it sounds maybe a little anti-American, or a little defeatist, to say this, but I think we’re going to have to draft off of that. I have a similar reaction to seeing the US Government in front of these leaders – in fact, I testified in front of the House in September and wasn’t overwhelmed with confidence after that panel. So, as David said, with regard to some interesting policy and regulation that’s come out of Brussels, I think that, at least in the United States, we’re going to have to look at what other, more functional governments are coming up with, in terms of the models to follow and draft off of – certainly that’s what the United States has done as it relates to privacy.
Kate Jones
Hmmm, yeah, but I think there’s also a really practical issue here: on privacy, for example, there were already lots of privacy NGOs who took up the struggle on privacy, just as there were already fantastic freedom of expression NGOs that took up the baton on freedom of expression. But on freedom of thought and opinion, we don’t really have an established civil society because, until recently, we didn’t need to think very much about those rights, because we assumed that they were absolute within our heads. So, there is a practical issue…
Tim Kendall
I think that’s a good point.
Kate Jones
Hmmm, yeah. David, would you like to comment on that, given your work as UN Special Rapporteur?
David Kaye
Yeah, so, first, to respond to Susie’s question, I think you’re exactly right – and also to Tim’s point that he didn’t come away from a hearing on the Hill with all that much confidence: confidence that members of Congress, the Politicians, are addressing this in a thoughtful way that considers the needs of our democracies. I think we’re starting to see that in Europe, and so I would pay attention to the debate that is going to really kick into gear after the New Year over the Digital Services Act in Europe. Some of the organisations that I think are really worth paying attention to: of course Chatham House, and I’ve got to say Chatham House is doing great work in this space, but also Access Now, the Center for Democracy and Technology, the Association for Progressive Communications, Article 19 – that’s just a small number of the organisations doing really excellent work in this space. And I would say that the overlap, or the intersection, between privacy and expression, or privacy and thought, is really quite important here. This goes not only to the kinds of issues that we’re talking about with respect to the engagement economy, basically, but also to digital security, right? There will be pressure – there’s always pressure – from law enforcement and from intelligence services on digital security, and we need to ensure, as Kate was saying – it used to be that we just thought of opinion or thought as being in your head. But now our communications, like the old diary that we might have locked up or put under our pillow – I didn’t do that, but I know people do – now that’s what you’re writing and uploading to the Cloud, it’s on your hard drive, and governments shouldn’t have access to that any more than they should have access to the diary that you keep under your mattress. So, I think there’s a lot happening in this space.
In fact, as Anna said earlier, education is a part of this; it’s just that there are so many different actors, and there’s a sense in which, over the last ten years – I mean, these are basically new industries – law and even culture haven’t fully caught up with how we deal with them.
Kate Jones
Yeah, thank you, David. Unfortunately, our time is up. I would leave you with the analogy that, as somebody said to me earlier today, it’s as if Santa Claus is now real: he knows where we live, he knows what we want, he knows how to get it to us, and his name is Jeff Bezos, or any of the other heads of the tech companies, right? So, lots of food for thought there. I think we all agree that we have a grave situation to confront. We all agree that steps are needed. And we seem to agree that we need a multi-stakeholder approach and that this is going to be challenging.
Thank you so much to our four fantastic panellists this evening, Tim Kendall from Moment, David Kaye, Professor at the University of California, Anna Lembke at Stanford, and Dr James Williams at Oxford. Thank you so much for being with us. I hope you have all enjoyed this discussion, and good evening. Goodbye.