Rachel Briggs OBE
Good evening and welcome to Chatham House. My name’s Rachel Briggs. I’m an Associate Fellow with the International Security Programme, and we greatly appreciate you joining us tonight for what is an important and fairly timely event.
Before I launch into the substance of our event today, let me just give you some housekeeping announcements, which I will repeat during the course of the event. The first thing to say is that this event is on the record, so we are not observing the very famous Chatham House Rule. This will be on the record. If you would like to tweet about what is said and what you think about what is said, please use the hashtag #CHEvents, CH as in Chatham House. If you would like to ask a question during the question section, the second half of the event, please feel free to add it to the Q&A function. The chat is not the place for those; please use the Q&A function to put your questions in whenever you feel like it, and we will either come live to you to ask those questions or cover them when we get to that point. So, thank you very much for listening to the important housekeeping.
So, we have got three great speakers joining us today. You’ll have seen their bios when you signed up, and just as you join now. So, I won’t give them long introductions, but needless to say, they’re incredibly well placed and qualified to guide us through today’s topic. And just by way of introduction, let me say a few words. The first and most obvious point is that we know for sure that the internet, and all of the associated tools that go with it, is most certainly a tool that terrorists are using. Whether that is to spread ideas, to exchange information, to recruit followers, or indeed, to plan attacks, televise attacks, even, and glorify those attacks. So, we know that that is unquestionably a space that is being used by terrorists in many different ways.
At the same time, we have also seen, over the last five to six years, I would say, quite a rapid growth in the sophistication of the tools that we have to help us see what’s happening online, map what’s happening online, understand what’s happening online and, indeed, respond to what’s happening in the online space. But, and here’s the but, we also know that online there’s often quite some distance between the talk and the walk. In other words, all sorts of people say things online that they wouldn’t say in real life, or that don’t necessarily translate into what they would do in real life. So, there’s often somewhat of a distance between the talk and the walk, and anyone who’s found themselves going down a Twitter rabbit hole into an argument they weren’t anticipating having, on any given topic, on any given day, will know exactly what I’m talking about.
And so, what we’re here to ask today is, are we casting the net too wide? Are we taking counterterrorism beyond where it should be? Our counterterrorism legislation and the powers that it brings are, after all, extraordinary powers, and I guess one of the questions we want to ask is, are we applying extraordinary powers to perhaps ordinary behaviours and ordinary crimes? How do we know what the link is between the talk and the walk? What evidence do we have about when chatter, if you want to call it that, leads into real-life action and, indeed, into violence? Is that chatter a good enough indication in and of itself? What else do we need to see to be worried? And after all of that, what are the implications of what we’re seeing, in terms of the efforts that we can be taking online, government policy and, indeed, law and policing? So, there’s more than enough for us to deal with today, in just under an hour.
So, I’m going to turn, firstly, to Jonathan, who is the Independent Reviewer of Terrorism Legislation. So, Jonathan, well within your remit to have a view on CT powers and legislation. So, I’ll turn to you first to help set out your position on this important issue.
Jonathan Hall QC
Thank you, Rachel. The easiest way to define what I mean by “keyboard warrior” is just to look at some sentencing remarks from a Judge in June of this year. He sentenced a 19-year-old man called Mason Yates and the Judge said, “What I’ve seen of you tends to show isolation, an inability or an unwillingness to engage with others and form relationships. And, as I have found in dealing with a number of these cases, these are a common feature among young men who, in their own homes, communicate with others of like mind to express their poisonous ideology and enter into very dangerous waters on the internet and via social media.”
Now, it’s true that outside Northern Ireland, most completed terrorist attacks in the UK have been Islamist and, judging by past events, that’s likely to remain the case for some time. But there is a chill wind blowing from across the Atlantic and there is particular fear that keyboard warriors, especially those on the extreme right, could move to violence, and we’ve seen waves of shootings in the US inspired by the notorious Christchurch, New Zealand attack in 2019. So, it’s understandable that in the UK, Counter Terrorism Police are in the business of investigating and arresting people for what they say and do online. And in the last two years, in England and Wales, well over half of terrorism charges have concerned possession or dissemination of information, and virtually all of that would’ve been from the internet. And there are references in a paper, which I will release later this evening, which supports everything I’m saying, but I’m not going to read everything that I’ve written.
So, at the same time, the UK has strong gun control laws, which means that the sorts of attacks we’ve seen in the US are far less likely in the UK, and in the recent Intelligence and Security Committee report, the Director General of MI5 is quoted as saying that there is “low risk of online hatred translating into real world violence.” The result is, with our wide laws and people who aren’t moving to violence, the UK tends to net some fairly small fish, including record numbers of children for the last two years, and it would be a stretch, I think, to describe the sorts of people who are being arrested for online terrorism as standing alongside Robespierre, Irish Dynamiters, Russian anarchists, hostage takers, mujahideen, you get the point.
The fundamental difficulty is that the internet has lowered the entry barrier to terrorist offending. In the old days, if you wanted to get hold of a bomb manual, for example, you had to be a member of a clandestine organisation and meet on a cold park bench so someone could pass it to you secretly, in order to get access to knowhow, training, weaponry. And as part of an organisation, you could aspire to really threaten the security of the population or the state itself, and for that reason, counterterrorism has always been considered an aspect of national security.
On the other hand, keyboard warriors are loners. You could, as some Researchers do, aggregate them together and view them as a movement or a network, or a brand, or a wave. All of those words have been used, but in truth, they don’t form part of a quasi-military organisation or militia that is ready to overthrow the state or terrorise the population.
Secondly, the online world is inimical to the promotion of the sorts of sustained causes that could lead to a terrorist campaign. It’s stuffed with shifting hypes and contradictory positions and there’s little wonder, perhaps, that in the absence of coherent ideologies online, people tend to be inspired by individuals. They’re inspired by Brenton Tarrant, who carried out the Christchurch attack, by what he wore, even by the fact that he had a little air freshener tree in his car window.
All of this gives rise to problems of legitimacy. If terrorism laws are used too readily on the basis of online chatter, then among other things, juries may be unwilling to convict people who they don’t regard as real terrorists. And there are recent cases where there is good reason to believe that that is exactly what happened.
The keyboard warrior is nothing without the internet. If you remove this channel of communication, he is nothing. The online world is not simply one way of pursuing ideological or religious change, such that if you take the internet away, they’re going to find another way of carrying out their campaign. Rather, it’s the sole means by which the keyboard warrior approaches being a terrorist.
And I want to conclude just by referring to the position of children. First of all, whatever ideology’s in play, there’s less reason to suppose that a child’s online communications demonstrate a long-term commitment to a cause. Adolescents are, almost by definition, in a state of transition.
Secondly, the excitement of impressing peers online and the disinhibiting nature of anonymity, means that fewer secure inferences can be drawn about their intentions.
Thirdly, there’s a special obligation on the criminal justice system, and this is a matter of domestic and international law and good sense, to have regard to the long-term prospects of children. And one might add that society at large owes a particular responsibility for having offered up children as experimental subjects in the great internet experiment of the last 20 years, by exposing them, for example, to algorithms.
And fourthly, the truth is that children who are charged with online terrorism offences are routinely granted bail and Judges who sentence them don’t send them to prison. That strongly suggests that in many, not all, but in many cases, the authorities do not consider them a threat, once their use of the internet has been disrupted.
So, where does this go in terms of policy considerations? Gathering evidence, with a view to prosecution, ought to be less of a priority. The instinct of the authorities ought to be to intervene proactively before the child gets in too deep. Disruption and the early involvement of parents, schools, health staff and local authorities to prevent recurrence may be far more useful to society at large than criminal proceedings and then the possibility of long-term management.
So, I will conclude by suggesting an alternative model, whereby the presumption, for both investigative and prosecutorial bodies, would be to treat online activity by children as a matter for immediate disruption, rather than investigation with a view to prosecution, except in four cases. Firstly, where the child is a member of, or is aligned to, a proscribed organisation. Secondly, where their conduct concerns a matter of national security, such as an attempt to obtain weapons of mass destruction. Thirdly, where there’s intelligence that the child, or an associate of theirs, has taken real world steps towards violence. And fourthly and finally, if the child has reverted after a previous attempt to disrupt. Thank you.
Rachel Briggs OBE
That’s great, thank you, Jonathan, and it’s really great to have some clear potential policy prescriptions there. Could I maybe just ask you, alongside what you said, does this, in your view, merit some kind of legal change, or do you think policy and guidelines alone can get to the position that you’re advocating there?
Jonathan Hall QC
I think so. I mean, the UK has got a broad definition of terrorism. That’s often been criticised, but the US are now thinking that it’s quite a good thing, because they’ve found it harder to apply their law to what they call domestic violent extremists. I think that it can be done at the level of policy. The CPS have got policy documents, which go some of the way, and I think what is now required is a cultural shift, backed up by policy, for the Police and MI5.
Rachel Briggs OBE
That’s great, thank you, and we might come back, in questions, to the response side of things and the extent to which some of these issues are challenging for Police and law enforcement, and making that judgment call about when to escalate and when not to escalate, which I imagine, when you’re the person making that decision, is a very, very tough call to make in practice. But thank you.
I’m now going to turn to Celia, who is one of the senior members of staff at Moonshot, and I would say, really, the organisation that has pioneered work in the online space for many, many years now. And she will say more about their work, but just to be very clear, they’re not just focusing on terrorism, they’re looking at online harms across the piece. And so, Celia, I wonder if you can talk us through what you’re seeing online and what we can do and maybe respond directly to Jonathan’s point, during your thoughts about this idea of early intervention and how that, maybe, chimes or doesn’t with the great work that you’re doing.
Celia Davies
Thank you so much, Rachel, and yes, I can certainly speak to the importance of early intervention and, kind of, disruption online in what I’ll say. So, I wanted to start, I suppose, with a reflection on that first question, “What is a terrorist in the digital age?” And actually, less from the definitional perspective and more just to make what, in many ways, is quite an obvious point about the relationship between the online and the offline. Because I think there’s often a tendency to think of them as quite distinct from each other, rather than these, sort of, multiple connecting and overlapping spaces that each and every one of us encounter hundreds of times during any given day.
Online lives are real lives and the ideas to which people are exposed online aren’t necessarily any less real. And when we talk about the real world, I think we do risk underestimating the potential role and power of digital interactions, and particularly when it comes to things like raising the temperature of discussions and emotions. We see that every day in our monitoring of, say, violent far right spaces online, with overt and tacit calls to violence being increasingly normalised and that, kind of, creation of a sense of urgency around perceived existential threats related to othering of, kind of – of different groups.
And I think to Jonathan’s point around risk and how we look at that, yeah, it’s absolutely critical. I certainly don’t, and we certainly don’t, pretend to have some kind of sixth sense for when online threats have the potential to precipitate offline violence, but what we do spend a huge amount of time doing is analysing and coding risks. So, for example, you know, we have hundreds of thousands of keywords and terms that reflect or represent violent extremist sentiment or content across that spectrum, and we code those from the merely curious right up to imminent, specific threats of violence, where we would need to escalate to law enforcement. And taking that approach across the spectrum means that we’re looking both at prevention and intervention, so, looking to provide anonymised support and early interventions for individual people who are actively looking for violent extremist content online, to disrupt that journey.
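Moonshot’s actual lexicon and thresholds are proprietary, so what follows is only a minimal sketch, in Python, of the kind of coding-and-routing scheme Celia describes here; the tier names, example terms and responses are all hypothetical.

```python
from enum import IntEnum

class RiskTier(IntEnum):
    """Hypothetical risk tiers, from curiosity up to an imminent threat."""
    CURIOUS = 1           # e.g., searching for information out of curiosity
    ENGAGED = 2           # repeated engagement with extremist content
    ADVOCATING = 3        # endorsing or promoting violence
    IMMINENT_THREAT = 4   # specific, imminent threat of violence

# Illustrative stand-in for a lexicon that, in practice, would hold
# hundreds of thousands of analyst-coded terms.
LEXICON = {
    "what is accelerationism": RiskTier.CURIOUS,
    "join our militia": RiskTier.ADVOCATING,
}

def code_query(query: str) -> RiskTier | None:
    """Return the highest risk tier matched by any coded term in the query."""
    matched = [tier for term, tier in LEXICON.items() if term in query.lower()]
    return max(matched) if matched else None

def route(query: str) -> str:
    """Map a coded query to a response across the prevention/intervention
    spectrum, escalating only at the top of the scale."""
    tier = code_query(query)
    if tier is None:
        return "no action"
    if tier == RiskTier.IMMINENT_THREAT:
        return "escalate to law enforcement"
    if tier == RiskTier.ADVOCATING:
        return "offer an anonymised one-to-one intervention"
    return "serve prevention or counter-narrative content"
```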
It also means that we can see some of the broader trends and patterns across different narratives and across different geographies. January 6th is quite a good example, actually. So, on January 5th we started to see increased online engagement with terms related to violent armed groups, incitements to political violence and conspiracy theories, which meant that we were able to put more resources into the safeguarding destinations for election safety campaigns, broadly with the aim of de-escalating from violence. That included things like online de-escalation content, and also the option to speak with a trained Crisis Worker anonymously.
And so, going to that next point, you know, “How can social media platforms and governments combat violent extremism online?” First of all, I want to add some people into that mix: civil society, communities, think tanks like RUSI, practitioners like Jonathan, public safety companies like us. I think it really has to be a multistakeholder, multidisciplinary approach, and there’s often a tension between our different roles and responsibilities and our different priorities. I actually think that’s something really positive. It helps us check and balance our different efforts. We’ve seen how CVE can be excessively securitised and used, essentially, as a weapon to silence minorities or stigmatise particular groups, and I think having different people basically arguing with each other is one way to provide that sense of check and balance.
What I would say on the platform side is I’m actually really heartened by recent efforts, which have been accelerated by the pandemic, that tech platforms and social media companies have introduced from the safety-by-design perspective, to help moderate and remove violent extremist content and add friction into potentially harmful user journeys. Facebook, for example, redirects users looking for violent far-right content to an exit programme. It’s a small pilot in a small number of countries. TikTok has introduced a warning popup if you search for harmful content.
Those are really important efforts, but they’re not, on their own, enough. First of all, the vast majority of violent extremist content may not be eligible for takedowns, according to either the terms of service of the tech companies or, again, reflecting on Jonathan’s point, the actual legal definitions, and it doesn’t necessarily mean that it should be taken down. Furthermore, groups, you know, they walk that line, and they do that very well. There’s always going to be a tension with free expression.
Second of all, and I think maybe even more importantly, in many ways, if and when that content is removed, that post, video or account, sure, might be deleted, but the person who posted it still exists and they might repost elsewhere. They might move to another platform. They might be deeply vulnerable or pose a threat to the community around them or to themselves.
And finally, there are still, you know, many, many spaces on the internet which just aren’t subject to takedowns and where this content can thrive, and I think, for those reasons, we do really need solutions that can be co-created with governments, with communities, with industry, that can address the problem at the different stages, from broader prevention right through to one-to-one interventions.
And going back to that first point about online versus offline, I think a lot of the solutions that we all, as a community, are looking at are drawing very heavily on what works best offline. One of the key findings there is around the efficacy of behavioural health methods in disengagement, so connecting people with services that can facilitate change. That might be conversations with a Counsellor, a hotline to call, or connecting them with very tailored resources to digest in their own time. And, essentially, the idea is to interrupt a potentially harmful emotional state and facilitate questions to support that person to explore the content they might have been looking at and where it’s led them. Enabling a kind of gentle challenge by trained professionals who understand the violent extremism landscape and can effectively identify and escalate risk. Increasingly, evidence is showing that those interventions need not be, and perhaps shouldn’t be, ideologically focused or specific, but should actually look at the underlying vulnerabilities and grievances.
I think I’m at time, so I will finish there, and we can, kind of, pick up anything else in the Q&A, but thank you.
Rachel Briggs OBE
Great, thank you. Celia, you said you don’t have a magic bullet to understand when what you’re seeing is tipping over into the dangerous, or potentially dangerous. But can you give us an insight into what are some of the flags that, when raised, start to make the professional hairs on the back of your neck stand up? I know that you have very clear frameworks around that. What would be some of the issues that would make you concerned that perhaps it isn’t a “keyboard warrior,” to use Jonathan’s term, that you’re looking at, and that it might be something that needs one of your team to pick up the phone to the Police, for example?
Celia Davies
Yeah, absolutely, and it typically reflects, you know, what most risk assessments say. It’s around imminence and specificity, and, you know, it can be very hard with an anonymous source to look at things like credibility. We’re certainly not law enforcement, so that’s not our job. It’s really looking at the content of the threat. Are they naming a specific person? Is there a call to violence and, yeah, is there an element of imminence? And some combination of those would lead us to escalate. As you say, we have, essentially, risk escalation processes: “Here’s the threshold for calling the Police, here’s the threshold for reporting it to the platform,” for example, if it’s something like hate speech.
That’s certainly how we approach it from the individual threat level, and then, I guess, the other aspect of your question is, in which cases might we have a sense that something is going on? And that’s a volume question, really. In some of the spaces that we’re monitoring, is there a sudden uptick in conversation? Is there an influx of new members? That sort of thing. On a really basic level, it’s evidence of change in a space that creates harm.
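As a rough illustration of that volume signal, here is a minimal sketch that flags a monitored space when daily activity jumps well above its recent baseline. The window and threshold are illustrative assumptions, not Moonshot’s actual method.

```python
from collections import deque
from statistics import mean, stdev

class SpaceMonitor:
    """Flag anomalous upticks in activity (posts, new members) in one
    monitored online space, relative to a rolling baseline."""

    def __init__(self, window: int = 28, z_threshold: float = 3.0):
        self.history: deque[int] = deque(maxlen=window)  # recent daily counts
        self.z_threshold = z_threshold

    def observe(self, todays_count: int) -> bool:
        """Record today's count; return True if it is an anomalous uptick."""
        anomalous = False
        if len(self.history) >= 7:  # wait until we have some baseline
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and (todays_count - mu) / sigma > self.z_threshold:
                anomalous = True
        self.history.append(todays_count)
        return anomalous

# Usage: after weeks of ~200 posts/day, monitor.observe(5000) would return
# True, and the space could be queued for analyst review.
```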
Rachel Briggs OBE
That’s great, and we’re all learning in this space, so thank you for sharing that. I’m now going to turn to Emily. Before I do that, I just want to remind everybody participating at home, or in the office, please do pop your questions into the Q&A section. We always leave loads of time for questions and discussion and actually, I find that the members’ questions are so interesting and such a lively and important part of making these events successful. So, please, please, questions in the Q&A and I will, I promise, do my best to get to everybody.
But before we do that, I will now turn to Emily, who is with RUSI, an organisation which I’m sure is known to many of you for its excellent work over goodness knows how many decades, advising governments and countless others around the world on these issues. Emily is particularly active in the area of conflict and terrorism and heads up RUSI’s work in that area. So, I’ll hand over to you to maybe talk a little bit more about the “so what next” question: what does all of this mean for governments, etc.?
Emily Winterbotham
Thanks, Rachel, and, you know, thank you to Chatham House for inviting me to speak, and some really interesting reflections so far. I mean, I think, just to start out, we all know that terrorism is notoriously difficult to define, particularly at the international level, but it does appear that it’s becoming a little bit more difficult at the national level, too. And you were referring to problems for the Police and Security Services earlier and, you know, in interactions with the Police, they are saying that they are struggling a bit.
But, you know, broadly speaking, when we’ve thought about terrorism historically, we’ve been thinking about the pursuit, or the facilitation, of violence on the basis of political, religious, racial or ideological goals, and it’s perhaps that last part which is causing the most problems at the moment. And I was thinking about this in preparation for the event, and it may be something that Jonathan has an answer to, but what did ideology mean to the drafters of the UK’s terrorism legislation in the year 2000? When they were thinking about ideology, you know, had they envisaged this vision of ideology encompassing the grievances and fears driven by conspiracy theories that we’re seeing on leaderless internet platforms?
It is much easier to list ISIS and Al-Qaeda related groups as terrorist organisations. They have a worldview, they have a clear ideology, and they have a structure, and yes, I think it is important to say that there is some inherent racism within some of this, particularly linking back to the War on Terror. But the reality is, when we look at some of the resurgent groups, many on the far-right side, not all, but many movements lack some of those features. And those features become even more absent when we look at the online space, where there are these wider social movements operating online, but actually, also increasingly offline, that can inspire lone actor attacks.
But I think it’s also not just about new movements or new understandings of ideology; the significance of the online space and the blurring of our understanding of ideologies, or the multiple ideologies we’re seeing, is, in part, due to a specific change in terrorist tactics. And this really does relate to the successful counterterrorism efforts that we’ve seen in the offline space. So, terrorist organisations are increasingly encouraging individuals to engage in acts of violence independent of leadership and hierarchy. And this does make all forms of terrorist organisations less cohesive from an ideological perspective.
So, what does that really mean, in terms of how we should deal with this in practice? How do we define terrorism and how do we respond? I mean, I should say from the outset that I share Jonathan’s concerns about too broad a definition of terrorism. I prefer a stricter definition. That does not mean I don’t think we need to be looking at other ideologies, nor that we shouldn’t bring in concerns from Muslim groups, for example, that they’re more likely to face terror charges. But from a human rights perspective, I am concerned that allowing this term to broaden can actually lead to states, in certain countries, justifying clamping down on legitimate protest movements, certain actions by environmental groups, for example.
So, we could become more precise. We could say, let’s list the different groups that we think are terrorist entities, and there are discussions, particularly in relation to far-right groups, about whether we need more legislation in that area. This is hard to do, though, when there are no easily definable groups, such as in the online space, and when that relationship, and I think Celia was talking about this, the relationship between extremism and the propensity for violence, is so poorly understood. And so, actually, you can end up in a situation where you give too much attention to non-violent groups, which can act as a self-fulfilling prophecy, because then they actually further radicalise. So, we do need to demonstrate caution in that area.
Clearly, as we do all the time, we need to enhance our counterterrorism and, specifically, our preventive tools, including in the online space. You know, RUSI has been working on this for a number of years, in terms of generating and sharing evidence-based learning, including in the online, but also narrative, spheres. And one example of recent activities, actually with Moonshot as well, is work as part of something called the “Extremism and Gaming Research Network,” which is building knowledge about the relationship between the gaming industry and violent extremism and extremism, for example.
I think we might also need better monitoring of fringe and scattered individual groups, such as incels, who may engage in violence. I’m always a little bit uncertain as – you know, I don’t like to put them in the terrorism sphere and yet, there have been some far-right attacks, which have had a relationship to the incel movement. But I also think it’s important, then, to understand how issues like misogyny can encourage the transnational spread of far-right extremism online and offline.
And then, Celia was talking about threat and risk assessment tools, and there are lots out there, and, actually, you know, they’re relatively effective. They’re not meant to be predictive, but what they do, they do relatively okay. I would say, though, that they still tend to be based on incomplete datasets. They’re often developed with adult male criminal prisoners in mind, which means they lack sufficient inclusion of the gendered and age dynamics of terrorism. So, there is work that could be done in that space. And I also think that for some of the most well-known models, there could be some work in terms of their applicability to the online space.
And then, the final point I’m going to end on is actually similar to Jonathan’s and it goes back to that point about age and terrorism. Over the last 20 years we have exceptionalised terrorism, and I think you said this at the beginning, Rachel, which has led to extensive laws and measures. And it is because of those extensive powers that we need to think about the implications of labelling some acts as terrorism, particularly when it concerns children. You know, I look at some of the work I’ve done in the conflict space and, when it comes to issues such as child soldiers, it’s approached much more from the perspective of them as victims. There is some understanding that there’s almost no such thing as a child soldier, precisely because there are some very strong elements of grooming, exploitation and abuse. And so, again, you know, we all talk about sharing learnings, but there could well be learnings from that space to be transferred to help us think about how we frame children’s involvement in so-called terrorist activity moving forward. And I’ll leave it there.
Rachel Briggs OBE
That’s great, thank you, and just to pick up on the point of exceptionalism. It’s interesting, isn’t it? Because having worked in this space for 20 odd years, you’re absolutely right that we learn from what other countries are doing in terms of counterterrorism, but we don’t often learn from what we’re doing to tackle other types of threats and bring that learning in, because terrorism is treated as so exceptional, so different, that we must elevate it and take it so much more seriously, and so on. And so, there’s no such thing as a child soldier. I mean, is there no such thing as a child terrorist? I will leave that hanging.
But I’m very, very happy to see that we have got our questions coming in thick and fast, and I will be calling on people to read them out. I need to let my Chatham House colleagues know that, so their fingers are poised over the unmute buttons. I’m going to go through in the following order: Jonathan Fowler, David Page, Duff Mitchell and Rebecca Dugard. So, if I can ask my Chatham House colleagues to bring in Jonathan Fowler from the United Nations to ask his question, please.
Jonathan Fowler
Okay, thank you. Do you hear me okay?
Rachel Briggs OBE
Loud and clear, Jonathan, thanks for…
Jonathan Fowler
Great, thank you so much. Yeah, there’s a lot of food for thought here. Thanks to all the speakers. I’m from the UN’s Office for Europe and Central Asia, based in Istanbul, but I’ve worked in the human rights space in the UN before, so my question is, kind of, more driven by that. I mean, one of the failings we’ve seen over and over from platforms is that they fail to invest properly in moderation in local languages, and that’s allowed hate speech to morph into violence. I mean, I’m sure many of you are familiar with the case of Myanmar. So, my question, specifically, is, do you think that platforms and, indeed, law enforcement and others are managing to keep up with online terrorism in all language spaces, or are they overfocusing on particular language groups? Thank you.
Rachel Briggs OBE
Celia, I think that one would most definitely come to you first.
Celia Davies
Thanks, Rachel, and thanks, Jonathan, that’s an excellent question, and, yeah, as soon as I read your question, I thought Myanmar. I think my short answer is no, they’re not managing to keep up with this, and I suppose I also think it’s maybe less about language spaces and more about profile and accountability. So, Myanmar flew under the radar for ages because Facebook weren’t employing anyone on the content moderation side who spoke the local languages. The people who were in charge of doing that were sitting in Dublin, speaking English, and it was a civil society working group who, unpaid, were tasked with reporting that.
On the other side, I think what is incredibly difficult is that hate speech, by its nature, is very coded, very, very specific, and very hard to enforce consistent moderation around. And I think we keep coming back to this point that they’re public spaces that are privately owned. They’re occupied and used like public spaces and yet, you know, they’re American companies. Where the accountability sits, where the safeguarding responsibility sits, is, I think, something that our legislation, our law enforcement agencies, governments, people, haven’t really caught up with, because it’s, frankly, just incredibly difficult.
I do think there’s a lot more that the platforms could do, in terms of hiring local staff to support on the content moderation, analysis and support side. I think that’s an easy, obvious step that they should be doing far more of.
Rachel Briggs OBE
Thanks. Jonathan, I wonder if I might bring you in on something that Celia said around legislation catching up on these things, it being quite difficult to know where the responsibility lies. And I may be putting you on the spot here somewhat, but of course, the Online Safety Bill is now in the weeds again. Is that relevant here? Do we need to pick that up at some point?
Jonathan Hall QC
Yeah, it’s very relevant, because there’s no way that the authorities could investigate every person who’s looking at this material in order to assess whether they might be inspired. Sometimes people can look at a lot of stuff and they never do anything bad at all. They look at beheading videos. For some reason, people like gore. Doesn’t mean they’re going to kill someone, and yet, one of the most notorious terrorist murders, outside Finsbury Park Mosque in 2017, was said to have been inspired by a BBC documentary.
So, you’re never going to do that, so I think you’ve got to address it at root, and the Online Safety Bill certainly has suffered from, I think, a lack of work to identify what they really want to do. Meaning that even though it’s been going through Parliament for some time, there have been some late, but actually some quite crucial, amendments to protect free speech, for example. I mean, that came in acceptably, but late in the day. There was also a failure to understand how terrorism offending, which is an important component of the bill, relates to what they call illegal content, and the bill failed to recognise that you don’t commit a terrorism offence unless you have no defence and a certain degree of intention.
So, having said that, I think it is really important. The government’s missed a bit of a trick, I think, in the context of children. What it’s done is, it’s said, “All terrorism content is illegal, and we need to protect children against suicide videos, pornography,” things like that. I think what the government ought to do is to say, let’s make special efforts to protect children from this. If you protect adults from terrorism content, you quickly get into really difficult definitional points, which have been mentioned, quite rightly, and difficult points about free speech and historical accuracy and the need to be able to see what’s going on in conflict zones, which actually is an important part of freedom of expression and understanding the world. I think more focus could be put into protecting children, though, from, for example, seeing a beheading video, which might be acceptable for an adult but certainly isn’t for a child.
Rachel Briggs OBE
That’s great, thank you, and on this question of scale, I’m going to bring in David Page. David, yours was more of a comment than a question, but can I bring you in, and maybe you have a question you can add onto it as well, ‘cause it’s an important point around scale.
David Page
Thank you. My question is prompted by another think tank event, only, I think, last week, where the speaker stated that one million people in the UK watched a complete, gruesome IS murder. If only a small number, say 1%, went beyond watching and hit the keyboard, I expect defensive schemes would be overwhelmed. To be fair, Celia and Jonathan have both touched on scale, but I’m sorry to say that action against nation-state activity on the internet and, for example, child pornography in particular, and online fraud, has been spectacularly unsuccessful. I see no particular reason why counterterrorism, or monitoring of a wide range of activity, would be any more successful.
Rachel Briggs OBE
Thank you. Can I ask who would like to come in, in response to that? Jonathan, yeah.
Jonathan Hall QC
Yeah, keen to. The best model for dealing with internet content is the hashing of child sex abuse material. So, there’s a database of images, which is widely shared, and that material is going to be taken down. The next best, because it’s harder to define, is hashing of terrorism content. You will rightly say, “Hang on a second, there’s any number of terrorism expressions.” If I suddenly say today, “Let’s go and kill loads of Muslims and Jews,” that’s not going to be captured on the database, is it? And you might say that should be taken down. I think what you can do, though, is aim for some pretty low-hanging fruit, which has been shown, through evidence, to have a big impact. So, the example there would be the Christchurch livestreaming. I mean, that was incredibly inspirational, we know that.
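For readers unfamiliar with the hash-database model Jonathan describes, here is a minimal sketch of the idea. Real systems, such as the GIFCT hash-sharing database, use perceptual hashes so that re-encoded or lightly edited copies still match; the exact cryptographic hash below only catches byte-identical files and is purely illustrative.

```python
import hashlib

# Shared database of hashes of known, verified material.
known_hashes: set[str] = set()

def register(content: bytes) -> None:
    """Add verified material to the shared hash database."""
    known_hashes.add(hashlib.sha256(content).hexdigest())

def should_block(upload: bytes) -> bool:
    """Check an upload against the shared database before serving it."""
    return hashlib.sha256(upload).hexdigest() in known_hashes
```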
So, I think, given the difficulty that everyone’s identified, you know, language, contested definitions, freedom of expression and your general point, which is you’ve got to be pretty modest about what can be achieved, I think, nonetheless, there are some things that you can definitely go for.
Rachel Briggs OBE
Thank you. Emily, coming back to this scale point, which is at the heart of what we’re talking about with this question, I wonder if I can ask for any reflections from your work. I know you work extensively with governments and folks on the frontline dealing with this. Any perspective from your work on a sense of being overwhelmed, and on new approaches to being able to sift through increasingly massive volumes of information in this area?
Emily Winterbotham
Yeah, I mean, I think in a way, Jonathan’s referenced, as you say, the, kind of, “low hanging fruits” and the things that we can do. There is a sense, I think, when we talk to different security officials, both within the UK and overseas, that particularly when it’s governments trying to do this, you know, particularly in some countries, which may not have massively advanced tech capabilities, too, that they are overwhelmed by it. And that’s why there is this, kind of, emphasis and onus on the tech companies themselves to be dealing with the problem and taking it seriously.
But, you know, I think there have been improvements and that is recognised. Initiatives like the GIFCT, the Global Internet Forum to Counter Terrorism, have helped. That’s at least an avenue where lots of different organisations can come together and share information and can talk about these concerns. You know, more recent attacks since Christchurch that have been filmed have been taken down quicker. But, you know, when we started to look into this in relation to the extremism and gaming side of things, that just opened up a whole other can of worms. You know, gaming is nothing new.
In fact, I think it was Hezbollah that actually started using gaming platforms at some point, but it’s gained a lot of attention in recent years, particularly since COVID and lockdowns. And there are real concerns among security professionals that there’s a whole level of interaction going on, on either gaming platforms or gaming-adjacent platforms, where, you know, there’s recruitment, but then there’s also training and, actually, you know, practice operations. The reason why we’ve set up this network is actually because there is a concern. So, we’re trying to be a bit forward looking, although, as I said, it’s been around for a long time and only seems to have got attention in recent years.
But there is a lot, and it goes back to that point Jonathan made at the beginning, in terms of where is our energy best focused? And, you know, as much as I think there are concerns in the online space, the relationship between what is happening online and offline is still not proven. I mean, we know that the links between ideas and action are definitely not proven. And so, we wouldn’t want to sacrifice energy and resources in the offline space to the online space. I think there has to be a trade-off in that regard.
Rachel Briggs OBE
That’s great, thank you. Could I now ask Duff Mitchell if you would ask your question, once you are unmuted by my Chatham House colleagues?
Duff Mitchell
Yes, good day. My question is about the gender profile of the keyboard warriors. Much of the online violence-inciting space appears to be dominated by males, especially young males. Are there any signs that females, who are highly active elsewhere in online spaces, are having a greater presence in the violence-inciting space, and if not, why not?
Rachel Briggs OBE
Celia, I think this may be a natural question for you, given the wealth of research you guys are doing in this space.
Celia Davies
Yeah, it’s a really interesting question, thank you. All of our research still shows it’s, yeah, it’s very male dominated, kind of, across the spectrum, and that goes right the way from, kind of, violent far-right groups, obviously incels, if we’re including them, are very, very male, through to, kind of, Salafi-Jihadist groups. I think what’s important here is making sure that the responses that we’re building, whether that is, kind of, prevention work, prevention campaigns, or interventions, are gender sensitive where they can be. So, making sure that if the – you know, if the minority of women who are in these spaces have an option, that there is a safeguarding option that’s, kind of, tailored to their needs.
So, yeah, if we are, for example, running broader prevention campaigns, we would tailor those based on gender and age, so that the person who’s looking for that content would receive a message offering a positive alternative that feels relevant, potentially based on gender, building off research around which messages might be most effective from the dissuasion side. Yeah, I hope that helps. It’s not the perfect answer.
Rachel Briggs OBE
That’s great, and Emily, I think you wanted to come in, as well.
Emily Winterbotham
Thanks. It’s only because this is a space which, even having done a lot of work in it, I think is still a little bit misunderstood, actually. I wouldn’t necessarily agree that there’s no presence of women online in terms of the incitement of violence. There are more men than women committing violent terrorist acts, still, but that is not to ignore the significance of the roles that women are playing, both in the online and offline spaces, in all sorts of terrorist groups, including, you know, the conservative far-right, Jihadi groups and, obviously, the far-left, too.
Research I did in 2015-16, which was looking at ISIS recruitment, actually pointed out a higher vulnerability of women online because of successful female recruitment tactics that women were using in the online space to get young girls to come and travel to Syria and Iraq. And part of the reason for the success was that young women from conservative Muslim heritage families were more likely to be at home. They were not in public offline spaces. They were more in the online space, as the question said, because of their cultural restraints, and so they were tapping into these narratives that were coming out, which were incredibly successful.
And, actually, other research has shown that women are, potentially, better recruiters in the online space, in terms of the networks they set up and the narratives that they use, including with far-right groups, as well, by the way. There are a lot of women out there operating in the online space and even if they’re not necessarily at that very, very hard end, in terms of encouraging violent acts, they are part of that broader environment in which violence does take place. And I think, as Celia says, in terms of our approaches, it’s really important to be gender specific, because we can run the risk of missing some of the concerns and some of the women who are operating online.
Rachel Briggs OBE
That’s great. Jonathan, did I see your hand being…?
Jonathan Hall QC
Yeah, you did, and just to add a point. There’s someone called Professor Maura Conway, from Dublin and Swansea, who’s pointed out that women can join terrorist groups pretending to be men. And so, it is another way in which the internet can lower the barrier to entry, particularly if you’re talking about a very conservative religious terrorist group. Someone can participate online with a false identity. So, I just thought it was worth adding that.
If you’re interested in following it up, I was just looking, in the Intelligence and Security Committee report that came out yesterday on the far-right, the figure that they give is very, very heavily weighted towards men, over 90%. And the explanation from the Police that was given, and they record, is that “violence is a peculiarly male obsession.” So, that was what the Police thought.
Rachel Briggs OBE
That’s interesting, and the report also talked about the fact that MI5, which has responsibility now for far-right terrorism, needs more resources because of the scale, and not just of far-right, but of everything. And that it’s, sort of, struggling with the amount of time and effort it’s having to place on those things. You’re frowning, you’re disagreeing with…
Jonathan Hall QC
Well, I was slightly frowning, not because I would ever say that a public authority doesn’t need further resources, but because using the MI5, what they call the “pursue model,” so, using clever systems, working out what’s going on, getting the Police involved, finding the evidence, bringing it to prosecution, that may be one way of dealing with it, but I think it’s the wrong way. Because when you involve MI5, you involve secret intelligence, need-to-know, sensitive computer systems, and it’s hard, frankly, to involve people who may be able to give sooner and, potentially, more effective disruptive help. So, I’m thinking about Mental Health Practitioners, parents, schools, etc.
So, the only reason I was frowning is because the point of my paper is, really, to try and suggest that the model that you would apply to Northern Irish terrorism, that you’d apply to Islamic State, that you’d apply to National Action, isn’t necessarily the one that you want to apply to keyboard warriors.
Rachel Briggs OBE
Yeah. One of the challenges, I think, though, is the resources available to deal with the scale that we’re talking about. I mean, in my previous life, supporting hostages who were coming home from captivity, we couldn’t get them into mental health support because that area of our public services is so drastically stretched. So, I think it requires quite a significant investment that we just don’t have right now.
In the interests of time, I do want to get to a final question here from Rebecca. It covers a little bit of what we’ve covered already. I’ll read it out and then I will ask you to respond, but I would then also look for some concluding comments from each of you: maybe what, in your view, is the biggest takeaway that we need to have from this conversation, and what would you go and do differently tomorrow as a result? How about that as a challenge?
So, Rebecca says, “Do countries have the capability to patrol and monitor the breadth of the internet to ensure that extremist material is removed?” In other words, is this a fruitless task for the authorities, or are there methods that have been known to be successful that you can point to? We had a little bit of a conversation about that in reference to some of the more successful efforts around child exploitation, but Celia, can I come to you first on this point, but also, what’s the key takeaway and what would you love to go do differently tomorrow as a result of this conversation?
Celia Davies
Thanks, Rachel. Thank you, Rebecca, for your question. So, I think no, countries don’t have the capability to patrol or monitor the breadth of the internet and, actually, I don’t think they have the mandate to, either. So, I suppose, going back to an earlier point in the conversation around the importance of these multistakeholder groups and coalitions, GIFCT being one of them, which have, I suppose, a broad responsibility to help us all safeguard what is a global public space.
I think what’s quite important is being clear about the task in hand. I don’t think the task is necessarily to get rid of all of the terrorist content on the internet. First of all, I think that’s not possible. Second of all, I think our definition is still too broad, so we’d risk going either one way or the other, and that’s going to be changing every day with the scope of different threats, and then we’ll be constantly playing catch-up. And one way to address that is to just be really clear-eyed that, you know, the goal is prevention and safeguarding and, where required, direct intervention.
And Rachel, to your very difficult question around what would I go away and do differently tomorrow? I suppose what I’ve found really useful, hearing the questions and, also, Jonathan and Emily, is just thinking a bit more carefully about the legislative side. And, actually, one thing that I would love to do in my work is engage more closely with, first of all, the drafters and, also, you know, people like Jonathan, who are defending people or looking at one-to-one cases within the courts. I think those are really important perspectives to have round the table. Thank you.
Rachel Briggs OBE
That’s great. You heard it here first, folks, a new collaboration, I love it. Emily, can I come to you next and then, I’ll come to you last, Jonathan?
Emily Winterbotham
Yeah, maybe I’ll just jump straight into the what would I do tomorrow? I mean, I think, firstly, there are more people who do not become terrorists than do, and that’s the starting point. So, there are a lot of people engaging online, there are a lot of people who are saying some terrible things online, but the vast majority of them do not go anywhere near violence. And we almost need to start from that in this whole space, online and offline. You know, why do more people not become terrorists than do? What are the protective factors that are involved and how can those protective factors be brought better into some of our models, either from a risk assessment perspective or from a prevention perspective?
And then, interlinked to that, we all know that there is the, hopefully imminent, forthcoming Independent Review of Prevent, and, you know, I would like to see Prevent look at what tools they are currently using and to talk a lot more about evidence of success and effectiveness, because I don’t think we’re there yet. So, a little bit more in that space.
Rachel Briggs OBE
Brilliant, concrete, practical, love it. Jonathan.
Jonathan Hall QC
Just answering Rebecca’s question, the problem is jurisdiction. So, the British Government can say to a tech company based in California, “Take it down,” but the tech company in California will say, “Well, we’ve got to protect First Amendment rights.” And indeed, some states in the US are passing what they call “blocking statutes” to prevent tech companies from actually responding to overseas requests. So, there’s a real problem with that.
The solution, if you want one, is a pretty drastic one. It’s for the government to filter everything, and that would lead to what some people call the “Balkanisation of the internet,” where there’s a special British internet applying British rules, a Russian one, a Chinese one and so on and so forth. And that wouldn’t be good, for all sorts of reasons, not least because the internet is probably the engine of the economy and future prosperity.
In terms of what I’ve taken away, it was a comment, I think it was Emily’s, it could’ve been Celia’s, about effective risk assessment. I mean, risk assessment is really hard for terrorism anyway, but I think it’s particularly hard to apply what little we know about adults to children. And so, what I would like to see would be more work on risk assessments for children, particularly those who are involved online, and maybe ask children themselves.
Rachel Briggs OBE
Fantastic. I call that a call to action, which is a brilliant way to end an event, which is really important and does deserve action. So, thank you to all of our three brilliant speakers. I’m looking forward to seeing all of this collaboration that’s going to start as a result of this. That’s the best outcome. I want to thank all of you. I want to thank the folks for attending. It’s our members that really bring our wonderful institution to life, so thank you for your questions and your comments and for holding us to account, as well. And I will, with that, close the meeting and wish you all a very good night. Thank you.