3. Countermeasures to Disinformation
3.1 The view from the EU
In the run-up to the 2019 European elections,36 almost three-quarters (73 per cent) of European citizens expressed concern about disinformation during pre-election periods37 and 83 per cent considered it a problem in general.38 Despite a fragmented media, political and regulatory environment, the EU appeared unified in its determination to deal with the issue of disinformation.
The UK, France,39 Spain and Germany are just some examples of EU countries that have been the target of disinformation aiming to affect political processes. Nevertheless, EU responses are driven not just by political security concerns but also by human rights considerations. The High-Level Group of Experts, set up by the European Commission to advise on policy to counter disinformation, concluded that the problem should be addressed within the framework of the European Union Charter of Fundamental Rights (CFR) and the European Convention for the Protection of Human Rights and Fundamental Freedoms (ECHR).40
Data-driven microtargeted disinformation campaigns are particularly challenging for the rights enshrined in ECHR’s articles 9 and 10, which relate to freedom of thought, conscience and religion and to freedom of expression, respectively.41 The stealth profiling behind these influence operations, in combination with the subtext of plausible deniability, threatens the autonomy of targets and their freedom to ‘hold opinions and to receive and impart information and ideas without interference’.
In a report on internet intermediaries, the Council of Europe (CoE) highlighted that ‘the protection of privacy and personal data is fundamental to the enjoyment and exercise of most of the rights and freedoms’ guaranteed in ECHR.42 Even though the CoE is an international organization distinct from the EU,43 cooperation between the two bodies has recently been reinforced and the formal accession of the EU to ECHR remains at the forefront of the debate.44 In its report, the CoE recommended that states build oversight and redress mechanisms into their regulatory frameworks, consider the size, structure and nature of intermediaries in their proposals, and introduce human rights impact assessments. The CoE also highlighted the responsibility of intermediaries to protect users’ human rights under the UN Guiding Principles on Business and Human Rights and the ‘Protect, Respect and Remedy’ Framework.45
CoE’s recommendations for intermediaries were ambitious, including adequate training for content moderators, human rights impact assessments for automated content management, transparency in regard to user tracking and profiling, and banning of user data migration across devices and services without consent. However, industry pushback is expected – most likely in the form of either legal confrontation or deflective PR strategies – as these recommendations would impact algorithmic systems that are core components of digital platforms – such as Facebook’s News Feed – as well as their data governance and their dominance of digital markets.
Nevertheless, following a toughening stance by EU policymakers, US counterparts are growing vocal about the need for tech regulation, with various congressional committees moving it to the front of a broader reform agenda. Recent bills, such as the Algorithmic Accountability Act introduced by US senators Cory Booker and Ron Wyden,46 the Deceptive Experiences To Online Users Reduction Act47 introduced by senators Mark Warner and Deb Fischer, and Senator Josh Hawley’s Do Not Track Act,48 may be ambitious at this point, but they indicate the tide is turning. Both the EU and the US are facing similar systemic problems and similar adversarial actors, so drawing on the CoE’s guidelines for digital intermediary regulation to create a common path is a wise step.
EU institutional responses
The 2019 European parliament elections put the EU on high alert, calling for active engagement in the fight against disinformation from all member states. The European Commission’s Action Plan against Disinformation established four key pillars: 1) a coordinated response by the EU, mobilizing all government departments; 2) improving detection, analysis and exposure capabilities; 3) mobilizing the private sector; and 4) building societal resilience and raising awareness through conferences, debates, specialized training and media literacy programmes to enable citizens to spot disinformation.
As part of the third pillar, in September 2018, digital intermediaries committed to a Code of Practice (CoP)49 and were tasked with providing monthly reports on its application, with the Commission warning that if there was no improvement in the fight against disinformation by the end of 2019 it would consider regulation. The list of signatories included Facebook, Google, YouTube, Twitter and Mozilla, as well as advertisers and trade associations representing online platforms and the advertising industry. The European Regulators Group for Audio-visual Media Services (ERGA) would assist the Commission in assessing the effectiveness of these commitments. The Commission called on the signatories to ensure ‘full transparency of political ads’, to provide access to data for research purposes and to facilitate close cooperation with national governments through the Rapid Alert System (RAS).
Advocacy groups have highlighted the fact that the CoP remains a voluntary, self-regulatory measure, and have demanded clearer objectives and an effective monitoring system that enforces compliance via sanctions or other actions. The CoP’s efficacy will be judged in the long term, as the results thus far have been mixed and, at times, highly inadequate. Although the monthly reports of social media platforms have listed substantial fake account takedowns and similar actions, the Commission has repeatedly called for improvements in terms of monitoring efficiency, sufficient information delivery, and more clarity about the platforms’ strategies to tackle disinformation.50 Even after the European parliament elections, the Commission still criticized companies’ insufficient progress in increasing the transparency and trustworthiness of websites hosting ads.51
As months progressed, various events compromised technology companies’ trust capital. In January 2019, despite being a signatory to the CoP and thereby committed to ‘support good faith independent efforts to track disinformation and understand its impact’ and to refrain from prohibiting or discouraging ‘good faith research into disinformation’, Facebook limited access to information that ProPublica, Mozilla and WhoTargetsMe had built tools to monitor,52 leading to an outcry from the research community.53 Facebook did eventually open its Ad Library API in late March 2019,54 but researchers expressed concerns that the API did not provide all the necessary data,55 and subsequently reported a wealth of technical issues that impeded their work.56 After months of deliberations, progress has been slow. Irrespective of improved monitoring of disinformation during the EU elections, those wishing to influence political events are reportedly able to circumvent checks by using the business manager account feature on Facebook,57 which highlights the cat-and-mouse nature of policymakers’ approach to technology companies.
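For context, the sketch below illustrates the kind of query researchers typically run against the Ad Library API once granted access. It is a simplified, illustrative example rather than a definitive reference: the API version, field list and placeholder access token are assumptions, and the point is simply that key metrics such as spend and impressions are returned as broad ranges, one of the limitations researchers have criticized.

```python
import requests

# Minimal sketch of the kind of query researchers run against Facebook's
# Ad Library API. The API version, field names and placeholder token are
# illustrative assumptions based on the documented interface at the time,
# not a guaranteed current specification.
AD_ARCHIVE_URL = "https://graph.facebook.com/v4.0/ads_archive"

params = {
    "access_token": "RESEARCHER_ACCESS_TOKEN",   # requires an identity-verified developer account
    "ad_type": "POLITICAL_AND_ISSUE_ADS",
    "ad_reached_countries": "['GB']",            # country whose users the ads reached
    "search_terms": "'brexit'",                  # hypothetical search term
    "fields": ",".join([
        "page_name",
        "funding_entity",             # the payer, as self-declared by the advertiser
        "ad_creative_body",
        "ad_delivery_start_time",
        "spend",                      # returned as a broad range, not an exact figure
        "impressions",                # likewise a range, limiting analytical precision
        "demographic_distribution",
    ]),
    "limit": 100,
}

response = requests.get(AD_ARCHIVE_URL, params=params)
response.raise_for_status()
for ad in response.json().get("data", []):
    print(ad.get("page_name"), ad.get("funding_entity"), ad.get("spend"))
```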
The RAS, launched in March 2019, is an interface intended to enable information sharing among stakeholders and to issue alerts about foreign malign influence campaigns in real time. Each EU member state appointed a contact point for the RAS, and its editorial control rests with the East StratCom Task Force.58 East StratCom has long-term expertise in dealing with malign foreign influence operations in the Eastern Neighbourhood – especially from Russia – so it plays a central role in advising the EU on how to analyse and respond to disinformation. The task force tries to support media plurality in the region and improve EU communication on its objectives. East StratCom also participated in a tripartite group with the Commission and the European parliament preparing for the European elections. The Action Plan called for the reinforcement of the Strategic Communication Task Forces of the European External Action Service (EEAS) and the establishment of close cooperation between the RAS and the G7 Rapid Response Mechanism to support the resilience of allies to disinformation.
The EEAS also engages in proactive communication to avoid leaving space for malign actors to spread disinformation. The European Commission deploys strategic communication too, seeking to counteract disinformation that attacks the EU’s legitimacy with positive messaging.59 It will be easier for the EU to successfully communicate what the union stands for if divisive actors within individual countries advocating fragmentation are challenged and counteracted by coherent political, economic and social arguments.
An internal network on disinformation has been established within the Commission to build awareness and exchange information between the different directorates-general and representations in member states. Its monitoring is mainly focused on disinformation that targets the EU, while an equivalent group has been established within the European parliament to monitor disinformation against individual parties and politicians. In January 2019, Federica Mogherini, the high representative for foreign affairs and security policy, highlighted that attention needs to be paid to different kinds of disinformation both inside and outside the EU.60 In terms of building public resilience, the European Commission also supports a European network of fact-checkers that was initially coordinated by East StratCom. Fact-checking efforts, although fundamental, should not be seen as a silver bullet, as their ability to mitigate the effect of information campaigns is limited.
The European Union Agency for Cybersecurity (ENISA) also recommended robust cybersecurity measures for political organizations’ systems, infrastructures, and data, as well as – following the example of the US Department of Homeland Security (DHS) – the classification of election systems as critical infrastructure.61
Notable member state responses
The list of member states below is by no means exhaustive; the examples were selected on the basis of actions that have proved to be best practice.
- The Czech Republic approached the disinformation issue from a security perspective when it included foreign influence operations in the areas covered in its 2016–17 National Security Audit. According to a report by the European Values think-tank, the audit’s recommendations made the system more resilient.62
- Finland has also proved resilient to disinformation by taking similar steps.63 In January 2016, 100 officials were trained to identify and understand the phenomenon, with the intention of producing a coherent government response. The country is also investing in media and information literacy more broadly.64
- France passed its own anti-disinformation law following incidents of malign influence campaigns during the 2017 pre-election period. The law concerns solely pre-election periods (three months before a vote) and makes intermediaries accountable to the Superior Audiovisual Council (CSA). Media literacy is also a component. Although the law was originally opposed65 on the basis of the 48-hour take-down window, which was seen as too short, and on freedom of expression grounds, the Constitutional Council upheld it. Researchers have argued the statute has blind spots, such as a narrow framing of what constitutes false information that pays little attention to the actual process of online manipulation (revealing the actors and the incentives).66
- Germany’s Network Enforcement Act, the so-called NetzDG, came into full effect in January 2018 and obliges large digital intermediaries to remove material ‘obviously illegal’ under the German penal code within 24 hours. When content legality can be disputed, the time frame may be extended to seven days. Fines can reach €50 million. The main criticism of NetzDG relates to its short compliance time frame, freedom of speech concerns,67 as well as fear of pre-emptive self-censorship by the platforms themselves.68
In February 2019, Germany’s antitrust regulator ruled that Facebook had to stop the ‘unrestricted collection and assigning of non-Facebook data to their Facebook user accounts’69 without meaningful consent. Competition law has been mobilized by various EU states70 to examine the market repercussions of intermediaries’ business models and strategies.71
- Spain launched a taskforce to fight disinformation comprising experts from the National Security Department, the Office for the State Secretary for Communication and other ministries.72 According to Hybrid CoE, the country was the target of disinformation operations in relation to the Catalan independence referendum.73
- Sweden, along with other Nordic and Baltic countries, has been widely praised for measures against disinformation. Sweden, for example, took a holistic approach in the run-up to its 2018 elections: it trained over 10,000 civil servants to spot influence operations and reformed its elementary and high school curricula to include digital and media literacy,74 while its Civil Contingencies Agency produced a ‘Countering Information Influence Activities’ handbook for public-sector employees.
- The UK House of Commons’ Digital, Culture, Media and Sport (DCMS) Committee launched an inquiry into disinformation and ‘fake news’ in response to the Cambridge Analytica scandal. The Committee invited parliamentarians from nine countries around the world (Argentina, Belgium, Ireland, Latvia, Brazil, Canada, Singapore, France and the UK) to participate in an International Grand Committee on disinformation that would nurture cross-border cooperation, starting with a meeting in November 2018 that led to the signing of the declaration on the Principles of the Law Governing the Internet.75 The final report of the DCMS committee was published in February 2019,76 recommending, among other things, the establishment of a new category of tech company, a compulsory Code of Ethics, the protection of inferred data as personal data, a levy on tech companies to support the expanded work of the Information Commissioner’s Office (ICO), enhanced powers for the Electoral Commission to make it fit for the 21st century, an audit of the online advertising market as well as of strategic communications companies, and primary legislation regulating the use of personal information in political campaigns. Finally, it recommended the re-introduction of ‘friction’ into social media platforms to allow time for deliberation before stories are shared or interacted with. The UK government responded to the DCMS report’s recommendations by announcing digital imprints for political advertising and the introduction of a new regulatory framework for social media companies with a statutory duty of care outlined in the Online Harms White Paper (OHWP).77 The DCMS committee’s suggestion of a new category of legal entity reflecting the power, responsibilities and scale of digital intermediaries, although not adopted by the UK government, merits further examination by the EU and the US, as the multifaceted and multi-domain implications of evolving technologies call for equally innovative thinking.
The OHWP was to a certain extent trapped by its vast ambitions, and its commendable consultation process led civil society and research institutions to voice concerns such as the lack of nuance and clarity in the definition of harms, including the ambiguity of how disinformation is defined.78 The OHWP also proposed a ‘duty of care’ as an approach to regulating digital intermediaries, but this has been criticized as inadequate or in need of reframing, as the concept does not automatically translate from an offline to an online context and more clarity about companies’ duties is needed.79
Civil society and academia
The EU has dedicated substantial funding not just to expanding the capabilities of its existing institutions but also to academic research and technology initiatives tasked with addressing the disinformation threat. For instance, the European Research Council has supported the work of the Computational Propaganda Research Project (COMPROP) in Oxford, which monitors and analyses social media-driven manipulation of public opinion.
Through the Horizon 2020 research programme, the EU has also provided funding for the Social Observatory for Disinformation and Social Media Analysis and the projects it coordinates (SocialTruth, Provenance, Eunomia, WeVerify80), creating a multidisciplinary network of academic researchers, fact-checkers, technologists, media organizations and policymakers. Horizon 2020 also supports the three-year Provenance project at Dublin City University’s Institute for Future Media and Journalism (FuJo), which is working on a free verification tool using blockchain. The European Parliament’s Science-Media Hub is also contributing to the efforts of raising awareness and supporting research on disinformation.81
LSE’s Arena project and its Truth, Trust & Technology Commission have conducted research82 and organized events on disinformation, while four Nordic universities from Denmark, Sweden and Norway have launched an interdisciplinary network to study the impact of online disinformation on democratic processes.83 The network launched a series of disinformation conferences in Aarhus in May.84
Further independent initiatives in Europe
Debunk.eu: The Lithuanian initiative incorporates AI tools, volunteer fact-checkers and journalists, to monitor disinformation on a daily basis, and assists academic research and media outlets with debunking.
Global Disinformation Index (GDI): With the support of the Knight Foundation – among others – UK-based non-profit GDI is working to create a global rating system for media outlets.
Newtral and Maldito Bulo: The two Spanish fact-checking initiatives provided much needed assistance in the fight against disinformation in the last national elections.85
Transparent Referendum Initiative: The Irish volunteer-run organization was launched in the lead-up to the referendum on the repeal of the 8th Amendment with the aim of enabling ‘fair, truthful and respectful debate’, by collecting and publicizing data on Facebook Ads. Its founder went on to broaden its scope by launching Digital Action.86
3.2 US responses
The US context
According to Pew,87 more Americans are now accessing news through social media than through print newspapers, with television remaining the most popular platform for news consumption (49 per cent of adults use it to stay informed). The combined percentage of regular users of social media and news websites (43 per cent) is edging closer to that of TV. With the widening gap between the news consumption habits of the young (online) and the over-50s (TV), and an increasing percentage of audiences receiving news through social media and search engines rather than through direct visits to media websites, it is only a matter of time before digital platforms become the leading source of news for US citizens.
The US media ecosystem features asymmetric media dynamics, mainly skewed by hyper-partisan outlets that, as Benkler et al. have noted, leave a percentage of the population ‘systematically disengaged from objective journalism’.88 Alarmingly, a 2018 Ipsos survey of over 1,000 adults found that, despite overwhelming support for freedom of the press (85 per cent), almost a third of American respondents (29 per cent) agreed with the assertion that the media are ‘the enemy of the American people’.89 Persistent attacks on the press have permeated political discourse, creating challenges for journalists and the democratic system that relies on the Fourth Estate.
According to a report by cybersecurity firm Recorded Future, hyper-partisanship is being exploited by certain Russian influence operations, which have moved from disseminating disinformation to amplifying hyper-partisan messages and polarizing statements by politicians, often sourced in traditional media.90 Polarization is indeed becoming a problem in European political discourse too.
In the US, the First Amendment’s protection of freedom of speech imposes its own constraints on US policymakers and officials, who have thus far opted for transparency as an approach to the disinformation problem rather than outright bans. Nevertheless, the merits of meaningful transparency may be more obvious to researchers, journalists and other actors seeking to hold politicians, companies and advertisers to account than to the public itself, which tends to avoid the ‘friction’ of reading terms of service or scrutinizing ad transparency tools as part of its day-to-day digital activities. Digital and media literacy initiatives may inoculate the public to a certain extent by raising awareness of the cost of a frictionless online existence and of not confronting disinformation, which may lead to long-term positive outcomes.
Institutional responses
Congressional oversight: The US Senate Select Committee on Intelligence (SSCI) has convened hearings with Facebook’s COO, Sheryl Sandberg, and Twitter’s CEO, Jack Dorsey91 (Alphabet’s CEO, Larry Page, declined), and commissioned two reports on the online influence tactics of the Russian Internet Research Agency.92 Google’s CEO, Sundar Pichai, eventually testified in front of the House Committee on the Judiciary.
From January 2019 onwards, the House Energy and Commerce, Intelligence and Judiciary committees launched a series of probes into how tech companies and their strategies affect competition, consumers and society.93 A session of the House Subcommittee on Consumer Protection and Commerce of the Committee on Energy and Commerce94 highlighted issues such as the need to distinguish between the various data practices that pose threats to consumers, to avoid a regulatory patchwork across states, and to outline clear prohibitions on a range of harmful and unreasonable data collection practices.
Federal agencies: The 2017 National Defense Authorization Act, signed into law by then US President Barack Obama in December 2016, established the Global Engagement Center (GEC) at the Department of State. GEC became the central hub tasked with integrating inter-agency efforts to recognize, analyse and expose disinformation efforts that threaten US national security interests globally, focusing particularly on threats from Russia, China, North Korea and Iran. Apart from working closely with the White House’s National Security Council, the center also engages with foreign state partners, the private sector95 and civil society (funding media literacy efforts and research, among other activities). GEC is in communication with the EU’s East StratCom Task Force, NATO’s StratCom Centre of Excellence in Riga and Hybrid CoE, but most channels of communication with the EU are established on a bilateral basis.
In November 2017, the FBI created the Foreign Influence Task Force (FITF) to identify and counteract ‘malign foreign influence operations targeting the United States’,96 by monitoring mainly the domestic environment. FITF takes an agent-focused approach, observing foreign actors known to deploy disinformation campaigns, in an effort to avoid any First Amendment conflicts. In March 2019, FBI Director Christopher Wray stated that divisive foreign influence campaigns against Americans had continued ‘virtually unabated’97 but expressed optimism about the potential of FITF working closely with social media companies, GEC, the National Security Agency (NSA), the Department of Homeland Security (DHS), and the Office of the Director of National Intelligence. DHS’s Cybersecurity and Infrastructure Security Agency (CISA), launched in November 2018, is also tasked with dealing with foreign influence. Former Director of National Intelligence Dan Coats stated that the intelligence community needs to be restructured to deal with the ‘evolving flood of technological changes’ and warned that foreign actors will try to influence the 2020 US elections.98
The work of the Department of Defense’s Cyber Command (USCYBERCOM) during the 2018 midterm elections has been praised by senators on both sides of the aisle for deterring Russian hackers suspected of conducting disinformation campaigns by signaling to them that they had been identified. According to press reports, and under the ‘defend forward’ strategy,99 USCYBERCOM also disrupted the internet access of the Russian Internet Research Agency on the day of the 2018 midterms.100 DoD’s Defense Advanced Research Projects Agency (DARPA) is also working on developing tools to spot disinformation campaigns and detect ‘deep fakes’.101
The Department of Justice (DoJ) was also mobilized against disinformation in February 2018, with then-Attorney General Jeff Sessions establishing the Cyber-Digital Task Force. The DoJ has also reviewed the US Attorneys’ Manual and introduced Section 9-90.730, which provides guidelines for the DoJ to disclose information about foreign influence operations either publicly or privately to the targets or to the tech companies hosting the operations. The amendment’s remit is strictly limited to influence campaigns where foreign government attribution can be made with ‘high confidence’, leaving campaigns of unknown or domestic origin outside its scope. This approach appears selective in its interpretation of how networks operate and may create serious vulnerabilities in the system that the US will be forced to face sooner rather than later.
Employing counter-narratives is another approach the US is taking. The US Agency for Global Media (USAGM) is running training programmes for spotting disinformation, has Russian language services in Eastern Europe, two Korean services and a new Persian channel, and is looking to expand its Mandarin output too. According to USAGM, the agency has editorial independence from the US government. In terms of foreign broadcasters operating within the US, since September 2018 and following the National Defense Authorization Act 2019, the Federal Communications Commission has required foreign media outlets to provide reports disclosing any relationship to foreign principals, essentially bringing them within the scope of the US Foreign Agents Registration Act (FARA). Russian state-owned outlets RT and Sputnik, among others, had to register as foreign agents.
Federal and state-level legislation: The Honest Ads Act, introduced by US senators Amy Klobuchar, Mark Warner and John McCain, was the first legislative effort to regulate digital intermediaries. It aimed to expand the remit of the Federal Election Campaign Act (FECA) to encompass paid digital ads, to require platforms to ensure disclaimers identify such ads, to create a publicly accessible record of political advertising requests costing over $500, and to ensure no foreign actors are able to purchase political ads. On 1 March 2019, the Honest Ads Act was included in an omnibus reform bill called the For the People Act (or H.R. 1). Even though H.R. 1 passed in the House on 8 March 2019, it is unlikely to be scheduled for a vote in the Senate.
Regulating paid political ads is just one piece of the disinformation puzzle. It is promising that the bills on algorithmic accountability and tracking mentioned earlier have started to reflect the complexities of regulating digital intermediaries. The same can be said about a July 2018 white paper circulated by Senator Mark Warner.102 Its proposals included modifying Section 230 of the Communications Decency Act, which immunizes digital intermediaries from state tort and state criminal liability; bot labelling (the so-called ‘Blade Runner law’); examining the concept of an information fiduciary;103 comprehensive data protection legislation; data transparency and portability bills; employment of ‘essential facility’ labels for market-dominant companies; and providing the Federal Trade Commission (FTC) with rulemaking authority.
There are increasing calls for more power to be given to the FTC, which fined Facebook $5 billion104 after an investigation into whether it broke the 2011 consent decree, although given the technology company’s revenue even that amount can be seen as merely the cost of doing business.105 The investigation was launched following the Cambridge Analytica scandal, which revealed that the data of 87 million Facebook users had been passed on to a third party. The FTC’s new order for Facebook to create new layers of oversight for the handling of users’ data may seem like a step in the right direction, but the settlement has been criticized for effectively allowing the company to decide for itself the extent of user privacy without any meaningful change to its structure and financial incentives, which constitute the root of the problem.106 Additionally, the fact that the settlement shielded Facebook in regard to unspecified violations has been strongly criticized,107 as the broad immunity given to its executives sets a dangerous precedent.
The FTC has also launched a Technology Task Force108 dedicated to investigating competition in the technology sector109 that also intends to review previous acquisitions. FTC Chairman Joseph Simons stated he would be open to breaking up Big Tech,110 but Facebook’s settlement leaves serious questions in regard to the agency’s willingness to take drastic measures against dominant market players. Facebook’s plans to merge the technical infrastructure of WhatsApp, Instagram and Messenger, despite being promoted by CEO Mark Zuckerberg as a step towards enhanced privacy,111 are viewed with scepticism by critics112 and antitrust authorities on both sides of the Atlantic.113 Certainly the suggestion that Facebook could become the US equivalent of China’s omnipresent WeChat is alarming.114 Regulators should take action before Facebook moves forward with merging its three different services, to ensure due diligence in terms of users’ personal data and the functionalities of the future unified interface.
Data governance has entered the debate in the US too, with California passing the California Consumer Privacy Act (CCPA), a bill that, according to former FTC Chief Technologist Ashkan Soltani, Facebook supported in public but lobbied against behind the scenes. The CCPA passed into law in June 2018 but will not become enforceable until 1 January 2020. It draws on the General Data Protection Regulation (GDPR) and aims to give internet users more control of their data. Furthermore, in 2018, New York passed the Democracy Protection Act, California passed the Social Media Disclose Act (to take effect in 2020), and Maryland passed the Online Electioneering Transparency and Accountability Act. A federal court blocked the latter under the First Amendment, but the law could be amended to alleviate some of the concerns raised. While there is a lot of movement at state level,115 a weaker federal privacy and data protection law that would pre-empt state-level victories should remain a concern. Technology companies may be advocating for federal-level legislation they can still influence, only to override state-level laws.
In 2018, Vermont became the first US state to address another piece of the puzzle, particularly in regard to micro-targeted disinformation: the hyper-personalized influence campaigns that, by virtue of operating at a granular level, can evade detection by oversight bodies and watchdogs. Targeted influence campaigns rely either on segmented audiences or on individual profiles, and the latter is essentially the business model of data brokers. Although in the EU various organizations and individuals have started taking on the data brokers,116 in the US relevant steps are in their infancy. For example, the Vermont law on data brokers117 requires them to register as such, and mandates data security standards and the provision of information about an opt-out policy for customers, where available. However, the law does not apply to consumer-facing companies that are first-party data collectors, such as websites, apps or e-commerce platforms.118 Frustratingly, it also does not require data brokers to disclose what information they collect or who is purchasing it.119
Civil society, academia and think-tanks in the US
The Data & Society Research Institute in New York, the Tow Center for Digital Journalism at Columbia University, the Berkman Klein Center and the Shorenstein Center (Information Disorder Lab) at Harvard, have all produced and continue to create broad-ranging research that tackles the issue of disinformation through reports, events or digital tool development.
The non-profit organizations Social Science Research Council and Social Science One have also partnered with Facebook to provide funding for independent research on ‘the effects of social media on democracy and elections’,120 but the project also includes funding from seven non-profit foundations in an effort to counterbalance any financial influence from Facebook.
Credibility Coalition is another broad research community, supported by the Google News Initiative, the Facebook Journalism Project and Craig Newmark Philanthropies, among others. Its aim is to create a comprehensive framework for the study of disinformation and to define and validate efficient signals of content and source credibility.
Poynter’s International Fact-Checking Network unit has created a global network of fact-checkers. It provides training and has created a code of principles that fact-checking teams from around the world can apply for accreditation.
Atlantic Council’s Digital Forensics Research Laboratory, the Alliance for Securing Democracy121 (affiliated with the German Marshall Fund), the GMF’s Digital Innovation & Democracy Initiative, the Center for Strategic & International Studies, the Brookings Institution, New America, the National Democratic Institute, and the Design 4 Democracy Coalition, are all engaged in debates, reports and research on the issue of disinformation.
Further independent initiatives in the US
New Knowledge is an Austin-based private cybersecurity company specializing in disinformation that has testified in front of the SSCI and has produced a report on the influence of Russia’s Internet Research Agency. However, the New York Times criticized its research methods when it revealed the firm’s chief executive had employed tactics similar to those used in influence operations during the 2016 elections.122 The company responded by explaining this action was taken for the purposes of an experiment, but the incident demonstrates the risks of employing techniques that could be construed as counter-propaganda. In the fight against disinformation, integrity is paramount. State, civil society and private-sector actors in the EU and the US should also avoid replicating the methods of adversaries or they risk losing credibility.
NewsGuard is a start-up using ‘nutrition labels’ to classify reliable news sources according to nine criteria. Defying the prevailing drive towards automation, the company employs trained analysts and journalists who will review 7,500 sites that account for 98 per cent of US news consumption. It has also been rolled out in the UK, France, Germany, and Italy and is in talks with British ISPs about the potential of flagging up suspect news sites.
3.3 Action taken by digital intermediaries
This section focuses on key actions taken by Alphabet (the parent company of Google), Facebook, Mozilla, Pinterest, Twitter and their subsidiaries, as dominant players and influential normative powers in the current information ecosystem. With the exception of Pinterest, these companies have also signed up to the CoP.
Alphabet
- Fact-checking: Google supports fact-checking initiatives such as First Draft123 through its Google News Initiative.
- Media literacy: Although not ready for radical changes to their business model, digital intermediaries have pulled their weight in supporting the drive towards media literacy programmes. In the UK, for example, Google has funded NewsWise,124 a free news literacy project for nine to 10-year-old children set up by the Guardian Foundation, the National Literacy Trust and the PSHE Association. The company has also supported media literacy programmes across the European continent, and in US high schools.
- Policy changes: Google also continues to demonetize sites that carry more ads than content (as some ‘fake news’ sites do).
- Political advertising: In August 2018 and ahead of the US midterms, Google launched the original version of its Ad Library, and in April 2019 its EU edition was released in preparation for the European elections. Since March 2019 the company required verification for purchasing political ads in Europe too.125
- Supporting journalism: Google provided training for European journalists and funded newsrooms through its Digital News Initiative. The company has signed up – along with Facebook and Bing – to The Trust Project, an international initiative that has designed eight ‘trust indicators’: machine-readable signals that news distribution platforms can use to surface and prioritize content as trustworthy (a simple illustration of such markup follows this list). The project was incubated in the Markkula Center for Applied Ethics at Santa Clara University. It also supports the Journalism Trust Initiative (JTI) created by Reporters without Borders and joined by the European Broadcasting Union, Agence France-Presse and the Global Editors Network, which works to create machine-readable credentials for media outlets that relate to their ownership, journalistic methods and ethics. JTI has recently been joined by GDI and NewsGuard to coordinate their efforts.
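The snippet below is a purely illustrative sketch, not The Trust Project’s official specification, of how a news organization might expose such machine-readable trust signals as schema.org JSON-LD embedded in its pages; the property names follow schema.org’s trust-indicator-related additions, and the URLs are hypothetical placeholders.

```python
import json

# Illustrative sketch of machine-readable trust signals expressed as
# schema.org JSON-LD. Property names follow schema.org additions associated
# with trust-indicator work; the organization and URLs are hypothetical.
trust_markup = {
    "@context": "https://schema.org",
    "@type": "NewsMediaOrganization",
    "name": "Example News",
    "url": "https://example-news.example",
    "ethicsPolicy": "https://example-news.example/ethics",
    "correctionsPolicy": "https://example-news.example/corrections",
    "ownershipFundingInfo": "https://example-news.example/ownership",
    "masthead": "https://example-news.example/masthead",
    "unnamedSourcesPolicy": "https://example-news.example/sources",
}

# A distribution platform's crawler could parse such a block and treat the
# presence and completeness of these policies as one input to ranking decisions.
print(json.dumps(trust_markup, indent=2))
```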
YouTube
In February 2019, the Google subsidiary reported to the European Commission that since November 2018 it had removed one channel linked to Russia’s Internet Research Agency and 34 YouTube channels linked to Iranian influence operations. Two new features promoting ‘authoritative’ sources were introduced – the Top News shelf in search and Breaking News on the homepage. The YouTube recommendation algorithm generates more than 70 per cent of views on the platform.126 After complaints that its algorithm was surfacing many conspiracy videos, the company announced changes to deprioritize them.127 It labels RT and Sputnik as affiliated with the Russian government,128 and, after disinformation campaigns against Hong Kong protesters led by China-backed media, it is under pressure to ban state-backed media ads, replicating the example of Twitter.129 YouTube has also announced it is going to roll out a new feature, an information panel appearing next to videos that relate to topics prone to disinformation.130
Jigsaw
Jigsaw is a technology incubator within Alphabet that attempts to find technological tools to tackle disinformation, hate speech and terrorist recruitment.
Issues for consideration: In regard to its CoP compliance, and according to its second implementation report,131 Google removed tens of thousands of ads in violation of its misrepresentation policies, but the percentage of those pertaining to disinformation campaigns remains unclear. That report listed the UK as having the most ‘misrepresentation violations’, followed by Estonia and Romania. It would be helpful for researchers to have access to adverts that violate Google policy, to establish an informed view of the specific actors trying to pollute the information space. Google’s issue-based ad policy also remains unclear. Additionally, reports about the power of YouTube’s recommendation system to radicalize viewers132 underline that companies’ algorithmic systems need to be audited.
Facebook
- Account takedowns: Facebook also continues to remove accounts of disinformation networks fomenting dissent in various EU member states, such as the UK and Romania, as well as in the US.
- De-ranking and content moderation: One of Facebook’s most important actions was its demotion, within its News Feed ranking algorithm, of content flagged as potentially false by fact-checkers.133 According to its CoP January 2019 report, this decreases views by more than 80 per cent (a purely illustrative sketch of this kind of demotion-based re-ranking follows this list). Facebook uses AI to identify clickbait articles and ‘ad farms’ and de-ranks them in its News Feed. Facebook also penalizes false headlines even when the copy is accurate, by demoting the story.134 Following the examples of Pinterest and YouTube, Facebook also downranked anti-vaccination pages and groups. The company uses machine learning to prevent fake accounts from being created and, following a consultation period, it announced an Oversight Board. The latter has already attracted criticism,135 not least because what the company is attempting to pursue is ‘something like a constitution’136 of unprecedented normative power. Despite its democracy-promoting rhetoric, the sight of a corporation claiming powers that in democracies are bestowed on elected governments by their own people might look like corporate overreach, if not hubris. A one-size-fits-all approach may be more financially desirable for Facebook, but this level of centralization of the power to dictate what free speech effectively means should alert governments, activists, journalists and citizens across the world.
- Digital literacy: As part of its efforts to enhance digital literacy, Facebook launched the Digital Literacy Library,137 in consultation with the Berkman Klein Center for Internet & Society at Harvard, and has added a context button to help users assess the credibility of posts.
- Policy changes: Facebook’s prohibition of coordinated inauthentic behaviour (CIB) appears compatible with First Amendment considerations, as it is behaviour-based and ‘content-agnostic’, observing patterns of activity. The company continues to remove pages, groups and accounts displaying CIB,138 and has removed hundreds related to Russia, Iran and Venezuela.139 The CIB policy has also had unintended consequences, such as the banning of activists who want to scale up their communications via coordination.140 It has also proved flawed, as it failed to foresee operations such as those enabled by the aforementioned ‘business manager’ account feature.141
- Political advertising: Along with Google and Twitter, Facebook now requires political advertising to be marked as such and has introduced user verification requirements for purchasing entities.142 In 2018, Facebook launched its first Ad Archive in the US, followed by a pan-EU Ad Library in 2019, which includes political and issue-based advertising. Pages now provide information on the ads they are running, their name changes and, for pages with greater reach, the location of the administrators. Nevertheless, the system still has blind spots. Apart from the name of the group sponsoring the ad, the actual identities of the individuals or the source of funds are difficult to track. Indicative examples of the problem include the pro-Brexit campaign by the once elusive Mainstream Network143 (now revealed to be run by CTF affiliates) in the UK, or Vice impersonating 100 US senators to buy ads before the midterms.144 Ahead of the EU elections Facebook decided to ban cross-border advertising by authorized advertisers, a decision that created problems for pan-EU political groups and led the Secretaries-General of the European Commission, the European Parliament and the Council of the European Union to protest that it would have ‘huge political and institutional consequences’.
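As flagged above, the following is a purely illustrative sketch of demotion-based re-ranking, not Facebook’s actual News Feed implementation: flagged content keeps circulating, but its ranking score is multiplied down, which is the general mechanism behind the reported drop in views. All weights and field names are assumptions chosen for illustration.

```python
from dataclasses import dataclass

# Purely illustrative sketch of demotion-based re-ranking: not Facebook's
# actual News Feed code, just the general mechanism described above, in which
# fact-checker flags reduce a story's ranking score rather than removing it.
# All weights and field names are made-up assumptions.
@dataclass
class Story:
    story_id: str
    base_score: float             # engagement-driven relevance score
    flagged_false: bool = False   # flagged as potentially false by fact-checkers
    clickbait: bool = False       # detected by a clickbait/'ad farm' classifier
    false_headline: bool = False  # misleading headline even if the copy is accurate

def ranking_score(story: Story) -> float:
    score = story.base_score
    if story.flagged_false:
        score *= 0.2   # heavy demotion, consistent with a reported ~80% drop in views
    if story.clickbait:
        score *= 0.5
    if story.false_headline:
        score *= 0.7
    return score

feed = [
    Story("a", base_score=10.0),
    Story("b", base_score=12.0, flagged_false=True),
    Story("c", base_score=8.0, clickbait=True),
]
for story in sorted(feed, key=ranking_score, reverse=True):
    print(story.story_id, round(ranking_score(story), 2))
```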
Instagram
Facebook’s Ad Library includes ads posted on Instagram. Users can now also flag fake content and AI tools are employed to spot misleading content.
WhatsApp
- Funding research: The subsidiary has committed funds through its Misinformation and Social Science Research Awards to researchers around the globe investigating issues of news credibility, variables influencing sharing habits, digital literacy, disinformation deployed in electoral contexts, virality and more.
- Policy changes: In a first effort to contain the disinformation problem, parent company Facebook decided to reduce WhatsApp’s message-forwarding limit from 20 chats to five, although the effectiveness of this measure is up for debate according to a study into disinformation in Nigeria funded by WhatsApp itself.145
Issues for consideration: Growing criticism from fact-checking teams that have collaborated with Facebook is troubling.146 Even the announcement of fact-checking initiatives for Instagram has been received with scepticism.147 Ad Library technical issues, a series of revelations that confound policymakers, such as the ‘business manager’ account option, and a persistently evasive approach to meaningful public scrutiny have created a substantial trust deficit. Researchers have also pointed out that the company has been selective with the data it offers for research. Oversight has to stop being defined predominantly on Facebook’s terms, especially since the company’s actions to address issues relevant to disinformation, such as user privacy, tend to come too little, too late. A case in point is the promised and heavily promoted ‘Clear History’ tool, which was eventually rebranded as ‘Off-Facebook Activity’. Instead of clearing anything meaningful, the tool disconnects the browsing data of third parties148 from personal account profiles. Even if users decide to opt out, their browsing history will remain on Facebook servers. There is also no mention of Facebook offering an opt-out of targeting based on data it collects itself from its own platform. Additionally, since anonymized browsing histories can become part of aggregate data, they can potentially still inform and refine the audience segmentation used to target ads anyway.
Mozilla
The company is engaged in civil society debates, signed up to the CoP, and announced a new anti-tracking policy that covers browser fingerprinting149 and supercookies. In May, Mozilla also launched the Firefox EU Elections Toolkit to help EU voters recognize and avoid online manipulation.150
Pinterest
In view of the health risks, the social media company took the radical step of blocking anti-vaccination content, setting an example for other digital intermediaries.
Twitter
- Removal of fake and violating accounts: Twitter continues to investigate bots and fake accounts, and to suspend millions of them.151 At the time of writing, the company investigates between 8.5 million and 10 million accounts on a weekly basis. Another critical policy the company has implemented is the removal of accounts found to distribute hacked materials.
- Political advertising: Twitter has also produced a publicly accessible archive of potential foreign information operations for researchers. It now allows users to report fake accounts and has also launched its Ads Transparency Center, which has been expanded to EU political ads too. In preparation for the 2018 midterms, Twitter set up a cross-functional analytical team tasked with detecting and responding to ‘inauthentic, election-related coordinated activity’.152 Mirroring Facebook, it now requires verification for the purchase of political ads. In line with the CoP, Twitter provided detailed insights.153
- Funding research: Twitter provides funding for research, including for the Atlantic Council’s Digital Forensic Research Lab, and has also assisted European researchers by releasing information pertaining to information operations.154 The company also provides funding to the EU DisinfoLab.
Issues for consideration: Despite Twitter’s offer of data for analysis, researchers have been asking for more clarity in terms of how data sets are selected in the first place, or for information on which users viewed disinformation campaigns.155 The targeting information users have access to – through the Why Am I Seeing this Ad? feature – needs to be more granular in order to be informative. Following the purchase of disinformation ads by China’s state-backed media outlet Xinhua News to attack Hong Kong protesters,156 Twitter announced a ban on state-controlled media outlets purchasing ads. Nevertheless, the incident highlighted the fact that protecting citizens from disinformation is incumbent upon government authorities, as technology companies lack the expertise, foresight or willingness to identify the electoral or national security implications of gaps in their policies.
3.4 Global efforts and best practices
Australia: In terms of research, the final report of the Australian Competition & Consumer Commission’s Digital Platforms Inquiry,157 with its broad scope covering digital intermediaries’ market power, digital advertising, journalism, consumer welfare and new technologies, constitutes a comprehensive analysis of the elephant in the disinformation room: Big Tech’s business models. Australia has also launched its own version of FARA, the Foreign Influence Transparency Scheme, under which companies with foreign principals are required to sign up to a publicly available Transparency Register.158
The Indian Centre for Internet & Society: The non-profit has offices in Bengaluru and Delhi and conducts interdisciplinary research on digital technologies and their impact on societies, as well as on the different facets of disinformation, such as data governance issues, political ads, user perceptions in the digital realm and more.
The Partnership on AI: The international consortium159 is looking into disinformation as part of its AI and Media Projects. More specifically, it is investigating how to leverage AI to create disinformation alert and detection coordination mechanisms, authentication layers for branded news and a clear disinformation taxonomy, among other things.