Regulating Big Tech: Lessons from COVID-19

COVID-19 demonstrates the potential power of tech companies as a force for good, but also that they have largely devised their own rules in vacuums of both standards and accountability. A new digital deal is both essential and inevitable.

Expert comment | Updated 22 October 2020

Kate Jones

Associate Fellow, International Law Programme

A man uses a Swedish version of the COVID-19 Symptom Tracker app on his smartphone in Stockholm, Sweden, on 29 April 2020. Photo: Getty Images.

Misinformation and Disinformation

The coronavirus pandemic, accompanied by what the World Health Organization has labelled an 'infodemic', has demonstrated the power of false information, whether created or shared without intent to cause harm (misinformation) or knowingly generated to cause harm (disinformation).

The peddling of false claims online and on television has precipitated arson attacks on 5G mobile phone masts across Europe: over 50 in the UK, at least 16 in the Netherlands, and further attacks in Belgium, Cyprus, Italy and Ireland. Mere discussion of a vaccine is stoking the anti-vaccination movement. Disinformation is rife, and on a vast scale: Facebook alone has placed warning labels on around 50 million pieces of content, and as of mid-April, COVID-19 misinformation on Facebook had been viewed an estimated 117 million times.

At a time when two billion people are at home and largely reliant on the internet and social media for news and information, the platforms have stepped up to the responsibility of curating the content they host and checking it for false claims. These steps, admittedly imperfect, centre on removing misinformation or attaching warning labels to it, and on actively promoting reliable information. It is notable how closely tech companies have been working with public health authorities and governments in directing their crisis efforts.

This episode debunks the myth that truth carries the most currency in the marketplace of ideas, such that disinformation need not be controlled. It demonstrates both the formidable power of false information as a weapon in public discourse, and the strength of the tech companies’ armoury against it. The potential of both weapon and armoury cries out for a skeleton framework of standards, reflecting the values of human rights and democracy, to prevent either being wielded in ways antithetical to those values.

Regarding the pandemic, the bones of that skeleton are clear and need no debate: the promotion of medically correct information and the quashing of information contradicting it. But in other fields the bright lines are not so clear. For example, in the political sphere there is unresolved tension between the principle that political comment should be free and the pernicious risks to our democracies created by misinformation and disinformation.

It is neither fair nor appropriate to entrust the tech companies with building the skeleton framework of standards governing what content is allowed online. While some of the larger companies are beginning to build such skeletons and to develop the tools to do so, they remain ill-equipped: their motivations vary and legitimately include commercial interest, and, unlike governments, they have neither the means to weigh the public interest fully nor democratic accountability for doing so.

To date governments have shied away from setting that framework. But just as a shared framework has helped in tackling the COVID-19 information crisis, so governments, in collaboration with the industry and civil society, now need to build that broader skeleton framework and to ensure that it is deeply embedded into how tech companies work.

There is certainly a risk that governments abuse their power: that they use it to curtail speech in support of their own agendas. The risk of abuse means not that governments should leave regulation to commercial companies, but that it is vital they set a framework by clear reference to long-agreed international standards of human rights and democracy, in consultation with tech companies and civil society.

Such frameworks must strike an appropriate, measured balance between free speech on the one hand, and other human rights and the avoidance of harm on the other. A framework broad enough to accommodate technological developments will not remove the need for hard decisions in the future, but it will shape how they are made, just as human rights shape other functions of a public nature.

Accountability is also important, as with any public service. The COVID-19 pandemic has demonstrated that the tech companies’ responses to material posted can affect the very fabric of society: the health of its populations, and the levels of violence. If a platform were to ‘go rogue’ – for example, if it were to fall into the ownership of an entity wishing to destabilize a government – it could cause massive damage.

Privacy issues

At the core of the debate on privacy and COVID-19 tracing apps is whether their purpose is only to inform individuals of risks they may face, or additionally to ‘centralize’ data on the spread of COVID-19 so that governments may understand and tackle the extent of exposure in the community. The need to protect privacy, and the measures required to do so, have been carefully debated.

On the question of purpose, there is a difference of view between Apple and Google on the one hand and some governments, including the UK and France, on the other. The Apple/Google Exposure Notifications System enables public health authorities to develop their own contact-tracing apps that will neither identify users nor gather location data, and that cannot be used for targeted advertising.

The data remains ‘decentralized’, i.e. it passes between phones rather than being collated at a central hub. As of 20 May, this API had been requested by 22 governments, but it is insufficient for those governments that see one purpose of the app as being to collect centralized data. Apple will not currently permit its technology to enable centralized data collection.
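To illustrate the distinction, the following is a minimal, deliberately simplified sketch of how a decentralized design keeps exposure matching on the device. It is not the actual Apple/Google API; the function names, key derivation and interval scheme are illustrative assumptions only.

```python
# Conceptual sketch of decentralized exposure notification.
# NOT the real Apple/Google API: key derivation, intervals and
# function names are hypothetical, chosen for illustration.
import hashlib
import secrets


def daily_key() -> bytes:
    """Each phone generates a random temporary key per day; it never
    leaves the device unless the user reports a positive diagnosis."""
    return secrets.token_bytes(16)


def rolling_id(key: bytes, interval: int) -> bytes:
    """Short-lived identifier derived from the key and broadcast over
    Bluetooth; observers cannot link it back to a person."""
    return hashlib.sha256(key + interval.to_bytes(4, "big")).digest()[:16]


# Phone A broadcasts rolling IDs; phone B simply records what it hears.
key_a = daily_key()
heard_by_b = {rolling_id(key_a, t) for t in range(96)}  # one day of slots

# If A tests positive, only A's daily key is published to a shared server.
published_keys = [key_a]

# B re-derives IDs locally and checks for a match: no central authority
# ever learns who met whom.
exposed = any(
    rolling_id(k, t) in heard_by_b
    for k in published_keys
    for t in range(96)
)
print("Exposure detected on device:", exposed)
```

In a centralized design, by contrast, phones would upload the identifiers they had heard to a government-run server, which would perform the matching itself and so accumulate data on who had been near whom – precisely the capability some governments want and Apple and Google decline to enable.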

This situation turns on its head the relationship between regulator and regulated. Rather than governments setting privacy rules which tech companies must follow, the tech companies are setting privacy limits and giving governments no choice but to accept.

The reason for this inversion may lie in part in the opacity that cloaks privacy online. Even governments are not well placed to understand fully how data is held and protected by online companies, and therefore they cannot easily establish what rules are and are not needed to protect the right to privacy.

Collaboration should be deepened so that governments understand the workings of the companies and can develop fundamental privacy standards, by reference to human rights law, that will endure over time and command the respect of both companies and society. Such collaboration should be accompanied by far greater transparency for individuals and scrutiny possibilities for civil society. Europe’s General Data Protection Regulation was an important step but is already insufficient.

Conclusions

There are two emerging sets of governmental approaches to the role of tech companies in our society: largely Western ones, founded in human rights and democracy; and more authoritarian models, centred on government restrictions on speech. We must ensure that Western models are developed quickly enough to become the world standard and to lead the development of tech companies and their place in society.

Western models need proactively to build a skeleton framework of standards, not passively to allow the market to self-regulate. Western governments can no longer avoid crucial issues of expression and privacy by declining to regulate or ducking contemporary challenges.

From the COVID-19 crisis it is apparent that tech companies can play a key role in protecting public goods, that it is both legitimate and necessary to require them to do so, and that many companies would welcome a normative framework to guide their actions.

As the European Commission prepares the draft Digital Services Act and the British Government the draft Online Harms Bill, they must not shy away from constructing a skeleton framework of standards grounded in human rights and democracy, collaborating closely with tech companies and civil society to glean the best ways of doing so.

This is governments’ responsibility as custodians of the public interest; it is not a task that can be left to corporate inertia. Most importantly, they should instil those standards before commercial or authoritarian state interests step in to fill the void.