Human rights provide a well-established set of rules, norms and approaches to complex governance issues like digital platform regulation. However, they are not a simple, catch-all solution. Rather, they should be looked to for guidance and policy innovations.
Tomorrow’s platform regulation may be led by efforts in Beijing and Brussels, or by decisions made in London or Washington. But hope exists for a more collaborative international approach. Internet governance has, for the past three decades, been characterized by unique multi-stakeholder arrangements: bodies responsible for coordinating internet naming and addressing, such as the Internet Corporation for Assigned Names and Numbers (ICANN); processes including the Paris Call, the Global Digital Compact and the Global Network Initiative; and multilateral, UN-led convening forums like the Internet Governance Forum (IGF). But in the context of platform regulation, similar processes have not yet been practically applied at a global level.
Multi-country commitments to platform regulation have been made in multiple forums. For instance, in response to the livestreamed terrorist attacks in Christchurch, New Zealand, in March 2019, the Christchurch Call, led by France and New Zealand, brings together over 100 governments and organizations in calling on platforms to take steps to eliminate terrorism-related content. UNESCO is consulting on guidelines for regulating digital platforms. The challenge for these global efforts is to agree on a unified global framework – existing or new – through which to approach the questions raised by platform regulation.
Human rights should underpin at least part of this framework. Yet, the adoption of values-led or human rights-led thinking in platform regulation remains somewhat complicated, and is actively avoided by some states. The data collected by researchers for this paper attest to two realities:
- Human rights have been largely overlooked in attempts to define principles around the governance of digital platforms, with the possible exception of the EU’s DSA and early UN efforts; and
- Translating human rights principles into effective platform regulation is in itself a challenge.
Why the concept of human rights remains relevant
Human rights embody the idea that individuals must be protected against certain abuses perpetrated by their own governments and states, as well as by individuals and private entities. The concept of human rights is recognized in international, regional and domestic legal frameworks. Their exact definition and scope of protection vary in each. But IHRL is quasi-universal, flexible and already binding on most states. IHRL therefore provides a pre-existing and widely accepted set of principles, rules and definitions that could be adopted as part of a global framework for online platform governance.
While human rights may not have all the answers, they provide a well-established foundation for sound governance of technology. Despite nuances in its interpretation and implementation in different jurisdictions, IHRL is regarded by many as developed and adaptive enough to address regulatory issues and gaps. It provides a clear and robust framework to mitigate risks of human rights violations by imposing binding obligations on states to respect, protect and ensure a range of fundamental rights, as well as by establishing monitoring and oversight processes and an ecosystem of safeguards and accountability mechanisms. Applying this existing framework to platforms may be a viable route to tackling harms online, while also balancing the rights of individuals with the interests of governments and corporations that hold power over digital space.
Assumptions and misconceptions
Yet human rights have not been used to their full potential. To date, platform regulations around the world have not drawn heavily on existing human rights frameworks. As examined in this paper, just one in five of the regulations in place requires platforms to carry out human rights due diligence assessments.
Online content regimes largely focus instead on other concepts, most commonly that of harm and its prevention. Discussions on platform regulation also tend to omit human rights expertise. This omission often leads to misleading assumptions and misconceptions in the tech sector, such as the belief that human rights are a concern only for governments and not for companies, for which they remain mere ethical considerations rather than legally relevant obligations.
The sidelining of human rights in approaches to digital platform governance contrasts with the rich literature discussing in detail the importance of upholding freedom of expression on digital platforms, identifying the opportunities and pitfalls of such an approach, exploring avenues for progress and engaging the private sector. It also contrasts with the human rights emphasis of regional and international discussions and decisions, including in the context of the Council of Europe, UNESCO and the UN’s Human Rights Council.
This paradox does not only attest to the attitudes of governments and their diverging approaches to digital platform governance. More generally, it reflects the gap between the interests expressed at the international level, and in certain regional contexts, and political will at the national level.
Where human rights are foregrounded in digital platform regulation, primacy tends to be given to one right over others. Freedom of expression is routinely presented as the main (and, sometimes, only) focus by platforms in their policies and by governments in regulatory tools. This is most notable in the context of online content moderation – or even, in certain cases, arguably used as a ‘laissez-passer’, a pretext to quieten dissenting voices. Meanwhile, the rights to privacy, freedom of thought and access to information and the media – all closely linked to the freedom of expression – and other rights are rarely given equal status. This prominence of one right over others not only contradicts the idea that human rights form a single, indivisible body of rights. It also raises questions over the potential for human rights to be placed into a hierarchy – or for individual rights to be graded against one another.
The debate over universality versus prioritization is not a new one in the context of human rights. But, in digital platform regulation, the priority given to freedom of expression may undermine attempts to follow a holistic rights-based approach. Decision-makers must exercise caution in the way they manage conflicting interpretations of human rights principles and obligations, and more broadly in approaching human rights through political prioritization.
In practice
A human rights-based approach must not be considered as the complete remedy to abusive and exploitative platforms. Rather, it should be applied alongside other relevant legal regimes (e.g. criminal law for relevant offences), as well as standards, regulations and other ‘soft’ law tools. This is of particular importance in light of the largely private ownership of digital platforms, as IHRL remains, essentially, binding on states only. Nevertheless, this is not to discount the responsibilities that the wider human rights framework may confer on companies and other non-state actors. For example, in certain domestic contexts, corporations have due diligence duties to identify, mitigate and remedy human rights risks, and may be held liable for failing to do so. This would be the case, for example, under the EU directive on corporate sustainability due diligence proposed by the European Commission in 2022.
Despite these limiting factors, a human rights-based approach could still provide a strong underpinning for platform governance. Human rights and IHRL provide states, platforms and multi-stakeholder coalitions with an appropriate language for online content governance. They also set out how human rights may be respected, protected and ensured in different contexts – online and offline. If the Brussels effect is indeed felt around the world, human rights may well become the framework of choice. But efforts by international bodies to strengthen the adoption of such approaches through multi-stakeholder dialogue must continue, if the trend towards multiple competing and conflicting national regulatory approaches is to change.
As noted earlier in this paper, the right to seek, receive and impart information and ideas of all kinds – i.e. the freedoms of expression and information – is of particular importance in the context of digital platforms. Online content is the expression of ideas or information, which may be sought by billions of internet users worldwide. But other human rights also apply online and deserve equal, if not greater, protection in different contexts. For example, states must, and platforms should, protect the lives and health of individuals from the threats to public health posed by disinformation of the kind seen during the COVID-19 pandemic. IHRL, including international and regional human rights treaties, already provides the tools to navigate these conflicts between rights.
For instance, Article 19(3) of the International Covenant on Civil and Political Rights (ICCPR) provides that the exercise of the rights to freedom of expression and information carries with it special duties and responsibilities. This means that those rights may be limited by law to safeguard a legitimate aim, and insofar as necessary and proportionate in the circumstances: the so-called ‘tripartite test’ of legality, legitimacy, and necessity and proportionality. Article 19(3) ICCPR identifies as legitimate aims that may justify limitations to freedom of expression and information: the respect for the rights or reputations of others; the protection of national security; or the protection of public order, public health or morals. Similar provisions are found in Article 10(2) of the European Convention on Human Rights and Article 13(2) of the American Convention on Human Rights.
In the context of digital platforms, including private messaging apps, search engines and social media, these provisions have three significant implications. First, states must enact legislation or regulation and companies should adopt policies that define: i) what kinds of online content may be limited; ii) for what purpose they may be limited; and iii) how they may be limited. Legislation and company policies should be sufficiently clear, accessible and transparent. Second, necessity and proportionality require limitations on speech to be balanced against the importance of the rights or interests at stake (e.g. public health, morals and the rights or reputations of others). A non-binary approach to online content governance would help ensure that these provisions are upheld. Content that may be restricted should not simply be either taken down or left in place. Other measures should be available, such as labelling or deprioritizing content, or directing users to other sources of information. Finally, states must ensure that platforms put in place user redress or review mechanisms as a safeguard against wrongful content moderation decisions. Errors are unavoidable in an environment where machine-learning algorithms sift through billions of posts every day. But decisions and the processes behind them must be open to challenge.
Neither consensus-building nor the promotion of a human rights-based approach is a straightforward, complete solution. By uncovering the patterns and commonalities of current platform regulatory regimes, this paper has shown how regional and international agreement can become more attainable. It remains unclear whether agreement will be based on countries being allowed a measure of difference in their approaches or on broader alignment. Despite being overlooked by many in the tech community, human rights will always be a crucial part of this dialogue as a well-established international rulebook for greater transparency, accountability and remedy. International cooperation and alliances are achievable, and human rights must remain universal, even in the digital world.