Digital platform regulation is in the spotlight again. Amid questions around competition, allegations of shortcomings in stewardship and lawsuits alleging negligence, calls to regulate digital platforms by governments, the media and civil society are louder than ever. But this is a discordant chorus: its volume should not mask the wide divergence in approaches to platform regulation playing out round the world.
Allegations have been levelled at platforms for their role in amplifying hateful content during the violence in Myanmar. More recently, a lawsuit has been filed in Kenya against Meta on the basis that its recommendations systems magnified hateful and violent posts in the context of the Ethiopian civil war.
Elon Musk’s Twitter takeover has served as a further catalyst for wider discussions, notably in the press, on regulatory frameworks surrounding digital platforms. For example, the suspension of journalists’ accounts and allegedly anticompetitive practices mandated by the billionaire have drawn criticism, raising questions about Twitter’s compliance with the EU’s Digital Services Act, which becomes fully applicable in 2024.
But the challenge of digital platform regulation cuts both ways. Not all regulation is good regulation: in the absence of strong democratic institutions and appropriate checks and balances, regulatory measures are likely to have a chilling effect on fundamental rights such as freedom of expression.
Concerns arise when governments leverage regulatory tools to reaffirm control over platforms without due process. In July 2022, the Economic Community of West African States (ECOWAS) Court ruled that Nigeria’s ban on Twitter was a violation of freedom of expression, access to information and press freedom.
In some parts of the world, the rise of digital platforms has renegotiated long-standing settlements on the right to freedom of expression, which is deeply rooted in the International Covenant on Civil and Political Rights (ICCPR) and regional human rights treaties.
Striking the right balance
These trends and developments attest to three realities: First, the regulatory landscape surrounding digital platforms is a crowded – and increasingly fragmented – space. Second, as opposed to ‘classic’ power relationships between the state and individuals, digital platforms have grown into major and influential players in setting the boundaries and norms of freedom of expression and other rights. The social contract is now a negotiation between three parties.
Third, the question of digital platform regulation carries high stakes for the upholding of democratic values and human rights, as platforms increasingly serve as the ‘digital public square’. It may even be argued that, in some cases, digital platforms further the cause of timely and participatory democratic civic engagement.
Rapid and unfettered growth, combined with a relative lack of licensing requirements (unlike other forms of media such as radio and television broadcasting, digital platforms are generally not licensed), has propelled platforms into a critical societal role. In the process, their terms of service have reshaped norms around freedom of expression.
It is precisely because of the power they hold that digital platforms are under such scrutiny: for most governments, the question is no longer whether to regulate, but how.
States and other governing bodies have approached the question of digital platform regulation differently, leading to a fragmented regulatory landscape across the globe. Even within geographic or linguistic regions, legislation is often disparate and disconnected.
In Europe, the European Union’s Digital Services Act requires clear terms of service and redress systems for users; the publication of transparency reports; and the appointment by each member state of an independent national regulator to act as Digital Services Coordinator.
But neighbouring countries may choose not to align themselves with this approach. Belarus, Russia and Turkey have all put forward further restrictions on politicized categories of content (e.g., content deemed offensive to public morality) and potential sanctions against individual platform employees or directors.
Globally recognized legal and normative frameworks, such as international human rights law, have not yet been translated into digital spaces. Industry attempts to set rules – Facebook’s Oversight Board, for instance – have been welcomed but do not appear to have deterred governments from taking matters further.
Whether regulatory harmonization is feasible or even desirable remains debatable, however, particularly in light of countries’ respective priorities, norms and legislative landscapes. It remains to be seen how platforms will manage regulatory regimes as divergent as those of Ireland and Indonesia, or manage the expectations of lawmakers in the US, which most platforms call home.
Risk mitigation and solutions
There remains little international agreement on a model to regulate digital platforms and it is unrealistic to hope for a ‘one-size-fits-all’ approach in light of diverse local democratic contexts. Local laws and customs, sociopolitical realities on the ground, the standing of human rights approaches, as well as the power relationships between the state, the people, and other private entities, all differ from country to country, and all inform national approaches to platforms.