This paper examines how international human rights law should guide responses to the use of digital technology in political debate.
Summary
- Online political campaigning techniques are distorting our democratic political processes. These techniques include creating disinformation and divisive content; exploiting digital platforms’ algorithms, and using bots, cyborgs and fake accounts, to distribute this content; maximizing influence by harnessing emotional responses such as anger and disgust; and micro-targeting on the basis of collated personal data and sophisticated psychological profiling techniques. Some state authorities distort political debate by restricting, filtering, shutting down or censoring online networks.
- Such techniques have outpaced regulatory initiatives, and, save in egregious cases such as network shutdowns, there is no international consensus on how they should be tackled. Digital platforms, driven by their commercial impetus to keep users on them for as long as possible and to attract advertisers, may provide an environment conducive to manipulative techniques.
- International human rights law, with its careful calibrations designed to protect individuals from abuses of power by those in authority, provides a normative framework that should underpin responses to online disinformation and distortion of political debate. Contrary to the popular view, it does not entail that there should be no control of the online environment; rather, controls should balance the interests at stake appropriately.
- The rights to freedom of thought and opinion are critical to delimiting the appropriate boundary between legitimate influence and illegitimate manipulation. When digital platforms exploit decision-making biases by prioritizing bad news and divisive, emotion-arousing information, they may be breaching these rights. States and digital platforms should consider structural changes to ensure that methods of online political discourse respect personal agency and prevent the use of sophisticated manipulative techniques.
- The right to privacy includes a right to choose not to divulge one’s personal information, and a right to opt out of trading in, and profiling on the basis of, one’s personal data. Current practices of collecting, trading and using extensive personal data to ‘micro-target’ voters without their knowledge are not consistent with this right. Significant changes are needed.
- Data protection laws should be implemented robustly, and should not legitimate extensive harvesting of personal data on the basis of either notional ‘consent’ or the data handler’s commercial interests. The right to privacy should be embedded in technological design (such as by allowing the user to access all information held on them at the click of a button); and political parties should be transparent in their collection and use of personal data, and in their targeting of messages. Arguably, the value of personal data should be shared with the individuals from whom it derives.
- The rules on the boundaries of permissible content online should be set by states, and should be consistent with the right to freedom of expression. Digital platforms have had to rapidly develop policies on retention or removal of content, but those policies do not necessarily reflect the right to freedom of expression, and platforms are currently not well placed to take account of the public interest. Platforms should be far more transparent in their content regulation policies and decision-making, and should develop frameworks enabling efficient, fair, consistent internal complaints and content monitoring processes. Expertise on international human rights law should be integral to their systems.
- The right to participate in public affairs and to vote includes the right to engage in public debate. States and digital platforms should ensure an environment in which all can participate in debate online, and in which no one is discouraged by online threats or abuse from standing for election, participating or voting.