This chapter advances a definition of cybersecurity rooted in actions, practices, designs and policies, in addition to purely technical cybersecurity. It assesses the gendered cyber harms facilitated by visual media technologies and considers how a user’s context (culture or location) may additionally shape these harms. Through six semi-structured interviews with purposively sampled users from Nigeria and South Africa who self-identified as queer, this case study demonstrates how large-scale technology design (across whole sectors and markets) has implications for gendered cyber harms and interacts with gendered social structures.
The two countries were chosen based on the presence (South Africa) or absence (Nigeria) of legislative protections for LGBTIQ+ people. While there are many similarities among the interviewees’ experiences, this case study acknowledges the geographical, cultural and individual differences among them. In doing so, it argues for placing people at the centre of the nexus of gender, technology and the private sector.
3.1 Anxieties around escalation, scalability and visibility
An individual’s context and identity affect their security needs, which are additionally correlated with who or what that individual trusts. This point is captured by the concept of individuals having differential levels of vulnerability and trust, which carries gendered implications. All six interviewees felt that their cybersecurity vulnerabilities were shaped by a discriminatory social structure that determines their experiences of technology. For four of the interviewees, a key feature of TikTok’s design – its reliance on extreme scalability and algorithmic escalation of content – was a clear source of concern that depleted their trust in the platform.
On TikTok, a person’s content or profile can go ‘viral’ without the creator’s informed consent, and in ways that are not transparent to the creator. This may be due to algorithmic design choices such as recommendation systems, which determine how content and profiles are scaled, prioritized or deprioritized by the platform. The metrics used differ from platform to platform, but may include content length, caption, location and engagement. One interviewee from Nigeria, referred to as ‘Seyi’, explained that the lack of transparency around TikTok’s algorithm makes them unsure about how to present themselves on the internet. This is in addition to evolving threats and anxieties generated by and in online spaces. Their non-binary, feminist identity and their positioning as a Nigerian on TikTok increase their fear of potential threats (e.g. ‘doxxing’ and harassment, across a variety of platforms). These threats can be amplified through the scaling and virality of publicly produced content in ways that are unpredictable for users.
Meanwhile, some interviewees from South Africa shared that their lack of trust in such platforms stems from witnessing identity-based threats and harms that others have experienced on TikTok, including death threats and stalking. As another interviewee, Arinze, put it: ‘Our collectivist nature means we are in each other’s business, and your content could end up on your aunt’s “For You” page.’
The false narrative of queer ‘social contagion’ is used to force queer people to live their online and offline lives in private. Content that has been scaled and amplified may therefore generate anxieties about becoming the target of privacy violations. The harm of extreme scalability and algorithmic escalation is a locational dynamic, rooted in factors – such as politics, legislation and culture – that dictate how technology users can express and perform their identity online.
The queer users interviewed in Nigeria and South Africa perceive and experience the gendered cyber harms posed by scalability and algorithmic escalation differently. ‘Seyi’ described TikTok as an extreme form of ‘the public’, which makes security difficult to navigate. On the other hand, Terrie, an interviewee from South Africa who is trans and identifies as a woman, felt she faced no imminent threat as a South African using the platform to promote her YouTube content through a new medium. However, Terrie’s experience was not shared by two other South African participants, who were more concerned about queer users receiving death threats.
The design of technologies and platforms creates an easily scalable and replicable form of threat, because design choices make individuals who are already at risk hyper-visible and vulnerable to greater and more diverse threats. Interviewees expressed concern about how threats become ‘worse’ when their content (unexpectedly) reaches thousands of people. They noted that while cyberspace poses a variety of risks, TikTok is particularly concerning because of how easily and quickly content seems to escalate ‘automatically’. The scaling of content is a purposeful algorithmic design choice, often intended to maximize user engagement and, in turn, generate profit. In this case, unintended harms emerging from non-static design choices (i.e. the regular adaptation of algorithms) are experienced differently by different users.
3.2 Navigating security, visibility and engagement
Responsibilities for ensuring security are shared among actors such as technology designers, regulators, enforcers and communities. With the right access to information and training, users can also make informed, communal choices to improve the safety and security of their own platform use. As some interviewees explained, specific engagement choices – such as choosing not to engage with or seek out harmful or abusive content – are themselves security choices, as they reduce the likelihood of exposure to such content. However, if social media users – and particularly queer users – need to take additional steps to improve their online safety, technology companies also have a responsibility to transparently provide the information and tools needed to enable this. Cybersecurity is embedded in sociotechnical dynamics and is influenced by one’s gender, sexuality, race, class, location, religion and cultural context(s). The security needs of queer users reflect these dynamics in different ways, meaning that (cyber)security is constantly co- or re-constituted in different locations and by different stakeholders. The potential for gendered cyber harms is difficult to measure, but technology companies and users can take steps both to prevent and to mitigate risks.
Improvements to platform accountability and transparency can act as a form of gender-transformative cybersecurity. Platform accountability and transparency not only place the primary responsibility for ensuring safety and security on the provider (i.e. the platform), but also make it easier for users to make informed decisions about how to navigate or use these platforms with security in mind. Those responsible for designing, developing and regulating how platforms function – including what information is shared with users, and in what detail – should consider how straightforward it is for a user to action security measures (such as reporting and ‘opt-out’ functions). Such an approach would be gender-transformative because it recognizes, as many platforms already do, that gender-based targeting resulting from the insecure use of, and engagement with, social media platforms can lead to harm.