The literature on gender and cybersecurity has identified three kinds of cyber harm that have gendered dimensions, considered in turn below. These cyber harms are separate from those stemming from lack of access to the internet or digital technologies (the ‘gender digital divide’), which are themselves exacerbated by internet shutdowns and other deliberate impediments to online inclusion. They are also separate from the issue of unequal gender participation in cybersecurity governance and technical fields, which is both a distinct policy problem and an important factor facilitating the harms discussed here.
3.1 Hate speech
The first kind of gendered cyber harm involves gendered hate speech, online abuse and disinformation. These are overlapping but distinct phenomena, all related to online content. Gendered hate speech is offensive content that attacks or targets people based on their gender identity, typically through pejorative or discriminatory elements; for example, misogynistic hate speech expresses a hatred of women. Gendered online abuse is similarly targeted at individuals or groups based on their gender identity, but does not necessarily involve the explicit content elements of hate speech. Gendered disinformation is the deliberate spread of false information regarding gender issues, and can be part of hate speech and online abuse. The internet makes the dissemination of such content easier, wider-reaching and more harmful, with new digital tools such as generative AI only extending this trend.
While gendered harassment and abuse online affect many people, Brown and Esterhuysen note that human rights defenders, journalists and those in vulnerable or marginalized situations face increased risks and suffer greater consequences from gender-based threats and abuse. Haciyakupoglu and Wong underline the dependence of gendered online abuse on the business models and algorithmic design of large social media platforms, including metrics of engagement and virality that favour offensive or polarizing content. In contrast, content moderation is often inadequately resourced and restricted by geography or language, meaning that the incentive structure of the commercial environment is weighted against protection and care for those targeted. The problem extends beyond major platforms, with gendered harassment and abuse also widespread in multiplayer online games, web forums and private or semi-public messaging apps – where appropriate policies are even harder to introduce and monitor.
Gendered hate speech has implications for international politics. In her then capacity as UN Special Rapporteur on violence against women, Dubravka Šimonović highlighted the links between online and offline violence against women in politics, emphasizing how ‘violence against women in politics is often normalized and tolerated, especially in contexts where patriarchy is deeply embedded’. Di Meco notes deliberate attempts, based on misogynistic tropes and stereotypes around gender roles, to discourage women from seeking political careers and derail public support for women politicians. Judson et al. investigate the specific problem of ‘state-aligned’ gendered disinformation: disinformation created by, for, or in support of state actors for political purposes. Such studies highlight a range of techniques used to discredit women in political debate, as well as the intersectional dimensions of such abuse. One UK study found that Black and Asian women members of parliament were disproportionately likely to be subjected to online abuse. Research events at Chatham House have demonstrated the co-option of gender narratives in nationalist disinformation campaigns in Georgia and Ukraine.
It should be noted that women, girls and LGBTIQ+ people are not the only victims of gendered hate speech. Extremist content is a form of hate speech, relying on stereotypes to create, generalize and spread harmful messages, and often, but not always, inciting online and offline violence or hate crimes. Those radicalized by extremist content might themselves be victims of hate speech that relies on such (gendered) stereotypes. In closed or coded online communities, extremist content contributes to the radicalization of people of all genders, but especially of men and boys. Misogynistic stereotypes permeate many extremist ideologies. In some cases, gendered hate speech is instrumental to a broader process of radicalization; in others, misogyny may be the starting point around which an extremist community forms. In addition to gendered hate speech online, the wider online and offline harms resulting from radicalization range from acts of doxxing (searching for and revealing another person’s private or identifying information, such as their real name or place of residence) to recruitment to terrorist organizations or ‘lone wolf’ acts of violence.
Overall, gendered hate speech, online abuse and disinformation are cybersecurity issues because they are harmful to an individual’s sense of security and belonging in cyberspace. While gender is far from the only lens through which to analyse hate speech, it is a highly visible aspect of an individual’s identity. This means that, for both targets and perpetrators, gender is a focal point for victimization or abuse at both individual and group levels. Furthermore, the issue of gendered hate speech reveals a wider tension in personal decision-making around, and platform moderation of, online content: how to reconcile hypervisibility in terms of profile (the increased scrutiny and exposure experienced by certain gender identities) and invisibility in terms of solutions (as content moderation fails to address the harms felt by specific communities). For some groups of people, such as women in politics, security through obscurity is not an option because their work is, by its nature, highly visible. Finally, gendered hate speech and online abuse clearly reverberate offline, with real-world consequences.
3.2 Data breach
The second kind of gendered cyber harm involves privacy violations due to data misuse, leakage or exploitation by malicious actors. Online privacy violations and the misuse of digital data by those without a legal or ethical right to access it have gendered impacts in two key ways. The first is as part of ‘technology-facilitated violence and abuse’ or ‘digital coercive control’: the incorporation of digital devices and data into strategies and techniques of intimate partner violence, online and offline. The clearest example is ‘stalkerware’ – i.e. spyware that remotely sends almost all of a device’s data to an abuser. Crucially, technology-facilitated abuse is not limited to mobile devices and computers. Internet of things (IoT) devices such as smart speakers or Bluetooth and Wi-Fi trackers have also been abused for purposes of coercive control.
The second is through the rise of ‘femtech’ – i.e. personal digital devices or apps designed for women. The rapid growth of this sector means that information and data on women’s health – including menstrual cycles, pregnancy, birth control and abortion – are increasingly exposed to cybersecurity vulnerabilities and risks, ranging from commercial de-anonymization to the publication and exploitation of leaked data. Recent studies highlight the gulf between the collection and use of data by femtech apps and devices, and users’ understanding of, and sense of control over, that data. Harms stemming from data leakage range from the psychological impact of inappropriate advertising – for example, deepening an individual’s sense of violation and trauma after miscarriage – to the physical and legal implications for people seeking abortions (discussed later in this paper). As these devices and apps are gendered by default, the harms that result are also inherently gendered. In this way, femtech privacy issues become part of a broader reduction and stigmatization of women’s reproductive rights in many places worldwide.
Although conceptions of privacy are diverse and context-dependent, and change over time, gendered differences also appear in studies of attitudes to the misuse or exploitation of personal digital data. Oomen and Leenes concluded, in 2008, that ‘gender appears not to influence privacy risk perception’. In contrast, more than a decade later, Coopamootoo et al. identified a ‘privacy gender gap’ whereby ‘women feel more negatively about [online] tracking, yet are less likely to take protective actions, compared to men’. Such a perceived lack of security consciousness is, according to Wei et al., a prevalent gender stereotype. Respondents to surveys conducted by those authors not only viewed women as more gullible, emotional and likely to share sensitive information on social media (thereby presenting a higher cybersecurity risk), but also viewed them as being less interested in and capable of adopting technical cybersecurity measures. Wei et al. trace such stereotypes to deeper forms of sexist essentialism, including the unfounded association of biological sex differences with ICT security behaviours.
Such stereotypes inform the assessment made by Slupska et al. that cybersecurity concerns of women – and vulnerable gender identities in general – are more likely to be minimized or overlooked. This is despite the fact that, in many cases, women face greater security burdens and are more likely to be affected by cybersecurity advertising that is misleading about the dangers they face. More specifically, gendered victim-blaming often arises in response to the sharing of explicit images, the choice of weak passwords or the clicking of phishing links. The non-consensual dissemination of intimate images, in particular, is a growing form of gendered cyber harm, attracting attention in international cybercrime negotiations.
To summarize, a data breach is a cybersecurity issue because it is a privacy violation and involves (and can facilitate) unauthorized access to personal information that can then be ‘weaponized’ to cause harm online and offline. Data breaches through technology-facilitated abuse or femtech are gendered cybersecurity issues for two chief reasons. First, the impact of data breaches is shaped by gendered stereotypes about perceptions of, and attitudes towards, privacy and data protection. Second, unauthorized access to and misuse of digital data can be used to cause tangible harm and abuse to individuals, and there is a clear gendered element to this tactic.
3.3 State overreach
The third kind of gendered cyber harm stems from states’ use of policy and legislation to advance and enforce certain state-aligned gender norms online. For example, cybercrime laws may include clauses criminalizing online content that contravenes public decency or morals – usually defined elsewhere in states’ penal codes or criminal law – and such clauses often build on unequal standards of behaviour for people according to their gender. Similarly, cybersecurity strategies may leave undefined what content constitutes a national security threat, with state law enforcement or intelligence agencies then interpreting provisions through their own legal and institutional prisms. Where such agencies have histories of discrimination and repression against certain gender identities, sexualities or sexual orientations (online or offline), discriminatory practices are likely to manifest in law enforcement and other national practices in the digital space – often, but not always, in the name of cybersecurity and protecting against cybercrime.
While both hate speech and data breaches are cyber threats in a broadly conventional cybersecurity sense, where a malicious actor seeks to cause harm via technological means, the gendered harms resulting from state cybersecurity and cybercrime laws are less direct. In this case, the harm occurs not because of the cyber threat itself, but as part of a state response to counter cyber threats. This can be termed ‘overreach’, as the state response exceeds or omits what is strictly necessary to counter cyber threats while respecting gender and other human rights.
The imposition of rigid and exclusionary understandings of gender through cyber policy and legislation occurs as part of a broader phenomenon of cybersecurity measures facilitating authoritarian practices through control, surveillance and monitoring of digital public and private communication and content. There is an extensive body of research documenting the human rights implications of cybercrime laws, cybersecurity strategies and other similar measures that restrict fundamental freedoms (such as freedom of expression) online by authorizing violations of those freedoms and imposing censorship. In this way, the gendered harms that result from state cybersecurity measures are one – but far from the only – consequence of cybersecurity that is state-centric rather than human-centred.
Such actions occur within broader state efforts to politicize and securitize gender, both online and offline. States have long created and supported narratives of gender that are closely intertwined with ideas of national identity and security. Such narratives are typically most explicit in wartime, although they persist outside of conflict. For example, states frequently mobilize concepts of hegemonic masculinity to aid military recruitment, as well as characterizing adversaries as a threat to an idealized femininity – national or otherwise. Consequently, state law and regulation have historically enabled political bodies and systems to exert control over gender identities and expressions under the pretext of protecting against threats to national security (sometimes framing, in a circular logic, the destabilization of prevalent gender norms as itself a national security threat).
There are three relevant implications of state overreach as regards cybersecurity and gender. First, understanding and acknowledgment of gendered cyber harms depends on the extent to which states leverage gender identities and gendered norms for purposes of national security and identity. Second, state responses to gendered cybersecurity vulnerabilities and risks will be prioritized or deprioritized in line with national gendered ideals and norms. Third, access to tools, systems and measures that mitigate such cybersecurity vulnerabilities and risks will depend on how a state (and other influential actors or communities in a given state context) supports or encourages specific understandings of gender. Overall, while the first two kinds of gendered cyber harm foreground the individual identity aspects of gender, state overreach foregrounds the role of gender as both social structure and system of power.