The global rise in cybercrime and cybercriminal threat actors – and the subsequent expansion of cybercrime victims to include individuals, communities, companies and sometimes even entire governments – has been coupled with an increasing global awareness of the gendered dimensions of the cybercrime life cycle, from gender-based victimization to gender-disaggregated data on the impact of cybercrime. At the same time, there are important global efforts towards ensuring cybercrime responses are gender-sensitive and some promising developments. Globally, law enforcement both redesigns and readapts technologies developed in the private sector for this purpose, such as online reporting platforms and tools for monitoring harmful content.
However, significant gaps persist in responses to cybercrime. In some cases, state and police responses to gendered cyber harms can mitigate harm. In others, they can lead to secondary harms. The choice and use of technology for responding to cybercrime are significant, as is the choice of supplier or provider.
This chapter considers case studies that interact with, but are not the direct result of, profit motives in technology design: first, responses to digital sex crimes in South Korea; and second, responses to gender-based violence in India. The chapter explores the redesign and readaptation of technologies to enable state and police responses to gendered cyber harms, focusing on the potential implications of design and deployment choices.
5.1 Investigating digital sex crimes and spycam abuse in South Korea
The proliferation of digital sex crimes in South Korea presents a case study for exploring the ways in which digital technologies that are designed to alleviate or prevent gendered cyber harms and enhance cybersecurity can mitigate and/or exacerbate such harms in new, predictable or unpredictable ways. The technologies in question are used for monitoring harmful and/or illegal content.
In recent years, key markers of internet connectivity in South Korea (i.e. mobile connections, social media users and internet connection speeds) have increased. So too have technology-facilitated abuses, most notably ‘molka’ (몰카) crimes: digital sex crimes in which hidden spycams are used to capture intimate or private images without knowledge or consent, and the non-consensually captured content is disseminated via public or private channels. South Korea has been described as the ‘global epicentre’ of spycam abuse: prosecutions for sex crimes involving illegal filming rose 11-fold between 2008 and 2017, yet survivors (for the most part, women and girls) reportedly face multiple barriers to reporting, justice and recovery. These range from social barriers – including social stigma and reputational harm – to institutional ones created and exacerbated by political leaders, police, prosecutors and legislators. These barriers are a significant cybersecurity risk, as expectations of privacy are not met, and both technical and social vulnerabilities are exploited by malicious actors. They are a gendered risk because of the disproportionate impact of molka crimes on women and girls.
South Korean legislators have enacted several measures to address, investigate and prevent spycam abuse, although activists have criticized these measures as insufficient, unsustainable or enabling surveillance without safeguards. Technology companies like Google have also been criticized for exacerbating harm through their ‘inadequate’ reporting systems. Gaps in these measures – and in enforcement, justice and survivor support – are reportedly often filled by private sector companies, private individuals and civil society groups. This chapter outlines two related state-led initiatives for monitoring digital sex crimes.
The Digital Sex Crimes monitoring unit/taskforce at the Korea Communications Standards Commission (KCSC) was established in 2019 with a mandate to monitor domestic and foreign websites for Korean-language hashtags that may reference images and videos captured without consent. With the KCSC’s regulatory power behind it, the taskforce can force South Korean sites to take down images and videos; for content hosted on overseas servers, it can only request that foreign operators remove it. In 2022, the Ministry of Justice recommended that the taskforce take further steps to delete and block illegal videos, such as preventing the dissemination of abusive videos by blocking access to them. The taskforce head has also noted that survivors contact the unit directly to deal with cases, although the KCSC reportedly also operates a separate victim support centre. However, no information is publicly available about the specific monitoring technologies used by the taskforce.
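Since the taskforce’s tooling is not publicly documented, the following is a minimal illustrative sketch – not a description of the KCSC’s actual system – of how watchlist-based hashtag monitoring might route flagged content differently depending on where it is hosted. The hashtag watchlist, field names and routing labels are hypothetical.

```python
# Minimal illustrative sketch of watchlist-based hashtag monitoring.
# The KCSC's actual tooling is not public; all identifiers here are hypothetical.
import re
from typing import Optional

# Hypothetical watchlist of Korean-language hashtags (placeholder terms).
HASHTAG_WATCHLIST = {"#몰카", "#유출영상"}

HASHTAG_PATTERN = re.compile(r"#\w+")  # \w matches Hangul in Python 3

def flag_post(text: str, host_country: str) -> Optional[dict]:
    """Flag a post whose hashtags match the watchlist, routing it according
    to whether the hosting site falls under domestic regulatory power."""
    matched = set(HASHTAG_PATTERN.findall(text)) & HASHTAG_WATCHLIST
    if not matched:
        return None
    # Domestic sites can be ordered to take content down; foreign operators
    # can only be asked to remove it.
    action = "takedown_order" if host_country == "KR" else "removal_request"
    return {
        "matched_hashtags": sorted(matched),
        "action": action,
        "requires_human_review": True,  # automation should not be the last word
    }
```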
While the KCSC’s taskforce has a nationwide mandate, local police agencies and local governments have also pioneered their own monitoring initiatives. For example, the Seoul Metropolitan Government’s (SMG) AI-based monitoring system, based at the city’s digital sex crime centre, was announced in early 2023 to replace manual monitoring. AI-enabled monitoring in this programme reportedly identifies sexually exploitative material, and the programme has a mandate for automatically deleting content and ‘blocking circulation’ at the source. Official reporting on the legislative mechanism underlying the programme’s mandate for deleting content appears vague, as does information on the development of the AI program itself, which is reported to have been developed by the Seoul Institute for Technology.
Both initiatives discussed were designed to address a specific gendered cyber harm through the use of monitoring technologies by specialized units and programmes. However, there is a danger of these solutions leading to secondary harms for survivors.
The KCSC’s lack of transparency has been criticized by civil society organizations, particularly the body’s enforcement of ‘vaguely defined standards and broad discretionary power’ in the sub-commission on internet communications, which enables commissioners to ‘make politically, socially and culturally biased judgements’ that may lack a legislative rationale. Furthermore, there appears to be little publicly available information about the selection, development and use of monitoring technologies by either the KCSC’s taskforce or the SMG’s unit. While this lack of public information does not necessarily indicate an absence of independent expert oversight of the technologies’ selection, design, implementation and auditing, it makes any such oversight impossible to verify from the outside.
The definition of ‘digital sex crime’ materials is enshrined in South Korean legislation, but the interpretation and operationalization of these definitions by the state, police, suppliers and implementers may not be fully aligned, especially regarding exactly what constitutes illegal, harmful or exploitative material, and the minimum benchmark for automatic deletion or platform notification. The implications of this misalignment could range from the wrongful criminalization of legitimate content to biased decision-making and outcomes.
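To illustrate how operationalization choices can diverge even under a shared legal definition, the sketch below shows a hypothetical decision rule in which the same model output produces different outcomes under two implementers’ thresholds. The thresholds and action labels are invented for illustration.

```python
# Hypothetical illustration: the same content, scored identically by a model,
# is auto-deleted under one implementer's thresholds and ignored under another's.

def decide(confidence: float, auto_delete_threshold: float,
           notify_threshold: float) -> str:
    """Map a model's confidence that content is illegal to an action."""
    if confidence >= auto_delete_threshold:
        return "auto_delete"
    if confidence >= notify_threshold:
        return "notify_platform_for_human_review"
    return "no_action"

score = 0.85  # identical model output for the same content
print(decide(score, auto_delete_threshold=0.80, notify_threshold=0.50))  # auto_delete
print(decide(score, auto_delete_threshold=0.95, notify_threshold=0.90))  # no_action
```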
There is the added challenge of ensuring that mechanisms for updating and reviewing identifiers consider developments in technology (particularly AI-generated deepfakes). The use of digital sex crime images and videos to train AI models more generally raises urgent questions about whether sufficient safeguards (such as data input controls and auditing measures) are implemented, particularly as the design and implementation of technologies relying on harmful datasets can reflect and exacerbate existing systemic biases. If this were the case, a potential secondary harm faced by survivors would be inaccurate notification, which could result in (re)traumatization and psychological distress, regardless of the outcome.
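The kind of data input control and auditing measure referred to above could, in principle, take the form of a gate that excludes records lacking documented provenance or a lawful basis from a training set, and logs every decision for external audit. The sketch below is a schematic illustration under those assumptions; the field names are hypothetical, and real safeguards would need to be far more extensive.

```python
# Schematic sketch of a data input control with an audit trail.
# Field names are hypothetical; real safeguards would be far more extensive.
import logging

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("training_data_audit")

def admit_to_training_set(record: dict) -> bool:
    """Admit a record only if its provenance, lawful basis and
    de-identification are all documented."""
    checks = {
        "has_provenance": bool(record.get("source_id")),
        "lawful_basis_documented": bool(record.get("lawful_basis")),
        "survivor_identifiers_removed": record.get("deidentified") is True,
    }
    admitted = all(checks.values())
    # Logging each decision lets an external auditor reconstruct what
    # entered the training set, and why.
    audit_log.info("record=%s admitted=%s checks=%s",
                   record.get("record_id"), admitted, checks)
    return admitted
```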
Monitoring is just one point in the life cycle of measures addressing digital sex crimes, and gender-sensitive monitoring is not in itself a safeguard against compounded harms at other points in the cycle. Human Rights Watch reports systemic failings by police (such as victim-blaming or jokes about intimate images) and judges (with trends towards lenient sentencing and a preference for issuing fines rather than imprisonment) that compound survivors’ existing difficulties in seeking criminal justice. The impact of monitoring interventions could be undermined by weaknesses elsewhere in the country’s political and justice system, leading to inadequate outcomes at best and compounded harms at worst.
In this case, technologies designed and readapted to deliver gender-transformative cybersecurity must be responsive to potential harms (and harm mitigation measures) across the life cycle of design, deployment and review. This is consistent with a sociotechnical approach to cybersecurity more generally, which considers both the security of technologies and the security needs of users. Partnerships between private sector actors, law enforcement and academia have the potential to deliver results, but also to compound harms.
5.2 Reporting gender-based violence in India
South Korea is not alone in trialling technological solutions to gender-based violence: India has implemented several initiatives to improve online reporting of such crimes. Data from the Indian National Crime Records Bureau (NCRB) show a ‘consistent year-on-year rise’ in gender-based violence from 2016 to 2021, with the exception of 2020. Meanwhile, the UN Sustainable Development Group referred to gender-based violence in India during COVID-19 as a ‘shadow pandemic’.
While much gender-based violence is experienced by survivors offline, there are rising concerns about technology-enabled violence, which can both transform offline harms and lead to new types of cyber harm. A report by the University of Chicago and the International Center for Research on Women suggests that ‘male dominance in online spaces and gendered cultural norms often make the internet inhospitable for women’ in India. For more than a decade, activists and academics have mapped the gendered cyber harms experienced by marginalized or minoritized gender identities in online spaces in India, as well as the measures used to mitigate and prevent online abuse (as explored below). These studies map out a landscape with deeply entrenched barriers to reporting gender-based violence, whether experienced online, offline or both.
The Indian government, states and local police agencies have pledged to address gender-based violence through reporting mechanisms. The national Scheme for Cyber Crime Prevention against Women and Children (CCPWC), pioneered by the Ministry of Home Affairs and the Ministry of Women and Child Development, aims to ‘have an effective mechanism to handle cybercrimes against women and children in the country’. One of the scheme’s main features was the 2018 launch of an online cybercrime-reporting platform for complaints relating to child sexual abuse material or sexually explicit content. The National Commission for Women (NCW) supports this initiative and others, including a 24/7 helpline ‘to help women facing domestic violence’, and a WhatsApp helpline launched in April 2020. In the last 10 years, technology companies have also launched personal safety apps for smartphone users, with features for location-sharing, sending emergency alerts and notifications of unsafe locations. This case study explores the use of WhatsApp chats for reporting gender-based violence to police.
As journalist Mahima Jain reports, the ‘efficacy’ of personal safety apps is hindered by various factors, including regional differences in language, technical glitches and complicated registration processes. As a result, ‘the apps remain niche… none have been widely adopted’. Jain interviewed the director of the women’s safety wing in the Telangana State Police, who explained that ‘WhatsApp has emerged as the most-used platform for women to seek help or make complaints of harassment’. WhatsApp reportedly works closely with law enforcement agencies in India; in 2022, the company launched a dedicated safety hub for users in India, which includes resources on preventing abuse and supporting cybersecurity.
WhatsApp is used by local police in India for reporting in different capacities. For example, dedicated teams in the Telangana State Police handle complaints received via WhatsApp by contacting the complainant and collecting evidence; over 40 per cent of reports picked up by these teams are collected via WhatsApp messages. Complaints include ‘non-heinous’ acts committed and experienced both offline (such as public harassment) and online (such as ‘lewd comments made on social media’). In 2017, local police in Pune initiated a WhatsApp group called ‘BuddyCop’ to provide ‘immediate access to police’ in the event of gender-based violence. Within six months of the group being launched, 100,000 women had registered and around 750 groups had been formed. Pune also launched a WhatsApp helpline number (not a specialized chat, as in Telangana state) in July 2023, ‘especially for women’s safety and security’, that forwards messages to the relevant police stations.
Political and police interest in innovating reporting mechanisms is a positive development, but technology-enabled responses to gendered harms (whether experienced in cyberspace or offline) are by no means a fix-all solution. Technology platforms may not necessarily alleviate existing barriers to reporting gender-based violence, despite the relatively widespread access to and use of smartphones and data. Existing barriers include a lack of awareness about how and where to report, as well as social stigmas around reporting violence, harassment and abuse. Additionally, 2018 data suggest that 71 per cent of respondents (adolescent girls from low-income households) ‘do not feel confident to approach the police in a case of harm’. Gendered gaps in digital literacy and access to devices – not to mention a lack of data collected on non-binary or genderqueer individuals – further complicate this picture. Only 29 per cent of women in India own a smartphone, compared to 48 per cent of men. Moreover, the percentage of women who own a device to which they have sole and exclusive access (i.e. a device they do not share with family members) is unclear. This lack of access poses a serious barrier to the efficacy and coverage of well-intended initiatives such as WhatsApp helplines, and underlines the importance of adopting an intersectional perspective when mapping these barriers.
Against this backdrop, using WhatsApp to facilitate reporting gender-based violence raises questions around how the choice of technology platform affects gendered cyber harms. First, while WhatsApp may be a suitable fit for police use – with end-to-end encryption and a variety of features tailored to threats faced specifically by users in India – there is an apparent lack of publicly available information on how identifiable chat data is stored and safeguarded by key stakeholders (i.e. local police agencies and organizations such as the NCW receiving complaints via WhatsApp). In addition, information on how sensitive data, images or videos (for instance, abusive messages shared by a complainant with a local police force using a WhatsApp chat helpline) would be stored or protected is either not apparent or unavailable.
A second area of concern lies in the difference between automated chat functions (available to users of ‘WhatsApp for Business’) and personal chat functions. The former generally rely on a series of key-term identifiers to automate responses and determine escalation. In the Telangana case, for instance, it is unclear which key-term identifiers determine the minimum benchmark for action. While an offence may be formally defined in legislation, decisions to act on suspected offences are influenced by a patchwork of personal, social, cultural and political factors, and may differ from state to state, and person to person.
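Neither WhatsApp’s business-automation configuration nor any police key-term list is public, so the following is a purely hypothetical sketch of keyword-based routing. Its purpose is to show how the choice of identifiers silently sets the benchmark for action: messages that match no key term receive no automated response at all.

```python
# Hypothetical sketch of key-term routing in an automated intake chat.
# The term lists and routing labels are invented; real deployments and
# police key-term lists are not publicly documented.

ESCALATION_TERMS = {"threat", "stalking", "assault"}      # hypothetical
AUTO_REPLY_TERMS = {"complaint", "report", "harassment"}  # hypothetical

def route_message(text: str) -> str:
    """Route an incoming message based on which key terms it contains."""
    words = set(text.lower().split())
    if words & ESCALATION_TERMS:
        return "escalate_to_officer"
    if words & AUTO_REPLY_TERMS:
        return "send_automated_intake_form"
    # Anything the key-term lists miss falls through: the benchmark for
    # action is only as good as the identifiers chosen, and a complaint
    # phrased in a regional language or unlisted wording may go unanswered.
    return "no_automated_action"
```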
Reporting is just one of many measures for mitigating gendered harms. WhatsApp’s effectiveness as a reporting mechanism is contingent on cultural, institutional and legislative realities, which can enable better responses but also open the door to potential abuse. Accurate data on reporting (and barriers to reporting), disaggregated nationally and by gender, are key. Further research could include interviews with law enforcement stakeholders and aim to map such barriers to better understand how platforms like WhatsApp can be used to overcome them.
In both of the case studies presented in this chapter, despite innovations and good intentions, the efficacy of the solutions is uncertain and there is a clear potential for secondary or unaddressed harms. Nonetheless, both studies demonstrate actions that can be taken to counter and mitigate gendered cyber harms – an important step towards gender-transformative cybersecurity.