Potential harms arise from technology design choices that enable third parties to commercialize, exploit and ‘weaponize’ data, leading to compounded harms when those data are used in gender-based targeting.
Technology results from, and is reliant on, a vast ecosystem of data-gathering. This system informs design choices and helps create more responsive, sophisticated and convenient technologies that, in turn, generate more data. Together with a growing market for third-party personal data, this cycle has led technology companies to maximize data collection, and has made data ever more profitable for those companies and others. Consequently, commitments to ensuring users' data privacy have become a lower priority.
The risks of data privacy breaches are particularly severe for women and other marginalized groups, since discriminatory structures in society and in government institutions determine whether and how personal data can be 'weaponized' to harm individuals, and whether those data should therefore be considered 'sensitive'. Gender thus influences the 'weaponizability' of even seemingly non-sensitive data.
The case studies in this chapter discuss gendered cyber harms associated with data privacy breaches in the context of commercially available data collected and aggregated by data brokers, and the 'weaponization' of geolocation data in particular. The chapter presents two case studies, each considering a type of personal information that is particularly sensitive from a gendered perspective. The first discusses the 'weaponization' of reproductive health-related data in the context of abortion criminalization in the US after the overturning of Roe v. Wade. The second discusses the potential for gendered cyber harms related to dating apps, and analyses the 'weaponization' of geolocation data obtained through LGBTIQ+ dating apps in the Middle East and North Africa. In both instances, cybersecurity has been of secondary importance to commercial incentives and priorities, creating an environment of – and increasing the potential for – insecurity.
4.1 The role of data brokers
Data brokers aggregate a variety of data types from a vast range of sources to create highly detailed profiles of individuals. They benefit from deliberate design choices intended to optimize and maximize data availability and collection: mobile applications, social media platforms, search engines, website visits, online purchases, wearable devices and advertisement clicks are all data sources. Profiles built from these sources often include time-stamped geolocation data, a type of data that is particularly problematic from a privacy perspective. While data brokers claim to protect individuals' data privacy rights by anonymizing their data, in many cases geolocation data can be used to re-identify individuals and link them back to the millions of data points aggregated in brokers' profiles. Moreover, data managed by data brokers are at heightened risk of being 'weaponized', since a wide range of actors (from law enforcement agencies to private individuals) can access the highly personal and sensitive data that brokers hold.
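To illustrate why 'anonymization' offers little protection for time-stamped geolocation data, consider the following minimal sketch. All device IDs, coordinates and names are hypothetical, and the pandas library is assumed; the same logic scales to a real broker feed. Night-time pings cluster around a device's home, and the inferred home location can then be joined to any address-to-identity dataset.

```python
import pandas as pd

# Hypothetical 'anonymized' broker feed: a pseudonymous device ID plus
# time-stamped location pings - the format location data is commonly sold in.
pings = pd.DataFrame({
    "device_id": ["a91f", "a91f", "a91f", "c07b", "c07b"],
    "timestamp": pd.to_datetime([
        "2023-03-01 02:14", "2023-03-02 03:31", "2023-03-02 13:02",
        "2023-03-01 01:10", "2023-03-02 02:45"]),
    "lat": [51.532, 51.532, 51.501, 48.857, 48.857],
    "lon": [-0.120, -0.120, -0.142, 2.352, 2.352],
})

# Pings between midnight and 6 a.m. cluster tightly around a device's home.
night = pings[pings["timestamp"].dt.hour < 6]
homes = night.groupby("device_id")[["lat", "lon"]].median().reset_index()

# Joining the inferred home coordinates to any address-to-identity source
# (property records, electoral rolls) re-identifies the 'anonymous' device.
register = pd.DataFrame({
    "lat": [51.532, 48.857],
    "lon": [-0.120, 2.352],
    "resident": ["J. Doe (hypothetical)", "A. N. Other (hypothetical)"],
})
print(homes.merge(register, on=["lat", "lon"]))
```

Once a single device ID is tied to a named individual, every other data point aggregated under that ID in a broker's profile is de-anonymized with it.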
4.2 ‘Weaponization’ of reproductive health data in the US
The process of seeking reproductive health information and services leaves a vast web of digital traces, whether through the use of digital devices and apps for fertility and period tracking; consulting telemedicine providers; purchasing healthcare products like abortion pills and pregnancy tests with a credit card; or simply searching for information online. While reproductive health-related information is a highly sensitive type of data, it is not always protected by high data-privacy standards and regulation. Much of this data can be collected and sold by data brokers.
In recent years, digital services and products collecting health-related data have come under increased scrutiny – most notably in the context of abortion criminalization in the US. The overturning of the Roe v. Wade ruling by the US Supreme Court in 2022 ended the federal protection of abortion rights in the US and enabled abortion to be restricted in several US states. Many people all over the US reacted by removing sensitive health apps and data from their digital devices due to concerns that this data could in future be obtained by law enforcement agencies and result in criminal prosecution. In this context, most attention has been paid to the potential risks of data collected by fem-tech apps and geolocation data tracking individuals’ visits to abortion clinics.
There are currently more than 1,300 companies offering a range of fem-tech products aimed at helping women manage their health. These range from mobile apps to Internet of Things products and wearable devices. The design of fem-tech apps enables them to collect highly sensitive data not captured by other types of consumer devices, with users assumed to be willing to divulge personal data in return for a service.
In 2021, the US Federal Trade Commission (FTC) issued a complaint against Flo Health, a fertility-tracking app, for sharing sensitive health data from millions of individuals with marketing and analytics firms; the case was settled later that year, with the company required to obtain users' 'affirmative consent before sharing their personal health information with others'. Similarly, in 2022, the FTC filed a lawsuit against the data broker Kochava Inc., which was accused of commercializing sensitive data, including visits to domestic violence shelters and reproductive health clinics – data that could be used to determine a user's home address. According to the FTC's data sample, time-stamped geolocation data from over 61 million unique mobile devices was potentially affected.
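The kind of analysis at issue in the Kochava case requires no special tooling. The sketch below is illustrative only: the clinic coordinates, ping data and 50-metre threshold are all hypothetical. It flags every visit a pseudonymous device made to a given sensitive location, using only a standard great-circle distance calculation.

```python
from math import asin, cos, radians, sin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two coordinates, in metres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6_371_000 * asin(sqrt(a))

# Hypothetical coordinates of a reproductive health clinic.
CLINIC = (38.9055, -77.0362)

# Time-stamped pings for one pseudonymous device, as sold by a broker.
pings = [
    ("2023-03-01 10:02", 38.9056, -77.0361),  # roughly 15 m from the clinic
    ("2023-03-01 18:40", 38.9300, -77.0500),  # elsewhere in the city
]

# Any ping within 50 m of the clinic is recorded as a documented visit.
visits = [(ts, lat, lon) for ts, lat, lon in pings
          if haversine_m(lat, lon, *CLINIC) < 50]
print(visits)  # [('2023-03-01 10:02', 38.9056, -77.0361)]
```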
In 2022, 32 data brokers in the US were found to promote access to datasets containing data on millions of US data subjects labelled as 'actively pregnant', 'potentially pregnant' or 'shopping for maternity products'. The same brokers also offered large datasets on people using birth control products that have recently been banned in some US states.
At the time of writing, there have been no examples of reproductive health-related information sold by data brokers being presented as evidence in court against those accused of having sought abortions in the US. However, US authorities have previously used digital evidence such as text messages and online search histories to enforce abortion laws. In some US states, the evidence bar for abortion cases can be extremely low: geolocation data proving a visit to an abortion clinic could, on its own, be considered sufficient evidence for a conviction. Law enforcement agencies are also increasingly using 'reverse warrants' focused on geolocation data or search histories (known, respectively, as 'geofence warrants' and 'keyword warrants'). There are concerns that reverse warrants could be used to justify digital 'dragnets', identifying large numbers of potential abortion-seekers in the process, as the sketch below illustrates. There are also concerns that bounty hunters might use purchased geolocation data to track abortion patients and providers, motivated by new monetary rewards on offer in some US states. Anti-abortion groups have also reportedly obtained data from brokers and used it to target women and abortion advocates with online disinformation, harassment and doxxing.
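A geofence warrant inverts the query shown in the previous sketch: instead of asking where one known device went, it asks which devices were present at one place and time. The following sketch (again with hypothetical device IDs, coordinates and bounds, and pandas assumed) shows why such queries function as dragnets: every device inside the fence is returned, regardless of why its owner was there.

```python
import pandas as pd

# The same hypothetical feed as before: pseudonymous IDs and location pings.
pings = pd.DataFrame({
    "device_id": ["a91f", "c07b", "e3d2", "a91f"],
    "timestamp": pd.to_datetime([
        "2023-03-01 10:05", "2023-03-01 10:20",
        "2023-03-01 22:00", "2023-03-02 09:00"]),
    "lat": [38.9056, 38.9054, 38.9055, 38.9300],
    "lon": [-77.0361, -77.0363, -77.0362, -77.0500],
})

# The 'geofence': a small bounding box around the clinic, limited to its
# opening hours on one day (all bounds hypothetical).
lat_min, lat_max = 38.9050, 38.9060
lon_min, lon_max = -77.0370, -77.0355
t_start, t_end = "2023-03-01 08:00", "2023-03-01 18:00"

hits = pings[
    pings["lat"].between(lat_min, lat_max)
    & pings["lon"].between(lon_min, lon_max)
    & pings["timestamp"].between(t_start, t_end)
]

# Every matching device becomes a potential suspect: the 'dragnet' effect.
print(hits["device_id"].unique())  # ['a91f' 'c07b']
```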
These examples illustrate the various ways in which reproductive health data sold by data brokers have been, or could be, 'weaponized' by public and private actors alike. In this case, a profit motive informed the design of highly data-extractive technologies (i.e. fem-tech apps), and the data generated by this design choice were exploited by both brokers and technology companies. Potential harms directly generated by such interactions include the prosecution of women seeking healthcare that is criminalized (e.g. information and services related to abortion, sexually transmitted infections, contraception and transgender healthcare), targeted disinformation about reproductive healthcare, and stalking, doxxing and even physical violence against patients, healthcare providers and activists.
In addition to the deliberate 'weaponization' of reproductive health data, there is also an array of indirect harms to consider, some rooted in mistrust and others in the consequences of increased dependence on online services as the availability of offline resources becomes increasingly limited. These indirect harms include wrongful convictions based on biased and unreliable data purchased from data brokers or retrieved from fem-tech applications, and discrimination by employers. Some data privacy advocates have raised concerns that, for example, an employer who opposes abortion could weaponize predictive employment algorithms to discriminate against candidates who have had, or are suspected of having had, an abortion. More broadly, the fear of prosecution may in itself restrict access to healthcare information and services, while datasets generated by fem-tech apps and used for research or to inform public policy risk encoding misrepresentation and bias.
The profit motive of private actors (in this case, both data brokers and technology companies) creates an enabling environment in which the gendered security of those seeking reproductive healthcare is treated as a secondary or competing priority, as is the security of the data generated by fem-tech devices and apps. While technology companies and application developers depend on public opinion and users' trust, and thus have an incentive to improve their privacy policies when faced with public criticism, this is not the case for data brokers, for whom public warnings about the vast quantities of data they gather, and the potential 'weaponizability' of those data, might even serve as advertising for the effectiveness of their services.
4.3 ‘Weaponization’ of LGBTIQ+ dating app data
Dating apps collect large amounts of personal information about their users and often share it with third parties, sometimes without adequately informing users about how these data are used and who can access them. While app developers claim that these data are anonymized, users can be re-identified, not least because dating apps collect and can share geolocation data with third parties. Similar to data about reproductive health, information about people’s sexual orientation and dating behaviour has the potential for commercialization and ‘weaponization’, resulting in potential harms faced by individuals with different gender and sexual identities.
Both states that criminalize homosexuality and anti-LGBTIQ+ activists may use dating apps to spy on and persecute LGBTIQ+ individuals, whether by exploiting the apps' lack of privacy safeguards or, potentially, by purchasing data collected and aggregated by data brokers with commercial ties to app providers.
For example, the LGBTIQ+ dating app Grindr has repeatedly faced criticism for its data privacy practices: in 2018 the company was found to have shared users' HIV status with third-party analytics firms, and in 2021 the Norwegian Data Protection Authority fined it for disclosing user data to advertisers without valid consent.
The commercial availability of data collected by apps like Grindr is highly concerning due to its potential for ‘weaponization’ by state and non-state actors alike. For example, it could be used by third parties to target LGBTIQ+ individuals in religious communities or organizations where being outed could result in discrimination and other harms.
There are similar security concerns regarding the 'weaponization' of Grindr data by state actors. A recent Human Rights Watch report documented multiple cases of security agencies in the Middle East and North Africa (including those in Egypt, Iran, Iraq, Jordan, Lebanon and Tunisia) using dating apps such as Grindr to identify and prosecute LGBTIQ+ people. In Egypt, for example, police have reportedly used features of Grindr to persecute gay men: Grindr's location feature, screenshots and messages, and even the mere presence of the app on an individual's phone can form part of 'debauchery' court cases. In this context, location data retrieved from dating apps, or infiltration of the apps themselves (via fake profiles, for example), can enable harms such as threats, assaults or even arrests of LGBTIQ+ people. The use of dating apps by state authorities to spy on and arrest LGBTIQ+ individuals is simultaneously an exploitation of the apps' lack of privacy safeguards and an abuse of users' expectations about how their data are used.
Following criticism from human rights organizations, Grindr introduced several design changes, including features enabling users to unsend messages, block screenshots and send temporary images designed to expire. Grindr has also launched a 'Holistic Security Guide', which provides guidance on reducing the potential for harm across users' digital security, personal safety, and self-care and well-being. The app also disabled the distance function in countries such as Egypt, Iraq, Nigeria, Russia and Saudi Arabia. Most recently, following reports of LGBTIQ+ people being arrested in Egypt in early 2023, Grindr sent a warning to its users in the country about police activity on the app.
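The decision to disable the distance function addresses a concrete technical attack. If an app reports how far away a user is, an adversary who spoofs their own position three times and records the three displayed distances can recover the user's location by trilateration. The sketch below demonstrates the principle on a locally flat plane, with numpy assumed and all positions hypothetical; real attacks must additionally handle distance rounding and GPS noise, which shrink but do not remove the risk.

```python
import numpy as np

def trilaterate(p1, r1, p2, r2, p3, r3):
    """Recover a point from three known positions and measured distances
    (2D, assuming a locally flat plane, which is adequate at city scale)."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    # Subtracting pairs of circle equations yields a linear system in (x, y).
    A = np.array([[2 * (x2 - x1), 2 * (y2 - y1)],
                  [2 * (x3 - x1), 2 * (y3 - y1)]])
    b = np.array([r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2,
                  r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2])
    return np.linalg.solve(A, b)

# The attacker spoofs their own GPS position three times (metres, on a
# local grid) and records the distance the app displays to the target.
target = np.array([430.0, 260.0])  # the user's location, unknown to the attacker
probes = [np.array([0.0, 0.0]), np.array([1000.0, 0.0]), np.array([0.0, 1000.0])]
readings = [float(np.linalg.norm(target - p)) for p in probes]

# Three probes suffice to pinpoint the target: prints approximately [430. 260.]
print(trilaterate(probes[0], readings[0],
                  probes[1], readings[1],
                  probes[2], readings[2]))
```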
As dating apps like Grindr improve their data privacy practices in response to public scrutiny, and as user awareness of the associated data privacy risks increases, police or government agents targeting LGBTIQ+ individuals may increasingly turn to data brokers for help identifying targets, circumventing app-level protections by using geolocation data for de-anonymization. Even if dating apps address gaps in their practices and data brokers refrain from collecting sensitive data in the future, these measures will come too late for users or former users whose data have already been collected and could still be up for sale.