2.2 The private sector, technology and gender
Technologies are designed, implemented and gendered in an ecosystem involving a range of actors, including not just technology designers but also educators, policymakers, regulators and users. From design to deployment and evaluation, private sector entities have historically played a central role in embedding gendered associations and connotations in technology. Advances in new technologies like generative AI threaten to amplify gendered harms both online and offline. This section describes the interaction between profitability, gender and technology design choices. It also outlines the internal and external pressures that result in the implicit and explicit gendering of technology, and in decisions to redesign it.
Private sector entities are not only global leaders in technology development and deployment, but increasingly also in technology governance. Examining the private sector’s role in technology design and gendered cyber harms is urgent for many reasons, not least because law enforcement agencies and public administration bodies around the world have rapidly adopted private sector-developed technology solutions. State adoption of such technologies deserves particular focus, as it illustrates how the encoding and amplification of harmful gendered connotations or bias can translate into online and offline harms.
The private sector is not a single, homogeneous set of actors: it incorporates everything from venture capital firms to multinational technology companies, and from small- and medium-sized businesses to state-owned enterprises. Any simplification of these actors into a single category is bound to overlook relevant differences – for example, a publicly listed technology company that has published technology design principles to combat domestic abuse would likely respond differently to criticism of the gendered implications of its technologies than would a privately owned social media platform. This paper predominantly focuses on technology companies designing and deploying their own technologies (with a particular focus on social media and messaging platforms and dating applications). It also analyses private sector entities that offer information services (such as data brokers) and, for the reasons noted above, state and police use (or readaptation) of technology designed in the private sector.
Companies are motivated primarily by profit. Gendered assumptions and roles form part of the broader social structure – the external context in which companies must manoeuvre to generate profit. More specifically, companies can exploit gendered assumptions and differences to stand out against competitors. While this often leads to alignment with and reinforcement of gender stereotypes, such competition for distinction can also motivate commercial decisions that subvert or challenge gendered norms and hierarchies. Of course, most private sector decisions do not deliberately seek to profit from, or perpetuate, gendered cyber harms. But even minor decisions may unwittingly enable harms when poor choices (including design choices) are exploited and ‘weaponized’ by third parties.
Technology companies play a unique role in the interaction between profitability, gender and technology. Like other private actors, technology companies make design choices to maximize profitability, the pursuit of which is conditioned by gendered assumptions. Some technology companies are both exceptionally profitable and exceptionally able to reach people across the world. As a consequence, when faced with negative feedback (such as public pressure or punitive regulatory measures), these companies occupy a complex position: changing the product carries financial implications, whether their ability to change it is perceived or actual. For many technology companies, a core concern is to retain and grow their user base, including by maintaining trust and responding to feedback and competition (albeit to different and, in some cases, diminishing degrees). This complex landscape is explored in the case studies.
The private sector’s role in gendering technology design can be illustrated through the example of voice assistants, which both reflect and entrench gender stereotypes. Companies producing voice-assistant systems initially tested a range of voices (both masculine- and feminine-sounding), with commercial pressures incentivizing the design of systems whose voices were ‘trusted’ and not ‘irritating’ (both highly gendered attributes). In this way, technology developers responded to external gendered pressures to produce a product that would be best received by customers and, therefore, both useful and profitable. The choice of a feminine-sounding voice perpetuates gender stereotypes and hierarchies by projecting assumed gendered roles in a professional environment onto a digital device.
Once a technology has been released, redesign or rebranding decisions may be driven by reputation and profitability concerns, with companies responding to positive and/or negative feedback on their design choices. These pressures interact with others, both internal and external, such as the interests of employees and stakeholders; adherence to best practices in corporate social responsibility; and compliance with law, regulation and national and international norms.
Unless criticism is particularly severe or withdrawal is externally mandated, companies rarely respond by recalling a product and exiting the market. Sources of negative feedback range widely, including pressure from media reporting, shifts in ‘public opinion’ and shareholder groups. As noted above, companies also make design and deployment decisions in response to national and international regulatory constraints (such as potential punitive measures) and norms (such as those relating to the responsible use of technology).
An important subcategory of redesign emerges from technology design choices that directly or indirectly cause harm. Companies can act as inadvertent enablers of (gendered) harms, either because third parties (ranging from law enforcement to criminal groups to other private sector actors) maliciously exploit, weaponize or otherwise misuse their products, or because of their own negligence in the design phase. A geolocatable device, for example, presents security risks if abused. Such risk emerges from the failure to consider and test for potential gendered harms during product design – potentially as a result of cost-saving measures (in other words, the profit motive) or bias, but more likely because such testing is not deemed a priority or necessity. Companies may also directly profit from the use of their products to perpetuate gendered harms; in this case, they are not inadvertent enablers but willing facilitators. As explored in the case studies, the line between inadvertent enabler and willing facilitator can be opaque and dynamic.
Private developers of technology are increasingly aware of gendered cybersecurity risks. Nonetheless, notable gaps in gender sensitivity remain. In response to these gaps and the harms they perpetuate, a rich academic and practitioner movement has emerged to challenge the ‘male by default’ approach to technology design. The practices this movement promotes include participatory threat-modelling, through which advocates of feminist technology design produce research and recommendations that help developers identify and mitigate potential gendered cyber harms. Despite a promising upward trend in awareness, gendered security considerations do not yet play a substantial and sustained role in most private sector-led technology development, redesign and evaluation.
There are serious gaps in the design and implementation of gender-transformative technologies, compounded by the difficulty of incentivizing private actors to consider gender-transformative cybersecurity a profitable goal. For the public sector, legislation, regulation, norms and best practices are important levers that can influence private sector activities. Governments wield an important, if not primary, influence in setting gender and technology policy agendas, which respond to gendered social structures and shape, to a degree, private sector activity.
In addition to the overarching role of law and regulation, technologies developed in the private sector can also be readapted for use by law enforcement or other public sector actors. Such readaptation can be organic or the result of formalized public-private partnerships. A positive example explored in this paper is the readaptation of a social media messaging app that lets users report potential crimes to specialized law enforcement teams. However, readaptation can also exacerbate harms. An example given in this paper is the use of data from dating apps by law enforcement agencies in the Middle East and North Africa to target LGBTIQ+ individuals. Some technologies developed by the private sector can be redesigned or readapted and deployed in a different context – for example, to mitigate gendered harms. However, without the right safeguards in their implementation, those technologies risk exacerbating existing harms.
2.3 Introducing the case studies
To investigate the roles and responsibilities of private sector actors in perpetuating and mitigating gendered cyber harms, this paper includes three chapters discussing real-world case studies: Chapter 3 considers social media use by individuals identifying as queer in Nigeria and South Africa; Chapter 4 considers the ‘weaponization’ of reproductive health data in the US, and of LGBTIQ+ dating app data in the Middle East and North Africa; and finally, Chapter 5 considers detection technologies for images and videos of ‘digital sex crimes’ in South Korea, and online reporting mechanisms for gender-based violence in India.
Together, these three chapters encompass three interconnected kinds of gendered cyber harm: hate speech, data breaches and state overreach. The case studies presented in each chapter span cultural, national and regional contexts. Chapter 3 is partially concerned with hate speech and other content-based harms, while Chapter 4 considers harms emerging from data breaches, including privacy violations and harms perpetrated by state and police actors. The two case studies in Chapter 5 address all three kinds of gendered cyber harm. As in the previous Chatham House paper on gendered cyber harms, these case studies advance the argument that gendered cyber harms are cascading and compounding. The case studies also emphasize the connections between offline and online harms, showing how cyber harms stem from, and exacerbate, offline prejudice, discrimination and violence.
A central aim of the case studies is to explore how private sector actors can contribute to gendered cyber harms in different contexts. For this reason, the case studies sit at different points in the technology design and deployment cycle. Concentrating mainly on social media and data markets, the first three case studies examine how large-scale technology design – across whole sectors and markets – responds to both the profit motive and gendered social structures, resulting in the commercialization and ‘weaponization’ of sensitive information. In contrast, the final two case studies focus on the readaptation and deployment of technology for monitoring and addressing gendered cyber harms, at the request of, or in tandem with, the public sector and law enforcement agencies.
These case studies foreground lived experiences, highlighting the basic point that, ultimately, it is people who interact with technologies and experience the harms they cause. Decisions on how and when to use the technologies in these case studies represent nodes of criticism, challenge and even resistance. When someone restricts their location settings on a dating app, limits photo-sharing on social media, or chooses whether or not to post an emergency message to an automated helpline, that decision is an expression of individual agency. It is therefore essential to foreground individual, human interactions with technology, and to guard against an overly structural perspective.