How facial recognition technologies are deployed determines whether their use is compliant with international human rights law. This has rendered the technology extremely controversial.
Facial recognition is one of the most widespread – and perhaps most questionable – applications of AI. Not only does its deployment risk reinforcing structural inequalities due to built-in algorithmic and data bias, but the technology can also serve as a tool for state surveillance.
The meaning of the term has become blurred, as it encompasses many different practices. The Electronic Frontier Foundation broadly defines face recognition as ‘a method of identifying or verifying the identity of an individual using their face’. The technology relies on the collection of biometric data – a type of personal data related to the physical and behavioural characteristics of an individual, which can include their facial traits, their gait or even their emotional state. Algorithms are trained on biometric databases to enable the identification and verification of individuals, and are then deployed as a software solution. In cities where video monitoring systems are already in place, deploying facial recognition is often a matter of adapting existing surveillance infrastructure by installing appropriate software updates.
Facial recognition gained momentum in the early 2010s, when AI deep-learning methodologies significantly improved its accuracy rates. Despite these performance improvements, facial recognition was found to reinforce gender and racial discrimination. This was well documented in Joy Buolamwini and Timnit Gebru’s ‘Gender Shades’, a study published in 2018 which showed how women of colour were most often misclassified by commercial AI systems. The study shed light on how algorithmic bias – which can derive both from design decisions and from bias in the databases used to train algorithms – can produce inaccurate results when systems attempt to identify women, people of colour and gender-nonconforming individuals. Since then, several companies have worked to reduce bias in their AI systems, though bias remains a significant limitation of the technology.
Facial recognition is especially problematic when used in public spaces for law enforcement purposes. Deployments connected to public safety are often designed to single out an individual from a crowd or database. This is why the technology is mostly deployed in public spaces with high levels of circulation, such as public transport networks or mass events. Such identification differs from verification procedures, which are used to corroborate the identity of a specific person – for example, to unlock a smartphone. To identify a person, facial recognition systems deployed in public spaces analyse the biometric data of many individuals, including those who are not suspected of any crime. The right to privacy of all these individuals is thereby compromised. The overtly invasive and potentially disproportionate nature of identification procedures renders the use of facial recognition a problematic practice which, without proper safeguards to prevent abuse, can easily be misused for surveillance.
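To make the distinction concrete, the sketch below contrasts verification (a one-to-one check against a single claimed identity) with identification (a one-to-many search of a database) in schematic Python. It is an illustration only, not the implementation of any real system: the extract_embedding function is a hypothetical placeholder for a trained face-embedding model, and the 0.6 similarity threshold is an arbitrary value chosen for the example.

```python
# Illustrative sketch of verification (1:1) vs identification (1:N) matching.
# Not based on any specific vendor's API; names and threshold are assumptions.
import numpy as np

def extract_embedding(face_image: np.ndarray) -> np.ndarray:
    """Hypothetical placeholder: maps a face image to a fixed-length vector.
    In practice this would be a trained face-embedding neural network."""
    raise NotImplementedError("stands in for a trained face-embedding model")

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face embeddings."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(probe_image: np.ndarray, claimed_template: np.ndarray,
           threshold: float = 0.6) -> bool:
    """1:1 verification: does this face match one specific enrolled person?"""
    return similarity(extract_embedding(probe_image), claimed_template) >= threshold

def identify(probe_image: np.ndarray, database: dict[str, np.ndarray],
             threshold: float = 0.6):
    """1:N identification: search every enrolled person for the best match.
    Returns the best-matching identity, or None if no score clears the threshold."""
    emb = extract_embedding(probe_image)
    best_id, best_score = None, -1.0
    for person_id, template in database.items():
        score = similarity(emb, template)
        if score > best_score:
            best_id, best_score = person_id, score
    return best_id if best_score >= threshold else None
```

The key difference for rights analysis is visible in the code itself: verification compares one probe against one template, whereas identification must process the biometric data of everyone captured by the camera and everyone enrolled in the database, whether or not they are of any interest to the authorities.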
The risk of misidentifying individuals is another important limiting factor when employing facial recognition for law enforcement purposes. Facial recognition systems estimate the probability of a match, meaning that they have a margin of error that can result in false positives or false negatives. Being misidentified through a false positive can have severe consequences, such as wrongful detention. False positives also tend to echo the gender, racial, class, age or able-bodied biases built into the technology. Additionally, false negatives mean facial recognition systems might fail to identify persons of interest in criminal and national security investigations – meaning a deployment can interfere with important individual rights while falling short of delivering its purported benefits.
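The trade-off between the two error types can be shown with a short simulation. The sketch below uses invented score distributions – they do not describe any real system – to illustrate that raising the match threshold reduces false positives at the cost of more false negatives, and that no threshold eliminates both.

```python
# Illustrative only: how a decision threshold on match scores trades false
# positives against false negatives. Score distributions are invented.
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical similarity scores: genuine pairs (same person) score higher on
# average than impostor pairs, but the distributions overlap.
genuine_scores = rng.normal(loc=0.75, scale=0.10, size=10_000)
impostor_scores = rng.normal(loc=0.45, scale=0.10, size=10_000)

for threshold in (0.5, 0.6, 0.7):
    false_negative_rate = np.mean(genuine_scores < threshold)    # wanted person missed
    false_positive_rate = np.mean(impostor_scores >= threshold)  # uninvolved person flagged
    print(f"threshold={threshold:.1f}  "
          f"false negatives={false_negative_rate:.1%}  "
          f"false positives={false_positive_rate:.1%}")
```

Wherever the threshold is set, choosing an operating point is therefore a policy judgment about which type of error – and whose rights – a deployment is prepared to sacrifice.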
Evaluating facial recognition deployments in public spaces for law enforcement purposes calls for a thorough assessment of how each specific implementation may affect human rights. An important factor is whether identification of individuals occurs in real time or ex post. Live facial recognition deployed in public spaces, as defined by the College of Policing for England and Wales, entails comparing live camera feeds of faces against a predetermined ‘watch list’. According to the analysis set out in the EU’s Artificial Intelligence Act, this is especially intrusive for affected individuals, as it can disrupt the sphere of privacy of large segments of the population, evoke a sense of surveillance and potentially dissuade citizens from exercising other rights, such as the right to peaceful assembly. Whenever a deployment relies on capturing images of individuals who are not wanted – as opposed to solely capturing images of those who are indeed suspects – special attention is required to assess whether the deployment meets proportionality requirements and what measures are in place to prevent abusive use for surveillance purposes.
Data use and retention practices are another important feature to consider when assessing human rights compliance. The retention, use or transfer of images obtained during identification procedures for purposes other than those originally intended would raise red flags about the risks of surveillance. Similarly, the ways in which entities deploying facial recognition source their biometric data, and whether they have legitimate access to the public databases containing biometric data that they rely on, will also determine whether a deployment is indeed compliant with human rights standards.