The ways in which other major jurisdictions approach facial recognition deployments offer valuable insights for policymakers in Latin America.
The facial recognition deployments seen in Buenos Aires and São Paulo are mirrored elsewhere in Latin America: similar technology has been brought into use in Colombia, Mexico and Paraguay. The region appears to be ‘stuck’ in a worst-case scenario, in which facial recognition is being used by security forces in public spaces despite potential human rights infringements, and with inadequate safeguards to contain potential abuses or provide avenues for redress.
With these deployments already in place, the use of the technology for law enforcement purposes has been normalized and even legitimized.
Where does Latin America go from here? While the region has unique characteristics that call for local solutions, an examination of evolving regulatory responses in jurisdictions such as the US, EU and UK – all of which are also exploring how to deploy AI and biometric technologies in a manner that is respectful of fundamental rights – provides some pointers as to how Latin America may move away from this scenario.
Facial recognition in the US
The US offers an interesting opportunity to study the regulation of facial recognition technologies, since the country – like Argentina and Brazil – is organized as a federal system where national, state or provincial and city-level authorities and legislation coexist.
US regulatory approaches to police use of facial recognition made international headlines when a number of cities began to ban the technology. The first city to take such action, in May 2019, was San Francisco, at the heart of the Silicon Valley technology hub. Somerville, Massachusetts, and Oakland, California, quickly followed suit, giving rise to a ‘domino effect’ that saw ordinances banning facial recognition passed in a dozen more US cities by October 2020. At the federal level, law enforcement agencies are known to make extensive use of the technology for national security purposes. Yet, aside from a bill proposed in mid-2021 to ban the federal government from using facial recognition, national authorities in the US have steered clear of actively regulating facial recognition and biometric technologies.
State governments, on the other hand, have been left to set their own rules. In Massachusetts, policymakers at city and state levels have pursued simultaneous efforts to regulate the technology. In June 2020, following an extensive campaign by local civil rights and community leaders, the city of Boston pronounced itself against the use of facial recognition by city police. The American Civil Liberties Union of Massachusetts, which participated in the pro-ban campaign, maintained that facial recognition is a risky technology whose regulation should not be left to the city level alone.
Regulation was indeed taken up at the state level through the Police Reform Bill, which, among other measures, addressed police use of facial recognition technologies. The bill originally sought to ban the use of biometric surveillance systems by Massachusetts state government agencies, but was vetoed by the state executive, which argued that the technology was needed for criminal investigations. A renegotiated version of the bill was approved in December 2020, establishing that police may resort to facial recognition when in possession of a court order or, in emergencies only, without a judicial warrant. In addition, the legislation established transparency requirements and created a commission to assess whether more stringent regulation might become necessary in the future. Maine, meanwhile – the state with the strictest statewide regulation of facial recognition – requires government agencies to demonstrate ‘probable cause’: facial recognition may be used in criminal investigations, but only when law enforcement has sufficient grounds to believe that a particular person has committed a crime.
In October 2022, the Biden administration – through the White House’s Office of Science and Technology Policy – introduced its Blueprint for an AI Bill of Rights, which offers straightforward, though non-binding, guidelines on the use of automated technologies. The blueprint sets out a series of principles to protect civil rights, including privacy, in the deployment of AI-based systems. Applied to police use of facial recognition, these guidelines would require, for example, the enactment of protections against algorithmic discrimination.
Facial recognition in the EU
The European Union has set out to regulate the use of facial recognition technology through its proposed Artificial Intelligence Act, which is currently going through the final stages of the EU’s legislative process. Within this framework, real-time biometric identification is classified as a high-risk application of AI and must therefore comply with certain mandatory requirements before it can be put into service. More specifically, real-time biometric identification systems deployed in publicly accessible spaces for law enforcement purposes are prohibited, unless used under specific public safety exceptions such as searching for missing persons or locating criminal suspects and perpetrators. As with the regulation put in place by the US state of Massachusetts (see above), law enforcement would need to secure authorization to use the technology from either a judicial or an independent administrative authority designated by a member state, unless dealing with emergencies or life-threatening circumstances such as terrorist attacks. Member states would retain the discretion to draw up national laws that extend or limit law enforcement uses of the technology.
Precisely how facial recognition is to be regulated across the EU is, however, far from settled. Both the European Data Protection Board and the European Data Protection Supervisor, Europe’s privacy ‘watchdogs’, believe that these exceptions are too broad and could still lead to mass surveillance. Civil society organizations across Europe have welcomed this criticism, which is likely to prove a central point of contention in upcoming debates over this new component of the EU regulatory framework (one that, like the GDPR, is expected to generate a worldwide ‘ripple effect’).
Among the provisions of the AI Act is the adoption of a risk-assessment approach to better gauge the implications of specific AI deployments. This entails targeting applications of AI that pose greater threats to the public good, while lowering the regulatory burden for less risky uses of the technology. This approach offers an interesting model for Latin American countries to consider as they design their national AI strategies: identifying which AI applications are particularly risky, and enabling national debates about those that pose significant challenges to fundamental rights. Such conversations will also be important to foster innovation and provide predictability for entrepreneurs and investors in the AI sector. Whether in the EU or Latin America, regulators must consider the state’s capacity to enforce safeguards, as well as the risk that political actors will abuse the exceptions.
Facial recognition in the UK
The extensive use of CCTV by the UK’s law enforcement agencies is well documented, as are those agencies’ exploratory deployments of facial recognition. London’s Metropolitan Police and the South Wales Police have made the most extensive use of the technology, although Big Brother Watch, a civil liberties campaign group, reported in August 2022 that pilot projects and deployments were confirmed or believed to have taken place in at least eight other UK cities. The Metropolitan Police has run facial recognition trials in the UK capital since 2016, with two live pilots taking place as recently as January and July 2022. Londoners are accustomed to the use of technology for monitoring streets: their city has the most extensive CCTV network of any city outside China. This prolific use of networked CCTV has prompted the development of robust legislation regulating video surveillance, which seeks to minimize its potential impact on individual rights and liberties. The South Wales Police, for its part, is the national lead on testing automated facial recognition, and is reported to have run 50 facial recognition trials between 2017 and 2019 at mass events including concerts and sports matches.
The use of facial recognition technologies in the UK is currently governed by a complex regulatory framework: supervision of existing deployments falls under the purview of a range of government entities tasked with overseeing video surveillance and biometric technology systems.
Beyond existing regulation, case law has also been very important in shaping the use of facial recognition. Edward Bridges vs South Wales Police (2018–20) has been a seminal case. Following a legal complaint by a Cardiff resident, who challenged the legality of having his face analysed by the South Wales Police after his image was captured by facial recognition systems during a trial of the technology, the Court of Appeal of England and Wales found irregularities in the way the facial recognition had been implemented. These included a lack of clarity about the rules determining when the police could use facial recognition, and about how the police force had compiled the watch list of individuals to monitor. The court also found that the police had not thoroughly studied the potential discriminatory impact of the technology. The South Wales Police had won the case at first instance, before losing in the Court of Appeal, indicating that the breach of rights was not self-evident.
This UK ruling does not render all uses of facial recognition technologies unlawful, but it highlights the importance of crafting detailed guidelines with robust standards in relation to potential interferences with the right to privacy. Since the ruling, the South Wales Police has resumed facial recognition trials, making a concerted effort to ensure that deployments are legitimate and proportionate, and that they avoid breaching equality requirements through bias or discrimination. This measured approach suggests that police forces across the UK are incorporating the lessons learned from the Bridges case. The College of Policing for England and Wales has also issued guidelines to help police authorities use live facial recognition in a manner that is ethical and respectful of human rights.
While debates on how to regulate facial recognition are not yet settled in any of these three jurisdictions, regulation in the US and EU appears to be moving towards authorizing the use of the technology in public spaces only under specific circumstances related to public safety. Civil society and watchdog organizations continue to challenge whether these limitations are sufficiently robust to prevent mass surveillance, persisting in their calls for comprehensive bans and continuing to expose the potentially discriminatory biases of the technology. In the UK, the judiciary, in addition to civil society, has contributed to the debate and raised the bar by calling for more robust privacy protections, which have encouraged the incorporation of additional human rights safeguards and oversight mechanisms.
The case of the US provides relevant lessons for Latin American countries that have federal systems of government, and where city-level legislation has served to enact more stringent rules on the use of the technology than those offered by state or national legislation. In the few US states that have regulated facial recognition, legislation has provided macro-level frameworks that contemplate exceptions and outline the various levels of authorization required to use the technology. In the US, where federal legislation tends to be less prescriptive, regulation is likely to be shaped at the state level; in Latin American countries, by contrast, regulatory frameworks are more likely to be developed by policymakers at the national level. In either case, city-level regulation should not provide lesser protections than those upheld by state and national regulation.
Lastly, the EU’s AI Act offers a potential model for Latin American countries seeking to integrate facial recognition regulation within their AI strategies and develop coherent, overarching frameworks. Indeed, the risk-assessment approach adopted by the EU may serve as a valuable methodology for countries beyond Europe, enabling them to identify AI applications that challenge fundamental rights and either limit or ban their deployment.