The deployment of facial recognition in Argentina and Brazil reveals common patterns and shortcomings in the rollout of the technology.
Facial recognition technologies have been rapidly embraced across multiple spheres of life in two of the largest countries in Latin America – Argentina and Brazil.
Live facial recognition – which is widely feared by detractors of the technology – was regularly used in Argentina’s capital city, Buenos Aires, between 2019 and 2022. Security forces in the city employed live footage to vet passers-by against the country’s national fugitive database, in order to identify individuals who had evaded justice. The system worked through video monitoring systems set up throughout the city, including in the three main railway stations and on the underground transport network, which is used by more than 1.3 million passengers per day. The use of the technology was temporarily suspended by a court order in April 2022, after a judge found that the system had been misused to run unauthorized searches. Shortly afterwards, in September 2022, a city court declared the conditions under which the system was operating to be unconstitutional. The ruling, against which an appeal is likely to be lodged, is expected to further extend the suspension of the facial recognition system.
While implementation is most consolidated in the capital, Argentina’s Association for Civil Rights (Asociación por los Derechos Civiles – ADC) reports that, as of early 2021, facial recognition had also been deployed or piloted in the provinces of Córdoba, Salta and Mendoza, as well as in the county of Tigre in the province of Buenos Aires. Further deployments are planned in the province of Santa Fe.
In the case of Brazil, the use of facial recognition is far more widespread, with deployments identified in 30 cities as of 2019, serving diverse purposes. Facial recognition has purportedly been adopted to prevent fraud in the distribution of social benefits: it has been used to verify the identities of beneficiaries of public transport subsidies in multiple Brazilian cities, and to track school attendance requirements for cash transfer programmes in the state of Pernambuco. The technology has also been deployed for marketing purposes, with highly controversial emotion detection techniques used to place advertisements in front of passengers in the São Paulo Metro. The latter project was eventually rolled back after a local court declared that data collection on Metro passengers did not meet minimum consent requirements.
Perhaps the most controversial application of facial recognition is in the context of public safety. Widespread crime and high murder rates in Brazil have made average citizens receptive to the promises of new surveillance technologies. The ascent to the presidency in 2019 of the far-right Jair Bolsonaro was itself facilitated by his controversial promises to crack down on domestic insecurity, relying on the increased involvement of military forces in public safety issues. In 2022, public safety continued to be a central theme in the presidential campaign, along with the state of the Brazilian economy. Specific examples of facial recognition applied to public safety in the country include the deployment of live monitoring during the Carnival celebrations in São Paulo, the use of cameras mounted on police uniforms in Rio de Janeiro, and the establishment of facial recognition systems in the cities of Salvador (Bahia) and Campinas.
Common trends: the cases of Buenos Aires and São Paulo
Deployments in the cities of Buenos Aires and São Paulo offer some compelling insights into the adoption of facial recognition in the Latin American region. This section of the paper focuses closely on two distinct implementations in public spaces for law enforcement purposes: the adoption by the Buenos Aires city police, starting in 2019, of live facial recognition technology to screen passengers on the public transport network; and a pilot deployment conducted during the 2020 Carnival by the Civil Police of the State of São Paulo.
Six common trends are identified: (1) a justification of the use of facial recognition in the name of public safety; (2) the adoption of facial recognition systems through obscure procurement processes, and amid growing efforts to place surveillance technologies in Latin American markets; (3) the deployment of facial recognition systems on weak legal grounds, and without proper human rights assessments; (4) the establishment of inadequate transparency and oversight mechanisms; (5) a reliance on the use of police databases that reinforce structural discrimination; and (6) poorly defined standards in data use and retention.
Public safety as the justification for deployment
Security is a central concern across Latin American cities, and Buenos Aires and São Paulo are no exception. In both cities, government officials and law enforcement agencies have leveraged public safety as the leading justification for deploying facial recognition in public spaces.
In the case of São Paulo, the biometric identification laboratory at the Instituto de Identificação Ricardo Gumbleton Daunt (Ricardo Gumbleton Daunt Identification Institute – IIRGD), under the purview of the state’s Civil Police, ran a live facial recognition trial during the celebrations for the 2020 Carnival. During the inauguration of what was referred to in the press as ‘the facial recognition lab’, São Paulo’s state governor asserted that statewide security forces would find the technology to be an ‘important ally to fight against criminals and search for missing persons’. However, the use of facial recognition to curb crime appears disproportionate to the local security context. In 2018, Brazil ranked as the country with the 16th highest murder rate in the world, at 27.38 murders per 100,000 inhabitants; the wealthy state of São Paulo, however, is significantly safer, with one of the lowest murder rates in the country, at 8.2 per 100,000 in the same year. Searching for missing persons also features prominently as a justification for the deployment of the technology, as this proposed use is less likely to draw criticism from the public.
In the case of Buenos Aires, similar arguments have been invoked to justify the deployment of facial recognition across the public transport network. The system, operated by the Urban Monitoring Centre of the Buenos Aires City Police, was set up in April 2019. During its launch, the city’s mayor Horacio Rodríguez Larreta asserted that the government’s ‘sole purpose was to ensure the residents of Buenos Aires are safer and not walking among criminals in the streets’. At a time when ‘smart city’ projects are booming across Latin America, the adoption of facial recognition has also been portrayed as a sign of state modernization. Reinforcing this view, Rodríguez Larreta described the adoption of facial recognition as an ‘additional step in incorporating the use of technology to protect the population’.
Security concerns carry significant weight in Argentina, despite indications that public safety is a less severe challenge there than in other Latin American countries. Argentina’s murder rate, for example, is comparable to that of the US. The Economist Intelligence Unit’s Safe Cities Index 2019, which among other indicators measures the prevalence of violent and petty crime, ranked Buenos Aires as having an acceptable level of personal safety. Average citizens, however, remain highly concerned: a 2020 poll found that seven out of 10 Argentinians identified insecurity as one of the most pressing policy concerns in the country.
Obscure procurement and a growing market for surveillance technology
Transparent procurement processes allow the public to independently assess a government’s acquisition of technology; to know what specific companies and countries are serving as technology providers; and to learn about important features of the systems acquired, such as the efficacy rates of different facial recognition systems or efforts to address issues of bias in AI-based technologies.
In both Buenos Aires and São Paulo, procurement processes to acquire facial recognition systems have been opaque, with little information made available to the public on the technologies employed. This is not unusual in either Argentina or Brazil, where, in spite of existing regulation, procurement processes tend to be marred by questionable transparency practices and documented instances of rigged bidding. Available information has had to be pieced together from ad hoc statements to the press, freedom-of-information access requests and investigative reporting by local civil society organizations.
The surveillance technologies employed across Latin America have been purchased from a varied ecosystem of sources, including China, Israel and Russia, all of which have a strong trade presence in the region as providers of surveillance technology. The technology piloted in São Paulo, for example, was supplied by Western companies. The facial recognition system employed by the State of São Paulo and deployed at the IIRGD’s biometric identification laboratory was provided by the Brazilian subsidiary of Gemalto, a Dutch company subsequently acquired by France’s Thales Group. No information is available about the specific technology employed, or about its accuracy in identifying individuals. For the live trials run during the Carnival in São Paulo, the biometric laboratory relied on live footage collected through the ‘City Cameras’ project – a city-wide video distribution network based on closed-circuit television (CCTV) technology developed by Microsoft.
In the case of Buenos Aires, the city government engaged the locally based Danaide SA, a provider that sells overseas-developed surveillance technologies in the domestic market. Through a freedom-of-information access request submitted in 2019, the ADC confirmed that the facial recognition system provided by Danaide is of Russian origin. The firm that developed the software claims an accuracy rate of 80 per cent.
The Latin American market has been increasingly targeted by a range of overseas surveillance technology companies seeking to place their products on the continent. For example, São Paulo’s City Cameras project incorporated additional cameras donated by Chinese firms, evidencing both the intention of such firms to encourage the adoption of surveillance technologies by authorities in Latin America, and those authorities’ own interest in expanding surveillance networks. A 2021 report by Access Now on surveillance technology providers in Latin America has also drawn attention to a lack of transparency in acquisition processes across the region and a failure on the part of local governments to enable a proper public dialogue about the potential impacts of this type of technology.
Countries that export surveillance technology also bear responsibility for the use of these products in developing countries. In 2019 David Kaye, the UN Special Rapporteur on freedom of opinion and expression, called for a moratorium on the sale of surveillance equipment, particularly across the Global South, until ‘rigorous human rights safeguards are put in place’ for both governments and non-state actors. Most recently, in 2021, the UN Human Rights Council issued a resolution to revisit the UN Guiding Principles on Business and Human Rights, exploring the role of the private sector in the development and spread of emerging technologies that threaten human rights.
Weak legal grounds and a lack of human rights assessments
Both Argentina and Brazil have federal systems of government in which municipal, state and federal laws coexist and, in some cases, contradict one another. This generates complex, patchwork-style regulatory frameworks with varying standards and safeguards.
This dynamic has played out in the ways local governments have sought to justify the legality of facial recognition deployments. Argentina and Brazil have seen a combination of city legislation and state-level regulatory proposals that fall short of standards enshrined in their respective constitutions, international human rights treaties and federal laws.
In the case of Buenos Aires, the deployment of facial recognition technologies initially took place on weak legal grounds, being introduced through a resolution of the city government rather than a law. However, in October 2020, the city legislature legalized the use of facial recognition technologies by amending Law 5688 of 2016, which regulates the city’s security systems. The amendment was strongly opposed by civil society organizations, which highlighted the government’s failure to conduct proper human rights assessments, particularly around the right to privacy. This point was reinforced first in 2019 and again in 2021 by the UN Special Rapporteur on the right to privacy, who expressed concern about how facial recognition was deployed in Buenos Aires ‘without the necessary privacy impact assessment or the desirable consultation and strong safeguards’.
The existence of a city-level regulation does not mean that facial recognition in Buenos Aires meets the principle of legality. The impact of the technology on the right to privacy – enshrined in Articles 18 and 19 of Argentina’s national constitution – remains to be properly assessed. In addition, Argentina has not only ratified international human rights treaties but has also granted constitutional status to the rights set out in the International Covenant on Civil and Political Rights (ICCPR) and the American Convention on Human Rights (ACHR). Beyond privacy assessments, the Buenos Aires city government has not evaluated how facial recognition affects other fundamental rights, including the rights to freedom of expression, freedom of assembly and association, and non-discrimination.
Civil society in Argentina has played an active role in questioning the legality of facial recognition deployments. In 2019, ADC filed a legal action seeking to have the use of the technology in Buenos Aires declared unconstitutional; after being on hold for almost three years, the request was eventually rejected. In 2020 the Observatory of Argentine Computer Law (Observatorio de Derecho Informático Argentino – ODIA) presented a writ of amparo before the judiciary to halt the use of facial recognition in Buenos Aires. The legal action led to an investigation in 2022 by a city judge, who found that the city government had used special permits granted for its facial recognition system to run unauthorized searches for individuals who did not feature in any criminal or missing persons watch lists. The judge ordered the suspension of the facial recognition system, in what became the first active involvement of an Argentine court in the facial recognition debate. Shortly afterwards, in September 2022, ODIA’s pending writ of amparo was resolved, with a city-level court finding that the facial recognition system – as currently deployed – is unconstitutional. While the ruling is likely to be contested, it has highlighted blind spots in the legal framework underpinning the use of facial recognition in the city of Buenos Aires.
Argentina’s outdated data protection law has been another point of contention around facial recognition deployments. The law, enacted in 2000, is widely considered no longer fit to address the challenges that have emerged with the adoption of new technologies and the growth of the internet. As biometric technologies are increasingly deployed in the country, civil society actors have called for the law to be updated immediately, with clear guidelines and protections applied to the collection of sensitive personal data through technologies such as facial recognition. These efforts have not yet borne fruit.
Facial recognition deployments in Brazil have also rested on weak legal grounds. In the case of São Paulo, the city government has steered clear of attempting to regulate facial recognition through city-level legislation. This appears to be a trend across the country, where city legislatures have refrained from pronouncements on the use of facial recognition. The state of São Paulo, however, came close to passing regulation on the subject. During his tenure as governor (2019–22), João Doria – otherwise an avid supporter of the deployment of surveillance technologies – vetoed a bill approved by the state legislature, following a successful advocacy campaign by civil society. The bill would have required the São Paulo Metro and metropolitan train system to deploy facial recognition, preparing the ground for later partnerships with security forces. At least four further bills were proposed at the state and federal levels between 2019 and 2020, indicating a growing intention to regulate facial recognition technology. As in Argentina, many of these proposals sought to provide a ‘green light’ for the adoption of facial recognition in Brazil, with little attention paid to building in adequate safeguards.
This situation may soon change, however. In March 2022 the Brazilian federal senate created a commission of legal experts to advise on the drafting of a proposed bill for the regulation of AI. Following a series of public hearings to incorporate contributions from subject-matter experts across a wide range of backgrounds – from academia to local think-tanks, civil society organizations and legal practitioners – the commission’s rapporteur expressed concern about algorithmic biases in the technology and its impact on the rights of children, hinting that the commission may consider banning the use of facial recognition for law enforcement purposes. In June 2022, a civil society-driven campaign entitled #SaiDaMinhaCara [‘Get out of my face’] encouraged 50 state and municipal legislators to introduce proposals to ban facial recognition from public spaces.
Discussions around the use of facial recognition in Brazil are strongly underpinned by existing national regulation and the right to privacy as enshrined in the country’s constitution. In addition, the country has ratified both the ICCPR and the ACHR. In sum, this means that facial recognition deployments must uphold privacy standards, and that their impacts on the rights to freedom of expression, peaceful assembly and non-discrimination must be considered.
Brazil also boasts one of the most progressive data protection laws in the wider Latin American region: the General Personal Data Protection Law (Lei Geral de Proteção de Dados Pessoais – LGPD). Often likened to the European Union (EU)’s General Data Protection Regulation (GDPR), the LGPD offers strong protections but sets explicit exceptions for activities related to public safety, national defence, state security, and the investigation and prosecution of criminal offences. This means that uses of facial recognition by public security forces, such as the deployment by the São Paulo Civil Police during the 2020 Carnival, fall outside the protections of the LGPD. An expert committee has put forward a bill to regulate data protection in law enforcement, but it is uncertain whether it will be enacted in the near future. The research undertaken for this paper was unable to identify any evidence that human rights assessments were conducted in connection with the 2020 Carnival pilot.
Inadequate transparency and oversight
The use of facial recognition in both Buenos Aires and São Paulo has been marked by inadequate transparency and oversight.
Transparency in the use of facial recognition systems is crucial for the proper assessment of the effectiveness and proportionality of deployments. This entails making adequate information available to the public: for example, details about the technology providers and key features of the technology, where it is deployed, how data is collected and kept secure, and whether any data is retained.
In both cities, such information as has been made available on the performance of facial recognition systems has come either from official statements to the press or from responses to freedom-of-information access requests submitted by civil society representatives. No verification mechanisms exist to corroborate performance data, and the information rarely includes longitudinal data, preventing continuous assessment of deployments over time.
In response to a freedom-of-information access request presented by ADC in April 2019, the Buenos Aires city government reported that facial recognition deployments in the city had an effectiveness rate of 90 per cent in identifying wanted criminals; no access was provided to the underlying data to verify this claim independently or to understand the methodology behind it. This reported performance is unusually high when compared, for example, to a 2018 deployment in Cardiff, UK, where the police reported a 92 per cent ‘false positive’ rate.
In the case of Brazil, access to performance indicators for surveillance technologies is also inconsistent, relying on ad hoc public statements by authorities to the press. No statistical results were reported for the live facial recognition pilot at the 2020 São Paulo Carnival. In other areas of the country where data has been made available, false positive rates appear high: facial recognition techniques employed during the 2019 Carnival in Salvador (Bahia) identified 903 potential suspects but led to only 33 confirmed identifications and arrests.
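As a rough indication – and assuming both that all 903 alerts were reviewed and that the 33 arrests capture every true match, neither of which is confirmed in the available reporting – the implied false positive rate in Salvador would be (903 − 33) / 903 ≈ 96 per cent, broadly comparable to the rate reported for the Cardiff deployment cited above.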
Oversight mechanisms, on the other hand, are crucial to ensure accountability and build safeguards. They allow for the monitoring of those in charge of deploying facial recognition technologies, so that abuses can be prevented and opportunities for contesting misidentification can be offered (and redress sought). They also provide a means of assessing the application of data protection standards, and of ensuring compliance with transparency requirements.
In Buenos Aires and São Paulo, government authorities have provided little information regarding the existence of oversight mechanisms. The City of Buenos Aires has reported having disciplinary procedures in place for individuals within the police force who make inappropriate use of the system. However, no oversight mechanism exists to monitor institutional abuses. A wave of wrongful detentions triggered a review by the Buenos Aires Ombudsman in February 2022, though this was an ad hoc assessment rather than a systematic review. The judicial resolution that declared the system unconstitutional in September 2022 identified the failure of the Buenos Aires city government to set up an oversight body as one of the irregularities in the system.
In the case of the São Paulo biometric identification laboratory, it is not clear whether any specific oversight mechanisms apply. While the regular protocols of the state security forces are likely enforced, the research conducted for this paper was unable to establish whether those protocols are sufficiently robust or subject to review by an independent body.
Reliance on police databases and reinforcement of structural discrimination
Live facial recognition techniques like those employed in Buenos Aires and São Paulo rely on police databases to identify potential suspects. Police regulation specialists Barry Friedman and Andrew Guthrie Ferguson have highlighted how ‘mugshot’ databases in the US are the product of decades of discriminatory policing; this also holds true for countries such as Argentina and Brazil, where police records similarly reflect the disproportionate criminalization of individuals based on race and income level. Just as algorithmic bias can reinforce inequalities, police databases can further contribute to the replication of structural discrimination.
In the case of Buenos Aires, police forces employ the National Inquiry System on Default and Detention Orders (Consulta Nacional de Rebeldías y Capturas – CoNaRC) database. Following a visit to the country in 2019, the UN Special Rapporteur on the right to privacy expressed concern about the CoNaRC database, which, despite being described as a list of ‘most wanted’ criminals, includes individuals sought for petty crimes. When used in tandem with facial recognition systems, this type of police database can reinforce the criminalization of minor offenders.
The Special Rapporteur also highlighted that the CoNaRC database is plagued by errors: some 29.5 per cent of its more than 46,000 entries do not specify the offence for which the person is wanted. The wrongful detention of computer science professor Leo Colombo Viña in 2020, after his identification details were erroneously entered into the database, exposed how serious these errors could be. The UN report additionally highlighted that 61 children were listed on the database. Human Rights Watch publicly criticized the Argentinian and Buenos Aires governments for failing to meet ‘international obligations to respect children’s privacy in criminal proceedings’, asserting that the national authorities should remove these records. (The records in question have reportedly since been removed.)
Little concrete information is available about the composition of the criminal databases used in São Paulo. The live facial recognition pilot conducted during the 2020 Carnival compared facial images captured through live camera feeds against databases of wanted criminals and missing persons, with an estimated 30,000 and 10,000 entries respectively. In the case of wanted criminals, the database contains details only on suspects who have evaded justice since 2015, one year after criminal records began to be digitized. Both databases are reported to be maintained, secured and accessed only by the IIRGD.
The criminal database, however, is likely to reflect discriminatory biases within Brazil’s police system. A 2019 report by the country’s Rede de Observatórios da Segurança (Network of Security Observatories) indicated that 90 per cent of arrests resulting from the use of facial recognition in the states of Bahia, Ceará, Rio de Janeiro, Paraíba and Santa Catarina involved black Brazilians. This mirrors the demographic composition of the Brazilian prison population, which is disproportionately black, suggesting that police databases in the country tend to reinforce forms of structural discrimination.
While no information is available about whether police databases in Brazil include minors who have committed criminal offences, the commission in charge of drafting the proposed AI regulation bill (see above) has expressed concern about how facial recognition may affect the rights of Brazilian children, and has listed this as an important consideration in its deliberations about whether to ban the use of the technology for law enforcement purposes.
Poorly defined standards in data use and retention
Facial recognition systems rely on the processing of large amounts of personal and sensitive data. In the deployments in Buenos Aires and São Paulo, there has been little transparency about what data use and retention practices apply, and whether these meet minimum standards.
The government of the City of Buenos Aires has only reported on its data processing practices following the 2019 submission by ADC of a freedom-of-information access request (see above). According to the city government’s response, the data generated by the facial recognition system is managed by police authorities and is subject to security, privacy and confidentiality protocols prohibiting data transfers to other administrative authorities in the city of Buenos Aires. The response further stated that data is destroyed when judicial orders are withdrawn or lines of inquiry exhausted. Nonetheless, the mere inclusion of an individual’s data on the CoNaRC database appears to be sufficient grounds to trigger searches aimed at locating wanted individuals – including those wanted for petty crimes – without requiring a specific court order. This lack of clarity about data use and retention renders proper human rights assessments of the technology unviable.
In the case of São Paulo, authorities have provided little information about how personal and sensitive data is processed, retained and kept secure. As mentioned above, IIRGD’s biometric identification laboratory reports that the databases are maintained, protected and accessed only by the IIRGD itself, although no specific procedural information has been made available. If held to the standards laid down by the LGPD, the laboratory would be required to demonstrate that it has an adequate reason for processing data; prove that the data is kept secure; abide by transparency requirements; and respond to data access requests from the public. Authorities would also be required to ensure that data is not utilized for discriminatory purposes.
Facial recognition systems require data collected through live footage to be cross-referenced with police watch lists. In the case of Buenos Aires, given that the CoNaRC database does not contain photographs, biometric information is pulled from the national population registry, RENAPER (Registro Nacional de las Personas), with which the ministry of justice and security of Buenos Aires has an agreement for running queries. This process allows the CoNaRC database to be cross-referenced with the biometric data of wanted individuals.
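To make this architecture concrete, the sketch below illustrates – in purely hypothetical terms, since neither CoNaRC nor RENAPER exposes a publicly documented interface, and all names, data structures and the threshold value are assumptions – how a watch list without photographs can be joined with a biometric registry and then matched against faces captured from live footage:

    # Illustrative sketch only: hypothetical names and logic, not the real
    # CoNaRC or RENAPER systems, whose interfaces are not public.
    import math
    from typing import Dict, List

    def cosine_similarity(a: List[float], b: List[float]) -> float:
        """Similarity between two face templates (embedding vectors)."""
        dot = sum(x * y for x, y in zip(a, b))
        norm_a = math.sqrt(sum(x * x for x in a))
        norm_b = math.sqrt(sum(x * x for x in b))
        return dot / (norm_a * norm_b)

    def build_gallery(watchlist_ids: List[str],
                      registry: Dict[str, List[float]]) -> Dict[str, List[float]]:
        """Join a photo-less watch list (CoNaRC-like) with a population
        registry (RENAPER-like) that holds the biometric templates."""
        return {person_id: registry[person_id]
                for person_id in watchlist_ids
                if person_id in registry}

    def screen_face(live_template: List[float],
                    gallery: Dict[str, List[float]],
                    threshold: float = 0.8) -> List[str]:
        """Return watch-list identities whose similarity to a face captured
        from live footage exceeds the threshold. The threshold drives the
        trade-off behind the false positive rates discussed earlier: lower
        values catch more fugitives but flag more innocent passers-by."""
        return [person_id for person_id, template in gallery.items()
                if cosine_similarity(live_template, template) >= threshold]

The salient design choice in such an arrangement is that the biometric templates come from a general population registry rather than from the watch list itself – the very access that, as described below, was allegedly used to run searches on individuals who featured on no watch list at all.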
Argentina is known to have one of the most intrusive data collection systems in Latin America. The court ruling of April 2022 that suspended the use of facial recognition in Buenos Aires was based on the finding that the Buenos Aires city government had abused its access to RENAPER to search for individuals – including political figures, human rights activists and social leaders – whose data was not held in the criminal database. The city government, for its part, claims that the agreement with RENAPER authorizes other uses beyond the comparison of live data from the facial recognition system. However, the staggering number of searches run – reportedly nine million between April 2019 and March 2022, equivalent to more than 8,000 searches per day – speaks to the potential for abuse in the absence of clear policies for the use of this data.
Similar arrangements are reported to exist in relation to São Paulo’s biometric identification laboratory, with security forces likewise having access to existing citizen databases for cross-referencing purposes. For example, beyond live facial recognition pilots, the laboratory regularly runs searches using static images of wanted persons. These static images are compared against a citizens’ database of 32 million entries, derived from identity documents issued by the state of São Paulo; the database includes biometric information on São Paulo residents, such as fingerprints and photographs.
Across Latin America, governments have a poor track record in preventing data breaches, raising serious concerns about their ability to secure the sensitive data collected through facial recognition systems. In 2019, hackers leaked 700 gigabytes (GB) of data obtained from Argentina’s federal security forces and the Buenos Aires police; the leak included ‘confidential documents, wiretaps and personal information of police officers themselves’. More recently, in October 2021, the RENAPER database was hacked following a security breach at Argentina’s federal ministry of health; the data was subsequently reported to be available for purchase online. Brazil recorded its largest personal data leak in January 2021, when a massive database containing the records of 223 million Brazilians, including deceased individuals, was detected on the dark web. The leak included personal data such as facial images, names, addresses and unique taxpayer identification codes, among other sensitive information. Many other breaches have been reported: in 2016, the São Paulo city administration accidentally leaked the personal data of 365,000 patients, including some medical records, and in 2018, the tax identification numbers of some 120 million Brazilians were made available online due to a misconfigured server.