On September 26, 2019, thirty-nine technology and law-enforcement organizations sent an open letter to Congress. The letter championed facial recognition technology as a tool that lets law enforcement promote public safety more efficiently. Reacting to growing public opposition to the nationwide use of facial recognition technology, the letter proposed guidance, training, and increased scrutiny of underlying data sets. Just a day earlier, Amazon CEO Jeff Bezos had announced a new company initiative to draft regulations on facial recognition technology. Amazon plans to propose these regulations to lawmakers in an attempt to avoid a nationwide ban on the technology, an idea that entered the discussion after San Francisco banned it in May. Amazon has comprehensive technical knowledge of biometric data collection and of the security risks inherent in facial recognition technology. The company develops Rekognition, a proprietary image analysis service that is a major player in a facial recognition marketplace projected to exceed $8 billion in 2022. Given the profits at stake in this emerging market, critics have questioned Amazon’s motivation for advocating increased regulation and control, claiming that paydays are being prioritized over individuals’ privacy rights.
While the United States grapples with the widening deployment of services like Rekognition, the European Union has already adopted comprehensive data protection reforms through the General Data Protection Regulation (GDPR), and those reforms extend to facial recognition technology. The regulation, which came into effect in May 2018, classifies the biometric data gathered by facial recognition technology as “sensitive personal data.” The use of facial recognition technology to collect such data is therefore restricted, but significant exceptions allow its use: when a data subject gives consent; when the data is required for employment, social security, or social protection obligations; or when the data is necessary in the public interest.
Technology companies and law enforcement organizations in Europe have tested the boundaries of these exceptions by trialing facial recognition systems on the public. In February, the French city of Nice deployed over 2,600 CCTV cameras in a trial that matched stored images of consenting adults against live camera feeds during a city carnival. The city used facial recognition technology to gather volunteers’ biometric data as they arrived at the carnival and then to identify them in crowds during the festivities. While city officials declared the trial a success, CNIL, France’s independent data protection authority, is conducting an ongoing investigation into both the system’s effectiveness and concerns about how the collected data is stored. The Nice trial was careful to include only consenting adults, but high schools in France and Sweden have contemplated trials of their own, tracking student attendance through biometric portals at building entrances. NGOs, teachers’ unions, and parents have brought suit to halt the school trials, arguing that student consent is invalid. A CNIL decision is still pending and will surely set a precedent for the use of facial recognition technology in the country and the region.
Even in the United Kingdom, a nation with a history of widespread surveillance, public concern over privacy violations has surfaced. A high court in Cardiff recently ruled against a man who alleged that South Wales Police violated his privacy and human rights by using facial recognition technology to gather his sensitive personal data without his consent. In September, a London property developer admitted that facial recognition technology had been used at King’s Cross, one of the busiest train stations in Europe. These events have prompted calls for greater transparency about facial recognition technology and its public use. Nevertheless, British officials have been testing the technology in various public settings since 2016 and continue to insist that it keeps individuals safer, more efficiently and at lower cost.
With the legality of facial recognition technology up for debate, law enforcement organizations and private companies continue to advocate for its use. Opening up public discourse on the topic will require tech giants and law enforcement agencies to communicate with honesty and accountability, whether the subject is ownership of data, public safety, or the money to be made.