Much more rapidly than anyone originally thought possible, facial recognition technology has become part of the cultural mainstream. Facebook, for example, now uses AI-powered facial recognition software as part of its core social networking platform to identify people, while law enforcement agencies around the world have experimented with facial recognition surveillance cameras to reduce crime and improve public safety. But now it looks like society is finally starting to wake up to the immense privacy implications of real-time facial recognition surveillance.
“This is the first piece of legislation that I’ve seen that really takes facial recognition technology as serious as it is warranted and treats it as uniquely dangerous,” says Woodrow Hartzog of Northeastern University. Privacy laws in Texas and Illinois, meanwhile, require anyone recording biometric data – including face scans and fingerprints – to give people notice and obtain their consent.
San Francisco moves to ban facial recognition surveillance
For example, San Francisco is now considering an outright ban on facial recognition surveillance. If the pending legislation, known as “Stop Secret Surveillance,” passes, San Francisco would become the first city ever to ban (and not just regulate) facial recognition technology. The bill is being brought to a vote as concerns mount that facial recognition surveillance unfairly targets and profiles certain members of society, especially people of color. When law enforcement agencies adopt this technology, for example, they usually roll it out in high-crime, low-income neighborhoods – and those are disproportionately neighborhoods with a high percentage of people of color.
This outright ban on facial recognition surveillance is part of a broader package of rules designed to give the city’s Board of Supervisors enhanced oversight over all surveillance technology used by the city. In addition to blocking any facial recognition surveillance technology from being used by any city agencies or law enforcement authorities, the new rules would require an audit of all existing surveillance technology (such as automatic license plate identification tools); an annual report on how technology is being used and how data is being shared; and board approval before buying any new surveillance technology for the city.
A tipping point for facial recognition surveillance
One reason the outright ban on facial recognition technology is so important is that it fundamentally flips the script on how we talk about the technology. Previously, the burden of proof was on the average citizen and advocacy groups – it was up to them to show the hazards and negative aspects of the technology. Now, the burden of proof is on any city agency (including local police) that would like to implement the technology – it must not only demonstrate that there is a clear use case for the technology, but also that the pros far outweigh the cons for any high-tech security system (including a facial recognition database).
While the American Civil Liberties Union (ACLU) of Northern California supports the legislation, local law enforcement authorities do not. As they see it, there is a clear use case for such technology: it helps to improve overall security, it discourages crime in an area when people know they are being watched, and captured images enable them to solve crimes by establishing who was at a particular scene at a particular point in time. Moreover, they resent what they see as far too much oversight from the city over the way they implement facial recognition surveillance. Earlier, they challenged a bill that would have given municipalities more oversight over how police departments use surveillance technology.
London’s embrace of facial recognition surveillance
Ultimately, the privacy battle over the proper use and scope of facial recognition surveillance will play out on the streets, not in the courts. In London, for example, the Metropolitan Police have experimented with facial recognition technology, with mixed results. Before rolling out the new technology, the police stated that people who covered their face in areas where there were cameras would not be stopped for suspicious behavior.
Yet, that was not necessarily the case once the technology was already in place. In one high-profile case, a man who was stopped for covering his face was later fined by the police after swearing and becoming hostile. Of course, you can view this behavior in one of two ways – as the actions of a “guilty” person who was properly stopped and detained by the police for covering his face, or as the outraged actions of an “innocent” person who was improperly stopped and detained on a ridiculous charge.
Without a doubt, there is a blurry gray line here. At what point do you stop someone simply because they are trying to maintain their privacy? London police officers were told to “use judgment” when stopping people who avoid the cameras, but doesn’t that imply that certain types of people – such as young people of color – will be stopped more often than others?
Facial recognition surveillance at the White House
In another high-profile case of facial recognition surveillance being rolled out with mixed (some might say dubious) results, consider the U.S. Secret Service’s experiment with facial recognition surveillance outside the White House. The idea is simple: scan the faces of all people strolling around the perimeter of the White House in order to detect potential “people of interest” (i.e., terrorists) who might pose a security risk to the U.S. president. For now, this is only a limited test, scheduled to be completed by the end of summer 2019.
When is the best time to regulate a new technology?
These three examples – the proposed ban on facial recognition surveillance in San Francisco, the use of facial recognition surveillance by London police, and the White House experiment with real-time face recognition – highlight a key point: the time to regulate a new technology is at the very outset, not after it has become so entrenched that getting rid of it seems prohibitively complex.
That’s why the current moment is so important. Facial recognition systems are now used in airports and at border crossings by immigration and customs enforcement authorities. They are used as crowd control tools in authoritarian nations. And they are used by social networks. It’s now just as common for someone to log in to a digital device with their face as with their fingerprint. So we really are at a tipping point when it comes to deciding what to do with high-tech surveillance systems that use our faces as the primary form of identification – if we wait a few more years, it might be too late to do anything about it.