If a local tech industry critic has his way, San Francisco could become the first US city to ban its agencies from using facial recognition technology.
Aaron Peskin, a member of the city’s Board of Supervisors, proposed the ban Tuesday as part of a suite of rules to enhance surveillance oversight. Beyond the ban on facial recognition technology, the ordinance would require city agencies to obtain the board’s approval before buying new surveillance technology, putting the burden on agencies to publicly explain why they want the tools and what the potential harms are. It would also require an audit of any existing surveillance tech in use by the city—things like gunshot-detection systems, surveillance cameras, or automatic license plate readers—and officials would have to report annually on how the technology was used, any community complaints, and with whom they share the data.
Those rules would follow similar ordinances passed in nearby Oakland and Santa Clara County. But with facial recognition, Peskin argues an outright ban makes more sense than regulating its use. “I have yet to be persuaded that there is any beneficial use of this technology that outweighs the potential for government actors to use it for coercive and oppressive ends,” he says.
Facial recognition technology is increasingly common for unlocking our phones and tagging our Facebook friends, but it remains rife with potential bias, especially around identifying people of color. In the hands of government, critics like Peskin argue, it enables all-too-easy access to real-time surveillance, especially given the availability of large databases of faces and names (think your driver’s license or LinkedIn).
“This is the first piece of legislation that I’ve seen that really takes facial recognition technology as seriously as it is warranted and treats it as uniquely dangerous,” says Woodrow Hartzog, professor of law and computer science at Northeastern University.
Privacy laws in Texas and Illinois require anyone recording biometric data, including face scans and fingerprints, to give people notice and obtain their consent. But that’s not always so effective in practice, explains Hartzog. As the technology grows more pervasive, simply declining to participate becomes less practical. The San Francisco proposal, while not addressing private surveillance in public spaces, takes a different tack. “Moratoriums and bans prevent the technology from getting embedded in everything,” Hartzog says. “Abuse doesn’t happen at the outset. It happens when the technology becomes entrenched and dismantling it becomes unimaginable.”
Those concerns have been echoed by prominent tech executives, including Microsoft CEO Satya Nadella, who last week in Davos warned that the use of facial recognition technology could become a “race to the bottom” without government oversight. According to Microsoft, the potential for abuse may put facial recognition beyond the reach of industry self-policing.
But the technology draws continued interest from law enforcement. Amazon’s Rekognition system has been tested by police in Orlando and in Washington County, Oregon. In the Bay Area, an official at BART, the regional mass transit system, briefly floated using facial recognition technology after a string of violence at stations last fall. That proposal was swiftly swatted down by privacy advocates.
Peskin is a well-known gadfly to tech, with proposals aimed at the heart of the local industry—not all of which have proceeded smoothly. Last year, in response to Facebook’s string of privacy gaffes, he sponsored legislation to strip the name of Facebook CEO Mark Zuckerberg from the city’s main public hospital. Another proposal would have banned workplace cafeterias in an effort to help restaurants struggling to woo customers.
This proposal, cosponsored by Board of Supervisors president Norman Yee, could draw opposition from law enforcement agencies. A bill in the California legislature last year that would have given municipalities oversight of local law enforcement’s use of surveillance technology, and which did not single out facial recognition, failed after facing opposition from police groups. When reached for comment, the San Francisco Sheriff’s Office responded that it was still reviewing the proposal, and the San Francisco Police Department said it does not comment on proposed legislation.
In any case, when San Francisco tries something, people tend to watch, says Matt Cagle, an attorney with the ACLU of Northern California, which supports the legislation. “The city at the core of our technology center is saying we shouldn’t deploy surveillance technologies just because we can.”