US politicians discover personally the pitfalls of facial recognition: is it time to ban it completely?

For all the theoretical concerns about the limitations of facial recognition systems, there’s nothing like personal experience to hammer the point home. That was confirmed recently when the ACLU ran an interesting experiment using Amazon’s cloud-based Rekognition system, which Privacy News Online discussed a couple of months back:

Using Rekognition, we built a face database and search tool using 25,000 publicly available arrest photos. Then we searched that database against public photos of every current member of the House and Senate. We used the default match settings that Amazon sets for Rekognition.

The result was noteworthy: Amazon’s software incorrectly matched 28 members of Congress, identifying them as other people who have been arrested for a crime. Particularly troubling was the racial bias displayed by the matches: nearly 40% of Rekognition’s false matches in the ACLU’s test were of people of color, even though they make up only 20% of Congress. Also striking was how little this experiment cost – just $12.33, which underlines how cheap facial recognition systems have become. That fact means the barrier to deploying them widely is low.

Not surprisingly, some US politicians were not best pleased by these results. CNET obtained copies of letters sent by some of them to Amazon’s CEO, Jeff Bezos. One pointed out why Rekognition’s apparent racial bias was such a concern – something that the Congressional Black Caucus had already raised back in May, when details of the Rekognition system were first revealed. As the recent letter put it:

Given the results of this test, we are alarmed about the deleterious impact this tool – if left unchecked without proper, consistent, and rigorous calibration – will have on communities of color: immigrants; protestors peaceably assembling and others petitioning the Government for a redress of grievances; or any other marginalized group.

Calibration is a key issue for facial recognition systems. If the training data is unrepresentative, or is used incorrectly, the system will produce biased results. Many people probably view matches obtained with facial recognition as trustworthy, thanks to a kind of circular reasoning: since it’s a match, it must be correct. That’s not the case: a badly configured system will find matches where none exist, as the ACLU experiment shows. Another issue was raised by Amazon’s response to this incident, written by Matt Wood, who is in charge of machine learning at Amazon Web Services. He noted an important variable that helps determine how good the matches are, as measured by the number of false positives – apparent matches that are incorrect – the system generates. This is the so-called “confidence threshold”:

The 80% confidence threshold used by the ACLU is far too low to ensure the accurate identification of individuals; we would expect to see false positives at this level of confidence. We recommend 99% for use cases where highly accurate face similarity matches are important (as indicated in our public documentation).

Amazon’s post went on to explain that the company carried out its own experiment, seeking matches for public photos of all members of US Congress among a dataset of over 850,000 faces commonly used in academia. Despite the much larger dataset, there were zero false positives when the confidence threshold was set to the more stringent 99%. As Amazon rightly notes, this shows the critical importance of choosing appropriate confidence levels when deploying facial recognition systems – something that is rarely discussed. However, an ACLU spokesperson pointed out to Ars Technica that this argument is rather undermined by a post on Amazon’s site explaining in detail how police forces can use Rekognition to “identify persons of interest for law enforcement”, which set a confidence level of 85%, not 99%.
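To make the threshold effect concrete, here is a minimal Python sketch – not using the Rekognition API itself, and with all names and similarity scores invented for illustration – showing how the same set of candidate matches produces different false-positive counts at an 80% cutoff (Rekognition’s default) versus the 99% cutoff Amazon recommends for identification:

```python
# Hypothetical similarity scores (0-100) returned by a face-matching
# system when comparing one probe photo against a mugshot database.
# A "match" is any candidate whose score clears the chosen threshold.
candidates = [
    ("mugshot_0193", 81.4),   # superficial resemblance only
    ("mugshot_2047", 83.0),   # superficial resemblance only
    ("mugshot_0777", 99.2),   # genuinely the same person
]

def matches(scores, threshold):
    """Return the candidates whose similarity clears the threshold."""
    return [name for name, score in scores if score >= threshold]

# At the default 80% threshold, all three candidates clear the bar,
# so two of the three reported "matches" are false positives.
print(matches(candidates, 80))   # ['mugshot_0193', 'mugshot_2047', 'mugshot_0777']

# At the stricter 99% threshold, only the genuine match survives.
print(matches(candidates, 99))   # ['mugshot_0777']
```

The point is not the particular numbers, which are made up, but that the threshold is a deployment choice: the same underlying system can look wildly error-prone or highly accurate depending on where the operator sets the bar.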

Amazon’s response to the ACLU’s results also suggested that in real-world public safety and law enforcement situations, Amazon Rekognition is “almost exclusively used to help narrow the field and allow humans to expeditiously review and consider options using their judgment”, rather than allowing the system to make fully autonomous decisions. That may be a rather over-optimistic view.

One of the key concerns with the wider roll-out of low-cost facial recognition systems is that they will become a routine part of policing, for example as part of the new FirstNet system discussed last week. The technology is likely to be deployed in situations where officers are required to make rapid decisions; there may not be much time to “review and consider options” – they must act. The natural tendency will be to trust facial recognition systems without worrying about technical issues like confidence levels. It is easy to imagine errors being made, perhaps with fatal consequences. Even Axon, the biggest supplier of body cameras in the US, thinks this is a concern.

Amazon’s Matt Wood concludes his blog post by supporting Microsoft’s earlier call for government regulations:

machine learning is a very valuable tool to help law enforcement agencies, and while being concerned it’s applied correctly, we should not throw away the oven because the temperature could be set wrong and burn the pizza. It is a very reasonable idea, however, for the government to weigh in and specify what temperature (or confidence levels) it wants law enforcement agencies to meet to assist in their public safety work.

There are already suggestions for how this might be done through regulation. A 2016 report entitled “The Perpetual Line-Up: Unregulated Police Face Recognition in America”, from a group at the Center on Privacy & Technology at Georgetown Law, not only analyzed the problem but also offered some solutions. These include Model Face Recognition Legislation and a Model Police Face Recognition Use Policy.

However, Woodrow Hartzog, Professor of Law and Computer Science at Northeastern University School of Law and College of Computer and Information Science, takes a more pessimistic view. He believes that a regulatory framework is not enough – he says we need a complete ban on the technology:

Because facial recognition technology holds out the promise of translating who we are and everywhere we go into trackable information that can be nearly instantly stored, shared, and analyzed, its future development threatens to leave us constantly compromised. The future of human flourishing depends upon facial recognition technology being banned before the systems become too entrenched in our lives. Otherwise, people won’t know what it’s like to be in public without being automatically identified, profiled, and potentially exploited. In such a world, critics of facial recognition technology will be disempowered, silenced, or cease to exist.

Hartzog’s call is bold and well-meaning, but is it realistic? Are police forces or intelligence agencies likely to be willing to forgo the surveillance capabilities that facial recognition offers? Even if use of the technology is forbidden, it is hard to tell when it is being deployed: every CCTV camera, every photo, can potentially feed into a facial recognition system, and no one would know. Equally, the technology and expertise are already out there, and there is no way to call them back. The technology will always be used and abused by those who want its power, and who are prepared to dodge or simply ignore any legal consequences that might follow in some jurisdictions.

Maybe the best we can hope for is reasonable government regulation that limits the worst excesses of facial recognition systems. The fact that Microsoft and now Amazon are showing a willingness to move in that direction makes it likely that other tech companies will do the same. Some US politicians have just learned the hard way how flawed facial recognition systems can be. Perhaps they are beginning to appreciate the serious consequences that might flow from these kinds of false positives. As a result, now seems like a good moment to begin a wider conversation about regulating the technology, before it is too late and the situation becomes as bad as Hartzog fears.

Featured image by ACLU.

About Glyn Moody

Glyn Moody is a freelance journalist who writes and speaks about privacy, surveillance, digital rights, open source, copyright, patents and general policy issues involving digital technology. He started covering the business use of the Internet in 1994, and wrote the first mainstream feature about Linux, which appeared in Wired in August 1997. His book, "Rebel Code," is the first and only detailed history of the rise of open source, while his subsequent work, "The Digital Code of Life," explores bioinformatics - the intersection of computing with genomics.

Similar Articles:

Dedicated first responder network raises privacy, transparency and net neutrality issues

Eye in the Sky – Drone Surveillance and Privacy

After call to implant microchips in people awaiting trial, are they about to become the next threat to our privacy?

Julia Reda – Out-of-control censorship machines removed my article warning of out-of-control censorship machines
