But wait, there’s more. Completely under the radar, the company has been making the tool available to hundreds of law enforcement agencies, ranging from local police departments to the F.B.I. and the Department of Homeland Security. It claims (though it wouldn’t show the Times a list) that more than 600 agencies are already using the tool. And, not surprisingly, they love it. Consider this reliable testimonial from what appears to be a hastily assembled Clearview AI website designed to assure us that it intends to do no evil. Someone identified as a “Detective Constable in the Sex Crimes Unit, Canadian Law Enforcement” said:
Clearview is hands-down the best thing that has happened to victim identification in the last 10 years. Within a week and a half of using Clearview, [we] made eight identifications of either victims or offenders through the use of this new tool.
Also not surprisingly, privacy and ethics groups hate it. The backlash has been swift and forceful.
- Twitter has demanded that Clearview AI remove the data it collected from the public profiles of its users and sent the company a cease-and-desist letter.
- A lawsuit filed by an individual in the US District Court for the Northern District of Illinois, Eastern Division, alleges that Clearview AI's actions are a threat to civil liberties. The complaint reads:
Without obtaining any consent and without notice, Defendant Clearview used the internet to covertly gather information on millions of American citizens, collecting approximately three billion pictures of them, without any reason to suspect any of them of having done anything wrong, ever. Clearview used artificial intelligence algorithms to scan the facial geometry of each individual depicted in the images, a technique that violates multiple privacy laws.
- New Jersey State Attorney General Gurbir Grewal has banned police in the state from using the app and ordered Clearview AI to take down a promotional video claiming the app helped capture a terrorist in the state.
- The Electronic Frontier Foundation (EFF) and 40 other privacy groups sent a letter to the Department of Homeland Security’s Privacy and Civil Liberties Oversight Board calling for a ban on further implementation of facial-recognition technologies funded by the government. Said the EFF:
All of this shows, yet again, why we need to press pause on law enforcement use of face recognition. Without a moratorium or a ban, law enforcement agencies will continue to exploit technologies like Clearview’s and hide their use from the public…We need to stop the government from using this technology before it’s too late.

In addition to concerns about the obvious invasion of privacy, there are also concerns in some circles about the company’s close ties to conservative politics and politicians, and fears that the tool might be used to troll political enemies. Clearview was founded by Richard Schwartz, a former aide to Rudolph W. Giuliani when he was mayor of New York, and backed financially by Peter Thiel, the conservative venture capitalist who was an early investor in Facebook and a co-founder of Palantir.
As toxic as politics are in the U.S. and Britain right now, this further erosion of the right to anonymity seems likely to deepen the distrust that liberal citizens have been feeling.
My take

It’s hard to know just how seriously to take Clearview AI. But the lengths the company has gone to in order to stay under the radar and keep the public in the dark about what it’s up to do not inspire confidence. Nor does it inspire much confidence in the tech press, which missed the story.
At this point, we don’t know for sure just how accurate its technology is, or all the underhanded ways it might be misused. We do know that its more extravagant claims have proved less persuasive than they were presented, but that should not surprise us in a post-Cambridge Analytica world.
I’m sure Ton-That is a heck of a developer, but it seems obvious that the big players like Google could have developed a similar product years ago and turned it into a thriving business. The fact that they didn’t (or, if they did, they’ve kept it very well hidden) tells me this is serious stuff with wide-scale implications across multiple ethical and development dimensions.