Documents obtained by BuzzFeed News via public records requests showed that Clearview AI previously misrepresented how law enforcement used its software and once told police officers to "run wild" with the tool by testing it on friends and family members.
More than 2,400 police agencies have entered contracts with Clearview AI, a controversial facial recognition firm, according to comments made by Clearview AI CEO Hoan Ton-That in an interview with Jason Calacanis on YouTube.
(Bloomberg) -- Australian and British privacy regulators opened a joint probe into Clearview AI Inc., saying they want to examine how the company’s facial-recognition technology uses people’s data, just days after the company suspended operations in Canada.
Clearview AI will no longer sell its facial recognition software in Canada, according to government privacy officials investigating the company. Officials from the provinces of Quebec, British Columbia, and Alberta continue to investigate Clearview AI and the Royal Canadian Mounted Police's use of its facial recognition software despite Clearview's exit.
Clearview AI’s planned expansion into the EU hit a roadblock yesterday when the bloc’s privacy watchdog said it “doubts” that the service is legal. The protests have already led Amazon to pause police use of facial recognition for a year, and IBM to stop offering the software entirely.
The American Civil Liberties Union is suing controversial facial recognition firm Clearview AI for violation of the Illinois Biometric Information Privacy Act (BIPA), alleging the company illegally collected and stored data on Illinois citizens without their knowledge or consent and then sold access to its technology to law enforcement and private companies.
Clearview AI built its controversial app by scraping billions of publicly available images of individuals from the internet without their knowledge or consent. Vermont’s Attorney General alleges that the resulting database violates the Vermont Consumer Protection Act, as well as the state's Data Broker Law.
The complaint goes on to explain that the "unfair acts and practices in commerce" Clearview's actions represent violate the Vermont Consumer Protection Act and: offend public policy as it relates to the privacy of Vermont's consumers; are immoral, unethical, oppressive, and unscrupulous; and cause substantial injury to consumers which is not reasonably avoidable to consumers themselves and not outweighed by countervailing benefits to consumers or to competition.
Clearview AI, a facial recognition technology company, has been using social media images to help law enforcement agencies. Lawmakers should take steps to ensure that police and other government officials can't use facial recognition technology to identify law-abiding residents and citizens.
Apple has disabled the iOS application of Clearview AI — the facial recognition company that claims to have amassed a database of billions of photos and has worked with thousands of organizations around the world — after BuzzFeed News determined that the New York–based startup had been violating the iPhone maker’s rules around app distribution.
Clearview AI, a facial-recognition software maker that has sparked privacy concerns, said Wednesday it suffered a data breach. The company has a database of 3 billion photos that it collected from the internet, including from websites like YouTube, Facebook, Venmo and LinkedIn. New York City–based Clearview said the database of images itself wasn't hacked.
TORONTO (Reuters) - Canadian privacy authorities have launched an investigation into New York-based Clearview AI to determine whether the firm’s use of facial recognition technology complies with the country’s privacy laws, the agencies said on Friday.
Obviously, we're not recommending that anyone do this; rather, we're pointing out how preposterous it is that they'll only delete the data they have on you if you send them more data, including your government-issued ID.
In a nutshell, on January 18, 2020 the New York Times published an article on Ton-That's (and, as you will see, others') tiny company Clearview AI, revealing, among many other serious things, that the company claims to have quietly scraped Facebook, YouTube, Venmo and millions of other websites to assemble a database of 3 billion faces.
Since the New York Times Clearview story was published, there has been some discussion online about using the federal Computer Fraud and Abuse Act (CFAA)—a notoriously vague pre-Internet law intended to punish those who break into private computer systems—to go after scraping of publicly available websites.
The order was issued Friday to county prosecutors, concerning a New York-based company called Clearview AI. “Like many people, I was troubled,” state Attorney General Gurbir Grewal said about the company’s techniques, which were first reported by The New York Times.
Clearview AI, which has scraped millions of photos from social media and other public sources for its facial recognition program — earning a cease-and-desist order from Twitter — has been pitching itself to law enforcement organizations across the country, including to the NYPD.