If signed into law, the bill would make Massachusetts the first state to fully ban the technology, following earlier measures barring the use of facial recognition in police body cameras and other, more limited city-specific bans on the tech.
Documents obtained by BuzzFeed News via public records requests showed that Clearview AI previously misrepresented how law enforcement used its software and once told police officers to "run wild" with the tool by testing it on friends and family members.
More than 2,400 police agencies have entered into contracts with Clearview AI, a controversial facial recognition firm, according to comments made by Clearview AI CEO Hoan Ton-That in an interview with Jason Calacanis on YouTube.
Clearview AI has been in the spotlight since a January investigation from The New York Times showed that its facial recognition technology was in widespread use among law enforcement agencies and private companies.
(Bloomberg) -- Australian and British privacy regulators opened a joint probe into Clearview AI Inc., saying they want to examine how the company’s facial-recognition technology uses people’s data, just days after the company suspended operations in Canada.
Clearview AI will no longer sell its facial recognition software in Canada, according to government privacy officials investigating the company. Officials from the provinces of Quebec, British Columbia, and Alberta continue to investigate Clearview AI, and the Royal Canadian Mounted Police's use of its facial recognition software, despite Clearview's exit.
Clearview AI’s planned expansion into the EU hit a roadblock yesterday when the bloc’s privacy watchdog said it “doubts” that the service is legal. The protests have already led Amazon to pause police use of facial recognition for a year, and IBM to stop offering the software entirely.
According to a BuzzFeed report, many law enforcement agencies in and near Minneapolis—where current national protests began after police were recorded killing George Floyd—use Clearview and other similar platforms to identify individuals.
The American Civil Liberties Union is suing controversial facial recognition firm Clearview AI for violation of the Illinois Biometric Information Privacy Act (BIPA), alleging the company illegally collected and stored data on Illinois citizens without their knowledge or consent and then sold access to its technology to law enforcement and private companies.
New Zealand Police first contacted the firm in January, and later set up a trial of the software, according to documents RNZ obtained under the Official Information Act. However, the high tech crime unit handling the technology appears to have not sought the necessary clearance before using it.
Vermont’s Attorney General alleges that this database violates the Vermont Consumer Protection Act, as well as the Data Broker Law. Clearview AI built its controversial app by scraping billions of publicly available images of individuals from the internet without their knowledge or consent.
The complaint goes on to explain that the "unfair acts and practices in commerce" Clearview's actions represent violate the Vermont Consumer Protection Act and: offend public policy as it relates to the privacy of Vermont's consumers; are immoral, unethical, oppressive, and unscrupulous; and cause substantial injury to consumers which is not reasonably avoidable to consumers themselves and not outweighed by countervailing benefits to consumers or to competition.
(Business Insider) -- Clearview AI, a facial recognition technology company, has been using social media images to help law enforcement agencies. Lawmakers should take steps to ensure that police and other government officials can't use facial recognition technology to identify law-abiding residents and citizens.
Clearview collects pictures posted online, combines them into a huge database, and lets others (law enforcement agencies, companies, and even some elites) search for your data.
Clearview was unknown to the general public until this January, when The New York Times reported that the secretive start-up had developed a breakthrough facial recognition system that was in use by hundreds of law enforcement agencies.
A startup that scraped billions of images from major web services – including Facebook, Google, and YouTube – created software that can be loaded onto smartphones to identify people using publicly available photos.
Apple has disabled the iOS application of Clearview AI — the facial recognition company that claims to have amassed a database of billions of photos and has worked with thousands of organizations around the world — after BuzzFeed News determined that the New York–based startup had been violating the iPhone maker’s rules around app distribution.
Clearview AI, a facial-recognition software maker that has sparked privacy concerns, said Wednesday it suffered a data breach. The company has a database of 3 billion photos that it collected from the internet, including from websites like YouTube, Facebook, Venmo, and LinkedIn. New York City-based Clearview said the database of images itself wasn't hacked.
TORONTO (Reuters) - Canadian privacy authorities have launched an investigation into New York-based Clearview AI to determine whether the firm’s use of facial recognition technology complies with the country’s privacy laws, the agencies said on Friday.
Obviously, we're not recommending for anyone to do this, but rather we're pointing out how preposterous it is that they'll only delete the data they have on you if you send them more data, including your government-issued ID.
In a nutshell, The New York Times published an article on January 18, 2020 about Ton-That's (and, as you will see, others') tiny company Clearview AI that revealed, among many other serious things, that the company claims to have quietly scraped Facebook, YouTube, Venmo, and millions of other websites to assemble a database of 3 billion faces.
Since the New York Times Clearview story was published, there has been some discussion online about using the federal Computer Fraud and Abuse Act (CFAA)—a notoriously vague pre-Internet law intended to punish those who break into private computer systems—to go after scraping of publicly available websites.
A lawsuit is taking aim at Clearview AI, a controversial facial recognition app being used by US law enforcement to identify suspects and other people.
The order was issued Friday to county prosecutors, concerning a New York-based company called Clearview AI. “Like many people, I was troubled,” state Attorney General Gurbir Grewal said about the company’s techniques, which were first reported by The New York Times.
Clearview AI, which has scraped millions of photos from social media and other public sources for its facial recognition program — earning a cease-and-desist order from Twitter — has been pitching itself to law enforcement organizations across the country, including to the NYPD.
Facebook has a setting that can recognize your face so that you're automatically suggested as a tag in pictures and video that your friends upload.
A secretive facial recognition software used by hundreds of police forces is raising concerns after a New York Times investigation said it could "end privacy as we know it."