Australia and the UK open joint investigation of Clearview AI

Australia and the UK have opened a joint investigation into Clearview AI. Specifically, the regulators are concerned with Clearview's practice of "scraping" personal images from the web and processing biometric data.

The two countries aren't the first to question Clearview AI, the company behind the controversial facial recognition program. Clearview AI claims to have a database of three billion images gathered from the open web. It offers that database to law enforcement, supposedly so they can identify criminals and victims. But the practice raises some obvious privacy concerns.

Twitter, Google and YouTube have all sent Clearview AI cease-and-desist letters, alleging that Clearview violates their terms of service. Facebook and Venmo also demanded Clearview stop scraping their data. The ACLU rejected Clearview’s claim that its tech is “100% accurate,” and it recently sued the company for allegedly violating an Illinois state law.

Despite these concerns, thousands of public law enforcement agencies and private companies work with Clearview. A data breach earlier this year exposed the company’s full client list, which includes Best Buy, Macy’s, the Department of Justice and a number of foreign states, like the UAE. That hack also raised concerns about how secure Clearview’s database really is.

The Office of the Australian Information Commissioner (OAIC) and the UK's Information Commissioner's Office (ICO) will conduct the investigation. Until it's complete, the OAIC and ICO aren't saying much — just that the investigation will be conducted in accordance with the Australian Privacy Act 1988 and the UK Data Protection Act 2018. The two offices may also work with other data protection authorities that have raised similar concerns.

Facial recognition as a whole is facing increased scrutiny in the US. IBM has stopped working on the tech due to human rights concerns. Amazon placed a “moratorium” on police use of its tech, and Microsoft says it won’t sell facial recognition software to police without federal regulation — though reportedly, Microsoft attempted to sell its tech to the DEA. Police in San Diego and Boston won’t use facial recognition, and New York City passed a NYPD surveillance oversight bill. Meanwhile, in Detroit, facial recognition has already led to one wrongful arrest.
