Oosto is once again speaking out against Clearview AI. This time around, the company called attention to a recent article in The Washington Post that provided more details about the pitch deck that Clearview has been sending to potential customers. That pitch deck declares Clearview’s intentions to expand its business to the private sector, and to grow its database to 100 billion images from all over the world.
The problem, according to Oosto, is that Clearview is building an extremely dangerous identification tool, and breaking international laws in order to do so. The company’s database already has 10 billion images, most of which were acquired by scraping sites like Google and Facebook. Several countries – including Canada, France, and Australia – have already ruled that the practice violates their privacy laws, with the decision of the Australian Information Commissioner having drawn praise from Oosto in November.
However, Clearview has largely ignored those legal setbacks, and instead seems to be accelerating. The company does not appear to have complied with court orders in Australia and elsewhere to delete images, and has boasted that it will be able to identify almost anyone in the world once it hits the 100 billion image threshold. It also wants to move beyond law enforcement to put its technology in the hands of private companies like Airbnb and Uber.
Oosto believes that such a move would be a catastrophic development from a privacy perspective. In that regard, the company drew a distinction between 1:1 and 1:N identification, noting that companies like Uber often do need to verify the identities of their customers, but only to confirm that someone is in fact the person they claim to be. That can be done on a 1:1 basis (perhaps by matching a selfie to an identity document), and without gathering personal information from the rest of the general public.
Clearview, on the other hand, is built to perform 1:N identification on an unprecedented scale.* Selling that technology to private companies would essentially give any Clearview customer the ability to identify anyone, regardless of their relationship to a particular company. Oosto argues that such power is wildly disproportionate for use cases that only require 1:1 verification, and is akin to placing an atomic bomb in the hands of private citizens with very little oversight.
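The difference between the two matching modes can be made concrete with a minimal sketch. The embeddings, similarity threshold, and function names below are purely illustrative assumptions, not any vendor's actual pipeline: a 1:1 check compares a probe face embedding against the single identity a person claims, while a 1:N search compares the probe against every template in a gallery.

```python
import math

def cosine_similarity(a, b):
    # Similarity between two face-embedding vectors (higher = more alike).
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def verify_1_to_1(probe, enrolled, threshold=0.6):
    # 1:1 verification: "is this person who they claim to be?"
    # Only the single enrolled template for the claimed identity is consulted.
    return cosine_similarity(probe, enrolled) >= threshold

def identify_1_to_n(probe, gallery, threshold=0.6):
    # 1:N identification: "who is this person?"
    # The probe is searched against every template in the gallery,
    # which is why gallery size (10 billion vs. 100 billion) matters.
    best_id, best_score = None, threshold
    for identity, template in gallery.items():
        score = cosine_similarity(probe, template)
        if score >= best_score:
            best_id, best_score = identity, score
    return best_id
```

The sketch makes Oosto's point visible in the code itself: `verify_1_to_1` never touches data about anyone other than the claimant, while `identify_1_to_n` requires a pre-built gallery of the general public to function at all.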
“The practice of scraping people’s images and identities without their consent and performing facial recognition based on that data is questionably legal and a serious violation of public privacy,” said Oosto CEO Avi Golan. “Even used only by law enforcement agencies – this is a violation of privacy and public confidence in the technology. The leakage of these capabilities into the private sector is a dangerous escalation.”
While Oosto is a developer of surveillance tech, the company has consistently advocated for the construction of legal frameworks that guarantee the ethical use of the technology. Clearview has provided facial recognition services for companies like Macy’s in the past, though the company suspended all of its private sector contracts in 2020 to avoid a formal injunction when its activities first came to light.
*An earlier version of this article stated that Clearview is built for “large-scale 1:N surveillance operations.” Clearview AI CEO Hoan Ton-That has reached out to clarify that the company does not sell any camera products for real-time facial recognition. He also stated that real-time facial recognition should be reserved for “persons of interest, missing persons, those with outstanding warrants for serious offenses, or for a specific security related purpose known in advance. Any deployment of facial recognition technology should have the appropriate and proportionate amount of data for that particular use case.” Updated February 25, 2022
February 24, 2022 – by Eric Weiss