Facial recognition is one of the most contentious biometric technologies on the market today. It also happens to be one of the most confusing, insofar as the term ‘facial recognition’ is often thrown around quite casually, and can describe a wide range of different use cases. Depending on the situation, facial recognition can refer to everything from a self-contained authentication solution like Face ID, all the way up to full-scale surveillance networks that can track the movements of people all over the world.
That lack of clarity can make it difficult to parse some of the issues in play, especially now that lawmakers are starting to regulate facial recognition. What (if anything) should the public be afraid of, and what are the priorities for those tasked with placating those fears?
To help answer those questions, we’ve compiled a list of recent stories that highlight different aspects of the facial recognition debate. These stories are indicative of trends that are taking place all over the world, and the challenges facing lawmakers as they try to navigate between private stakeholders and public opinion.
New York’s Biased Facial Recognition Map
For those who are opposed to facial recognition, the debate often begins with racial bias. Past studies have shown that many facial recognition algorithms struggle when asked to identify women and people with darker skin, and false matches can create even more problems for disadvantaged communities that are already subjected to heightened police scrutiny.
In truth, accuracy is unlikely to be as big an issue in the future, at least from a pure technology perspective. Historically speaking, the databases that were used to train facial recognition algorithms were not representative of the general population. Most developers have now recognized the problem, and have taken steps to build datasets that more accurately reflect the population. Those efforts are already starting to pay dividends. The racial gap is closing, and it is likely that the top facial recognition systems will soon perform at a high level regardless of demographics.
However, that does not necessarily mean that racial bias has been eliminated. Even if the technology is agnostic, facial recognition algorithms are still implemented by human beings, and it is possible to deploy the technology in a discriminatory fashion.
Those concerns are typified in a recent report from Amnesty International that mapped the spread of police facial recognition cameras in New York City. That report found that facial recognition cameras were more densely clustered in neighborhoods with larger minority populations, exposing those residents to more surveillance than people living in other areas and reinforcing biased policing policies that are already in place. Facial recognition can be a dangerous tool if it is applied unevenly, with China’s attempts to build a facial recognition solution that could identify and monitor Uighurs standing as another targeted example.
While those who oppose facial recognition in any form will continue to raise concerns about the technology’s accuracy, those objections will carry less weight as the technology improves. The debate amongst lawmakers and activists will likely shift to specific use cases, as they try to prevent governments and businesses from deploying facial recognition in a way that infringes on people’s privacy.
The Legal Battle in New Orleans
Law enforcement agencies have been some of the most ardent supporters of facial recognition, and have consistently fought to expand their access to the technology. Civil rights activists, on the other hand, have tried to limit the state’s access to automated identification tools, and have been doing well in the legislative battle thus far. Most of the facial recognition laws that have been passed have either banned or limited the scope of police facial recognition programs, while still permitting the use of the technology in the private sector.
Having said that, the matter is far from settled. That is aptly demonstrated in recent stories from Virginia and New Orleans, where lawmakers are now trying to roll back facial recognition laws that were passed in the past two years. As the New Orleans case illustrates, police agencies around the world will continue to fight for access to facial recognition, and will insist that they need unfettered access to the technology to solve crimes even after guardrails have been put in place.
That back and forth should theoretically lead to the passage of bills that fall somewhere between the two extremes. Privacy activists will try to deny police access to mass surveillance technology, while lawmakers will try to make facial recognition available to the police when they need to identify a witness or a suspect during an active investigation.
The goal would be to close some of the loopholes that exist in the current environment. For instance, some bills have made executing a facial recognition search similar to executing a warrant, with comparable oversight and permission requirements. The legal language will get more precise (and more standardized) as lawmakers move away from blanket bans and try to dictate exactly what is and is not allowed with regards to facial recognition.
Public Relations and the IRS
Critics tend to focus on surveillance applications of facial recognition, and not without reason. The technology can encroach on people’s privacy, and some organizations (and law enforcement in particular) have shown a callous willingness to do so when left to their own devices.
At the same time, sensationalistic headlines can crowd out some of the more small-scale (and far less invasive) applications of the technology. The reality is that many applications of facial recognition are not nearly so dystopian, and much of the momentum (and financial support) behind facial biometrics in 2021 was geared toward more personal authentication use cases.
The problem for many developers is that surveillance often gets conflated with authentication in the public consciousness. That can create confusion in facial recognition debates, and amplifies the resistance to face-based authentication based on fears about surveillance.
The IRS learned that lesson firsthand when it announced that ID.me identity verification would soon be mandatory for people accessing online accounts. Public outrage prompted the agency to walk back the requirement only a few short weeks after it was announced, while ID.me has since expanded its non-biometric identity verification options.
On the other hand, ID.me is still an IRS partner (and has been since 2017), and biometric verification is still an option for those who are interested in the program. Millions of people have already opted into the service, and millions more will continue to do so as long as it remains available.
That suggests that the public’s concerns are often more emotional than they are practical. People are genuinely worried about privacy, but they are willing to try new technologies when they encounter them in a real-world situation. They will also keep using those technologies if they come with meaningful convenience benefits.
While that trend bodes well for identity verification providers, they will still need to educate the public about the differences between the different kinds of facial identification if they want to accelerate that process. The ID.me saga is telling because it shows that messaging matters, and suggests that providers will need to make certain concessions to convince people to buy in. For instance, ID.me has stressed that people will have full control of their personal information, and can delete any selfie images associated with their accounts.
Other identity providers have similarly emphasized the fact that they perform one-to-one identification. That means that they are only trying to determine whether or not a new selfie matches the face associated with the account holder, and not searching for a match in a larger database. Those more contained solutions have proven to be palatable in personal devices like the iPhone (with Face ID), and history could repeat itself as face-based identity verification becomes more common and people become more familiar with the technology.
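The distinction between one-to-one verification and one-to-many identification can be made concrete with a small sketch. The snippet below is a hypothetical illustration, not any vendor's actual implementation: it assumes faces have already been converted to embedding vectors by some model, and the 0.8 similarity threshold is an arbitrary placeholder.

```python
import numpy as np

def cosine_similarity(a, b):
    # Similarity between two face-embedding vectors (1.0 = identical direction).
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(probe, enrolled, threshold=0.8):
    # One-to-one verification: does the new selfie match the single
    # embedding stored for this account holder? This is what Face ID
    # and selfie-based identity checks do.
    return cosine_similarity(probe, enrolled) >= threshold

def identify(probe, gallery, threshold=0.8):
    # One-to-many identification: search an entire database of embeddings
    # for the best match. This is the surveillance-style operation that
    # draws most of the criticism.
    scores = {name: cosine_similarity(probe, emb) for name, emb in gallery.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] >= threshold else None
```

The key design difference is scope: `verify` only ever touches one stored template, tied to an account the user controls, while `identify` implies a centralized gallery of faces that the user may never have consented to join.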
The Clearview Conundrum
Clearview AI is the single most controversial company currently operating in the facial recognition space. That has as much to do with its data collection methods as with its technology. The company has quite openly pulled images from social media platforms like YouTube and Facebook to train its facial recognition algorithm, often in violation of terms of service and despite cease and desist orders in the US and legal injunctions abroad.
In doing so, Clearview is testing internet privacy laws while pushing facial recognition to its most extreme application, where any photo that has ever been taken of an individual is fair game for a match. Even if some photos escape the dragnet, almost everyone’s image is now online in one way or another. In practice, that more or less guarantees that almost anyone could be identified and found, no matter how hard they try to stay off the grid.
Clearview also raises fundamental questions about who has access to the technology. Though the company focuses primarily on law enforcement, it has worked with the private sector in the past, and has hinted that it would like to do so again in its latest investor pitch deck. That, again, takes facial recognition to a logical extreme, insofar as it would make the technology available to anyone able to pay for Clearview’s services. That proliferation would make it far more difficult to control the technology, and would leave far less oversight of those who might try to use it for illicit purposes (with stalking being one of the most obvious examples).
The question moving forward is whether or not Clearview will set the tone for the entire industry. The company has thus far asked for forgiveness rather than permission, but that approach has drawn the ire of privacy advocates and other facial recognition developers that have made privacy protections a key component of their platforms. More than that of any other company, the fate of Clearview AI will indicate where facial recognition is headed, and what the landscape will look like in the next few years.
Europe Writes the Laws
While lawmakers have moved to regulate facial recognition in the US, such initiatives have not gained much traction at the federal level. The regulatory framework in the country is far more piecemeal, with different laws being implemented on a state-by-state (and city-by-city) level.
With that in mind, the European Union is more likely to provide the template for widespread facial recognition legislation. The European Commission and European Parliament have both indicated that they are committed to strong data privacy regulations, and while it will still be a couple of years before any laws are passed and go into effect, their efforts suggest that they are at the very least more committed to a facial recognition bill than their American counterparts.
As a result, Europe has an opportunity to craft a facial recognition legal framework for the entire world. Lawmakers elsewhere could mimic the language of the European bills, and those laws would inform the development of facial recognition systems now and in the future.
March 4, 2022 – by Eric Weiss