FindBiometrics
Researchers Claim Their ‘Master Faces’ Can Fool Facial Recognition Systems. Are They Right?

August 11, 2021

In the past few years, it has become increasingly clear that a facial recognition system is only as good as the data used to train it. Critics and industry insiders alike have tried to raise awareness about racial bias, noting that algorithms trained only to identify white faces will struggle when asked to identify Black ones.

Unfortunately, there is not as much consensus about how to address the problem. Watchdogs like Fight for the Future have argued that the technology is inherently flawed, and have asked lawmakers to ban the public use of facial recognition. Technology providers like Onfido, on the other hand, have argued that facial recognition is worth pursuing, and that any bias concerns can be addressed with more representative datasets.

That debate gained a sharper focus following the release of a new paper from a team of researchers at Tel Aviv University. Ron Shmelkin, Tomer Friedlander, and Lior Wolf are members of the University’s Blavatnik School of Computer Science and its School of Electrical Engineering, and claim that they were able to create nine “master faces” that can impersonate more than 40 percent of the general population during a facial recognition scan.

Is Facial Recognition Safe?

At a glance, the report would seem to be damning for facial recognition advocates. If true, it would essentially confirm the worst fears of the technology’s critics. Any face-based identification system would be extremely vulnerable to spoofing, and hackers would not even need a real image of their target’s face in order to execute an attack. They could simply generate a fake master face, and use that to compromise millions of accounts.

So how does the researchers’ system work? The master faces were created with Nvidia’s StyleGAN system, which produces fake (read: computer-generated) faces that look reasonably realistic. The faces are not based on any real-world individuals, but they can bear a passing resemblance to someone you might meet on the street.

The researchers set out to exploit that tendency, comparing their StyleGAN faces to real photos in the University of Massachusetts’ Labeled Faces in the Wild (LFW) dataset. They then used a classifier algorithm to determine whether the fake faces matched the real ones, and kept the fake images if there was a strong resemblance. Those results were used to train a separate evolutionary algorithm that could produce better fakes when the process was repeated, in the hope that they would capture a higher percentage of the population with each iteration.
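The loop described above can be sketched in miniature. This is only an illustrative toy, not the paper’s method: StyleGAN and the face matcher are replaced by stand-ins (random latent vectors and a cosine-similarity threshold), and names like `evolve_master_face` are hypothetical. A candidate “face” seeded from the dataset is mutated repeatedly, and a mutation survives only if it matches at least as many identities as before.

```python
import random
import math

random.seed(0)

DIM = 8            # toy latent dimension (real StyleGAN latents are 512-D)
THRESHOLD = 0.9    # cosine-similarity cutoff standing in for a matcher's decision

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def coverage(candidate, dataset):
    """Fraction of the dataset the candidate 'impersonates' under the matcher."""
    matches = sum(1 for face in dataset if cosine(candidate, face) >= THRESHOLD)
    return matches / len(dataset)

def evolve_master_face(dataset, seed, generations=200, sigma=0.1):
    """(1+1) evolution strategy: perturb the latent, keep non-worse children."""
    best = list(seed)
    best_fit = coverage(best, dataset)
    for _ in range(generations):
        child = [x + random.gauss(0, sigma) for x in best]
        fit = coverage(child, dataset)
        if fit >= best_fit:
            best, best_fit = child, fit
    return best, best_fit

# Toy "population": embeddings clustered around a common mean, mimicking
# the demographic skew the article describes in LFW.
mean = [random.gauss(0, 1) for _ in range(DIM)]
dataset = [[m + random.gauss(0, 0.3) for m in mean] for _ in range(100)]

# Seed from an existing face, analogous to keeping StyleGAN fakes that
# already resemble someone in the dataset.
seed_face = random.choice(dataset)
start = coverage(seed_face, dataset)
master, fit = evolve_master_face(dataset, seed_face)
print(f"seed covers {start:.0%}; evolved face covers {fit:.0%}")
```

Because a child is kept only when its coverage is at least the parent’s, the final coverage can never fall below the seed’s — the same hill-climbing property that lets the real system improve its fakes with each iteration.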

That process eventually culminated with the nine master faces detailed in the report. The researchers described them as master keys that could unlock the three facial recognition systems that were used to test the theory. In that regard, they challenged the Dlib, FaceNet, and SphereFace systems, and their nine master faces were able to impersonate more than 40 percent of the 5,749 people in the LFW set.
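One detail worth making concrete is how a handful of faces can jointly cover far more identities than any single face. The paper’s exact selection procedure is not spelled out here, so the sketch below uses a standard greedy maximum-coverage heuristic as an assumed stand-in: at each step, pick the candidate that matches the most not-yet-covered identities. All names and the toy data are hypothetical.

```python
import random
import math

random.seed(1)

DIM = 6

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

def matches(face, dataset, threshold=0.9):
    """Indices of dataset identities this face would pass as."""
    return {i for i, other in enumerate(dataset) if cosine(face, other) >= threshold}

def greedy_master_set(candidates, dataset, k=9, threshold=0.9):
    """Greedily pick k faces whose combined matches cover the most identities."""
    covered, chosen = set(), []
    for _ in range(k):
        best = max(candidates, key=lambda c: len(matches(c, dataset, threshold) - covered))
        chosen.append(best)
        covered |= matches(best, dataset, threshold)
    return chosen, len(covered) / len(dataset)

# Toy identities in three loose clusters; candidates are noisy copies of them.
centres = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(3)]
dataset = [[c + random.gauss(0, 0.2) for c in random.choice(centres)] for _ in range(60)]
candidates = [[x + random.gauss(0, 0.1) for x in random.choice(dataset)] for _ in range(30)]

chosen, frac = greedy_master_set(candidates, dataset, k=3)
print(f"{len(chosen)} faces cover {frac:.0%} of the toy set")
```

The union of matches grows with each pick, which is why nine faces can reach a combined 40 percent even if no single face comes close to that figure on its own.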

Representation Matters

While those numbers are obviously concerning, there is good reason to question the researchers’ conclusion. Only two of the nine master faces depict women, and most depict white men over the age of 60. In plain terms, that means that the master faces are not representative of the global public, and they are not nearly as effective when applied to anyone who falls outside one particular demographic.

That discrepancy can largely be attributed to the limitations of the LFW dataset. Women make up only 22 percent of the dataset, and the numbers are even lower for children, the elderly (those over the age of 80), and for many ethnic groups.

It is possible that another team could use the same process to produce more master keys with more representative data. The Tel Aviv University researchers try to make that case, arguing that their technique can scale and therefore exposes a major facial recognition flaw.

Even so, their claims that their nine keys are good for 40 percent of the public are exaggerated at best. They also become more dubious when accounting for the facial recognition systems that led to that number. Dlib, FaceNet, and SphereFace are not commercial facial recognition systems. Rather, they are simply the most accurate systems tested using the LFW dataset, which means they are likely to exhibit many of the same biases as the set itself, and lack the robustness that would be expected from a more rigorous facial recognition system.

Caution, Consideration, and Good Data

Given the nature of the data, it’s unclear if the report has any bearing on the real world. There is a chance that master faces could evolve into a significant threat. There’s also a chance that countermeasures like liveness detection make the more advanced commercial systems resistant to such forms of spoofing.

Either way, the report clarifies the ideological stakes in the facial recognition debate. For many, the mere existence of the technology threatens privacy and civil liberties. Even if the numbers are overblown, the damage of a security breach cannot be undone, and that possibility (heightened with a master face) may be enough for some to swear off the technology entirely.

The counterargument is that better development and testing practices can make facial recognition safe enough to use in practical settings. The question is whether or not governments and private companies can be trusted to live up to those standards. The new report underscores the importance of quality data, and developers will need to keep that in mind if they want to alleviate those fears.

Sources: Vice, The Register

August 11, 2021 – by Eric Weiss
