FaceTec’s Rich Lobovsky Demystifies Biometric Liveness Detection in Conversation with FindBiometrics

In 2019 there are two critical trends developing in biometrics: liveness detection for authentication, and user education. That’s why FindBiometrics Managing Editor Peter Counter recently interviewed Rich Lobovsky, SVP Business Development, FaceTec. Their conversation centers on the biometric face authentication specialist’s latest entry in a series of educational white papers, “Liveness Detection: Biometric Frontline or Final Frontier?” – an in-depth document that clearly communicates everything you need to know about the importance of biometric liveness detection.

In this exclusive interview with FindBiometrics, Lobovsky discusses how technological advances have necessitated the evolution of the language used to describe biometric tech, clarifies common misconceptions regarding liveness detection, and outlines what’s at stake if the security community doesn’t embrace robust anti-spoofing measures when implementing biometric authentication. He also provides a glimpse into the FaceTec spoof-lab, which played a key role in the company’s landmark iBeta Presentation Attack Detection certifications.

Read the full interview with Rich Lobovsky, SVP Business Development, FaceTec:

Peter Counter, Managing Editor, FindBiometrics: Thanks for joining me to talk about this new white paper of yours.  My first question has to do with the technological advances within the last couple of years. Artificial intelligence has really accelerated the development of solutions in the digital security business, and it has changed how we describe things; what we once accepted as state of the art has been changing very rapidly.  What are the most important changes in technological performance that you are seeing, and how have they changed the descriptive terms that you use?

Rich Lobovsky, SVP Business Development, FaceTec: AI is starting to make an impact in many different industries, but its effectiveness really depends on the use case and the level of accuracy required.  In some cases, if the AI is correct 90 percent of the time it adds value. But in other applications like piloting autonomous vehicles, and, in our case, using AI for human liveness detection during biometric authentication, you have to do far better than 90 percent.  It can take dozens of algorithms working together to get to a “four-or-five 9s” confidence level.

Regarding terminology, “facial recognition” typically makes you think of what is being used in an airport, a stadium or casino checking for a match to someone in a database.  With AI-driven face authentication, there’s an additional, concurrent component that is just as important as the matching, and that’s Liveness Detection. So the term “face authentication” is used because traditional matching is only one part of a comprehensive authentication process involving liveness and 3D depth detection as the first critical security checks.

FindBiometrics: Absolutely.  In the same vein of facial recognition versus face authentication, another terminological thing that has cropped up lately is presentation attack detection getting conflated with low false acceptance rates in the mainstream discussion.  To clarify for everybody, what is the difference between liveness detection and high accuracy? Is there a correlation between them or are they two different beasts?

FaceTec:  They really are two entirely different concepts that are commonly conflated.  Liveness Detection assesses concurrent combinations of human traits, and the checks have to be more sophisticated than simple response actions like smiling or blinking, which are easily spoofed.  Our algorithms are looking at things like skin and hair texture, reflection of light off the skin and eyes, pupil dilation, subtle movements in the face, etc.  There are several dozen human parameters, actual physical traits, that collectively determine that the individual present is alive.  We also simultaneously look at 3D depth, ensuring that the object is not two-dimensional like a photo or video.  ZoOm does this by leveraging the patented ZoOm motion, where you fit your face into a small oval on the screen, then the oval gets larger and you move closer to fit your face into the second oval.  During the two-second process, we capture 30-60 frames of video per second and create an encrypted 3D FaceMap, in which perspective distortion, or the fisheye effect, is measured.  If it’s not a three-dimensional object, our algorithms will conclude it’s a 2D image like a picture, rejecting the spoof attempt.
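The decision logic Lobovsky describes can be sketched roughly in code. This is a hypothetical illustration only: the signal names, weights, and thresholds below are the author-of-this-sketch’s assumptions, not FaceTec’s actual algorithms, which combine dozens of proprietary checks.

```python
# Hedged sketch of a liveness gate run *before* any face matching, combining
# several hypothetical per-trait confidence scores plus a 3D depth cue.
from dataclasses import dataclass

@dataclass
class LivenessSignals:
    skin_texture: float            # 0.0-1.0 confidence from each detector
    light_reflection: float
    pupil_response: float
    perspective_distortion: float  # 3D depth cue; near zero for flat photos

def is_live(sig: LivenessSignals, threshold: float = 0.8) -> bool:
    # A 2D artefact fails the depth check outright, regardless of the rest.
    if sig.perspective_distortion < 0.5:
        return False
    scores = [sig.skin_texture, sig.light_reflection,
              sig.pupil_response, sig.perspective_distortion]
    # Require every signal to clear the bar, not just the average, so one
    # strongly spoofed trait can't be hidden by the others.
    return all(s >= threshold for s in scores)

print(is_live(LivenessSignals(0.9, 0.85, 0.92, 0.88)))  # True
print(is_live(LivenessSignals(0.9, 0.85, 0.92, 0.05)))  # False: flat photo
```

The key design point the sketch captures is that depth detection is a hard gate, evaluated concurrently with the other human traits rather than as an optional extra.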

Image-matching accuracy is about the False Acceptance Rate at a certain False Reject Rate: how often the algorithms wrongly match an imposter to a user.  Liveness checks are not a part of accuracy; they just tell us if we are dealing with a real human or a spoof artefact.  So they are two separate concepts, and each must have dedicated algorithms to work in the real world.
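The FAR/FRR trade-off can be made concrete with a small sketch. The similarity scores and threshold below are invented for illustration; real systems measure these rates over large test sets.

```python
# Hypothetical similarity scores from a face matcher (0.0-1.0).
# Genuine pairs should score high; imposter pairs should score low.
genuine_scores = [0.92, 0.88, 0.95, 0.81, 0.90]
impostor_scores = [0.30, 0.45, 0.62, 0.20, 0.55]

def far_frr(threshold, genuine, impostor):
    """False Acceptance Rate: fraction of imposters wrongly accepted.
    False Reject Rate: fraction of genuine users wrongly rejected."""
    far = sum(s >= threshold for s in impostor) / len(impostor)
    frr = sum(s < threshold for s in genuine) / len(genuine)
    return far, frr

far, frr = far_frr(0.70, genuine_scores, impostor_scores)
print(far, frr)  # prints: 0.0 0.0
```

Lowering the threshold admits more imposters (FAR rises); raising it rejects more genuine users (FRR rises). Note the sketch says nothing about liveness: a spoof artefact bearing the genuine user’s face would score high here, which is exactly why a separate liveness check is needed.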

FindBiometrics: I understand where that confusion comes from, specifically because we are now in a place in the biometrics industry where liveness detection is needed.  But a lot of education has to happen and FaceTec is doing that, as this is the second in a series of educational white papers. So my question is, why is education on the topic of liveness detection so crucial now?

FaceTec: If you can prove liveness, particularly during enrollment, that establishes the chain of trust.  It anchors the digital identity of a real person and strengthens the entire trust chain, especially if the biometric data is stored centrally.  Liveness wasn’t critically important for on-device biometric sensors, like fingerprint, because the bad guys still had to get ahold of the device.  But once the biometric data is stored in the cloud and can be accessed from any location on any device, it becomes critical to check the user’s Liveness even before matching takes place.  Fraudsters can’t use spoof artefacts, and we know they don’t want to put their real faces on camera. We’re all aware there is a dramatic rise in identity theft and phishing schemes. Liveness Detection plus 3D face matching is an effective way of stopping the majority of these attempts.  Any biometric vendor who claims to have Liveness Detection can and should test it with iBeta, going through a rigorous process to back up their currently unsubstantiated performance claims.

FindBiometrics: Keeping on the topic of education, what do you think is the biggest misconception about biometric liveness detection right now?

FaceTec: I think it has been that “all liveness detection is created equal”.  Customers and vendors treat it like it’s just another box that needs to be checked.  This mindset has been allowed to persist for a few years because there was no way to actually scientifically test anti-spoofing.  Now, with the ISO testing standard published, everyone can have confidence that the security angles are covered, because we all know ISO is very reputable.  They issued the 30107-3 standard in September of 2017, Presentation Attack Detection (PAD) testing guidance for biometric systems, which is essentially spoof-testing using a wide range of artefacts.

For years, liveness checks were response-type tests, with most vendors deciding that they would commit to a very simple user interface and then figure out the security part later.  This wrong thinking led a lot of vendors to build their liveness on the shaky foundation of blinking, nodding and smiling. While those actions could indicate Liveness, they are extremely easy to fake.  Some applications in use today are fooled by waving a pencil in front of a 2D photo to make it look like it’s blinking. One webcam-based system from a very well-known hardware company allows mannequin heads to be enrolled and even used to authenticate!  It’s surprisingly weak, but I don’t think the special hardware sold very well, so probably not many real people are at risk, maybe just the company’s reputation.

iBeta testing is based on three levels of difficulty: the first level uses mostly 2D photos and videos, where they even bend photos to simulate three dimensions; the second uses off-the-shelf masks, mannequins and dolls; and Level 3 uses custom “Hollywood-grade” artefacts, like thousand-dollar masks.  To combat these non-human artefacts, FaceTec built a comprehensive spoof lab and tested on hundreds of thousands of photos, masks, prosthetics, mannequins and dolls. As more people understand this rigorous third-party process from an unbiased organization, misconceptions should diminish and the liveness detection that vendors bring to the market will get better.  We are glad to be contributing to that increased quality and security, as well as investing in the education of the market and demanding more transparency from the other vendors.

FindBiometrics: It is fascinating, and really open for challenges.  At FindBiometrics we really think that it is important to have this sort of transparency in the industry.  We have already talked about how Level 2 certification from iBeta is different from Level 1, but something that is really impressive in both certification tests is FaceTec’s technology scored 100 percent on the anti-spoofing score.  What gives FaceTec’s technology the advantage there, and what is the breakdown of that number? What does it truly mean?

FaceTec: To answer the last question first, a 100 percent score means that in iBeta’s testing no spoof artefacts fooled our system.  ZoOm rebuffed over 3,300 spoof attacks across the two tests.  The reason ZoOm is so much better than the competition is the data.  At FaceTec we figured out how to use a standard 2D camera to create a 3D FaceMap.  That simple, patented user interface provides an immense amount of human signal – about 100 times more than from a 2D image – and provides the data foundation for our AI to detect liveness, a true game-changer.  Further, we’ve spent years trying to break our own software. We hired white hat hackers, put out bounties, and made ZoOm available to test to every customer and even competitors. Our willingness to provide access to anyone who wants to test ZoOm has resulted in us having seen a lot of spoofs, and over the years we’ve learned how to stop them.  By the time we tested with iBeta, we were very well prepared and able to pass tests no one else had ever passed. Our hard work over the last five years has paid off in class-leading security and usability.

Also, just to add a bit more context, the iBeta testing took weeks and ZoOm remained un-spoofed, yet devices like the Samsung Galaxy S8 and the iPhone were spoofed within the first few minutes using several 3D artefacts.  Those are specialized hardware solutions, yet they still can’t come close to passing the iBeta tests, while ZoOm, a universal software solution that runs on pretty much every smart device and PC with a webcam, got perfect scores.

FindBiometrics: You have mentioned the spoof lab a few times and there is a section about it in this new white paper,  but what is the spoof lab and what kind of testing goes on there?

FaceTec: The lab in San Diego is, essentially, a continually growing collection of artefacts, devices, evolving processes, machines and many very smart people.  We’ve been fortunate to have collected many millions of face images from around the world – from more than 160 countries – to use as AI-training data. We have different ethnicities, skin tones, eye shapes, head shapes, and just about every age and gender combination; combined with the thousands of artefacts we’ve acquired, that gives us the data we need to do a good job at Liveness and face matching in a lot of lighting conditions and real-world environments.  I invite any of our potential customers or partners to schedule a trip out to San Diego to see our spoof lab. It’s an eye-opening experience and the team there is very knowledgeable. It’s a pretty dynamic place.

FindBiometrics: It sounds like it.  I’ve seen a few videos of some spoof tests and demonstrations on how to create spoof artifacts and it is always something I have been very fascinated with.  I saw one recently by you guys in which you create a 2D animation from a regular profile picture using the CrazyTalk animation software, and I just think that is so… crazy.  Is that your picture that they animated with the scary voice underneath?

FaceTec: Yes, that’s me, and we made that video with CrazyTalk in five minutes.  I spoke at a partner event a couple of weeks ago in San Francisco and many in the group were amazed by how easy it was to create something that could be used to spoof biometric systems.  I’ve also shown it at the Biometrics Institute conference recently in Washington, D.C., and it never fails to surprise people. It’s actually easy to do and fools many liveness detection systems.

FindBiometrics: That flows into what I’d like to discuss next. To use that CrazyTalk software, could you just go on somebody’s Facebook, Twitter, or Instagram and use their picture?

FaceTec: You sure can!  It’s really scary how easy it is to spoof a lot of our competitors’ solutions.

FindBiometrics: That is even scarier when you consider cooperative users, a huge problem right now with fraud, specifically with biometrics.  A lot of people are saying this is going to be the year that biometric systems get tested in the wild, and I think that cooperative user fraud is probably the biggest threat.  What is a cooperative user, why is it so important in the liveness conversation, and why is this type of fraud so difficult to prevent?

FaceTec:  A cooperative user is someone who shares (consciously or not) all of their information with someone else – it could be their photos, passwords, biometric data.  If I give my coworker an account password so he can print out a document, then I’m a cooperative user. With this information it becomes much easier to subsequently access a system.  A large percentage of breaches occur through cooperative users at home and work. It is definitely a security issue when users willingly share their access credentials. Liveness Detection is the key to stopping cooperative users and preventing imposters from accessing their accounts.

FindBiometrics: Right. The definition also encompasses people who have been manipulated through phishing, and that type of thing where somebody will try and convince them to surrender their biometrics, right?

FaceTec: Absolutely, and that’s a good example of being cooperative without even knowing it.

FindBiometrics: This white paper really positions liveness detection as a do-or-die proposition for the biometrics industry, a strong stance.  But what use cases do you see opening up for biometrics, and if liveness detection isn’t there, what is at stake?

FaceTec: We know liveness detection is critically important within an unsupervised environment.  It’s just that the industry has very little experience with anything but on-device, FIDO-style deployments, where liveness hasn’t been as critical.  But now, as we move into the new age of digital identity, with centralized biometric databases stored by our banks, service providers and governments, it’s Liveness Detection that becomes the most important first step.  In these unsupervised authentication scenarios, gaining access to a shared vehicle for example, liveness detection is the only defense against a bad actor gaining access with a photo or a video of the user.  In these cases the losses can be very significant, and the more you allow the user to do, the more risk there is.  So, in any situation that doesn’t have a person of authority watching the user and checking to make sure they are present in person, liveness detection is do-or-die. It’s the difference between insecure and secure access.

Coming from a financial services background, I also think about how liveness detection can be used for onboarding, for example, as in our partnership with Jumio.  They are automating the identity verification process through a combination of identity proofing and face authentication, where detecting liveness really comes in.  In the past, nearly every solution used only image matching or a weak response approach. So, we see a lot more growth in onboarding and digital identity platforms.

What’s at stake if it’s not embraced?  Less access and fewer valuable features available on mobile devices, more friction, more fraud, theft and damage of all sorts.  But in a case like healthcare, where health fraud is a big contributor to the opioid crisis through doctor shopping and prescription abuse, it literally could be do or die.  And while in most cases it might not be a life or death situation, it could change a person’s life forever.

FindBiometrics: It really does feel like, in order to get past the popular use cases right now, which are small payments, low-stakes access control and just unlocking your phone, this is necessary, or else nobody is going to use it.

FaceTec:  I agree, especially as the stakes keep getting higher as we expand our digital lives, requiring digital access to more sensitive and confidential information.  Healthcare is another important area. When you think about use cases, for example picking up prescriptions and accessing (and controlling) confidential medical records, there’s a strong need for liveness detection.

FindBiometrics: Thanks for taking the time to speak with me today, Rich, and congratulations on all the great things happening at FaceTec.

FaceTec: Thanks Peter, I appreciate it.