FaceTec has had a big year. The company marks a successful 2018 with some high-profile integrations of its AI-driven ZoOm 3D Face Login authentication solution, which utilizes standard 2D smart device cameras and webcams to create 3D FaceMaps with its patented ZoOm motion technology. ZoOm is now securing user logins on five continents in several industries, including banking, connected transportation, government and ID/document verification management, most notably in a high-profile integration with Jumio’s globally deployed Netverify trusted identity service.
But perhaps the most important development for FaceTec this year had nothing to do with business deals: In late summer, the company became the first in the world to attain Level 1 certification for its solution in iBeta’s Presentation Attack Detection evaluation program. And what’s more, it scored 100 percent – a remarkable achievement in today’s security landscape.
In Part One of a new interview with FindBiometrics and Mobile ID World, Managing Editor Peter Counter and FaceTec CEO Kevin Alan Tussy delve into the details of this achievement, including how FaceTec developed its spoofing detection technology and the variety of ways in which it was tested. And they discuss a newly published white paper that outlines just why spoofing detection, and standards for testing it, is so important today.
Read Part One of our interview with Kevin Alan Tussy, CEO, FaceTec:
Peter Counter, Managing Editor, FindBiometrics (FB): FaceTec just released a white paper about standardized biometric anti-spoof testing. Why is “Liveness Detection” so important given the current market landscape?
Kevin Alan Tussy, CEO, FaceTec: Hi Peter, nice to speak with you again, especially about such an important topic. Over the last three or four years, Liveness Detection really started to enter the zeitgeist of the biometrics industry in a big way. And unfortunately, while I think people started with good intentions regarding Liveness Detection, when they chose their interfaces and sensors they weren’t capturing enough human signal. Once they realized that it was far more difficult than they had anticipated to deliver on their liveness security promises, it was too late for many of the companies. They had a choice to make: take products to market that weren’t secure, or go back to the drawing board. Most didn’t have the budgets, creativity or expertise to continue innovating, so Liveness Detection ended up as a check-box on a feature list. As long as you said you had “Liveness,” you could sell your product, because no vendor had a better solution than the rest and the customers didn’t know any better.
Then late last year ISO published the 30107-3 presentation attack detection (PAD) standard they’d been working on for years. And as far as I know, it’s the first standards organization document that describes what presentation attack detection actually needs to do to be considered secure on any level. This was a tipping point in the industry. Individuals and organizations are now able to say, “We need robust Liveness Detection in our biometric authenticators; we need Liveness Detection that has been built to handle the rigors outlined in the international ISO standard.”
FB: As you said, people just sort of make Liveness a checkbox. Having worked in this industry for a long time, it does seem like the “checkboxification” of everything led to this competitive hype-machine where people are extending claims of un-spoofability without backing them up. Do you expect the emergence of standardized biometric testing to cut through the marketing hype?
Kevin Alan Tussy, CEO, FaceTec: I sure hope so, and I also hope that the customers will start demanding proof of those claims. A lot of vendors will try to get away with puffery when they can, or say things like “As long as no one has your biometric data, then liveness isn’t a problem.” They benefit from the fact that most people don’t spend much time thinking about how to fool systems and find workarounds, but it doesn’t take too much imagination to think someone could get a video of someone else, or take a photo and make it look like the eyes blinked just by moving a pencil up and down to block the eyes for a split-second.
These are the types of things that vendors in the past have used – I would call them Liveness Detection gimmicks – that won’t hold up to any kind of third-party testing. In fact, it seems iBeta has spoofed most of the solutions people brought to them in minutes. And for a company to be claiming that they have Liveness Detection, and then be demonstrably spoofed in minutes, to me is disingenuous. Motion that makes something merely appear alive is not nearly enough. And when you read the ISO standard you start to realize the difficult scenarios you must be robust against to have true Liveness Detection and ensure that your anti-spoofing capabilities are meaningful.
So I certainly do hope that the industry will be more responsible in the language they use, and be more forthright and transparent. Now that we have the ISO standard and we have testing available from iBeta… there is really no excuse. I know they have tested several companies now, over ten I believe, and ZoOm is the only face authenticator that has ever passed Level 1. So the vendors who have failed the iBeta test should dial their claims way back. If they don’t, next year we’ll make a video showing how to spoof them all, as a public service.
Read the new FaceTec White Paper: Standardized Anti-Spoof Testing – Cutting through the hype and finding integrity in biometrics
FB: It makes so much sense. Why did it take so long for testing like this to emerge? Was it just that the ISO standard wasn’t there yet? Because it really feels like we needed this sooner.
Kevin Alan Tussy, CEO, FaceTec: Yes, it definitely does seem like biometrics have been around for long enough, and people have been working on them for long enough, that a standard and a test like this should have come around sooner.
I think the reason why it’s taken so long is, number one, it’s a difficult process even for me to explain, and it’s even harder for people to understand. And, two, it doesn’t really present itself until you have the other problems in biometrics solved. You need to be able to do matching extremely well. And once you can do the matching, then instantly you say, “Well, I wonder if I can use a photo?”; “Maybe I can bypass it with a video?”
I think part of the challenge comes from understanding what it means to release a security product that could be tested with a literally unlimited number of spoof artifacts. The ultimate goal would be to say, “This is unspoofable,” but that can’t be stated honestly, because you can’t prove a negative. Has the authenticator been tried against every single spoof that could ever be created? With every combination of every spoof modality for every age, gender and ethnicity? Of course not. That’s impossible.
Biometric matching is very different from saying, “We can check to see if the Saved Password and the Typed Password are equal, and if they are we have proof-positive.” In biometric authentication, we are up against the opposite; every biometric sample is different, so we have to work with probabilities, and we can never get a 100 percent match on two different biometric samples. The best we can hope for is knowing that the enrolled biometric data sample of Face A and biometric data sample of Face B are very likely to be from the same person. But, at the same time we need to ensure that it’s not matching a photograph, video, mask, doll, avatar, holographic projection or anything else that anyone can think of. And it’s vitally important that matching and liveness be done concurrently, using the same biometric data sample for both.
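The contrast Tussy draws here can be sketched in a few lines of code. This is a minimal illustration only, not FaceTec’s implementation: the embedding vectors, the use of cosine similarity, and the threshold value are all hypothetical stand-ins for a real face-matching pipeline.

```python
import math

def passwords_match(saved: str, typed: str) -> bool:
    # A password check is deterministic: equal or not, proof-positive.
    return saved == typed

def cosine_similarity(a: list[float], b: list[float]) -> float:
    # Similarity of two face-embedding vectors, in [-1, 1].
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def faces_match(enrolled: list[float], candidate: list[float],
                threshold: float = 0.8) -> bool:
    # A biometric check is probabilistic: two samples of the same face
    # are never identical, so we accept "similar enough" past a threshold.
    return cosine_similarity(enrolled, candidate) >= threshold

# Two captures of the "same" face differ slightly but still clear the bar.
enrolled_sample = [0.9, 0.1, 0.4]
login_sample = [0.85, 0.15, 0.42]
print(passwords_match("hunter2", "hunter2"))       # True
print(faces_match(enrolled_sample, login_sample))  # True
```

Note that the probabilistic check is exactly why a threshold exists at all, and why a sufficiently good spoof artifact (photo, video, mask) can clear the matcher; that is the gap liveness detection has to close.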
I think the biometrics industry evolved in stages as the technology has evolved. We’ve gotten to the point where matching is now a commodity. Many companies have accurate matching algorithms, so with that problem pretty well solved, the question now becomes: what can be done with this great biometric matching? Well, importantly, we can now pick a face out of a large crowd. Another use case would be that we can authenticate that the correct user is alive and present in person when they’re trying to log into their own account.
Picking a face out of a crowd comes with warranted privacy concerns. The surveillance and “Big Brother” types of conversations do need to happen. But face authentication – me using my biometric data to access my own account – in my opinion, is the highest and best use case for biometric technology there is. And when you start to use biometrics as a security for account access management, now you’re squaring up against the bad actors, the hackers and the fraudsters who are trying to gain access to your accounts. So we have to build a system robust enough to keep them out so that the biometric interface isn’t the path of least resistance to accessing the account.
This is why it’s so challenging and difficult for people to wrap their heads around the biometric industry as a whole. There were aspects that had to be in place before we could use technology for its highest and best use cases. The matching algorithms work great when singling out a face in a crowd, but they are not enough for authentication. We need both matching and Liveness Detection to truly authenticate a user. Those are the two fundamental pieces, and the more people that understand that fact, the better this industry will do and the better security we will provide.
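The two fundamental pieces described above can be expressed as a simple decision rule. Again, this is an illustrative sketch, not a vendor’s actual logic; the score fields and threshold values are assumed for the example.

```python
from dataclasses import dataclass

@dataclass
class CaptureSession:
    # One biometric capture; both checks run on the same sample.
    match_score: float     # similarity to the enrolled template, in [0, 1]
    liveness_score: float  # confidence the sample is from a live person, in [0, 1]

def authenticate(session: CaptureSession,
                 match_threshold: float = 0.8,
                 liveness_threshold: float = 0.9) -> bool:
    # Authentication requires BOTH gates to pass:
    # matching alone would accept a photo of the right person,
    # and liveness alone would accept any live impostor.
    return (session.match_score >= match_threshold
            and session.liveness_score >= liveness_threshold)

# A printed photo of the legitimate user: matches well, fails liveness.
photo_attack = CaptureSession(match_score=0.95, liveness_score=0.2)
print(authenticate(photo_attack))  # False

# A live capture of the legitimate user passes both gates.
genuine = CaptureSession(match_score=0.92, liveness_score=0.97)
print(authenticate(genuine))  # True
```

Running both checks against the same capture, as the interview stresses, closes the gap where an attacker passes liveness with their own face and then swaps in a victim’s photo for matching.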
And so, the testing. We’ve had NIST testing matching algorithms since around 1986, but authentication requires Liveness Detection as well. And I think that’s why the ISO standard took so long to release Presentation Attack Detection guidelines. People had to prove that matching could be done before they needed to worry about Liveness Detection in account access management security systems. Now that we have tests for both, we finally have everything we need to determine whether a biometric vendor has a viable, secure solution for account access management.