A number of biometrics specialists have risen to prominence in recent years amid excitement about face-based authentication and onboarding, but FaceTec has established a particularly burnished reputation thanks to its focus on sophisticated liveness detection technology. It was the first company to receive third-party certification to Levels 1 and 2 of the ISO/IEC 30107 Presentation Attack Detection standard, and it went on to raise the stakes with the launch of a spoof bounty program offering significant cash payouts to anyone who can discover vulnerabilities in its authentication system.
In this exclusive new interview, FaceTec CEO Kevin Alan Tussy delves into his company’s liveness detection technology as well as the spoof bounty program, and also tackles the importance of encryption, the on-device vs. server-side biometrics debate, and the impact of COVID-19, before offering a glimpse of what’s in store from FaceTec for 2021.
FindBiometrics: In the past few years, biometric liveness detection has become a buzzword, and when anything technical gets this popular it starts to get conflated with similar terms (authentication and verification is another example). What is the difference between biometric liveness and biometric matching, and how do they work together in modern biometric access management systems?
Kevin Alan Tussy, CEO, FaceTec: To enable a user to remotely access a digital account, we must require that the biometric data captured in real time from a live human closely match the entitled account holder’s previously stored biometric data.
Liveness and Match Level are independent factors, and both must be satisfied in a remote identity verification system before access is granted to a user. First, liveness determines whether the presented biometric data was collected in real time (“first-generation”) from a living human; then Match Level determines whether the biometric data provided is consistent with that of the entitled privilege holder.
A high Match Level only matters if the liveness can be trusted. At FaceTec our liveness has been shown to be over 99.998 percent accurate, and our 3D:3D Face Matching is rated at one in 12.8 million FAR and less than one percent FRR.
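The two-gate decision Tussy describes can be sketched in a few lines. This is an illustrative sketch, not FaceTec’s actual implementation; the function name and thresholds are assumptions, with the liveness cutoff loosely echoing the 99.998 percent accuracy figure cited above:

```python
# Illustrative sketch (not FaceTec's implementation): liveness and match
# level are independent gates, and both must pass before access is granted.

LIVENESS_THRESHOLD = 0.99998  # hypothetical confidence cutoff


def grant_access(liveness_confidence: float, match_score: float,
                 match_threshold: float) -> bool:
    """Access requires proof of liveness AND a sufficient match level.

    A high match score only matters if liveness can be trusted, so the
    liveness gate is checked first and short-circuits the decision.
    """
    if liveness_confidence < LIVENESS_THRESHOLD:
        return False  # data may not be first-generation / from a live human
    return match_score >= match_threshold


# A strong match with failed liveness is still rejected.
assert grant_access(liveness_confidence=0.5, match_score=0.999,
                    match_threshold=0.9) is False
assert grant_access(liveness_confidence=0.999999, match_score=0.95,
                    match_threshold=0.9) is True
```

The point of the ordering is that matching a stolen photo perfectly should never be enough: the liveness gate fails first and the match score is never consulted.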
Peter Counter, Editor in Chief, FindBiometrics: A subject quickly gaining interest in every part of this industry centers on the need for, and effectiveness of, formally testing digital security biometrics. Central to trusting a solution is knowing how well it actually performs. While it makes sense that there needs to be a way for prospective customers to assess performance, after carrying the torch for third-party PAD (presentation attack detection) testing for a couple of years, FaceTec stepped away from sanctioned testing in 2020 with a highly visible launch of the industry’s first – and still only – spoof bounty. Looking back now, 15 months and more than 40,000 spoof attempts later, do you still feel a bounty was an important move, and will it continue for 2021?
Kevin Alan Tussy, CEO, FaceTec: We “don’t know what we don’t know” when it comes to security vulnerabilities, so we must do everything we can to know more. Labs are forced by the process to perform the same tests over and over for years, and they don’t expand their repertoire of attack vectors like real attackers do. Security has to be dynamic, it has to be agile. Lab testing can set a baseline, but it doesn’t address cutting-edge threats because it’s testing for “knowns.” Bounty programs are by nature dynamic and immediate, and incentivize the best, most creative minds to bring their A-game. If they are successful in finding a new way to beat the security, those learnings fold back into the system quickly and the vulnerability gets patched much earlier.
Having the bounty program in place to draw those attacks in – first – gives us a glimpse of future threats that may be coming down the pike. Slow-to-develop testing standards and lab workers inherently can’t provide the same forward visibility.
I’d also like to note that if you want to trust the AI’s Liveness and Matching decisions, it’s critically important to ensure that the camera feed hasn’t been hijacked. We run numerous on-device checks to ensure the video feed hasn’t been compromised, and since the launch of the Level 4 and Level 5 Template Tampering and Camera Hijacking portions of our $100,000 Bounty Program, our AI has defended successfully, without a single attacker managing to tamper with or bypass it. (Please see: https://dev.facetec.com/spoof-bounty-program)
FindBiometrics: Regarding camera feed issues, the FaceTec Spoof Bounty program includes provisions for video injection attacks, which appear to be incredibly sophisticated and are a growing threat in the fraud arms race. How does a video injection attack work, and what can be done to thwart it?
Kevin Alan Tussy, CEO, FaceTec: We consider these Level 5 Bypass attacks, but unfortunately, they aren’t difficult for attackers to employ. A virtual camera program like ManyCam is pretty much all you need, and you can spoof any purely server-side single frame “passive” liveness solution, so don’t fall for that nonsense. Deepfake puppets are a little more complex to create, but most active liveness systems are also at risk from them. To block both of these vectors, it starts with securing the camera feed so that you can trust that the device isn’t compromised. To even have a chance at doing that, you need a device SDK that runs a myriad of checks right before you perform every capture session to ensure the camera captures first-generation biometric data from a live human.
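To make the idea of pre-capture checks concrete, here is a heavily simplified sketch. The specific checks, the virtual-camera blocklist, and the timing heuristic are all illustrative assumptions for exposition; FaceTec’s actual device SDK checks are not public:

```python
# Hypothetical sketch of pre-capture camera-integrity checks. The blocklist,
# the signed-driver flag, and the frame-timing heuristic are illustrative
# assumptions, not FaceTec's actual checks.

KNOWN_VIRTUAL_CAMERAS = {"manycam", "obs virtual camera", "snap camera"}


def camera_feed_trustworthy(device_name: str, driver_signed: bool,
                            frame_timestamps: list[float]) -> bool:
    """Run example integrity checks before every capture session."""
    # 1. Reject known virtual-camera drivers (e.g. ManyCam).
    if device_name.lower() in KNOWN_VIRTUAL_CAMERAS:
        return False
    # 2. Require an OS-signed camera driver.
    if not driver_signed:
        return False
    # 3. Injected video often arrives with unnaturally uniform frame timing,
    #    while real sensors jitter slightly. (A crude illustrative heuristic.)
    deltas = [b - a for a, b in zip(frame_timestamps, frame_timestamps[1:])]
    if deltas and max(deltas) - min(deltas) < 1e-6:
        return False
    return True
```

In practice a defense like this involves many overlapping signals rather than three, precisely because any single check is easy for an attacker to satisfy.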
FindBiometrics: For about as long as consumer mobile biometrics have been widely available, there has been a tug-of-war between in-device and server-side biometrics, and some of that discourse seems to have built a stigma around centralized authentication. But now, nearly a decade after these conversations set the tone of biometric public perception, how safe is server-side processing from a security standpoint?
Kevin Alan Tussy, CEO, FaceTec: Linking humans to their accounts via biometrics is much more valuable than just linking humans to devices. Consumers frequently lose their devices, have them stolen, and replace them, and in-device biometrics don’t provide the ability to bring a new device into the ecosystem and trust that the user truly is who they purport to be.
I think we can all agree that since governments are the arbiters of legal identity, and they bind us to our legal identities with a photo, we’ll need to pass new, remotely captured face data to their centralized systems at some point in the identity verification process.
We already know cryptography works and is not some moonshot, so the FaceTec team spent its time working on the tech that could answer this question: What would make centralized biometric architecture safe to use?
Our answer is “Trustworthy Liveness Detection.”
And that means using liveness data that is either ephemeral or can’t be replayed successfully – even if stolen. We are able to do both: many crypto-based, time-based, and device-side security layers in our liveness data prevent it from being replayed, or the liveness data can be deleted for ultimate confidence that a data breach never results in successful replay attacks. However, even after the liveness data is deleted, the 3D FaceMap data persists to provide the user the ability to authenticate against it in the future.
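The general pattern behind non-replayable liveness data – binding each payload to a time window and a single-use nonce – can be sketched as follows. The key handling, field names, and 30-second window are assumptions for illustration; this is not FaceTec’s scheme:

```python
# Minimal sketch of time-bound, single-use liveness data. The session key,
# replay window, and nonce scheme are illustrative assumptions.
import hashlib
import hmac
import secrets
import time

SESSION_KEY = secrets.token_bytes(32)  # per-session secret (illustrative)
REPLAY_WINDOW_SECONDS = 30
_seen_nonces: set[bytes] = set()


def sign_liveness_payload(payload: bytes) -> tuple[bytes, bytes, float]:
    """Bind a payload to a fresh nonce and timestamp with an HMAC tag."""
    nonce = secrets.token_bytes(16)
    ts = time.time()
    tag = hmac.new(SESSION_KEY, payload + nonce + str(ts).encode(),
                   hashlib.sha256).digest()
    return nonce, tag, ts


def verify_once(payload: bytes, nonce: bytes, tag: bytes, ts: float) -> bool:
    """Accept a liveness payload only if fresh, authentic, and never seen."""
    if time.time() - ts > REPLAY_WINDOW_SECONDS:
        return False  # stale: outside the time window
    if nonce in _seen_nonces:
        return False  # replayed: this nonce was already consumed
    expected = hmac.new(SESSION_KEY, payload + nonce + str(ts).encode(),
                        hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        return False  # tampered payload or wrong session key
    _seen_nonces.add(nonce)
    return True


nonce, tag, ts = sign_liveness_payload(b"facemap-bytes")
assert verify_once(b"facemap-bytes", nonce, tag, ts) is True
assert verify_once(b"facemap-bytes", nonce, tag, ts) is False  # replay rejected
```

Even if an attacker exfiltrates a signed payload, resubmitting it fails: the nonce has been consumed and, shortly after, the time window closes too.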
FindBiometrics: One of the key components to having server-based biometrics is encryption. What are the encryption best practices for strong centralized biometrics?
Kevin Alan Tussy, CEO, FaceTec: We use specific, proprietary encryption keys paired for each server and app, so even if the server was breached the data wouldn’t work in any other FaceTec customer’s system. There are also many additional security layers used to ensure that no “honeypot”, or huge, valuable data repository, is being created. The point isn’t to assume that all data has to be kept secret; it’s to ensure that even if it isn’t kept secret, no bad actors can do anything with it.
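The principle of pairing keys per deployment can be illustrated with a toy example. This is deliberately not production cryptography and not FaceTec’s scheme; the key-derivation inputs and the HMAC “sealing” are stand-ins showing why a record stolen from one deployment is useless in another:

```python
# Toy illustration only (not production crypto, not FaceTec's scheme): each
# (server, app) pair derives its own key, so records stolen from one
# deployment cannot be opened in any other.
import hashlib
import hmac

MASTER_SECRET = b"demo-master-secret"  # illustrative placeholder


def customer_key(server_id: str, app_id: str) -> bytes:
    """Derive a distinct key for each server/app pairing."""
    return hashlib.pbkdf2_hmac("sha256", MASTER_SECRET,
                               f"{server_id}/{app_id}".encode(), 100_000)


def seal(key: bytes, data: bytes) -> bytes:
    # "Sealing" here is just data plus an HMAC tag, for illustration.
    return data + hmac.new(key, data, hashlib.sha256).digest()


def open_sealed(key: bytes, blob: bytes):
    data, tag = blob[:-32], blob[-32:]
    if hmac.compare_digest(tag, hmac.new(key, data, hashlib.sha256).digest()):
        return data
    return None  # wrong deployment's key: the record is unusable


key_a = customer_key("bank-a.example", "bank-a-app")
key_b = customer_key("bank-b.example", "bank-b-app")
blob = seal(key_a, b"3d-facemap-record")
assert open_sealed(key_a, blob) == b"3d-facemap-record"
assert open_sealed(key_b, blob) is None  # breached data fails elsewhere
```

This is the anti-honeypot idea in miniature: the aggregate store never shares a single key, so a breach of one deployment yields nothing usable anywhere else.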
FindBiometrics: There are some obvious benefits for centralized biometric authentication, but most of the opportunities presented by this paradigm depend on widespread interoperability. What steps need to be taken, and what checks and balances need to be in place, to achieve that level of interoperability?
Kevin Alan Tussy, CEO, FaceTec: Legal identity issuers are the arbiters of the “identity” we use for most services, so FaceTec designed a new zero-knowledge proof architecture that doesn’t require any PII to be shared outside the identity issuer’s systems.
Biological and legal identity are the two layers bound together by identity document issuers. Verification of a biological person’s biometric data against that of the person they purport to be – performed by the legal identity issuer, on the issuer’s own server – is the only way to truly verify legal identity remotely the first time. Once verified, the new data can become the source of truth for subsequent verifications with similar confidence, but liveness must be re-proven every time.
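The shape of that flow – verification running on the issuer’s own server and returning only a decision, never PII – might be sketched like this. The endpoint, record store, and string comparison (standing in for 3D face matching) are all hypothetical:

```python
# Hypothetical sketch: the issuer verifies on its own server and returns
# only a yes/no decision, so no PII leaves the issuer's systems. The record
# store and string comparison (a stand-in for 3D matching) are illustrative.

ISSUER_RECORDS = {"ID-12345": "stored-3d-facemap"}  # lives only at the issuer


def issuer_verify(document_id: str, presented_facemap: str,
                  liveness_proved: bool) -> bool:
    """Runs on the issuer's server; emits a decision, not biometric data."""
    if not liveness_proved:  # liveness must be re-proven every time
        return False
    stored = ISSUER_RECORDS.get(document_id)
    return stored is not None and stored == presented_facemap


# The relying party learns only True/False, never the stored biometric.
assert issuer_verify("ID-12345", "stored-3d-facemap",
                     liveness_proved=True) is True
assert issuer_verify("ID-12345", "stored-3d-facemap",
                     liveness_proved=False) is False
```

Returning a bare decision is what makes the architecture “zero-knowledge” in spirit: the relying party gains confidence in the binding without ever holding the issuer’s biometric record.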
FindBiometrics: What is driving more effective ways to verify legitimate users, and what else can we expect from FaceTec in 2021?
Kevin Alan Tussy, CEO, FaceTec: COVID’s effect on work and normal social interaction is showing everyone what we’ve known for a long time: that we need to be able to prove legal identity remotely, and we need to protect privacy now and in the future while doing so. We’re not going back. So we need the ability to bind identity layers that are more foundational than, for example, FIDO’s Device Layer; we need to remotely verify the binding of the biological identity layer to the legal identity layer.
From FaceTec in 2021, you can expect free 3D:2D and 3D:3D Face Matching for all legal identity issuers, easy interoperability for identity intermediaries that protect privacy, and more OCR features for user convenience. Coming off a very busy 2020, in which we saw 350 percent year-over-year revenue growth, we expect 2021 will be another huge growth year, adding more integration partners (we now have almost 60) and large customers with high-value data to protect, who require identity solutions that are proven to work in the real world.