A cybersecurity firm is sounding the alarm about the liveness detection systems currently used for remote identity verification. Sensity tested the integrity of 10 of the leading liveness solutions with photo and video deepfakes, and found that those spoofs were enough to get past nine of the 10 with some frequency.
The firm did not reveal the names of those 10 solutions, citing non-disclosure agreements and the desire to avoid a potential lawsuit. On that front, Sensity acknowledged that many of its tests may have violated various terms of service. As a result, the rest of us are left to speculate as to which liveness solutions are vulnerable, and which one actually was able to thwart the majority of deepfake attacks.
The report should nevertheless raise concerns about cybersecurity in a remote environment. Sensity noted that the news should be especially troubling to financial institutions, since it indicates that cybercriminals can use deepfakes to open fraudulent accounts, and that those institutions would not even be aware that fraud is taking place.
Sensity also criticized the vendors for being complacent. According to Sensity, most of the vendors dismissed the findings of the report, essentially stating that they did not care about the vulnerabilities that may have been exposed. That ultimately prompted Sensity to release its anonymized findings to raise awareness about the issue.
In its tests, Sensity generated fake IDs bearing a deepfake image to get past the photo scan, and then fed the same image into a video feed for the liveness component. Fraudsters performing such an attack will usually try to hijack someone's phone camera and substitute their own video feed, though the technique is less effective against facial recognition solutions that use depth sensors as part of the liveness check.
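To see why depth sensing blunts this kind of attack, consider a minimal sketch of a depth-based liveness test. A real face has tens of millimeters of relief (nose versus cheeks), while a photo or a screen replaying a deepfake video is nearly planar. The function name, threshold, and synthetic depth maps below are illustrative assumptions, not any vendor's actual method:

```python
import numpy as np

def is_live_face(depth_map: np.ndarray, min_depth_range_mm: float = 30.0) -> bool:
    """Naive liveness check on a face-region depth map (values in mm).

    A flat spoof (printed photo or replay screen) shows almost no depth
    variation; a real face does. Threshold is a hypothetical example.
    """
    face = depth_map.astype(float)
    # Robust spread between the nearest and farthest points on the surface.
    depth_range = np.percentile(face, 95) - np.percentile(face, 5)
    return depth_range >= min_depth_range_mm

# A flat screen held up to the camera: essentially constant depth (~40 cm).
flat = np.full((64, 64), 400.0)

# A crude stand-in for a real face: a raised bump in the middle of the frame.
yy, xx = np.mgrid[-32:32, -32:32]
real = 400.0 - 50.0 * np.exp(-(xx**2 + yy**2) / 300.0)

print(is_live_face(flat))  # False
print(is_live_face(real))  # True
```

A spoof that fools the RGB camera perfectly still fails this kind of check, which is why injecting a deepfake into the video feed works mainly against systems that rely on the color image alone.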
The liveness systems covered in the report have been used by organizations ranging from banks to dating apps, and even for voter identification in a national election. Sensity is not the first organization to warn about the growing threat of deepfake attacks: a team of researchers in South Korea previously tested leading facial recognition APIs and found that many of them could not spot high-quality spoofs, while researchers at USC found that many liveness systems may not be equally effective across all demographics.
Source: The Verge
May 20, 2022 – by Eric Weiss