A team of researchers from the University of Naples Federico II and the Technical University of Munich has created a new deepfake detection system that they believe could turn the tide in the battle against fraud. Unlike other deepfake detection systems, the new POI-Forensics system is not trained on any deepfake videos. It instead looks only at real videos of a subject, and then uses those videos to build a biometric profile of that individual.
POI-Forensics can then apply that profile to other videos to distinguish legitimate footage from deepfakes, much as a biometric authentication system matches a sample against an enrolled template. That approach differs from more traditional deepfake detection systems, which study deepfake videos to learn to recognize signs of digital manipulation.
The problem, according to the researchers, is that such a system is vulnerable to new manipulation techniques that the detection algorithm has not yet encountered. The POI-Forensics system, on the other hand, simply asks how well a new video matches verified footage of a subject, and flags any video in which something seems off. To beat such a system, fraudsters would need to create a true biometric spoof, covering everything from someone's movement tics to their particular voice and speech patterns. That technology is both a long way off and likely to be prohibitively expensive for most fraudsters.
POI-Forensics can evaluate video and speech alone, or look at the two together when determining whether a video has been faked. The system needs 10 verified videos to generate a profile, and it does not need to be retrained to account for new deepfake methods once that profile has been created.
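The article does not detail the researchers' networks, but the verification-style idea it describes can be sketched roughly: embed each verified video as a feature vector, average those vectors into a profile, and flag any candidate video whose similarity to the profile falls below a threshold. Everything in the sketch below (the embedding vectors, cosine similarity, and the 0.8 threshold) is a hypothetical illustration, not the researchers' actual implementation.

```python
import numpy as np

def enroll_profile(reference_embeddings):
    """Average the embeddings of verified videos of one person
    into a single biometric profile, then normalize it."""
    profile = np.mean(reference_embeddings, axis=0)
    return profile / np.linalg.norm(profile)

def verify(profile, candidate_embedding, threshold=0.8):
    """Score a candidate video's embedding against the profile.

    Returns (similarity, is_authentic), where similarity is the
    cosine similarity and is_authentic is True when it clears the
    (illustrative) threshold. No retraining is needed for new
    deepfake methods: only the profile comparison matters."""
    e = candidate_embedding / np.linalg.norm(candidate_embedding)
    similarity = float(np.dot(profile, e))
    return similarity, similarity >= threshold

# Toy demonstration with made-up 3-dimensional embeddings.
references = np.array([
    [1.0, 0.0, 0.0],
    [0.9, 0.1, 0.0],
    [1.0, 0.05, 0.0],
])
profile = enroll_profile(references)

# A candidate close to the verified footage scores high...
sim_real, ok_real = verify(profile, np.array([0.95, 0.05, 0.0]))
# ...while one that drifts from the biometric profile is flagged.
sim_fake, ok_fake = verify(profile, np.array([0.0, 1.0, 0.0]))
```

In a real system the embeddings would come from audio and video encoders trained on genuine footage, and the two modalities could be scored separately or fused, as the article describes.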
In terms of performance, the researchers claim that their solution was more accurate than the leading deepfake detection systems, especially when applied to low-quality videos. It also did a better job of sorting real and fake videos in several active attack scenarios. The researchers believe that their solution will be particularly useful for celebrities and other public figures who are more likely to be the subjects of a fake, though everyday civilians could potentially use it to prove that they have been the victims of a deepfake attack.
Deepfakes, of course, have emerged as one of the biggest digital security threats of the past few years. Fraudsters were able to use deepfake tech to hack China's taxation system, while South Korean researchers used it to fool many of the world's top facial recognition APIs. That has created demand for effective detection systems, and it will be interesting to see whether the POI-Forensics approach can help fill that gap.
April 8, 2022 – by Eric Weiss