
Microsoft has announced new technology designed to spot deepfake videos, and is aiming to deliver it to media partners via a third party.
As the BBC reports, Microsoft built the solution by training an AI system on a dataset of about a thousand deepfake videos and a larger Facebook database of face swap media. The system is designed to spot subtle clues that may indicate a video is a deepfake, such as blurred pixels around the edge of a face that has been swapped with another.
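The article does not describe the internals of Microsoft's model, but the kind of boundary-blending artifact it mentions can be illustrated with a toy heuristic: compare image sharpness in a thin band around a detected face with sharpness elsewhere in the frame, since a swapped face often leaves a softer, blended ring where it meets the original footage. The snippet below is a minimal sketch using OpenCV; the Haar-cascade face detector, the band width, and any flagging threshold are illustrative assumptions, not part of Microsoft's tool.

```python
# Toy boundary-blur heuristic (illustrative only, not Microsoft's detector).
# Idea: face swaps tend to leave a softened ring around the face, so sharpness
# (Laplacian variance) in that ring can be lower than in the rest of the frame.
import cv2
import numpy as np

def boundary_blur_scores(frame_bgr, band_px=12):
    """Return, for each detected face, the ratio of sharpness in a thin band
    around the face to sharpness elsewhere (lower ratio = more suspicious)."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    lap = cv2.Laplacian(gray, cv2.CV_64F)
    scores = []
    for (x, y, w, h) in faces:
        # Boolean mask covering a thin band just inside the face bounding box.
        band = np.zeros_like(gray, dtype=bool)
        band[y:y + h, x:x + w] = True
        band[y + band_px:y + h - band_px, x + band_px:x + w - band_px] = False

        band_sharpness = lap[band].var()
        rest_sharpness = lap[~band].var() + 1e-9
        scores.append(band_sharpness / rest_sharpness)
    return scores  # a caller might flag frames where any score is unusually low
```

A production system like Microsoft's would rely on a trained model and far richer features, but the sketch shows the kind of signal, softened edges at the swap boundary, that the article describes.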
Microsoft is aiming to get this tool into the hands of journalists and others through a partnership with Reality Defender 2020, an organization aimed at fighting disinformation ahead of the upcoming US elections.
The tech giant’s goal in this case may be to ensure that deepfake videos aren’t used to confuse and manipulate voters, but its AI system may also prove to be a valuable tool in the world of biometrics. Selfie-based authentication is an increasingly popular means of logging into online services and unlocking smartphones; deepfake technology, by swapping one person’s face for another and even making individuals appear to say things they never actually said, poses a threat to biometric systems that aren’t sophisticated enough to detect such fraud.
Of course, Microsoft’s tool is unlikely to be able to detect every kind of video manipulation, especially as deepfake technology advances. To further its fight against disinformation, the company has also joined Project Origin, a media initiative aimed at spreading certificate and data hashing technology that can be used to stamp a piece of video as legitimate, and to provide a clear indication when video has been manipulated.
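Project Origin's actual format isn't detailed here, but the general hash-and-certify pattern can be sketched: a publisher hashes the video file and signs the digest, and anyone can later recompute the hash and verify the signature, so any modification to the file invalidates the stamp. The following is a minimal illustration using SHA-256 and Ed25519 keys; the key handling and the file name are assumptions made for the example, not Project Origin's specification.

```python
# Minimal hash-and-sign provenance sketch (illustrative, not Project Origin's format).
import hashlib
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

def sha256_file(path, chunk=1 << 20):
    """Hash a (potentially large) video file in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.digest()

# Publisher side: sign the digest of the original video.
private_key = ed25519.Ed25519PrivateKey.generate()
public_key = private_key.public_key()
digest = sha256_file("broadcast_clip.mp4")        # hypothetical file name
signature = private_key.sign(digest)

# Verifier side: recompute the digest and check the signature.
def is_authentic(path, signature, public_key):
    try:
        public_key.verify(signature, sha256_file(path))
        return True                               # file matches what was signed
    except InvalidSignature:
        return False                              # file was altered or re-encoded

print(is_authentic("broadcast_clip.mp4", signature, public_key))
```

The design point is simply that a cryptographic stamp travels with the content: the signature proves who certified the video, and the hash guarantees that even a single altered frame will fail verification.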
As for biometric technology, Microsoft’s foray into the fight against deepfakes will likely prove to be among the first of many efforts to develop this kind of anti-spoofing tech.
Source: BBC News
September 3, 2020 – by Alex Perala