Researchers at the University of Toronto claim to have developed an AI-driven program that can disrupt facial recognition systems.
The program is designed to subtly alter images at the pixel level to throw off digital facial recognition technologies, so that what still looks like a familiar face to the human eye cannot be deciphered by an algorithm. And it's effective: the U of T researchers claim it can cut a facial recognition system's accuracy to as little as 0.5 percent.
It's the product of an 'adversarial' training model: one neural network assessed data and produced outputs, while a second tried to spot fake data in those outputs. Essentially, the pair trained each other, processing a database of 600 faces, to produce the facial recognition-jamming algorithm.
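The general idea behind this kind of pixel-level attack can be sketched in a few lines. The toy below is not the U of T system; it is a minimal illustration, assuming a stand-in linear "detector" and a fast-gradient-sign-style attacker (all names, sizes, and step values are illustrative). Each step nudges every pixel a tiny, human-imperceptible amount in the direction that lowers the detector's confidence.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Stand-in "face detector": a logistic score over flattened pixels
# (a placeholder for a trained neural network).
w = rng.normal(size=64)  # fixed detector weights, illustrative only

def detect(img):
    """Return the detector's face-confidence for a flattened 8x8 image."""
    return sigmoid(img @ w)

def perturb(img, eps=0.02):
    """One FGSM-style step: shift each pixel by at most eps in the
    direction that reduces the detector's confidence, so the visible
    change per pixel stays tiny."""
    p = detect(img)
    grad = p * (1.0 - p) * w          # d(confidence)/d(pixel) for this detector
    adv = img - eps * np.sign(grad)   # step against the gradient
    return np.clip(adv, 0.0, 1.0)     # keep pixels in a valid range

img = rng.uniform(0.2, 0.8, size=64)  # stand-in "face" image
adv = img
for _ in range(5):                    # a handful of small steps
    adv = perturb(adv)

print(detect(img), detect(adv))       # the detector's confidence drops
print(np.max(np.abs(adv - img)))      # while no pixel moves more than 0.1
```

In the researchers' setup the two networks train against each other, so the attacker learns perturbations that survive a detector that is itself adapting; the fixed-detector loop above only shows the one-sided version of that game.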
The aim, it seems, is to thwart online facial recognition systems like the photo-tagging program that has landed Facebook in some legal trouble. Researchers Prof. Parham Aarabi and Avishek Bose hope to develop an app or website that would let users add a kind of invisible screen to their online images, throwing off any facial recognition systems scanning them.
It isn't a solution that could effectively thwart the kinds of live, real-time face-scanning systems being used by a growing number of police agencies – for that you need some ridiculous headwear – but it could help to boost consumers' online privacy in everyday applications, at least until the ongoing AI arms race produces facial recognition systems that can overpower it.
June 1, 2018 – by Alex Perala