Zighra Details AI Secrets for the Left-Brained

Zighra is revealing some trade secrets in a new blog post, provided you’ve got the kind of math-oriented brain that can take them in.

The post, authored by Zighra’s Chief Data Scientist, Hari Koduvely, isn’t really about Zighra’s mobile authentication technology in particular, but rather covers an approach to AI training called “Bayesian Learning”. It’s built on “Bayesian Inference”, a statistical framework for updating beliefs in light of new evidence that grew out of Bayesian Statistics, a field that first took shape hundreds of years ago.
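The core of Bayesian inference is updating a prior belief with new evidence via Bayes’ theorem. A minimal sketch in an authentication flavor (all numbers here are hypothetical, purely for illustration, and not drawn from Zighra’s system):

```python
# Hypothetical illustration of Bayesian updating: revise the belief
# that a login attempt is fraudulent after observing some evidence
# (e.g., an unusual behavioral pattern) with known likelihoods.

prior_fraud = 0.01          # P(fraud) before seeing the evidence
p_evidence_if_fraud = 0.90  # P(evidence | fraud)
p_evidence_if_legit = 0.05  # P(evidence | legitimate)

# Bayes' theorem: P(fraud | evidence) =
#     P(evidence | fraud) * P(fraud) / P(evidence)
p_evidence = (p_evidence_if_fraud * prior_fraud
              + p_evidence_if_legit * (1 - prior_fraud))
posterior_fraud = p_evidence_if_fraud * prior_fraud / p_evidence

print(f"posterior probability of fraud: {posterior_fraud:.3f}")
```

Even strong evidence only lifts the posterior to about 15% here, because the prior rate of fraud is so low: a classic Bayesian result.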

Without going into detail – see Koduvely’s full blog post for that – the Bayesian Learning approach is essentially about how to compress the models used for Deep Neural Network (DNN) processing. This involves ‘pruning’ a network down to only its most essential connections; a ‘quantization’ process that represents the remaining weights with far fewer bits; and the use of ‘Huffman Coding’, a data compression algorithm that assigns shorter codes to more frequent values. Koduvely asserts that “this approach can reduce the size of a network up to 50 times.”
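The three steps can be sketched on a toy weight matrix. This is a rough illustration of the general prune/quantize/entropy-code idea, not Zighra’s actual pipeline; the Huffman step is approximated by its entropy lower bound for brevity, and all sizes ignore index and codebook overhead:

```python
# Toy sketch of DNN compression: prune small weights, quantize the
# survivors to a small codebook, and estimate the entropy-coding gain.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(0, 1, size=(64, 64)).astype(np.float32)

# 1. Pruning: drop connections whose magnitude falls below a
#    threshold, keeping only the most essential weights.
threshold = 0.5
mask = np.abs(weights) >= threshold
kept = int(mask.sum())

# 2. Quantization: map each surviving weight to one of 16 levels,
#    so it can be stored as a 4-bit code plus a shared codebook.
levels = 16
survivors = weights[mask]
lo, hi = survivors.min(), survivors.max()
codes = np.round((survivors - lo) / (hi - lo) * (levels - 1)).astype(np.uint8)

# 3. Entropy coding (Huffman in the blog post): frequent codes get
#    shorter bit strings; the Shannon entropy bounds the average length.
counts = np.bincount(codes, minlength=levels)
probs = counts[counts > 0] / kept
entropy_bits = float(-(probs * np.log2(probs)).sum())

original_bits = weights.size * 32          # float32 storage
compressed_bits = kept * entropy_bits      # rough lower bound
print(f"kept {kept}/{weights.size} weights, "
      f"~{original_bits / compressed_bits:.1f}x smaller (rough bound)")
```

On this toy matrix the ratio is far from the 50x figure Koduvely cites; real pipelines get there by pruning much more aggressively and exploiting sparse index formats.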

That’s particularly useful with respect to bringing DNN processing to smaller devices, rather than running more data-intensive systems across huge networks. And for Zighra, that means bringing sophisticated DNN AI to the smartphone, allowing a single device to learn a user’s behavioral patterns in order to establish a behavioral biometrics profile that can be used for authentication. Keeping all of this on a single device offers enhanced protection of user data, since that data never needs to sit on external servers where it could be hacked: As Koduvely explains, the “Zighra AI team is working towards this direction to make AI completely decentralized and on-device for the protection of privacy of users.”

It’s an approach that’s likely to be echoed by other mobile authentication specialists, and Zighra offers plenty of detail on how to go about it – so long as readers are ready to appreciate the finer points of things like “Markov Chain Monte Carlo Simulation”, “Variational Inference”, and the “Kullback-Leibler (KL) divergence”. The rest of us can only stand in awe, and then resume playing with our phones.
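Of those finer points, the KL divergence is the most approachable: it measures how much one probability distribution diverges from another, and variational inference works by minimizing it between a simple approximating distribution and the true posterior. A toy computation (the two distributions are hypothetical):

```python
# KL divergence between two small discrete distributions:
# KL(P || Q) = sum_i P(i) * log(P(i) / Q(i)).
import math

p = [0.5, 0.3, 0.2]   # "true" distribution (hypothetical)
q = [0.4, 0.4, 0.2]   # approximating distribution (hypothetical)

kl_pq = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
print(f"KL(P || Q) = {kl_pq:.4f}")  # zero only when P and Q match exactly
```

Note that the measure is asymmetric – KL(P‖Q) generally differs from KL(Q‖P) – which is one of the subtleties variational methods have to take seriously.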

September 18, 2018 – by Alex Perala