Microsoft Reduces Biometric Bias (But Won’t Say How Much)

Microsoft’s face scanning technology is now better able to determine the genders of subjects with darker skin tones, the company has announced.

In a post on its AI Blog, Microsoft explained that it had reduced its system’s error rates by “up to 20 times,” and specified that for women the error rates had been reduced by nine times.

Tellingly, the company did not disclose any remaining differences in accuracy with respect to race and gender. Microsoft was one of a few companies singled out in an MIT Media Lab report earlier this year, which found that AI-driven facial recognition systems tend to perform more accurately on lighter-skinned subjects – a bias that has been flagged by groups like the ACLU as a key reason police and governments should not use facial recognition technology to surveil the public.

The bias appears to be a result of the kinds of datasets used to train these AI systems, which evidently have not tended to include a diverse range of samples. IBM, another of the companies named in the MIT Media Lab report, announced this week that it is compiling a new, highly diverse dataset for AI training that it will make available to all AI developers, in a bid to eliminate bias from the field completely. And Microsoft, in detailing its reduction in bias, explained that this was largely a technical challenge that involved training its technology on more diverse datasets.

Microsoft’s facial recognition technology is available through Azure Cognitive Services and is offered as part of a larger package of IT tools to government clients, including Immigration and Customs Enforcement.

Sources: The Verge, TechCrunch, Microsoft AI Blog

June 29, 2018 – by Alex Perala