Microsoft is retiring some of the features that it currently offers to its facial recognition clients. The decision was made in an effort to adopt a more ethical stance with regard to the use of the technology.
The news will primarily affect companies that use Microsoft’s Azure Face service. Clients can still use the service for face-based identity verification, but will no longer have access to more controversial features like emotion detection. Microsoft will also no longer allow clients to classify users based on factors like gender, age, and hair.
On that front, Microsoft noted that emotion detection raises ethical concerns because there is often not a one-to-one correlation between an expression and a given emotion. That means that emotion detection systems are making many more assumptions than they are with a simple face match, which increases the likelihood of a mistake and raises additional concerns about privacy and bias.
“We collaborated with internal and external researchers to understand the limitations and potential benefits of this technology and navigate the tradeoffs,” said Microsoft Product Manager Sarah Bird. “In the case of emotion classification, these efforts raised important questions about privacy, the lack of consensus on a definition of ‘emotions’, and the inability to generalise the linkage between facial expression and emotional state across use cases.”
In addition to removing certain features, Microsoft will be taking steps to restrict who has access to the Azure Face utilities that are still available. New and existing clients will need to apply for permission to use the technology, and prove that they are using it ethically and in a way that benefits end users in some capacity. The rule applies even to companies that are already using Azure Face in their own applications.
Microsoft will still be using some of the more controversial features internally in select use cases. Most notably, emotion detection will be deployed in the Seeing AI application, which is an accessibility tool that describes aspects of the world for people with impaired vision. The company will also offer more open access to less invasive computer vision products, such as one that automatically blurs the faces of anyone captured in a frame.
The tech giant is making the move after adopting a new Responsible AI standard, which extends beyond facial recognition. For example, the company is similarly restricting access to its neural voice technology, which can be used to make synthetic voices that are virtually indistinguishable from an original. Microsoft has been watermarking its synthetic voices with minor fluctuations to make sure that they cannot be used in deepfake scams.
Current Azure Face customers will lose access to the retiring features on June 30, 2023, while new customers will not have access to them at all. Microsoft has advocated for the ethical use of facial recognition in the past, having previously refused to sell the technology to law enforcement and for mass surveillance applications.
June 22, 2022 – by Eric Weiss