A team of researchers from Durham University's Computer Science Department has shared the results of a study that set out to reduce racial bias in facial recognition algorithms. The study was conducted by two PhD students (Seyma Yucer-Tektas and Samet Akçay) in collaboration with two faculty members (Dr. Noura Al Moubayed and Professor Toby Breckon).
In the study, the researchers increased the accuracy of facial recognition algorithms for people of all ethnicities, while reducing racial bias by one percent. The results were presented at the virtual Computer Vision and Pattern Recognition conference on June 15.
While impressive, the one percent figure is likely not enough to alleviate the concerns of privacy advocates, who have repeatedly pointed out that facial recognition algorithms are less accurate when applied to people of color. NIST corroborated that bias in a study published in late 2019, which tested 189 different algorithms and found that they were between 10 and 100 times more likely to misidentify Black and Asian individuals than white men.
To reduce bias, the Durham researchers focused on facial characteristics such as the nose and eyes instead of skin tone. Biased facial recognition systems can have disastrous consequences for those who are misidentified, as in the case of a Black man in Detroit who was wrongly arrested on the basis of nothing more than a false facial recognition match.
Source: The Palatinate
August 6, 2020 – by Eric Weiss