After years of intense debate over facial recognition’s shortcomings, new research from the United Kingdom’s National Physical Laboratory offers an updated perspective on equity issues in the police use of the technology.
Assessing NEC’s NeoFace solution together with the HD5 Face Detector, the study delivered multiple key findings. One of the most important directly concerns the longstanding issue of demographic bias, in which certain facial recognition systems show disparities in accuracy across different ethnicities and genders. The NPL study found that, when the system was run at the settings maintained by the Met, there was no statistically significant difference in the facial recognition technology’s accuracy across such demographic groups; in other words, bias was eliminated from the system.
While the study did not test a broad sampling of facial recognition solutions, focusing instead on NeoFace, its findings nevertheless offer some indication of the state of the art in 2023, years after reports first came to light indicating the presence of bias in facial recognition.
A 2019 study from America’s National Institute of Standards and Technology (NIST) found particularly concerning disparities. Testing 189 facial recognition algorithms from 99 vendors, NIST researchers found that in one-to-many face searches, African American women were more likely than other groups to receive a false positive match. More generally, Asian and African American people could be up to 100 times more likely than white men to be misidentified by certain facial recognition algorithms.
Such findings caused alarm over their real-world implications. Even a small disparity, such as a one percent difference in accuracy between demographic groups, can add up to hundreds or thousands of additional misidentifications for one particular group in large-scale scenarios such as airport screening.
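To put that scaling effect in rough numbers, here is a back-of-envelope sketch; the deployment size, group share, and rates are invented for illustration and are not figures from the NIST or NPL studies.

```python
# Hypothetical illustration of how a small accuracy disparity scales.
# All numbers below are invented for the sake of the example.
travelers = 1_000_000     # faces screened per year at a busy airport
group_share = 0.10        # share of travelers in the affected group
disparity = 0.01          # one-percentage-point higher misidentification rate

extra_misidentifications = travelers * group_share * disparity
print(f"Additional misidentifications per year: {extra_misidentifications:,.0f}")
# -> 1,000 extra misidentifications borne by a single demographic group
```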
But at least some biometrics vendors have taken pains to reduce or eliminate such biases in recent years. Paravision, for example, demonstrated a 100 percent True Identification Rate across Asian, Black, and White racial categories, and across Male and Female groups, in testing by the Department of Homeland Security last spring. Now, the NPL study on NEC’s technology offers further evidence that the racial bias problem can be ameliorated, and even eliminated.
Of particular importance is the study’s assessment of the facial recognition system when set to a matching threshold of 0.6. This is the default setting for NeoFace, and the one applied by the Met in Live Facial Recognition, as mentioned above. The matching threshold determines how similar a face must be to a watchlist entry before it is flagged as a potential match; the lower the threshold, the more faces are flagged, and vice versa.
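As a rough illustration of how a matching threshold gates candidate faces, here is a minimal Python sketch; the scores, entry names, and 0.0–1.0 scale are invented for illustration and do not reflect NeoFace’s proprietary scoring or API.

```python
# Minimal sketch of threshold-based match filtering (illustrative only).
candidate_scores = {
    "watchlist_entry_A": 0.71,
    "watchlist_entry_B": 0.63,
    "watchlist_entry_C": 0.59,
}

def potential_matches(scores, threshold):
    """Return the entries whose similarity score meets or exceeds the threshold."""
    return [name for name, score in scores.items() if score >= threshold]

print(potential_matches(candidate_scores, 0.58))  # all three entries flagged
print(potential_matches(candidate_scores, 0.60))  # entries A and B flagged
print(potential_matches(candidate_scores, 0.64))  # only entry A flagged
```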
While the 0.6 threshold setting did produce a lower True Positive Identification Rate for Black female subjects, the variation in outcomes was the same as the variation produced by environmental effects, and was not considered statistically significant.
At the higher threshold of 0.62, there was only one false positive, and at a threshold of 0.64, there were none at all. With no false positives to distribute, the possibility of demographic bias in false matches is eliminated entirely at the latter setting. Conversely, when the NPL researchers lowered the matching threshold to 0.58 or below, they found that a statistically significant imbalance returned, with Black subjects more likely than Asian and White subjects to receive false positives.
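The pattern the researchers observed, in which false positives fall as the threshold rises at some cost in true positives, can be sketched with synthetic score distributions; the simulation below is purely illustrative, with invented distributions rather than the NPL’s real trial data.

```python
import random

random.seed(42)

# Synthetic similarity scores (invented distributions, for illustration only):
# genuine pairs cluster around 0.70, impostor pairs around 0.50.
genuine  = [random.gauss(0.70, 0.05) for _ in range(10_000)]
impostor = [random.gauss(0.50, 0.05) for _ in range(10_000)]

for threshold in (0.58, 0.60, 0.62, 0.64):
    tpr = sum(s >= threshold for s in genuine) / len(genuine)
    fpr = sum(s >= threshold for s in impostor) / len(impostor)
    print(f"threshold {threshold:.2f}: true positive rate {tpr:.1%}, "
          f"false positive rate {fpr:.2%}")
# Raising the threshold drives false positives toward zero while
# trimming the true positive rate.
```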
All of this suggests that NeoFace, at least, still carries an underlying demographic bias, but that it disappears at more selective matching thresholds.
Also important is the NPL’s finding that the chance of a false match when using facial recognition at the 0.6 threshold amounted to one in 6,000. This, together with the finding concerning bias (or lack thereof), was used by another UK police agency, the South Wales Police, to help justify restarting its use of facial recognition.
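For a sense of what a one-in-6,000 false match rate means in practice, consider a hypothetical deployment; the crowd size below is invented for illustration, not drawn from the study.

```python
# Expected false matches at the NPL-reported rate of 1 in 6,000,
# applied to a hypothetical crowd size chosen for illustration.
false_match_rate = 1 / 6_000
faces_scanned = 30_000   # assumed: one day of city-centre foot traffic
expected_false_matches = false_match_rate * faces_scanned
print(f"Expected false matches: {expected_false_matches:.0f}")
# -> about 5 false alerts among 30,000 passers-by
```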
“The study confirms that the way South Wales Police uses the technology does not discriminate on the grounds of gender, age or race and this reinforces my long-standing belief that the use of facial recognition technology is a force for good and will help us keep the public safe and assist us in identifying serious offenders in order to protect our communities from individuals who pose significant risks,” commented South Wales Police Chief Constable Jeremy Vaughan.
Sources: The Guardian, Swansea Bay News, Forbes, Metropolitan Police, NPL
April 12, 2023 – by Alex Perala