Parabon’s technology “doesn’t tell you the exact number of millimeters between the eyes or the ratio between the eyes, nose, and mouth,” Greytak says. Without that sort of precision, facial recognition algorithms cannot deliver accurate results—but deriving such precise measurements from DNA would require fundamentally new scientific discoveries, she says, and “the papers that have tried to do prediction at that level have not had a lot of success.” Greytak says Parabon only predicts the general shape of someone’s face (though the scientific feasibility of such prediction has also been questioned). 

Police have been known to run forensic sketches based on witness descriptions through facial recognition systems. A 2019 study from Georgetown Law’s Center on Privacy and Technology found that at least half a dozen police agencies in the US “permit, if not encourage” using forensic sketches, either hand drawn or computer generated, as input photos for face recognition systems. AI experts have warned that such a process likely leads to lower levels of accuracy. 

Corsight has also been criticized in the past for exaggerating the capabilities and accuracy of its face recognition system, which it calls the “most ethical facial recognition system for highly challenging conditions,” according to a slide deck presentation available online. In a technology demo for IPVM last November, Corsight CEO Watts said that Corsight’s face recognition system can “identify someone with a face mask—not just with a face mask, but with a ski mask.” IPVM reported that running Corsight’s AI on a masked face produced a 65% confidence score, Corsight’s own measure of how likely it is that a captured face will be matched in its database. It also noted that the mask in question was more accurately described as a balaclava or neck gaiter, rather than a ski mask with only mouth and eye cutouts.

Broader issues with face recognition technology’s accuracy have been well-documented (including by MIT Technology Review). They are more pronounced when photographs are poorly lit or taken at extreme angles, and when the subjects have darker skin, are women, or are very old or very young. Privacy advocates and the public have also criticized facial recognition technology, particularly systems like Clearview AI that scrape social media as part of their matching engines.

Law enforcement use of the technology is particularly fraught—Boston, Minneapolis, and San Francisco are among the many cities that have banned it. Amazon and Microsoft have stopped selling facial recognition products to police groups, and IBM has taken its face recognition software off the market. 

“Pseudoscience”

“The idea that you’re going to be able to create something with the level of granularity and fidelity that’s necessary to run a face match search—to me, that’s preposterous,” says Albert Fox Cahn, a civil rights lawyer and executive director of the Surveillance Technology Oversight Project, who works extensively on issues related to face recognition systems. “That is pseudoscience.”

Dzemila Sero, a researcher in the Computational Imaging Group of Centrum Wiskunde & Informatica, the national research institute for mathematics and computer science in the Netherlands, says the science to support such a system is not yet sufficiently developed, at least not publicly. Citing Human Longevity’s 2017 study, she says the catalog of genes required to produce accurate depictions of faces from DNA samples is currently incomplete.

In addition, factors like the environment and aging have substantial effects on faces that can’t be captured through DNA phenotyping, and research has shown that individual genes don’t affect the appearance of someone’s face as much as their gender and ancestry do. “Premature attempts to implement this technique would likely undermine trust and support for genomic research and garner no societal benefit,” she told MIT Technology Review in an email.