Opinion: Unregulated facial recognition use poses risks for Detroit

Kami Chavis and Jim Trainum

Earlier this year, Detroit police arrested Robert Williams in front of his wife and children and held him for 30 hours before realizing they had the wrong man. Before then, Michael Oliver was wrongfully arrested and charged with a felony.

Just in the past few weeks, we learned that in both cases, the police relied on a facial recognition match that led them astray.

The wrongful arrests of two Black men are shocking, but what’s worse is that they’re not at all surprising. Research has shown that facial recognition is often unreliable, especially when it comes to identifying people of color, yet the Detroit Police Department relies on it anyway.

These arrests should be a wake-up call for Detroit residents, as well as others concerned about criminal justice reforms. 

In response to Williams’ story, the Detroit Police Department said, “Facial Recognition software is an investigative tool that is used to generate leads only.” 


But it is unacceptable to ignore the risks of facial recognition misidentifications by claiming it’s just used for leads. 

Using unreliable information as the foundation of an investigation is dangerous, regardless of whether that information is introduced in court. If police regularly based investigations on contaminated DNA or smudged fingerprints, it would be little comfort to hear “this tainted evidence is only used for leads.” And as these stories show, being targeted in an investigation based on a facial recognition error can be disruptive and potentially traumatic even if charges or a conviction never follow. 

For years, experts have warned that facial recognition can mislead police in a variety of ways. Last year, as part of a task force on facial recognition, we issued a report on how to address these risks.

First, facial recognition is far more likely to misidentify people of color. A recent study by the National Institute of Standards and Technology found that some systems were up to 100 times more likely to misidentify Black and East Asian individuals than white individuals.

Facial recognition’s reliability also varies with photo quality. Bad lighting, indirect angles, long distances, poor camera quality, and low image resolution all make a misidentification more likely. Yet all too often, facial recognition scans are run on low-quality images grabbed from grainy CCTV video, as was the case in Williams’ arrest. Law enforcement often expects a new technology to offer quick and easy solutions, without fully appreciating its limitations.

While these are the first recorded instances of facial recognition leading to a wrongful arrest, it’s almost certain other innocent people have been harmed by this technology. And since facial recognition use is rarely disclosed to defendants, we have no way of knowing how many times a poor facial recognition scan laid the foundation for an improper arrest — which could remain on someone’s criminal record even if charges are dropped — or even led to an innocent individual being pressured to take a plea deal for a crime they didn’t commit.

But facial recognition can be just as dangerous when it’s right as when it’s wrong. A perfect algorithm used in the perfect way might not misidentify individuals, but it could be used to catalog protesters or track who goes in and out of a medical clinic.

Misidentifications are a serious risk, but accuracy is no safeguard from abuse.

Right now, facial recognition surveillance is like the Wild West: the federal government has created no rules to prevent irresponsible use. Recognizing these dangers, major cities like Boston and San Francisco have banned the use of facial recognition technology.

Since these two wrongful arrests, the police department and prosecutor’s office have created some new restrictions on the use of facial recognition. But these self-imposed and self-enforced rules are not enough. Detroit should end its facial recognition contract, and lawmakers should support efforts to place a moratorium on law enforcement use of this technology at the state level. 

Kami Chavis and Jim Trainum are members of the Project On Government Oversight’s Task Force on Facial Recognition Surveillance.