Angela Lipps, a 57-year-old grandmother from Tennessee, spent more than five months in jail in Fargo, North Dakota, after police used an AI facial recognition system to identify her as a criminal suspect. The only problem: the crimes she was accused of were allegedly committed in a state she says she has never visited. Her case, reported by CNN on March 29, 2026, is now one of the most detailed public accounts of what happens when facial recognition goes wrong and no one checks the alibi.
What Happened to Angela Lipps
Police in North Dakota used facial recognition software to match a photo of a suspect to Lipps's face. Based on that match alone, she was extradited from Tennessee. She was not brought promptly before a judge. She was not released when she raised concerns. She spent more than five months, close to six by some accounts, in a Fargo jail before the case collapsed.
Her family says she has no connection to North Dakota. No verified travel records placed her there. The facial recognition match was the primary evidence, and no one appears to have seriously questioned it before she was locked up.
This is not a rare edge case. It is a documented, recurring pattern.
A Pattern of AI-Driven Wrongful Arrests
Lipps's case is the latest in a growing list of documented wrongful arrests driven by facial recognition errors. In a separate case in Reno, Nevada, a man named Jason Killinger spent 11 hours in jail, four of them in handcuffs, after a casino's facial recognition system flagged him as a trespasser named Michael Ellis. A Reno police officer reportedly acknowledged the arrest "never should have happened."
These are not software glitches. They are system failures. The technology produces a match. Officers treat the match as evidence. No one independently verifies it before making an arrest. The result is that people lose months of their lives because an algorithm was confident and humans did not push back.
The pattern disproportionately affects certain demographics. Multiple studies have shown that facial recognition systems perform significantly worse on people with darker skin tones, women, and older individuals. Lipps fits more than one of those categories.
Why Police Keep Using It Anyway
Facial recognition is fast and cheap, and it creates a paper trail that looks like evidence. Law enforcement agencies across the United States have adopted it with little legal framework governing its use. There is no federal law requiring departments to verify facial recognition matches before making arrests. Many jurisdictions have no policy at all.
The technology is also sold aggressively. Vendors pitch it as objective and reliable. In practice, "reliable" means something very different at 70 percent accuracy than at 99 percent, and for high-stakes decisions like arrest, even a small error rate produces thousands of wrongful outcomes at scale.
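To make that concrete, here is a minimal back-of-the-envelope sketch in Python. Every number in it is an illustrative assumption, not a figure from the Lipps case or from any vendor; the point is how quickly even a small false-match rate swamps a search against a large photo database when the real suspect may not be in it at all.

```python
# Illustrative numbers only -- not data from any real system or case.
# Question: when a facial recognition search returns a "match," how
# often is that match actually the suspect?

def fraction_of_matches_correct(
    true_positive_rate: float,        # chance the system flags the real suspect when present
    false_match_rate: float,          # chance it wrongly flags any given innocent person
    gallery_size: int,                # number of people searched against
    prior_suspect_in_gallery: float,  # probability the real suspect is in the gallery at all
) -> float:
    """Expected fraction of returned matches that are the actual suspect."""
    expected_true_hits = prior_suspect_in_gallery * true_positive_rate
    expected_false_hits = false_match_rate * gallery_size
    return expected_true_hits / (expected_true_hits + expected_false_hits)

# Hypothetical: a system marketed as "99% accurate," with a 0.1% false-match
# rate, searched against a 100,000-photo driver's license database.
p = fraction_of_matches_correct(0.99, 0.001, 100_000, 0.5)
print(f"Chance a given match is the real suspect: {p:.2%}")  # roughly 0.5%
```

Under those assumptions, almost every match the system returns points at an innocent person, which is exactly why a match should open an investigation rather than close one.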
As noted in coverage of AI regulation debates in the US, governments are still far behind on creating meaningful guardrails for AI systems operating in law enforcement contexts.
What Should Actually Happen
Several civil rights organizations are calling for concrete reforms. The minimum standard most advocate for is simple: facial recognition cannot be the sole basis for an arrest. Before extraditing someone across state lines, you should have corroborating evidence. You should check the alibi. You should confirm that the person was actually present in the jurisdiction where the crime occurred.
That is not a high bar. It is baseline police work. The problem is that facial recognition creates an illusion of certainty that bypasses these steps.
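What that baseline looks like in practice can be sketched in a few lines. This is a hypothetical illustration of the rule reform advocates describe, not any department's actual policy; all names and fields here are invented.

```python
# Hypothetical sketch: a face match is a lead to investigate,
# never sufficient grounds for arrest on its own.

from dataclasses import dataclass

@dataclass
class Lead:
    face_match: bool               # the algorithm returned a candidate
    corroborating_evidence: bool   # fingerprints, witnesses, records, DNA
    alibi_checked: bool            # investigators actually ruled out the alibi
    presence_confirmed: bool       # suspect placed in the jurisdiction of the crime

def arrest_warranted(lead: Lead) -> bool:
    # The match opens the file; every independent check must pass
    # before an arrest, let alone an interstate extradition.
    return (
        lead.face_match
        and lead.corroborating_evidence
        and lead.alibi_checked
        and lead.presence_confirmed
    )

# The Lipps case, as reported: a match and nothing else.
print(arrest_warranted(Lead(True, False, False, False)))  # False
```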
A growing number of cities have banned law enforcement use of facial recognition entirely. San Francisco, Boston, and several others moved in this direction years ago. Federal legislation has been proposed repeatedly but has not passed. Meanwhile, the arrests keep happening.
The legal battles around AI and government agencies are intensifying, but courts move slowly. Angela Lipps spent five months in jail before the legal system corrected itself. That is five months too many.
The Stakes Are Higher Than One Case
Every wrongful arrest driven by AI makes the broader adoption of AI in high-stakes systems harder to defend. If the technology cannot be trusted to identify a single individual correctly, it certainly cannot be trusted to autonomously flag, surveil, or act against entire populations.
The Lipps case is not just a story about one grandmother's ordeal. It is a case study in what happens when humans stop being the final checkpoint in AI-assisted decisions and start being the afterthought.
Need AI tools that work for your business without the civil liability? OpenClaw Services builds tailored AI agent solutions where humans stay in control of every decision that matters.
Frequently Asked Questions
How did Angela Lipps end up in jail for five months?
Police in North Dakota used AI facial recognition software to identify Lipps as a suspect in crimes allegedly committed in the state. Based on that match, she was extradited from Tennessee and held in jail. She maintains she has never been to North Dakota, and the case eventually collapsed without conviction.
Are facial recognition wrongful arrests common?
Documented cases are increasing. High-profile examples include wrongful arrests in New Jersey, Louisiana, and now North Dakota and Nevada. Studies show facial recognition systems have higher error rates for women, older individuals, and people with darker skin tones, which means errors are not random but concentrated in specific demographic groups.
Is facial recognition use by police regulated in the United States?
There is no federal law governing police use of facial recognition. Some cities and states have passed their own restrictions or outright bans. The absence of a national standard means law enforcement practices vary widely, with most departments operating without any formal policy on verification before arrest.