AI, Algorithm-Based Health Insurer Denials Pose New Legal Threat

April 8, 2025, 9:05 AM UTC

A string of lawsuits targeting the use of artificial intelligence and algorithms in claims denials is raising risks for private health insurers and employers, even as the litigation encounters early obstacles.

A California federal judge’s recent decision to partially allow a case to proceed over Cigna Corp.'s alleged use of algorithms to improperly deny benefits signals that automation tools come with legal hazards.

The lawsuits also underscore patients’ frustration with the insurance industry, which is already playing defense in courts over insurers’ use of prescription drug rebates, limits on access to claims data, and allegations that insurers stiffed doctors in surprise billing disputes.

“To the extent that these tools are being used to block or deny coverage across the board for medically necessary services and to a large number of beneficiaries—if that is the case and that’s how these have been used, that could expose insurers to significant risk,” said David Greenberg, partner at ArentFox Schiff LLP.

Courts have allowed several lawsuits to proceed, but plaintiffs still face significant challenges in making their cases. In some instances, the insurance companies denied that the plaintiffs’ medical claims were even handled by an algorithm. Proving otherwise can be difficult.

“The allegations that came out in the news are very concerning but when it comes down to a lawsuit, if you make those allegations and assertions, the court is going to force you to substantiate them,” Greenberg said.

Mixed Bag

Judge Dale Drozd of the US District Court for the Eastern District of California on March 31 allowed a proposed class action in Kisting-Leung v. Cigna Corp. to proceed over the insurer’s PxDx algorithm in a partial win for plaintiffs, who must amend parts of their complaint. The case cited reporting from ProPublica that said PxDx denied more than 300,000 requests for payments over two months in 2022, with Cigna doctors spending an average of 1.2 seconds reviewing each claim.

The judge ruled that three of the six plaintiffs lacked standing to sue on some counts because they couldn’t disprove Cigna’s assertion that it did not use PxDx on their medical claims. Cigna said that it sends disclosures to the doctor and patient whenever it uses PxDx. The revelation helped Cigna defeat the three plaintiffs’ claims, but could easily cut against the insurer in a future case, said Henry Norwood, of counsel at Kaufman Dolowich LLP.

“That could work to their advantage if they’re sued by a patient whose claim was not subject to the technology,” he said. “Or it might also work to their disadvantage, because then it establishes that they did use the technology.”

Cigna previously won a suit from a plaintiff who ultimately conceded that his claims weren’t subject to PxDx. But the judge’s decision rested largely on the fact that he was no longer enrolled in the plan and that his lawsuit was filed too late.

So little is known about the software used to evaluate claims that it’s also hard to determine whether a program qualifies as AI, said Elizabeth Edwards, senior attorney with the National Health Law Program.

“A lot of the harm is really done by very simple algorithms or rules-based engines that automate some part of the process, but are not necessarily—by some definitions—artificial intelligence,” she said.

A spokesperson for Cigna said PxDx does not use AI and is similar to software that other health insurers and the Centers for Medicare and Medicaid Services have used for years. The program is only used for around 50 “low-cost tests and procedures,” according to the company.

A Minnesota federal judge in February partially allowed another case to proceed against UnitedHealthcare over its nH Predict technology—which allegedly does use AI—in its Medicare Advantage plans. UHC denied using the technology, and unlike in the Kisting-Leung case, there is no clear disclaimer that either side can point to as evidence, Norwood said.

“The courts will hopefully determine whether or not it’s acceptable to use this type of technology in this context without disclosing it,” he said.

Federal v. State

Insurance companies are facing these lawsuits under a variety of state and federal laws and for a range of plans, including Medicare Advantage plans under the Medicare Act, self-insured plans under the Employee Retirement Income Security Act, and fully insured plans under state laws.

Longstanding statutes like civil rights laws apply to AI systems, but regulators are largely behind on AI-specific regulation, said Wayne Turner, a senior attorney with the National Health Law Program and a consumer representative to the National Association of Insurance Commissioners.

“Regulators and the law are still kind of playing catch-up,” he said.

States are increasingly taking action. California, Colorado, and Utah have all enacted legislation to curtail potential abuses from automated systems by requiring transparency and implementing guardrails in how they’re used.

Plaintiffs could theoretically fare better under state law, but federal laws like ERISA and the Medicare Act frequently preempt those claims. ERISA often requires plaintiffs to show that the insurer violated the plan’s terms, which can be hard to do.

The plaintiffs in the Kisting-Leung case ended up dropping two of four California state law claims in favor of ERISA claims, but the latest complaint’s claims on unjust enrichment grounds effectively supplement rather than replace ERISA, Norwood said, because “they were going to have to rely on the terms of the health plan anyway.”

Judge Drozd ruled that another state law claim citing California’s unfair competition law is not expressly preempted by ERISA because it falls under the law’s “savings clause,” which exempts some state insurance laws from preemption.

Even if the lawsuits don’t play out in plaintiffs’ favor, Edwards said the heightened awareness will likely spur more oversight.

“The more you know, the more you start looking at things,” she said, “and the more you start looking at things, the more you find.”

To contact the reporter on this story: Lauren Clason in Washington at lclason@bloombergindustry.com

To contact the editors responsible for this story: Rebekah Mintzer at rmintzer@bloombergindustry.com; Alex Ruoff at aruoff@bloombergindustry.com
