Outcry Over Insurer AI Use Energizes State Bids to Crack Down

December 31, 2024, 10:05 AM UTC

State legislators are renewing efforts to regulate the use of artificial intelligence in health insurers’ coverage decisions, fueled in part by the discontent many Americans voiced after the shooting death of a top UnitedHealth Group Inc. executive.

A landmark California law (SB 1120) prohibiting health insurers from using AI tools to make medical necessity determinations takes effect Jan. 1, and the law’s sponsor is talking with other states about his legislation, hoping it will set a precedent amid reports of insurers increasingly using AI to deny care.

Lawmakers in Georgia, New York, and Pennsylvania who introduced legislation on the issue this year say they plan to revive these bills in 2025 as interest grows across industries in regulating the evolving landscape of AI technology. There are divisions over whether to tailor the bills to insurance or broaden them to include hospitals, health systems, and others in the industry that use AI tools to expedite communications and care for patients.

Regardless of the approach, state lawmakers and consumer advocates say the killing of UnitedHealthcare CEO Brian Thompson has brought to the forefront many Americans’ longstanding dissatisfaction with health insurers—fueling the need for urgent action to limit improper uses of AI that can delay or deny access to care.

Law enforcement officials say the suspected shooter may have been motivated by frustrations with the health-care system. While no direct link to AI's role in health care has been established, the use of automated tools has been connected to a surge in insurer denials at the center of public criticism of the industry.

“We’re supposed to have that best-in-class health-care system in our country, and people are falling through the cracks,” said New York Assemblymember Pamela Hunter (D), the sponsor of the legislation there and president of the National Council of Insurance Legislators.

“It is incumbent upon legislators and industry to really have honest, transparent conversations about health-care delivery, health-care affordability, and the quality of care,” Hunter said in an interview.

Widespread ‘Frustration’

Americans have long called for greater transparency in health insurers’ coverage decisions, and Thompson’s death revived that criticism on social media and other online platforms.

“The killing has really exposed deep-rooted frustration many Americans feel with interacting with the health-care system,” said Hunter, who plans to reintroduce in New York’s upcoming legislative session her bill (A 9149) that would require insurers to provide notice when AI algorithms are used in evaluating the medical necessity and appropriateness of care, also known as utilization reviews.

A 2023 KFF survey of insured adults found that 6 in 10 reported encountering issues when using their insurance, including denied claims and prior authorization delays or denials.

Many health plans use AI tools to identify gaps in care and potential safety issues, facilitate quick decisions on coverage, and predict patient risks to inform treatment options, according to trade association America’s Health Insurance Plans.

But scrutiny over the tools has grown, with top insurers Humana Inc., Cigna Corp., and UnitedHealth facing class actions alleging the companies used AI technology to improperly deny coverage. A US Senate committee report said in October that the increased adoption of AI tools was associated with a surge in coverage denial rates for post-acute care among UnitedHealthcare, Humana, and CVS Health Corp.

AHIP, of which Humana and CVS are members, said in an email that health plans don’t use automated algorithms to issue clinical-based denials of prior authorization requests, and that any cases that require clinical decision-making are reviewed by a health plan’s medical staff.

California Regulations

Wayne Turner, a senior attorney at the National Health Law Program and a consumer representative to the National Association of Insurance Commissioners, said California’s law can set a precedent for putting human reviews at the center of health insurers’ coverage decisions. Starting in January, any health insurer in the state must have a licensed physician or qualified health-care provider review and decide on any denial, delay, or modification of care based on medical necessity.

“We want to see these efficiencies and consumers benefit by having prior authorization determinations made quickly, but it does not obviate the need for individualized assessments,” Turner said.

The California law goes further in regulating insurers’ use of AI than other bills lawmakers are planning to revive in the next session. Pennsylvania state Rep. Arvind Venkat (D) said in an interview that while he understands the motivation behind the California law, that shouldn’t discount that “artificial intelligence can be a positive tool in this area as well.”

“I’m not sure that I’m ready to simply say that artificial intelligence should not be used in evaluating medical determinations of necessity,” said Venkat, an emergency physician whose bill (HB 1663) would require state-regulated health plans to share information on the AI technology used in coverage reviews. It stops short of prohibiting AI tools from making necessity determinations.

A Broad View

Both Venkat and Hunter said it’s necessary that legislators look into how AI is being used across the health-care industry, and not just among insurers.

“Whether it’s a hospital or health system, a physician’s office, provider group, or an insurance company, the same transparency, anti-bias, and human decision-maker requirements need to be in place,” Venkat said.

Hunter said she also believes legislation should be comprehensive but cautioned that “sometimes you have to take pilot baby steps when you’re trying to enact legislation that has broad reach.”

Georgia state Rep. Mandisha Thomas (D) isn’t returning to her seat in the next year but said in an interview that Rep. Segun Adeyina (D) plans to refile Thomas’ bill (HB 887). Similar to the California law, the Georgia legislation would require insurers to have a qualified individual review any health-care decisions made using AI tools.

“People are hurting—financially, physically, and emotionally, and all of that is wrapped into health care,” Thomas said. “We need this to be done across the states.”

To contact the reporter on this story: Celine Castronuovo at ccastronuovo@bloombergindustry.com

To contact the editor responsible for this story: Karl Hardy at khardy@bloomberglaw.com
