Insurers’ AI Use for Coverage Decisions Targeted by Blue States

Nov. 30, 2023, 10:00 AM UTC

States are getting more aggressive in probing how predictive artificial intelligence models may lead to discrimination in insurers’ underwriting practices.

Colorado became the first state to adopt a formal regulation specifically aimed at insurance algorithms. But regulators in other blue states—including New York, California, and Connecticut, as well as Washington, D.C.—have issued their own warnings and notices directing carriers to show their models and data aren’t discriminatory. New Jersey, meanwhile, is one of several states that have introduced similar legislation.

The state regulators’ moves come as insurance giants—including units of State Farm Mutual Automobile Insurance Co., Cigna Group, and UnitedHealth Group Inc.—have faced proposed class actions alleging unfair practices against minorities and older customers stemming from the insurers’ use of automated processes to deny coverage.

Read More: State Farm Must Face Race Discrimination Suit Over Algorithms

Colorado’s initial regulation, which took effect in November, instructs life insurance companies to report how they oversee AI models and use nontraditional data such as customers’ social media posts, credit scores, and shopping habits. Life insurers typically use medical records, ZIP codes, smoking habits, and marital status, among other factors, to determine a policyholder’s rate.

Colorado “is just the first mover. It’s wise for us to consider this as the beginning of state-by-state rulings,” said Bryan Simms, president of Mammoth Life & Reinsurance Co. “We would expect to see every state have some form of regulatory issuance” requiring insurers to explain AI-driven decisions, he said.

Insurers are already raising concerns about a complex patchwork of laws and regulations governing their use of AI, as each state takes its own approach. States have been the primary regulators of insurers dating back to the 19th century.

If Colorado’s regulation proves to be effective, “it’s really easy to export this model to other states” and across other insurance lines including to auto and home, said Daniel Felz, a partner at Alston & Bird LLP who advises clients on data and technology issues.

Colorado has also proposed applying its life insurance AI regulation to auto insurers, setting a Dec. 1 deadline for comments, said Carole Walker, executive director of the Rocky Mountain Insurance Information Association, an auto and home insurance trade group. “We do not want the same framework for auto,” she said.

Black Box

The advance of AI has brought faster underwriting and allowed insurers to sell more policies. At the same time, AI tools have renewed old concerns, and surfaced some new ones, about biases in the risk and pricing models insurance companies use.

In the pre-AI era, carriers worked with actuarial tables to calculate risks for different demographic groups. Insurance underwriting was “very empirical, concrete, and mechanical,” said Noah Johnson, co-founder of data security firm Dasera Inc. “One can reproduce the same result and explain exactly how they got there.”

AI, on the other hand, is much more opaque, and can draw correlations between two unrelated factors without showing its work or what it learned from the underlying dataset. Insurers nowadays can feed AI models a trove of consumer data obtained from public sources or purchased from third-party vendors, increasing the likelihood that disparate data points will be linked.

Regulators are concerned AI tools could make decisions based on correlation instead of true causation, said Gene Benger, a Clifford Chance attorney and former general counsel at the New York State Department of Financial Services.

Read More: Regulate AI? Here’s What That Might Mean in the US: QuickTake

For example, an AI model may find that a bartender who drives home at 3 a.m. has more accidents and thus should pay a higher rate than a teacher who commutes at regular hours, he said. But the commute schedule, rather than the occupation, could be the real cause of the accidents, since some people may drive more recklessly late at night on a highway when there aren’t many cars, he said.

New York, for its part, adopted a rule several years ago barring auto insurers from using occupation and education to determine insurance premiums, because that amounts to discrimination against certain professions, the regulator said.

The Empire State has also ordered insurers to explain and defend how any correlation is drawn based on an automated process, Benger said. It isn’t enough merely to assert that AI is an inscrutable black box.

Rather than relying on a policyholder’s occupation, for instance, “the regulator wants insurers to pinpoint the exact driving activity or factor that contributes to more accidents,” Benger said.

New York will soon issue additional AI guidelines on insurance underwriting, the state’s financial services regulator, Adrienne Harris, said at a fintech conference in November.

Historical Bias

AI-related bias in insurance products can’t always be blamed on algorithms.

Some AI models may draw from narrow datasets that aren’t representative of the whole population, while others likely reflect the preexisting biases of the people who designed and trained them, said Dasera’s Johnson.

Long before the advent of AI, many insurers were accused of discriminatory practices.

Life and health insurers, for instance, have often avoided selling policies to minority groups by associating shorter life expectancy with lower socioeconomic classes, said Simms of Mammoth Life.

“Some life insurers have never taken a chance to try to underwrite different communities,” Simms said. “They have bias built into the system, whether they have modern data analytics tools or not.”

If anything, expanding the universe of publicly available data points for AI tools should give life insurers an opportunity to sell more policies to historically excluded communities, he said.

But complying with various states’ AI insurance directives could prove challenging.

‘Not Worth It’

Colorado’s AI regulation for life insurers is only an opening move, said Vikram Sidhu, an insurance regulatory partner at Mayer Brown LLP.

Insurers that don’t comply could face sanctions such as fines and penalties, suspension of licenses, and cease-and-desist orders, he said. Carriers will also have to follow quantitative AI testing requirements under a draft proposal the state insurance regulator issued in October.

Meanwhile, the National Association of Insurance Commissioners issued a draft bulletin in October asking insurers to have better governance over their use of AI models and big data.

“There’s a lot for insurers to get their arms around,” Sidhu said.

Insurers are worried Colorado will use the life insurance regulation as a template for auto insurers’ use of AI in underwriting, said Walker from the Rocky Mountain Insurance Information Association.

“Auto is so much more complicated than life,” she said. “There are so many variables: people drive different cars, and there are just different geography and weather factors.”

Insurers operating in multiple states are also concerned about how to satisfy AI requirements across jurisdictions. “There may not be uniformity,” said Avi Gesser, a data security partner at Debevoise & Plimpton LLP.

“It would be a problem for some insurers if they had to do different testing for their algorithm state-by-state,” Gesser said. “Some insurers may say, ‘Well, maybe it’s not worth it—maybe we won’t use external data, or maybe we won’t use AI.’”

To contact the reporter on this story: Daphne Zhang in New York City at dzhang@bloombergindustry.com

To contact the editors responsible for this story: Michael Smallberg at msmallberg@bloombergindustry.com; Anna Yukhananov at ayukhananov@bloombergindustry.com
