AI’s Racial Bias Claims Tested in Court as US Regulations Lag

Feb. 7, 2025, 10:30 AM UTC

A lawsuit developing in the Midwest highlights an AI issue that continues to trip up companies and policymakers: how to stop algorithms fed race-free data from seeing color anyway.

“Where we live, what we do for a living, what type of activities we engage in, what we purchase, even our names all have correlations with race,” said Mark Dredze, a computer science professor at Johns Hopkins University and interim deputy director of the university’s Data Science and AI Institute.

“Not telling an algorithm a person’s race doesn’t prevent it from inferring that information from countless other things it may know about us,” said Dredze, also a visiting researcher at Bloomberg LP.
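
Dredze’s point can be illustrated in a few lines of code. In this minimal sketch, which uses synthetic data and made-up feature names rather than anything from the case, a model is never given a person’s group label yet recovers it from zip code alone, simply because the two are correlated:

```python
# A minimal sketch, not drawn from the case: synthetic data in which a
# group label is withheld from the model but correlated with zip code.
import random
from collections import Counter, defaultdict

random.seed(0)

def make_person():
    group = random.choice(["A", "B"])  # ground truth, never given to the model
    # Group A lives mostly in zip codes 0-4, group B mostly in 5-9,
    # mimicking real-world residential segregation.
    weights = [8] * 5 + [2] * 5 if group == "A" else [2] * 5 + [8] * 5
    zipcode = random.choices(range(10), weights=weights)[0]
    return zipcode, group

train = [make_person() for _ in range(5_000)]
test = [make_person() for _ in range(1_000)]

# "Training": tally which group is most common in each zip code.
by_zip = defaultdict(Counter)
for zipcode, group in train:
    by_zip[zipcode][group] += 1
majority = {z: counts.most_common(1)[0][0] for z, counts in by_zip.items()}

# The zip-only predictor recovers the withheld label far above the 50% baseline.
correct = sum(majority[z] == g for z, g in test)
print(f"accuracy inferring withheld group from zip alone: {correct / len(test):.0%}")
```

The predictor here is deliberately crude; real models combine many such proxies at once, which only strengthens the inference.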

That’s at the heart of the suit from two Black homeowners in Illinois who allege that their State Farm insurance claims took longer and required more proof than identical claims from White neighbors after storm damage. The case, filed in 2022 in the US District Court for the Northern District of Illinois, survived a motion to dismiss and could grow into a class action this year covering thousands of claimants across six Midwestern states, said David Tracey, an attorney for the plaintiffs.

As artificial intelligence tools become more common and sophisticated, with companies relying on them to decide whom to hire or how to price their products, the possibility of unintentional bias remains real.

The European Union’s AI Act addresses this by calling for “technically robust” AI systems, requiring that models be trained on representative data sets, and obligating companies that provide AI systems to trace and audit the data being used. Colorado enacted a law seeking to prevent discrimination through the use of algorithms in insurance practices. And New York City now requires bias audits of AI tools used for employment decisions.

At the federal level, President Donald Trump rescinded former President Joe Biden’s executive order on AI, which warned of algorithmic bias, and it is unclear whether accounting for AI bias against racial or ethnic groups will remain a US priority. The White House didn’t respond to a request for comment.

Trump’s own brief executive order states that “we must develop AI systems that are free from ideological bias or engineered social agendas.”

“AI should not be biased,” said Chiraag Bains, a senior fellow with the Brookings Institution. “But while Trump is focusing on ideology, the overwhelming evidence of biased AI concerns racial and gender bias: job application tools that screen out women, facial recognition systems that can’t distinguish between dark-skinned faces, hospital algorithms that recommend less care for Black patients.”

“I would take this as a signal that the Trump administration won’t pursue civil rights enforcement against AI-based race and gender discrimination,” said Bains, who previously worked on issues of AI and equity for the Biden administration.

State attorneys general and private litigants could fill the void, said Bains.

Hail and Wind

The lawsuit against State Farm cites the federal Fair Housing Act in alleging that policy claims filed by Black homeowners Jacqueline Huskey and Riian Wynn were treated differently because of their race.

“To my knowledge, this is the first suit that is based upon potential machine-learning algorithms having this effect,” said Dan Schwarcz, a professor at the University of Minnesota Law School who focuses on insurance law and regulation, speaking specifically about insurance-related lawsuits.

A company spokesman denied the allegations.

“State Farm is committed to a diverse and inclusive environment, where all customers and associates are treated with fairness, respect, and dignity. We are dedicated to paying what we owe, promptly and courteously,” the spokesman said in an email.

The suit says a claim Wynn filed after the roof membrane of her Evanston, Ill., townhome blew off in a storm took three months longer to resolve, and required additional paperwork and interactions, than an identical claim from her White neighbor.

Huskey, a resident of Matteson, Ill., alleges that after hail damage to her roof, State Farm took four months to grant her claim, and then only in part. In that time, allegedly longer than the insurer took to respond to White homeowners, the damaged roof led to water damage in her kitchen and bathrooms.

Wynn and Huskey allege that the disparities stem from State Farm’s reliance on algorithmic decision-making tools in its claims review process that predict fraud and decide which claims are paid out immediately and which merit more scrutiny. They allege that the company’s decision-making tools use inputs that correspond with race or learn from historic housing or claims data that is biased. And even when there is no information about race, algorithms can combine other inputs to produce discriminatory effects, the lawsuit alleges.

“Algorithms can ‘learn’ to use omitted demographic features by combining other inputs that are correlated with race (or another protected classification), like zip code, college attended, and membership in certain groups,” the lawsuit claims.
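
To make that claim concrete, here is a minimal sketch with synthetic data and a hypothetical scoring rule; none of it is drawn from State Farm’s actual systems. A fraud score that never sees race still routes one group’s claims to extra review far more often, because one of its inputs, zip code, is correlated with group membership:

```python
# A minimal sketch, hypothetical throughout (nothing from State Farm's systems):
# a fraud score with no race input still flags one group's claims more often,
# because one of its inputs, zip code, is correlated with group membership.
import random

random.seed(1)

def make_claim():
    group = random.choice(["A", "B"])  # ground truth, never given to the model
    weights = [8] * 5 + [2] * 5 if group == "A" else [2] * 5 + [8] * 5
    zipcode = random.choices(range(10), weights=weights)[0]
    amount = random.uniform(1_000, 20_000)  # same damage distribution for both
    return {"group": group, "zip": zipcode, "amount": amount}

def fraud_score(claim):
    # Hypothetical rule: weight zip code, say because historical (and possibly
    # biased) claims data showed more disputes in those neighborhoods.
    score = claim["amount"] / 40_000
    if claim["zip"] < 5:
        score += 0.5
    return score

claims = [make_claim() for _ in range(10_000)]
for grp in ("A", "B"):
    subset = [c for c in claims if c["group"] == grp]
    rate = sum(fraud_score(c) > 0.6 for c in subset) / len(subset)
    print(f"group {grp}: {rate:.0%} of claims routed to extra scrutiny")
```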

Bains argues that existing provisions such as Title VII of the 1964 Civil Rights Act are inadequate to monitor AI algorithmic discrimination, and an overarching law might help. “What we really need is comprehensive civil rights protection,” he said. “We came close this past Congress with the American Privacy Rights Act.”

That bill stalled, in part, after civil rights protections, such as AI impact assessment requirements and the ability to opt out of AI decision-making for housing and credit, were taken out, prompting the Congressional Black Caucus to withdraw its support. Congress has yet to pass major AI legislation.

IRS Audits

Bias in algorithms can be identified by documenting each step of the modeling pipeline, from the data that goes in, to the model chosen, to how its output is used, and by examining how each of those steps can mitigate or increase disparities, said Daniel Ho, a Stanford law professor who advised the Biden White House on AI policy.

“With the same exact training data, you can have modeling choices that may lead you to sort of decisions that have higher disparities or lower disparities,” he said.
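
A minimal sketch of what Ho describes, again with synthetic data and hypothetical scoring rules: two models built on the exact same records, differing only in one modeling choice (whether zip code enters the score), are audited for disparity in who ends up selected:

```python
# A minimal sketch, synthetic data and hypothetical scores: the same records,
# two modeling choices, audited for disparity in who ends up selected.
import random

random.seed(2)

def make_record():
    group = random.choice(["A", "B"])  # used only for the audit, not the models
    weights = [8] * 5 + [2] * 5 if group == "A" else [2] * 5 + [8] * 5
    zipcode = random.choices(range(10), weights=weights)[0]
    income = random.gauss(60_000, 15_000)  # identical distribution for both groups
    return {"group": group, "zip": zipcode, "income": income}

data = [make_record() for _ in range(10_000)]

def score_income_only(r):
    return r["income"]

def score_income_plus_zip(r):
    # Same data, one extra modeling choice: penalize certain zip codes.
    return r["income"] - (20_000 if r["zip"] < 5 else 0)

def audit(score):
    # Select the top 20% by score, then compare selection rates by group.
    selected = sorted(data, key=score, reverse=True)[: len(data) // 5]
    return {g: sum(r["group"] == g for r in selected) /
               sum(r["group"] == g for r in data) for g in ("A", "B")}

print("income-only model:    ", audit(score_income_only))
print("income-plus-zip model:", audit(score_income_plus_zip))
```

On identical inputs, the income-only model selects both groups at roughly equal rates, while the income-plus-zip model selects one group far less often; auditing each step of the pipeline is how that gap gets surfaced.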

He gave the example of research he led along with colleagues at other universities, which found that in certain instances, such as when claiming the Earned Income Tax Credit, Black taxpayers were three to five times more likely than others to be audited by the Internal Revenue Service under an algorithm-based selection process.

In May 2023, the IRS acknowledged after a review that Black taxpayers were disproportionately audited in some instances, confirming findings by Ho’s team. The agency said at the time that it was “overhauling compliance efforts” and that the selection criteria for the audits were being changed.

But as Bloomberg Tax reported, it’s unclear whether those changes will take effect under the Trump administration.

To contact the reporters on this story: Kaustuv Basu in Washington at kbasu@bloombergindustry.com; Olivia Alafriz in Washington at oalafriz@bloombergindustry.com

To contact the editors responsible for this story: Gregory Henderson at ghenderson@bloombergindustry.com; Michael Smallberg at msmallberg@bloombergindustry.com
