The recent craze over artificial intelligence, and its ever-increasing accessibility, may encourage new technological advancements in the loan industry. But while AI-driven lending strategies could increase access to credit in underserved communities, their propensity for algorithmic bias can also lead to lending discrimination lawsuits.
Right now, there are no antidiscrimination laws that specifically address the use of AI in lending. But that doesn’t mean creditors can skirt their responsibilities under existing laws just by blaming their algorithms. Discrimination lawsuits are being filed nonetheless—including in the lending industry itself—singling out AI as the culprit.
Algorithmic Bias and Lawsuits
Algorithmic bias generally arises in one of two contexts: when the algorithm itself is biased, or when an otherwise unbiased algorithm is trained on biased data. Even if the code and training data appear unbiased, an algorithmically derived lending strategy that disparately impacts protected groups can still be unlawful under existing antidiscrimination laws, even though those laws never mention AI.
Borrowers alleging algorithmic bias can seek relief under laws like the Equal Credit Opportunity Act (ECOA), Fair Housing Act (FHA), Section 1981 of the Civil Rights Act, UDAP/UDAAP laws, and state antidiscrimination laws. None of these laws specifically mention algorithmic bias, but this is not stopping plaintiffs.
For example, a March 2022 class action brought under the ECOA, FHA, and California's Unfair Competition Law alleges that Wells Fargo's online refinancing calculator discriminates against minority and female applicants seeking to refinance their mortgages. The calculator is an algorithmic tool that requires inputs such as ZIP code, education, and area code, all identifiers that the Consumer Financial Protection Bureau has identified as proxies for race.
To appreciate the growing scope of algorithmic bias risks, it is also worth watching AI discrimination cases that do not involve lending, especially when the mechanisms of the alleged discrimination are similar.
For example, in a December 2022 class action brought under the FHA, the plaintiffs allege that State Farm's use of automated claims processing methods and machine-learning algorithms discriminates against Black policyholders by treating their insurance claims with greater suspicion than those of their White counterparts, resulting in delays that caused economic and emotional harm. The lesson for lenders? Data programs that single out members of a protected class for extra scrutiny and payout delays could expose lenders to liability.
Regulators Fill the Legal Void
Though plaintiffs appear unfazed by the lack of AI-specific discrimination laws, the federal government is mobilizing to regulate the issue nonetheless.
In 2020, the Office of Management and Budget issued guidance directing agencies to "consider in a transparent manner the impacts that AI may have on discrimination." In May 2022, the CFPB explained that the ECOA and Regulation B require creditors to "provide statements of specific reasons to applicants against whom adverse action is taken," even if the decisions were based on complex algorithms that make it hard to identify the underlying bases. And in October 2022, the White House released a Blueprint for an AI Bill of Rights, which broadly outlined what AI antidiscrimination laws could look like.
And just this week, the CFPB, Justice Department, Federal Trade Commission, and Equal Employment Opportunity Commission issued a joint statement, noting that AI is becoming increasingly common and committing themselves to enforce “civil rights, non-discrimination, fair competition, consumer protection, and other vitally important legal protections.” The agencies pledged “to vigorously use [their] collective authorities to protect individuals’ rights regardless of whether legal violations occur through traditional means or advanced technologies.”
However, none of this guidance is currently binding.
AI’s Risk-Reducing Silver Lining
AI has the potential to assess creditworthiness more accurately, reducing lending risk without unlawfully discriminating. In other words, the same technology that can lead to lending discrimination lawsuits can be a helpful tool in avoiding them.
How can attorneys advise lenders right now? They can start by ensuring that lenders understand—and, just as importantly, can explain—the mechanisms driving their AI’s credit approval process.
In particular, the AI training data used by a lender should be representative of the communities it serves, and historical inequities should not be carried over into modern-day programs. The algorithms should also be calibrated to avoid over-reliance on agency-identified proxies for protected classes. And as a routine practice, lenders should review their AI-driven strategy; if it disparately affects a protected class of borrowers, it must be changed.
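One commonly cited screening heuristic for the kind of review described above is the "four-fifths rule": comparing approval rates between a protected group and a control group and flagging the strategy for closer scrutiny when the ratio falls below 80%. The sketch below is purely illustrative, with hypothetical data and function names; it is a first-pass screen, not a substitute for formal fair-lending testing or legal analysis.

```python
# Illustrative sketch of a four-fifths-rule screen on lending outcomes.
# Group labels, data, and the 0.8 threshold are assumptions for demo
# purposes; real disparate-impact analysis is far more involved.

def approval_rate(decisions):
    """Fraction of applications approved (decisions are booleans)."""
    return sum(decisions) / len(decisions)

def adverse_impact_ratio(protected, control):
    """Protected group's approval rate divided by the control group's."""
    return approval_rate(protected) / approval_rate(control)

def flags_disparate_impact(protected, control, threshold=0.8):
    """True if the ratio falls below the commonly cited 80% threshold."""
    return adverse_impact_ratio(protected, control) < threshold

# Hypothetical application outcomes: True = approved.
protected_group = [True, True, False, False, False, False, False, True]  # 3/8
control_group = [True, True, True, False, True, True, False, True]       # 6/8

ratio = adverse_impact_ratio(protected_group, control_group)
print(f"Adverse impact ratio: {ratio:.2f}")  # 0.375 / 0.75 = 0.50
print("Review needed:", flags_disparate_impact(protected_group, control_group))
```

A ratio of 0.50 in this hypothetical would fall well below the 80% screen, signaling that the strategy warrants recalibration before a plaintiff or regulator reaches the same conclusion.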
In the meantime, until regulators issue AI-specific rules, court decisions like those discussed above offer the best guidance on the interplay between AI, antidiscrimination requirements, and the lending industry.
Bloomberg Law subscribers can find related content on our In Focus: Artificial Intelligence resource.