Given the growing use of artificial intelligence (AI) and automated decision-making tools in consumer-facing decisions, we expect federal regulators in 2022 to continue their recent focus on potential discrimination and unfairness, as well as on data accuracy and transparency.
Significant technological developments in these areas and the increasing use of data analytics to make automated decisions will likely result in further regulatory action this year in three key areas: (1) assessing whether AI and algorithms are excluding particular consumer groups in an unfair and discriminatory manner, whether intentionally or not; (2) evaluating whether collected data accurately reflects real-world facts and whether companies are giving consumers an opportunity to correct mistakes; and (3) assessing whether automated decision-making tools are being used in a transparent manner.
Over the last year, federal regulators with enforcement authority in the consumer space—the Federal Trade Commission (FTC) and the Consumer Financial Protection Bureau (CFPB)—have expressed their intention to continue enforcement efforts.
The FTC has identified “technology companies and digital platforms,” “bias in algorithms and biometrics,” and “deceptive and manipulative conduct on the Internet” as among its top enforcement priorities for the coming years, and directed staff to use compulsory processes to demand documents and testimony to investigate potential abuses in these areas.
The FTC and the CFPB have each initiated or continued investigations into practices involving the collection of consumer data and the use of data analytics in consumer decisions, including the use of AI and algorithms by financial institutions, digital payment platforms, and social media and video streaming firms.
Both agencies have also made public statements that provide insight into the types of regulatory action that may be coming this year.
FTC Enforcement Areas
For example, the FTC published blog posts on its website outlining its thinking on its AI enforcement focus areas.
Discrimination and Unfairness
The FTC emphasized that Section 5 of the FTC Act, which prohibits “unfair or deceptive” practices, gives it jurisdiction over racially biased algorithms. The FTC cautioned companies that regardless of how well-intentioned their algorithms are, they must still guard against discriminatory outcomes and disparate impact on protected classes of consumers.
The FTC stated that it planned to rely on its decades of experience enforcing the Fair Credit Reporting Act (FCRA) when analyzing whether other types of consumer-related AI meet the requirements of that law.
The FTC also advised companies not to rely on “data set[s] missing information from particular populations” and advised companies to give “consumers access and an opportunity to correct information used to make decisions about them.”
The FTC said that companies should “embrace transparency … by conducting and publishing the results of independent audits” and by disclosing to consumers the key factors used in algorithms to assign risk scores.
Companies should examine their data inputs, ask questions before they “use the algorithm,” and “validate” and “revalidate” their AI models so that they fully understand the implications of their use of these data tools.
CFPB Enforcement Focus
The CFPB has likewise highlighted its interest in the following areas.
Discrimination and Unfairness
In recent testimony before Congress, CFPB Director Rohit Chopra expressed a desire to reinvigorate “relationship banking,” explaining that it would counteract the “automation and algorithms [that] increasingly define the consumer financial services market” and may “unwittingly reinforce biases and discrimination, undermining racial equity.”
A November 2021 advisory opinion by the agency emphasizes the need for accuracy in relying on data tools to make consumer decisions.
The CFPB specifically advised that “matching consumer records solely through the matching of names” is not a “reasonable procedure to assure maximum possible accuracy” under the FCRA. The CFPB further encouraged the use of more sophisticated and reliable data analytics.
In a March 2021 RFI to financial institutions seeking their views on governance, risk management, and compliance management in the “Use of Artificial Intelligence, including Machine Learning,” the CFPB stressed the importance of AI “explainability”—in other words, the need for companies to be able to ascertain and explain how their AI applications use data “inputs to produce outputs” in a conceptually sound manner.
The RFI also discussed the need for companies to monitor and validate algorithms that evolve on their own or dynamically update.
EEOC, DOJ Also Looking at AI
Other regulators have also indicated an interest in AI-related enforcement. For example, the Equal Employment Opportunity Commission has announced an initiative assessing the propriety of AI tools for hiring and other employment decisions.
In addition, the Department of Justice, along with the CFPB and the Office of the Comptroller of the Currency, launched an effort to combat discriminatory redlining by lenders; in his statement announcing this effort, Chopra said that they plan to focus on “new digital and algorithmic redlining” in addition to “old forms of redlining.”
In all, we expect that these and other regulatory efforts will continue to focus on discrimination and unfairness, accuracy, and transparency in the use of AI and consumer data. As the rules of the road continue to be written through regulatory activity in 2022, it is critical for companies to keep up to date with the latest developments.
This article does not necessarily reflect the opinion of The Bureau of National Affairs, Inc., the publisher of Bloomberg Law and Bloomberg Tax, or its owners.
Ali M. Arain is a partner in Jenner & Block’s Financial Services Litigation practice. He focuses on financial services-related litigation and internal investigations, often involving securities fraud, commercial contracts, and complex financial products like mortgage-backed securities, collateralized debt obligations, and credit default swaps.
Michael W. Ross is a partner in Jenner & Block’s Complex Commercial Litigation and Securities Litigation practices. He represents corporate clients in complex disputes, investigations, and regulatory challenges and much of his work focuses on financial services, technology, and intellectual property.
Jonathan Steinberg is a law clerk in the firm’s Litigation Department.