Avoid Antitrust Problems With Some Research Into AI Solutions

April 12, 2024, 8:30 AM UTC

The nation’s antitrust agencies are concerned that AI and mass data analysis will facilitate collusion and price fixing. The Department of Justice and the Federal Trade Commission in March each filed a statement of interest in a private antitrust class action, saying the use of pricing algorithms among competitors can be a per se violation of Section 1 of the Sherman Act.

This is the second time the agencies have filed a statement of interest in such a case. The filings follow the agencies’ decision last year to withdraw decades-old “safety zones” that protected competitors who exchanged competitively sensitive information under specific conditions.

Businesses need to heed this shift in thinking and be proactive about assessing the antitrust risks of using algorithms to make crucial business decisions.

Both antitrust class actions in which the DOJ and FTC have filed statements of interest involve property management software that allegedly provides rental pricing recommendations based on competitively sensitive information supplied by all the landlords that use it.

In McKenna Duffy v. Yardi Systems, filed March 1, the agencies argued that it’s illegal to “jointly delegate key aspects of pricing to a common algorithm,” even if participants “retain some authority to deviate from the algorithm’s recommendations.” The same day, the FTC issued a blog post reiterating its position that simply “because a software recommends rather than determines a price doesn’t mean it’s legal.”

The FTC also referred to another enforcement action brought by the DOJ against a benchmarking company in the meat processing industry, as well as ongoing private litigation against hotels and casinos for allegedly using mass data analysis to artificially raise prices.

Back in November 2023, the DOJ and FTC stated that “algorithms are the new frontier” of potential price fixing and the volume of information that an algorithm can digest “poses an even greater anticompetitive threat than the last” frontier.

According to the agencies’ In re RealPage statement of interest from Nov. 15, “knowingly combining sensitive, nonpublic pricing and supply information in an algorithm” to make pricing decisions “with the knowledge and expectation that other competitors will do the same” constitutes per se unlawful price fixing.

The agencies’ statements of interest show how AI and related data analysis tools have prompted a reversal of longstanding policy on information sharing. Previously, exchanging data through a third party, limiting exchanges to older data, and aggregating the data were the safety-zone conditions that allowed businesses to share information. Now, the agencies see these same guardrails as potential enablers of price fixing.

As companies evaluate new information-sharing initiatives and reassess existing benchmarking efforts, the recommendation for how to deal with AI is always “consult counsel.” While that is sound advice, it doesn’t provide a helpful framework for what companies should be thinking about or doing.

Here are three key things companies can do to assess and reduce the antitrust risk associated with using AI and related algorithmic tools.

Fully understand what these tools do. The risks from using pricing software, engaging in benchmarking, or using AI to analyze industry trends are all different. Companies should always ask the age of the information at issue, who uses the output and how it’s used, and who else is participating or using the same tools. Having the legal department gather the answers to these questions can help companies better understand whether they are inadvertently taking on more antitrust risk than they realize.

Document the procompetitive benefits. These are benefits the company is trying to achieve, such as decreasing prices or costs or increasing efficiency or output. Any procompetitive benefit should be thoroughly documented before benchmarking activity begins. Having detailed and contemporaneous evidence of procompetitive benefits will be viewed far more favorably than claims made after the fact.

Determine whether the company can be indemnified. Indemnification can come from the AI provider or from the company’s own insurance. Some AI providers already offer indemnification to clients, while others expressly decline any financial responsibility in their terms. Given the increased scrutiny on benchmarking, pricing software, and the use of AI, companies should know upfront whether they are indemnified.

AI and related tools can make companies more competitive. At the same time, use of these technologies can amplify antitrust risk. Companies can best protect themselves by asking hard questions upfront, taking the time to document the procompetitive benefits of the activity, and ensuring that the costs of responding to any government inquiry or private litigation are covered by indemnification.

This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law and Bloomberg Tax, or its owners.

Author Information

Jeetander T. Dulani is a partner at Stinson and focuses on merger control, antitrust litigation (including class actions), and civil and criminal government investigations.

J. Nicci Warr is a partner at Stinson, where she focuses on antitrust and competition-related litigation and counseling.


