Defining AI Collusion Depends on Consumer Harm and Algorithms

Oct. 3, 2024, 12:38 PM UTC

The use of artificial intelligence has been under the microscope, from its role in misinformation campaigns to its potential to facilitate antitrust violations. Companies are increasingly using algorithms to set their prices at scale, which can cause consumers to pay higher prices for goods.

The Federal Trade Commission and Department of Justice earlier this year filed a statement of interest in a hotel algorithmic price-fixing case, explaining that “hotels cannot collude on room pricing and cannot use an algorithm to engage in practices that would be illegal if done by a real person.” Most recently, the DOJ filed suit against real estate software company RealPage Inc., accusing it of creating an unlawful pricing scheme to charge tenants higher rent prices.

Though I agree we must fight algorithmic collusion with antitrust lawsuits, I believe there are some myths to dispel when AI is spotlighted as the enabler.

The real issue with AI algorithms isn't algorithmic collusion. It's whether we can interpret the results of AI well enough to tell when it's harming consumers and when it's simply being used to set prices in the digital economy more efficiently. AI is a tool, akin to a calculator or an Excel spreadsheet, that's based on statistics: it uses large amounts of data to make predictions through probabilistic models.
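
To make that point concrete, here is a minimal, purely illustrative sketch of the kind of probabilistic prediction described above: a simple model fit to hypothetical historical data that outputs a predicted price. The occupancy figures, prices, and choice of a linear model are assumptions for illustration only, not a description of any vendor's actual system.

```python
# Minimal illustration: a pricing model "learns" a relationship from data
# rather than executing a fixed, hand-written rule. All numbers are hypothetical.
import numpy as np

# Hypothetical history: occupancy rate (%) and the price that was charged.
occupancy = np.array([55, 60, 70, 80, 90, 95], dtype=float)
price = np.array([89, 95, 110, 130, 155, 170], dtype=float)

# Fit a simple linear model (ordinary least squares) to the historical data.
slope, intercept = np.polyfit(occupancy, price, deg=1)

# Predict a price for tonight's forecast occupancy.
forecast_occupancy = 85.0
predicted_price = slope * forecast_occupancy + intercept
print(f"Predicted price at {forecast_occupancy:.0f}% occupancy: ${predicted_price:.2f}")
```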

Algorithmic collusion implies algorithms are instructing computers to agree on a piece of information, such as a price, in a way that would be illegal. Does AI teach a computer to agree with other computers and communicate sensitive information among them? Not really. The peculiarity of AI and machine learning is that these algorithms don't simply execute an order. They typically learn to perform a specific task by leveraging a large amount of available data.

Collusion also implies agreement and cooperation. But AI algorithms generally function in isolation. They don't communicate and coordinate their conduct as companies do when they agree to set the same price or otherwise limit competition. Looking to AI to understand how to tackle algorithmic collusion is therefore misleading; the attention should instead be on agreement algorithms.

Such algorithms, also known as consensus algorithms, instruct computers on how to agree on shared information, such as prices, ensuring data consistency across all computers in a network. The techniques computers use to agree aren't so different from those companies use to collude outside of computing. They have little to do with AI and machine learning techniques.
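
For contrast with the learned model above, here is a toy sketch of the "agreement" idea behind consensus algorithms: several nodes propose values and all adopt the majority proposal, leaving the network with one consistent shared value. This is an illustration of the concept only, not a real protocol such as Paxos or Raft, and the node names and values are invented.

```python
# Toy illustration of an agreement (consensus) step: nodes propose values
# and every node adopts the majority proposal, so the system stays consistent.
from collections import Counter

def agree(proposals: dict) -> float:
    """Return the value proposed by the most nodes (simple majority vote)."""
    counts = Counter(proposals.values())
    agreed_value, _ = counts.most_common(1)[0]
    return agreed_value

# Hypothetical proposals from three nodes.
proposals = {"node_a": 120.0, "node_b": 120.0, "node_c": 118.0}
shared_value = agree(proposals)

# Every node now stores the same value; that coordination step, not machine
# learning, is what "agreement" means in this context.
state = {node: shared_value for node in proposals}
print(state)  # {'node_a': 120.0, 'node_b': 120.0, 'node_c': 120.0}
```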

Still, the critical question in RealPage and similar antitrust cases often labeled as “algorithmic collusion” is whether AI and machine learning harm consumers by enabling companies to set higher prices than those companies would set without using the algorithms.

AI is employed to set or recommend prices at scale by leveraging data and computational power. Additionally, the large language models and other machine learning systems behind AI are becoming more sophisticated and precise. The problem is that as the accuracy of an AI system's predictions increases, its results also become more difficult to explain and interpret.

The question of interpreting and explaining AI results, if addressed, could help us understand how AI prices are set and whether AI algorithms are harming consumers.

A company can't collude with itself. Should the DOJ prosecute the AI algorithms used to set prices? That's a possibility, but first it's important to know how AI sets prices and whether those prices are higher than they would be without AI intervention. Factors such as inflation also must be considered.

The true value of AI to our society will be difficult to ascertain unless we can effectively explain and interpret its output. Understanding AI isn't just an antitrust problem; it extends to all the ways AI is being used and manipulated in our society.

If we continue to tackle AI concerns without actually addressing the true issue, we all stand to lose in the end.

This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law and Bloomberg Tax, or its owners.

Author Information

Giovanna Massarotto is a research fellow at University of Pennsylvania Carey Law School.


To contact the editors responsible for this story: Daniel Xu at dxu@bloombergindustry.com; Alison Lake at alake@bloombergindustry.com
