AI Antitrust Enforcement Supports Consumers, National Security

Oct. 30, 2024, 8:30 AM UTC

Artificial intelligence is a powerful tool that can yield great potential rewards—more efficient services, amazing inventions, and new treatments for diseases. But it carries enormous potential risk—misuse by bad actors, mass labor displacement, and even rogue takeover by autonomous machines.

Despite the warranted concerns about AI’s unknown dangers, large technology companies are poised to profit trillions from this emerging market. To stay ahead of the demand curve, many companies have accelerated their AI development.

But as Pope Francis recently said, if AI is to serve its proper role, we need “a regulatory, economic, and financial environment capable of limiting the monopolizing power of a few and ensuring that innovation benefits the whole of humanity.”

US antitrust enforcers must use existing enforcement tools to ensure mergers aren’t anticompetitive, that technology corporations don’t participate in unlawful collusion, and that companies don’t abuse their monopoly power to place their interests above the safety and well-being of consumers.

Public and governmental views on AI tend toward dystopian and utopian extremes. It’s a spectrum reflected in sparse legislation, unclear regulations, and limited enforcement of laws designed to encourage innovation and competition, empower consumer choice, and bolster national security.

The hesitancy to create and enforce antitrust rules for AI has created an environment ripe for anticompetitive practices. Because large tech companies have vast access to input data, development frameworks, talent, and computing infrastructure, they’re uniquely positioned to commandeer AI technology and dominate access to the raw materials that smaller startups need to develop their own AI tools.

Startups also struggle to access critical channels of distribution for end-use AI applications. Large tech companies already hold monopolies in other technology sectors such as search engines, social networking, and voice assistants, so they can easily integrate their AI systems into existing ecosystems and foreclose rival products.

In addition, large tech firms often shut down competition by creating barriers to entry, limiting access to essential inputs, and self-preferencing. It’s so difficult for independent startups to survive outside this bottleneck that their acquisition by, or partnership with, large tech companies could be seen as inevitable. Five of the most prominent—Amazon, Apple, Google, Meta, and Microsoft—have spent, by conservative estimates such as those from Pitchbook, more than $30 billion to acquire at least 30 AI startup companies.

Problems arise when large tech companies dodge the antitrust reviews of mergers by classifying their absorption of smaller companies as “partnerships,” “investments,” or “acqui-hires” where they hire most of a startup’s employees and then license its technology. These tactics may circumvent antitrust scrutiny and disguise effective control of a startup’s talent and technology.

Preventing collusion at the expense of consumers—in both costs and safety—will require vigorous enforcement of the Hart-Scott-Rodino Antitrust Improvements Act against harmful mergers and strong action under the Sherman Act against abuse of monopoly power. Increased market competition would create safer, higher-quality AI technologies and an added layer of caution for an emerging market that the US’ competitors and adversaries seek to exploit.

AI raises alarming national security concerns. China, for example, is actively seeking brain-computer interfaces that will merge human and machine intelligence through cognitive enhancement and “human-machine teaming” that could be used in war. AI development is no longer a niche market—it’s a driving force behind a new form of technological arms race.

The US must take a balanced approach to regulating AI and promoting competition. Over-enforcement and over-regulation could burden small businesses and limit innovation, but too little enforcement and regulation opens the door to anticompetitive tech practices and frees foreign companies to threaten national security interests.

It’s critical that we foster healthy domestic competition that will create the proper pacing needed to develop a safe, high-quality AI framework, and prohibit companies from sharing US technology with hostile countries for the sake of profit.

AI regulation isn’t simply a matter of antitrust enforcement—it’s about upholding the free enterprise principles of innovation and competition, vigilance, and excellence in this new technological frontier.

This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law and Bloomberg Tax, or its owners.

Author Information

Roger P. Alford is professor of law at Notre Dame Law School. He was formerly Deputy Assistant Attorney General for International Affairs at the Antitrust Division of the Department of Justice.


To contact the editors responsible for this story: Daniel Xu at dxu@bloombergindustry.com; Rebecca Baker at rbaker@bloombergindustry.com
