- USC professor says AI industry isn’t free from market failure
- Without IP, markets can’t sustain content that feeds AI apps
Regulators’ confident predictions of antitrust harms in the artificial intelligence marketplace are hard to reconcile with a competitive, fluid ecosystem that doesn’t yet provide reasonable grounds for antitrust intervention.
But there are grounds to take action now to preserve intellectual property protections for content and data assets being used in building the AI ecosystem. All stakeholders would benefit by proactively developing access, tracking, and payment mechanisms that reflect the value of content and data used by AI model and app developers.
Antitrust Risks
An AI-enabled app such as ChatGPT occupies the endpoint of a tech stack composed of multiple segments. The propensity for anticompetitive outcomes varies across the stack but doesn’t appear to be material in any particular segment.
Consider the downstream segment populated by generative AI apps such as Stable Diffusion, an image generator, or Runway, a video generator. Entry barriers and concentration are low, and product differentiation is high, which diminishes the likelihood of a monopoly.
In the upstream segment populated by large language models such as OpenAI’s GPT-4 and Anthropic’s Claude (on which app developers rely), entry barriers are substantially higher, mostly due to the costs of procuring cloud-computing services and AI-specialized chips. Hence there are concerns that those entry barriers could promote concentration.
Several features of the AI ecosystem mitigate such concerns. First, multiple companies have developed competing models, and some of those models address more specialized vertical segments that pose little risk of dominating the AI ecosystem in general.
Concerns may nonetheless persist that tech leaders may enjoy a competitive advantage by integrating models and associated apps into an existing platform. For example, Google’s Gemini and Meta’s Meta AI are answer engines (which provide direct answers to user queries) that are offered as part of a larger platform.
Yet those services vie with each other for adoption, casting doubt on the likelihood of an AI monopoly. Entrants have also developed answer engines, such as OpenAI’s ChatGPT and Perplexity AI, that may challenge Google’s search service.
Second, entry barriers are mitigated by partnerships between platforms and entrants that have developed models or associated apps, such as Microsoft and OpenAI, and many other relationships between tech leaders and independent model developers.
These relationships facilitate entry and increase competition by providing model developers with investment capital, computing services, and distribution infrastructure. They typically are nonexclusive, which enables the model developer to retain control and source funding from multiple stakeholders.
IP Risks
Currently, regulatory intervention on antitrust grounds doesn’t appear to be warranted given the active entry at various levels of the AI stack. However, there are compelling grounds for action to remedy insufficient IP protection for content producers and data owners.
Content and data fuel the AI ecosystem. Model developers rely on access to large datasets to train models and generate responses to user queries. Some model developers have entered into licensing agreements with large content owners.
But the AI ecosystem currently operates in a legal netherworld in which copyright protections are either largely ignored or, as some model and app developers may claim, deemed not to apply under various legal defenses. Those defenses are currently being tested in litigation brought by content owners.
Dot-Com 2.0
Legal uncertainty surrounding the use of proprietary AI content recalls the dot-com era at the turn of the 21st century. Digital platforms (which were then mostly startups and smaller firms) often took a “move fast and break things” approach that ran roughshod over copyright.
The gamble paid off. In a landmark ruling for Google over content owners in 2010, a federal court effectively exonerated its subsidiary YouTube’s actions in encouraging users to upload and share proprietary content through the platform.
Subsequent court decisions took similarly infringer-friendly positions, adopting expansive readings of the fair use defense and of statutory safe harbors shielding platform intermediaries from contributory infringement.
Content owners were left with limited meaningful remedies against mass infringement—and effectively enriched tech platforms that are now among the world’s largest companies.
While platforms largely prevailed in weakening copyright protections, the market developed user-friendly technologies to track consumption, regulate access, and deliver some remuneration to content owners, such as streaming platforms, e-book readers, or digital image libraries.
The reason is simple: Markets can’t function without meaningful property rights.
The reemergence of property rights reflects a recognition that publishers, record labels, and movie studios are unlikely to continue investing in content production if they can’t share in a meaningful portion of the revenues generated through use of that content. YouTube and other digital platforms now regularly enter into licensing deals with content producers or produce content themselves.
This logic underlies the Supreme Court’s 2023 decision in Andy Warhol Foundation for the Visual Arts v. Goldsmith, which places some limits on the lower courts’ continuous expansion of the fair use defense beyond its historical boundaries.
The virtuous feedback loop among property rights, investment, and content production that partially restored the legal infrastructure for digital content markets can also apply to AI. To avoid a repeat of the copyright wars that ensued in the dot-com era, the AI-enabled marketplace would benefit from the development of efficient licensing and other payment mechanisms to maintain incentives for creative production.
Without intellectual property, markets won’t fund and support the activities necessary to preserve the content pipeline that sustains the AI ecosystem.
This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law and Bloomberg Tax, or its owners.
Author Information
Jonathan Barnett is professor of law at the University of Southern California’s Gould School of Law.