Generative artificial intelligence has experienced massive growth, with private investment in the sector reaching $33.9 billion in 2024. But lurking beneath the headlines touting AI’s ascent lies an emerging litigation landscape that should give pause to those racing to capitalize on the boom.
Federal securities cases related to AI fall into two predictable categories: those alleging companies overstated their AI capabilities, and those claiming companies understated AI-related risks.
The exaggeration-concealment dichotomy is a staple of securities law, raising questions about whether materially false and misleading statements were made (as distinct from puffery or inherently subjective statements) or whether omissions rendered existing statements misleading.
The application of these traditional principles to rapidly evolving AI technology presents unique challenges for companies.
Overstating AI Capabilities
Measuring AI performance presents a fundamental challenge. That difficulty in quantifying AI capabilities has at times made it hard for AI entrepreneurs to fulfill their commitments to investors and customers. Legal complications emerge when those unfulfilled commitments cross the line from optimistic projections into potentially actionable misrepresentations under securities law.
The pending case of D’Agostino v. Innodata demonstrates the challenges companies face in accurately communicating AI capabilities to investors. Innodata Inc., a software and data engineering company, experienced significant market volatility after a financial research firm published a critical report questioning the company’s technology descriptions and marketing approach.
The complaint alleges that while Innodata advertised its AI-focused operations to investors and touted its AI expertise and growing volume of Silicon Valley contracts, the company was simultaneously reducing its research and development spending—a fact not disclosed to investors.
Further, plaintiffs claim Innodata’s Goldengate AI platform was merely “rudimentary software” and that the company was essentially “pos[ing] as an AI company” by leveraging contracts with actual AI companies to provide cheap offshore labor for data annotation tasks.
In moving to dismiss, Innodata has raised a familiar defense, arguing that describing its platform as “state-of-the-art” constitutes inactionable puffery under federal securities laws. The defendants further contend that investors were fully aware of the company’s “large contingent of overseas workers” and resource constraints that prevented it from competing with larger AI companies.
Of course, not all statements about AI capabilities are puffery. In In re Upstart Holdings, Inc. Securities Litigation, the US District Court for the Southern District of Ohio drew a careful distinction between inactionable puffery and verifiable claims.
While statements about an AI model being a “fairly magical thing” were deemed “loosely optimistic statements that cannot be objectively verified,” the court held that claims about the “significant advantage” of the AI model over “traditional FICO-based models” and its ability to “respond very dynamically” to macroeconomic changes were actionable material misstatements. The court reasoned that these advantages were sufficiently specific, material, and verifiable to be actionable.
Understating AI Risks
This year saw numerous AI securities litigation claims filed (though not yet resolved) against companies that allegedly failed to adequately disclose risks associated both with their own AI systems and with third-party AI systems that potentially affect their businesses. Issues may arise when companies struggle to communicate the limitations and risks of those AI capabilities.
In re Sprinklr, Inc. Securities Litigation illustrates this challenge. Investors claim the enterprise software company failed to disclose the risks of expanding its AI offerings, including the need to divert "needed resources and manpower" from its core suite business, diversions that posed known risks to the company's primary revenue driver. The stock selloff that followed a Dec. 6, 2023, conference call underscores the complex challenges organizations face when implementing new technologies while managing existing operations.
Similarly, in Tamraz, Jr. v. Reddit, Inc. et al., plaintiffs allege the company failed to disclose that changes to Google Search's algorithm and its AI Overviews feature were altering user behavior. Specifically, the complaint alleges that these changes produced "zero-click search," in which users end their queries at Google's AI Overviews rather than clicking through to the Reddit website.
The complaint alleges that defendants knew that AI Overviews was reducing traffic to the Reddit website dramatically in a manner the company couldn’t quickly mitigate, yet failed to disclose that material fact.
Practical Implications
Companies seeking to capitalize on the AI revolution face a delicate balancing act. The commercial imperative to demonstrate AI capabilities to attract investment and customers must be weighed against the legal risks of overpromising results or understating challenges. Given an evolving regulatory framework and breakneck technological advancement, AI-related securities litigation will likely continue its upward trajectory.
To minimize litigation risk while maintaining competitive positioning, companies should consider these key strategies:
- Companies should be mindful that what may seem like standard marketing language about AI capabilities—particularly claims about competitive advantages, performance metrics, or risk mitigation—may be scrutinized as verifiable statements of fact rather than mere puffery. Companies should document the basis for any quantifiable AI claims and avoid superlatives without supporting data.
- The obligation to disclose material risks potentially extends not just to the AI technology itself but to the broader impacts on the company, including how third-party AI developments might affect existing business models. Organizations should consider implementing risk disclosure protocols that address internal AI development challenges and external AI disruptions. How courts ultimately resolve the cases discussed above will shape how companies should think about their disclosure obligations going forward.
- Companies would benefit from establishing clear internal guidelines across all departments for AI-related communications and ensuring consistency between public statements and internal capabilities, particularly regarding spending and technical achievements.
Whether the current surge in AI-related securities litigation represents a temporary adjustment period or an enduring feature of the landscape remains to be seen. What is clear, however, is that as total AI investment has grown more than thirteenfold since 2014, so too have the litigation risks for companies operating in this space.
This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law, Bloomberg Tax, and Bloomberg Government, or its owners.
Author Information
Samuel P. Groner is a partner at Fried Frank focused on securities litigation, shareholder and derivative litigation, and corporate disputes.
Katherine St. Romain is a partner at Fried Frank focused on securities litigation, complex commercial litigation, white collar criminal and regulatory matters, and corporate investigations.
Ilan T. Graff is a partner at Fried Frank focused on white collar matters, regulatory and internal investigations, and complex commercial litigation.
Daniel Liberman contributed to this article.