President Donald Trump’s executive order seeking to challenge and preempt state artificial intelligence regulations doesn’t give companies a get-out-of-jail-free card in 2026. Even as Washington pulls back on AI regulation, the courts won’t—and neither will consumers.
Major court rulings in 2025 already give AI companies clear pointers on where the minefields will be in the coming year: AI system design and training data. Of these two points, the most immediate and consequential one is the design liability.
AI Design Liability
AI design liability is no longer hypothetical. Tragic suicides and violent acts involving AI chatbots have led to lawsuits claiming that the chatbots manipulated vulnerable users’ emotions and encouraged suicide. In Garcia v. Character Technologies, Inc., the US District Court for the Middle District of Florida’s May 2025 ruling on a motion to dismiss is a complete game-changer: Traditional tort law is expanding its reach to generative AI in three major ways.
First, the court allowed the product liability claims to move forward by treating Character Technologies’ large language model, Character A.I., as a product rather than a service, opening the door to applying strict product liability principles to AI systems. While the court didn’t address Section 230 immunity in this ruling, the product characterization could weaken future Section 230 defenses, as companies may face questions about whether their AI systems are interactive computer services protected by that statute, or products subject to product liability standards. If AI is treated as a product, plaintiffs can argue that the “design” of the AI model was defective or that the company failed to provide adequate warnings about foreseeable damages.
Therefore, for 2026, AI companies launching chatbots, copilots, or autonomous decision-making tools should consider incorporating product safety testing into their risk assessments, just like product manufacturers. That means stress-testing models for potential harmful outputs (e.g., inconsistent decisions, dangerous recommendations, or emotionally manipulative responses), as well as disclosing material and foreseeable risks to consumers. The companies should also record testing results, which are a crucial defense in case of a lawsuit or government investigations.
Second, the court made clear that liability won’t stop at the front-end application developer. The entire AI supply chain could be pulled into the courtroom. The court allowed claims against Google to proceed under component-part manufacturer and aiding and abetting theories based on Alphabet Inc.'s Google’s provision of its Google Cloud technical infrastructure, including specialized tools such as graphics processing units and tensor processing units, which were essential for Character A.I.’s operation. This involvement went beyond generic business services and constituted substantial assistance.
For 2026, this means cloud providers, AI infrastructure and hardware providers, platform providers, foundational model developers, and enterprise AI vendors might face potential litigation risks if AI products built on their technology are alleged to be defective, depending on how essential their AI technology is tied to the harmful functionality.
Therefore, AI companies should carefully monitor how their technology is used by their clients or partners and reevaluate the indemnity clauses based on this new reality. For example, an AI company can contractually require clients to report any uses that fall outside the scope of the original agreement, prohibit them from modifying models or deploying them for high-risk use cases such as mental health, employment, or finance, and require clients to indemnify the company for lawsuits arising from the client’s model modifications or high-risk deployments.
Lastly, consumer fraud claims under the Florida Deceptive and Unfair Trade Practices Act were permitted based on the allegation that AI chatbots present themselves as real people or licensed professionals. AI developers must implement clear, unambiguous, and persistent “AI Identity” disclosures and spell out what the system can and can’t do to avoid fraud claims.
AI Data Liability
But even the best-designed AI can cause problems if it’s trained on questionable data. Three key 2025 court rulings on AI and copyright fair use are reshaping the relationship between developers and content creators heading into 2026.
In Thomson Reuters v. Ross Intelligence, the US District Court for the District of Delaware found that Ross’ use of Westlaw’s proprietary headnotes to train a competing legal AI tool was not fair use. The court sent a strong signal to AI developers: Using a competitor’s unlicensed, proprietary content to train and build a direct market substitute is copyright infringement.
In Bartz et al. v. Anthropic, the US District Court for the Northern District of California ruled at the summary judgement stage that training large language models by analyzing statistical relationships in copyrighted books is “highly transformative,” favoring a fair-use finding. But it also held that the initial acquisition of pirated books to build a permanent training library can’t be excused. In other words, even if the training process itself is transformative, illegally obtaining the data still creates major liability risks. Fair use doesn’t sanitize unlawful data sourcing.
In Kadrey et al. v. Meta Platforms, the Northern District of California court again emphasized that training models on copyrighted books to generate new content leans toward transformative fair use. But it also underscored authors’ challenge in showing actual market harm from AI outputs, especially when the AI developer doesn’t expose substantial portions of the underlying works.
Taken together, these decisions shape an increasingly clear playbook for 2026. AI developers face pressure to use licensed, “clean” data sets, rather than shadow libraries, while content creators face the challenge of proving actual market harm to succeed in infringement claims. Anthropic’s preliminarily approved $1.5 billion settlement over its use of copyrighted books signals a broader market shift: Licensing is becoming the safer path compared to litigation.
That trend is already underway. OpenAI has secured licensing deals with leading publishers such as Axel Springer, Condé Nast, News Corp, and The Associated Press. In 2026, we will likely see the blossoming of these partnerships rather than more courtroom fights.
AI regulation may slow down, but that doesn’t mean AI companies are off the hook. The courts have spoken. Your AI systems can trigger product-liability claims, your supply-chain partners may share liability, and dirty data isn’t shielded by fair use. The companies that take these rulings seriously now will be the ones best positioned to thrive in 2026.
This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law, Bloomberg Tax, and Bloomberg Government, or its owners.
Author Information
Lena Kempe is an AI, IP, and privacy attorney with experience in Am Law 100 practice, a Fortune 500 in-house counsel role, and general counsel positions. She is a Yale Law graduate and advises multibillion-dollar companies on high-stakes technology matters.
Write for Us: Author Guidelines
To contact the editors responsible for this story:
Learn more about Bloomberg Law or Log In to keep reading:
See Breaking News in Context
Bloomberg Law provides trusted coverage of current events enhanced with legal analysis.
Already a subscriber?
Log in to keep reading or access research tools and resources.