For 30 years, Section 230 of the Communications Decency Act of 1996 was the technology industry's most reliable shield, deflecting every legal assault on its product decisions regarding children. In less than 24 hours, juries in New Mexico and Los Angeles found a way around it. The social contract between technology and society is being revised, and it isn't happening in Washington.
For three decades, technology companies have relied on Section 230 as a stout defense, arguing they are mere conduits, not responsible for what users post. These two verdicts provide a legal game plan to outflank that defense. In essence, they shifted the legal battleground from content to conduct.
The One-Two Punch
The New Mexico verdict targeted consumer protection, finding Meta Platforms Inc. liable for unconscionable trade practices regarding its safety representations. A judge will soon determine whether the platforms constitute a public nuisance, a phase that will decide if the companies must fund massive public programs to address the mental health crisis alleged by school districts and states.
Less than a day later, the Los Angeles verdict reached even deeper into the tech engineering black box. There, a jury found that Meta and Alphabet Inc.'s Google (YouTube) were liable under product liability theories.
Specifically, the jury determined that the platforms were defectively designed and that the companies failed to warn parents and children about the addictive nature of their algorithms. While Section 230 protects the hosting of speech, it offers no safe harbor from claims based on product design.
Conduct as Paper Trail
The success of these claims relies heavily on internal corporate conduct rather than user content. The evidence unsealed in these trials creates a devastating roadmap of what constructive knowledge looks like in a modern tech company.
Consider the "17-strike" policy testimony from a former Meta employee. Internal documents revealed a threshold that allowed accounts to be reported 16 times for predatory behavior before facing a ban. Legally, this shifts the argument from "we can't monitor every post" to "we deliberately engineered a system that tolerated known risks."
Furthermore, a separate internal July 2020 report titled "Child Safety State of Play" listed immediate product vulnerabilities on Instagram, such as the difficulty of reporting disappearing videos. Internal emails revealed that safeguards available on Facebook were intentionally omitted from Instagram specifically to avoid friction in user growth. When a product vulnerability is identified and a business decision is made to choose growth over mitigation, it creates a record of conduct that has nothing to do with content.
Legal Engineering Mandate
The traditional silo between legal and product teams is no longer sustainable. In fact, failing to involve legal counsel in the earliest stages of algorithmic design may soon be viewed as a form of corporate malpractice. Counsel must now engage in legal engineering: auditing the content recommendation logic and behavioral targeting mechanisms that determine what content reaches which users, and when, to surface design choices that create foreseeable harms.
In analyzing potential harms, corporate counsel should also consider worst-case scenarios to stress-test product features. This process continues through the product lifecycle, where public perception of how the product actually operates must be assessed and incorporated.
If a product is engineered to match users or to prioritize engagement over documented safety risks, the company is engaging in conduct that a jury can now audit. Risk identification is now essential to the initial product sprint, not an afterthought once a lawsuit is filed.
For in-house counsel, the immediate lesson is about governance architecture. Every internal debate over engagement features, and its resolution, is now potential evidence of defective product design or poorly documented product risk.
Products, Not Platforms
The implications for the burgeoning artificial intelligence economy are immediate and existential. These verdicts suggest that any company building a predictive machine—from a niche fintech tool to a customer service bot—must treat it as a product subject to product liability.
In the social media world, companies have long claimed to be neutral hosts of third-party content. However, generative AI’s output is the direct result of a company’s own proprietary algorithm and training data. In such a scenario, the neutral host defense is likely unavailable.
If an AI produces harmful or deceptive material, courts and juries are signaling that they will view that output not as user content, but as the company’s own conduct. For AI developers, the lesson of the California verdict is clear: Your design choices and the behavior of your models are already being measured against the same product liability standards used for cars or medical devices.
Digital Guardrails
The current wave of litigation stems from a federal regulatory vacuum. In the absence of comprehensive standards for privacy, security, and algorithmic design, states have stepped in alongside existing federal sector laws, leaving companies to navigate a fragmented patchwork. As a result, digital guardrails are increasingly being defined in the courtroom.
For corporate boards, the strategic challenge is no longer just "Can we build it?" but "What design tradeoffs are we making in pursuit of market share that might create a design defect?" As the rush to market accelerates, the failure to consider and disclose these unintended consequences may leave companies vulnerable to massive liability under product liability, consumer protection, and deceptive trade practices laws.
In a mere 24 hours, the social contract for technology has begun to be renegotiated not in Washington, but in the jury box.
The cases are State of New Mexico v. Meta Platforms Inc., N.M. Dist. Ct., No. D-101-CV-2023-02838, verdict 3/24/26; Social Media Cases JCCP, Cal. Super. Ct., No. 5255, verdict 3/25/26.
This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law, Bloomberg Tax, and Bloomberg Government, or its owners.
Author Information
Justin Daniels is a shareholder in Baker Donelson’s data protection, privacy and cybersecurity practice.
Jodi Daniels is the founder and CEO of Red Clover Advisors, a privacy consultancy firm.