A suite of new laws—aimed at protecting minors online and regulating artificial intelligence—signals a shift in how California expects technology companies to account for digital well-being. Recently signed by Gov. Gavin Newsom (D), these laws may require platforms to take additional and proactive measures to avoid liability exposure.
Several of the laws recast technology companies as gatekeepers of youth safety online, including SB 243 (Companion Chatbots), AB 56 (Social Media Warning Law), and AB 1043 (Digital Age Assurance Act).
These laws impose novel obligations requiring platform operators to issue warnings about minors’ use of their platforms. Under AB 56, social media platforms must display a “black box warning,” akin to a tobacco warning, for users younger than 18; the law also specifies the frequency and format of the message.
SB 243 requires chatbot operators to disclose to minor users that they are interacting with AI and to provide a “clear and conspicuous” notification every three hours that the chatbot isn’t human and that the minor user should take a break. The law also requires chatbot operators to maintain a “protocol” to prevent suicide or self-harm content, including by referring users to a hotline or crisis text line if the user expresses such concerns.
AB 1043 requires companies to implement new age-verification processes. Starting in 2027, “operating system providers” such as Apple or Google must include a setup interface that invites users to voluntarily input their birthdate or age, which then generates a “signal” to developers to indicate the user’s age bracket.
These laws impose steep penalties for companies that fail to comply with their provisions. Most notably, SB 243 permits injured plaintiffs to seek injunctive relief, actual damages, and attorneys’ fees for violations of the law.
Anticipated Legal Challenges
These new laws may be challenged in court.
SB 243 and AB 56, with their mandated government-drafted disclosures, may draw compelled speech challenges under the First Amendment. That litigation will turn on whether the mandated disclosures are “purely factual” and “uncontroversial,” as well as whether they are justified by a legitimate state interest and not unduly burdensome. Courts have recently invalidated similar website compelled-speech requirements. And most recently, NetChoice filed a First Amendment challenge to Colorado’s HB 24-1136, which similarly requires social media platforms to warn minors about mental-health harms.
AB 1043, which requires new age-verification processes, may also face a challenge on First Amendment grounds. Litigants have challenged other age-verification laws, such as the Texas App Store Accountability Act. Unlike those laws, AB 1043 explicitly avoids requiring photo ID, parental consent, or biometric data. This compromise allowed AB 1043 to garner support from technology companies such as Meta and Google, so it remains an open question whether AB 1043 will draw the kind of challenge other age-verification statutes have faced, and from whom.
These laws may also face preemption challenges under federal law. Section 230 of the Communications Decency Act limits the liability of platforms for third-party content, and litigants’ efforts to use the new statutes to impose broader liability on service providers could conflict with that federal immunity framework. Likewise, the Children’s Online Privacy Protection Act already regulates the collection of personal information from children younger than 13, potentially preempting California’s additional requirements under AB 1043. Courts have yet to fully examine COPPA preemption challenges to other age-verification laws.
Early litigation in these areas will determine how far states can go in mandating AI disclosures and age-based online safety measures.
Practical Guidance
The new legislation underscores the complex compliance landscape ahead. California may be one of the first states to take these steps, but it’s unlikely to be the last. Companies may soon face a patchwork of state laws, each with conflicting and onerous requirements.
These laws reflect a broader movement by state legislators and litigators to place responsibility on technology companies for the outputs of AI. AB 316 explicitly provides that a chatbot operator alleged to have caused harm to a plaintiff “shall not” assert as a defense that “the artificial intelligence autonomously caused the harm to the plaintiff.”
Counsel advising social media platforms, chatbot operators, and other technology companies may want to consider the following strategies to navigate this increasingly complex legal environment:
Offer a youth-specific experience. Companies may wish to design age-appropriate versions of their platforms. For example, platforms could provide certain restrictions on interactive or immersive features for minor users. Platforms may also consider different design and governance structures for youth-facing services, such as curated content libraries or increased human moderation.
Look to new laws as a potential roadmap for navigating minors’ use of platforms. While these laws may face legal challenges, they also provide a potential blueprint that companies can voluntarily implement. For example, technology companies could introduce “quiet mode” or “wellness break” prompts modeled on the new statutory requirements.
Keep up to date on the evolving legal landscape. Companies should stay informed about new, similar laws as other states follow California’s lead. As the compliance landscape grows more complicated, companies may wish to comply with the most stringent state law or consider advocating for federal legislation with potentially preemptive effect.
California’s new youth-focused AI laws mark a significant turning point in the regulation of emerging technologies. By enacting these measures, the state has signaled its intent to lead in setting standards for AI and online safety for minors. At the same time, the coexistence of differing state and federal frameworks creates substantial uncertainty for technology companies.
This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law, Bloomberg Tax, and Bloomberg Government, or its owners.
Author Information
Jonathan Blavin is a privacy and data security partner at Munger, Tolles & Olson and represents the world’s leading technology companies.
Shannon Galvin Aminirad is a litigation associate at Munger, Tolles & Olson, with a focus on commercial litigation and intellectual property disputes.