YouTube AI Age Estimation Tech Signals New Compliance Standards

Sept. 10, 2025, 8:30 AM UTC

YouTube’s Aug. 13 launch of an artificial intelligence system designed to estimate users’ ages shows how quickly the US Supreme Court’s decision in Free Speech Coalition Inc. et al. v. Paxton is forcing change on the ground for tech companies.

The AI system analyzes viewing habits and search history to identify minors, a clear response to mounting legal and political pressure to protect minors online.

The Supreme Court’s decision to uphold a state’s authority to mandate age verification for accessing sexually explicit content online validated a growing patchwork of similar state regulations, compelling major platforms, such as YouTube, to deploy technologies that will define the future of online compliance and liability.

In doing so, the ruling is poised to usher in a new era of litigation with broad implications for all businesses in the digital space.

The YouTube Approach

YouTube’s new AI system uses machine learning to infer a user’s age from behavioral signals such as the types of videos they watch and search for. If the AI flags a user as a minor, the platform automatically applies safety features, such as restricting access to mature content, disabling personalized ads, and enabling digital well-being reminders.
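
YouTube hasn’t disclosed its model architecture or the precise signals it weighs, but the flow the company describes can be illustrated with a minimal sketch. In the Python outline below, everything specific is an assumption: the signal names, the toy scoring function standing in for a trained classifier, and the 0.6 decision threshold. The point is the shape of the pipeline: score behavioral signals, and if the score crosses the threshold, switch on the safety defaults described above.

```python
# Minimal, illustrative-only sketch of behavior-based age estimation.
# All signal names, the scoring logic, and the 0.6 threshold are
# assumptions; YouTube has not disclosed its model or features.

from dataclasses import dataclass


@dataclass
class UserSignals:
    """Hypothetical behavioral signals a platform might observe."""
    watch_categories: list[str]   # e.g., ["cartoons", "gaming"]
    search_topics: list[str]
    account_age_days: int


MINOR_LEANING = {"cartoons", "toys", "school_help"}  # invented categories


def estimate_minor_probability(s: UserSignals) -> float:
    """Toy stand-in for a trained classifier: returns P(user is a minor)."""
    observed = s.watch_categories + s.search_topics
    hits = sum(1 for item in observed if item in MINOR_LEANING)
    score = hits / max(len(observed), 1)
    if s.account_age_days < 30:
        # Thin viewing history: pull the score toward uncertainty.
        score = 0.5 * score + 0.25
    return min(score, 1.0)


def apply_safety_defaults(user_id: str) -> list[str]:
    """The protections the article says YouTube applies to flagged users."""
    return [
        f"{user_id}: restrict access to mature content",
        f"{user_id}: disable personalized ads",
        f"{user_id}: enable digital well-being reminders",
    ]


signals = UserSignals(
    watch_categories=["cartoons", "toys", "gaming"],
    search_topics=["school_help"],
    account_age_days=400,
)
if estimate_minor_probability(signals) >= 0.6:  # threshold is assumed
    for action in apply_safety_defaults("user_123"):
        print(action)
```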

While this innovative approach avoids universal ID checks, it creates new challenges that echo the issues presented in Free Speech Coalition.

Users who are incorrectly identified as minors must prove their adult status by providing a government ID, credit card, or a selfie—a process that has drawn criticism from privacy advocates concerned about the collection of sensitive personal data.
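
The appeal process can be outlined in the same hedged spirit. This sketch assumes hypothetical function names and a stubbed verifier; a production system would hand each proof type off to a document-check, payment, or face-estimation vendor.

```python
# Illustrative-only outline of the adult-status appeal flow described
# above. The three Proof types mirror the article; the verifier is a
# stub, not a real vendor integration.

from enum import Enum, auto


class Proof(Enum):
    GOVERNMENT_ID = auto()
    CREDIT_CARD = auto()
    SELFIE_ESTIMATE = auto()


def _verify(proof: Proof, submission: str) -> bool:
    """Placeholder: a real system would call an external verification
    service here. Accepting any non-empty submission is purely a stub."""
    return bool(submission)


def appeal_minor_flag(user_id: str, proof: Proof, submission: str) -> str:
    """Restore adult defaults if the proof checks out; otherwise the
    minor-safety defaults stay in place."""
    if _verify(proof, submission):
        return f"{user_id}: adult status restored via {proof.name}"
    return f"{user_id}: appeal failed; safety defaults remain"


print(appeal_minor_flag("user_123", Proof.CREDIT_CARD, "tokenized-card"))
```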

As leading companies adopt these advanced AI-driven verification methods, the bar for what counts as a “reasonable” method rises, making it harder and costlier for other businesses to keep up with evolving industry standards.

Age-Gating Provisions

The issue at the center of Free Speech Coalition was Texas House Bill 1181, which is part of a wave of state statutes aimed at preventing minors from accessing sexually explicit content online.

The law applies to any commercial entity that “knowingly and intentionally publishes or distributes” material on a website where more than one-third of the content is “sexual material harmful to minors.” H.B. 1181 requires these publishers to deploy a “commercial age verification system” using government-issued identification or another “commercially reasonable method” before allowing access.

Twenty-four other states have enacted age-gating laws similar to Texas’s. And while the Free Speech Coalition ruling likely insulates state laws that mirror the Texas statute, significant uncertainty persists over which statutory restrictions extending beyond H.B. 1181 may be vulnerable to challenge and what standard of scrutiny courts would apply in evaluating them.

The Supreme Court departed from precedent by applying intermediate scrutiny instead of strict scrutiny in Free Speech Coalition. The justices reasoned that because Texas only regulated access to pornography, leaving adults free to obtain the material after verifying their age, the burden on adult speech was merely “incidental.”

Accordingly, lower courts must review whether a state is regulating only the manner of access, as in Free Speech Coalition, or is also restricting the types of content available.

While these laws often target commercial distributors of adult material, their expansive language applies to a wider range of entities, including intermediaries such as social media platforms and search engines. Online platforms now face a rapidly growing and complex compliance burden as they navigate a patchwork of often conflicting state laws.

Evolving Liability

By design, Texas H.B. 1181 renders a platform liable if it fails to deploy an adequate “commercial age verification system.” Beyond direct enforcement, platforms face the growing risk of follow-on civil suits. The new state age-verification mandates could strengthen such claims by creating a clearer statutory duty of care, making it easier for plaintiffs to argue that a platform’s failure to implement robust age-gating was a direct cause of a minor’s injury.

By upholding Texas’s H.B. 1181, the court has opened a Pandora’s box of private causes of action. While H.B. 1181 doesn’t create such an action itself, the court’s reasoning allows other laws to take the baton and run further.

For instance, Indiana’s Senate Bill 17 allows parents to sue platforms directly for violations of the law involving their children. Indiana’s law shows how states can and will build on the foundation laid by H.B. 1181 and Free Speech Coalition, fueling the trend of litigation against social media companies for alleged harm to minors.

The evolution of civil liability under the Trafficking Victims Protection Reauthorization Act provides a further analogy. After Congress expanded the TVPRA’s civil remedy to reach those who “knowingly benefit” from a trafficking venture, plaintiffs began arguing that a failure to adopt preventive measures constituted knowing benefit.

Similarly, in the age-verification context, when a statute codifies an affirmative duty to screen users, plaintiffs could argue that any lapse in that duty proximately caused a minor’s exposure to harmful content, giving rise to statutory or common-law damages.

Age-gating laws that allow companies to use a “commercially reasonable” verification method, rather than only government-issued IDs, effectively tie compliance to evolving industry standards.

The Road Ahead

The Free Speech Coalition decision paves the way for a fundamental shift in online liability by allowing state legislatures to hold corporate intermediaries responsible for protecting minors online. This new authority also creates a feedback loop: Statutory mandates drive new industry standards, which then raise the legal standard of reasonableness.

As a result, regulatory investigations and private litigation are likely to increase. As with the TVPRA, the greatest business costs may stem not from enforcement or even one-off lawsuits, but from extensive discovery, reputational harm, and operational disruption. Online content businesses should reassess their risk management protocols to keep pace with these evolving standards.

The case is Free Speech Coalition Inc. et al. v. Paxton, U.S., 23-1122, decided 6/27/25.

This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law, Bloomberg Tax, and Bloomberg Government, or its owners.

Author Information

Whitney Cloud is a partner at DLA Piper in Philadelphia.

Ellen Dew is a managing partner in DLA Piper’s Baltimore office.

Christopher Hooks is an attorney at DLA Piper in Philadelphia.

Shannon Dudic, of counsel at DLA Piper, contributed to this article.


To contact the editors responsible for this story: Max Thornberry at jthornberry@bloombergindustry.com; Melanie Cohen at mcohen@bloombergindustry.com
