UK Sets Example for How US Can Weaken Algorithms Targeting Kids

March 17, 2025, 8:30 AM UTC

Concerns over the long-term impact of online platforms on minors’ physical and mental health are gaining legislative attention. Policymakers, lawmakers, and advocacy groups in the UK and US are examining how AI and algorithmic design contribute to excessive screen time and addictive behaviors online.

While research suggests that exposure to media devices at an early age can benefit children’s development (such as helping improve literacy), there are also potential long-term negative effects on cognitive functioning, such as a diminished ability to switch between tasks.

Findings such as these highlight concerns that legislators want to address through legal frameworks aimed at curbing the effects of algorithm-driven engagement on minors.

Two-Pronged Approach

The UK has two frameworks in place to combat excessive internet use by minors. Its Information Commissioner’s Office’s pioneering Age-Appropriate Design Code (Children’s Code) requires online platforms to balance their service design against minors’ well-being. The UK’s Online Safety Act aims to reduce the illegal or harmful content made accessible to users, including minors, on certain online services.

One of the Children’s Code’s key tenets is ensuring providers act in the best interests of the minor and are mindful of potential harms—such as the impact of digital features that could increase addictive behavior—when designing their services.

To combat features that keep minors online (such as serving continuous content based on their online activities), providers must switch off, by default, any form of algorithmic profiling. Providers aren’t permitted to use techniques that nudge minors to implement less privacy-friendly settings or give up more of their personal information.

The ICO announced in August that it had reviewed 34 social media and video-sharing platforms using proxy child accounts and engaged with them on several issues. These included nudging techniques and the use of minors’ data in recommender systems, which can be used to promote sustained viewing. It’s clear there is an appetite to proactively assess compliance with the Children’s Code.

The Online Safety Act, by contrast, emphasizes limiting features that may expose online users to illegal or harmful content. Regulated services must conduct comprehensive risk assessments to identify and mitigate potential harms, including those caused by infinite scrolling and auto-play features, not least because they raise the risk of minors’ encountering harmful content.

While the Online Safety Act is still too new to have triggered enforcement activity from the UK’s Office of Communications (Ofcom), the two frameworks’ shared goal of combating excessive web use is evident. The Information Commissioner’s Office and Ofcom have agreed to collaborate on topics such as recommender systems and default settings to ensure they work consistently to address addictive online behavior.

Patchwork System

Several US states have addressed excessive screen time through regulations that restrict online platforms’ implementation of algorithms and other potentially addictive features, among other mitigating controls.

California’s Age-Appropriate Design Code was modeled after the Children’s Code and requires businesses to conduct a data protection impact assessment to determine whether a platform’s algorithms could harm minors. While California’s requirement is enjoined and therefore not in effect, the law has spurred a wave of legislation across the US to address addictive online behavior.

New York Governor Kathy Hochul signed the Stop Addictive Feeds Exploitation for Kids Act into law in June, prohibiting certain platforms from providing addictive feeds to minors without parental consent. “Addictive feeds” are defined as online services in which content is “recommended, selected, or prioritized for display” to the user based on information provided by the user.

In describing the law’s intent, the New York legislature emphasized that sophisticated machine learning algorithms can predict users’ interests, mood, and other characteristics that allow platforms to create feeds tailored to each user designed to keep them engaged with the platform.

Other states have attempted to combat prolonged internet use through alternative measures. Colorado Governor Jared Polis signed into law a bill requiring certain platforms to display pop-up notices informing users about the impact of social media on their health.

Meanwhile, Texas’s SCOPE Act mandates that certain online services provide parents with tools to monitor and limit the amount of time minors spend on the platform, and publish information about how algorithms promote content to users.

Keeping Up

The regulatory landscape surrounding internet addiction and AI-driven engagement is evolving rapidly, with a patchwork of laws emerging across jurisdictions.

While the UK has implemented comprehensive national frameworks, the US approach remains state-driven, leading to a mix of requirements that platforms must navigate. Some of these laws, such as California’s, face legal challenges, adding further uncertainty to compliance obligations.

To stay ahead of this shifting legal environment, online platforms may wish to take proactive steps to limit the use of algorithms and other automated features designed to maximize user screen time.

Features such as autoplay, algorithmically curated feeds, and engagement-driven notifications have come under increasing scrutiny, and restricting their use may help mitigate enforcement risks.

Platforms may also consider implementing (and, even more importantly, publicizing) transparency measures, offering parental controls, and enabling time-management tools to align with emerging legal expectations.

As regulators, courts, and industry stakeholders continue to shape the future of online safety for minors, platforms that prioritize responsible design and user well-being will be ready to adapt to forthcoming legal developments.

This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law and Bloomberg Tax, or its owners.

Author Information

Annabel Gillham is a partner at Morrison Foerster and co-managing partner of the firm’s London office, where she specializes in data protection and data crisis management.

Jonathan Louis Newmark is a privacy and data security of counsel at Morrison Foerster.

Mercedes Samavi is a technology transactions and global privacy of counsel at Morrison Foerster.

Melissa Crespo contributed to this article.


To contact the editors responsible for this story: Jada Chin at jchin@bloombergindustry.com; Heather Rothman at hrothman@bloombergindustry.com
