ANALYSIS: FTC Aims to Boldly Go Where AI, Privacy Risk Converge

Nov. 6, 2023, 2:00 AM UTC

2024 is shaping up to be an unprecedented year of new exploration for the Federal Trade Commission, as it seeks to boldly go where it has not gone before with its privacy enforcement authority by taking on the emerging privacy challenges posed by artificial intelligence. Regulating AI is relatively new territory for the FTC, but the regulator will find innovative ways to leverage its existing powers to pursue enforcement priorities such as fighting dark patterns and expanding its use of algorithmic disgorgement.

It’s true that the FTC will still be constrained by a painstakingly burdensome administrative process for creating new privacy rules, but the privacy watchdog may find future success by adapting old rules (and laws) and applying them in novel ways that businesses might not expect.

Generative AI

Artificial intelligence is truly the next frontier for all regulators, and the FTC is at the beginning of its journey in grappling with regulating emerging forms of generative AI, such as ChatGPT and other machine learning technologies.

While FTC enforcement in the AI arena is somewhat nascent, it is clear that the FTC intends to solidify its place as an AI regulator in the privacy space. Ultimately, its enforcement priorities will be gleaned from future settlements or enforcement actions. But the means by which the FTC might achieve its lofty goals can be seen even now in the strategies employed in its non-binding guidance and its choice of certain investigatory tools.

One insight into how the FTC will seek to govern the space can be found in the regulator’s civil investigative demand (CID) subpoena to OpenAI.

The CID is a pre-litigation investigatory tool that allows the regulator to compel a business to produce documents and answer questions about whether it has met FTC compliance standards for data security and privacy in light of its advertising. The demand underscores the potent weapons the FTC will continue to deploy in its AI advance: its investigatory powers under Sections 6(b) and 9 of the FTC Act enable the regulator to subpoena tech companies and order them to turn over information on the nuts and bolts of AI usage and algorithmic deployment, especially with regard to digital marketing.

Also, the FTC hasn’t been shy in issuing guidance—often in the form of blog posts—warning businesses to “[k]eep their AI claims in check” by being open and transparent about their development and use of AI technologies. Indeed, the FTC touts itself as having “decades of experience” investigating and enforcing laws important to the development and use of AI under the remit of Section 5 of the FTC Act, which prohibits unfair and deceptive practices, including in the use or sale of algorithms. The FTC has also signaled great interest in scrutinizing the bias and discrimination that could arise from the use of algorithms and automated systems in employment, housing, granting of credit, and other legally protected areas.

Furthermore, a close reading of the FTC’s recent guidance on generative AI might surprise legal practitioners with the extent of the FTC’s privacy-authority ambitions in the AI arena. Specifically, the regulator singled out generative AI tools that produce copyrighted materials as falling within the ambit of potential Section 5 enforcement, noting that such tools raise issues of consumer deception or unfairness.

Algorithmic Disgorgement

Enter what might just be the FTC’s most powerful weapon in policing the AI industry: algorithmic disgorgement, a legal remedy used by the FTC to require companies to destroy AI-powered algorithms developed through the alleged use of ill-gotten data.

As I’ve written before, the FTC has wielded this enforcement mechanism to great success in algorithm-related settlements. This year alone saw dual unfairness settlements against Amazon, requiring the company’s lines of business to delete data and algorithms trained and developed on security camera footage that was unlawfully collected (Ring) and children’s voiceprints that were unlawfully retained (Alexa).

Another example is May’s weighty $6 million penalty and proposed settlement order against edtech company Edmodo Inc. What privacy practitioners may have missed in that order, however, is that in citing violations of the Children’s Online Privacy Protection Act (COPPA) and Section 5 of the FTC Act, the FTC cannily demanded that the company delete all models and algorithms developed from child data that was allegedly collected without parental consent or school authorization. Thus, the agency has a two-pronged deterrent tool: it can levy penalties while also cracking down, through algorithmic disgorgement, on companies that abuse sensitive consumer data.

Continued Targeting of Dark Patterns

The FTC has also put businesses on notice about the perils posed by emergent AI algorithms as a potential source of manipulative user interface mechanisms, known commonly as “dark patterns.”

As I detailed in prior analysis, the FTC has taken an aggressive enforcement stance against the usage of dark patterns—deceptive user interface designs that can trick consumers into giving up personal data—under the agency’s Section 5 deceptive and unfair practice enforcement authority.

Legal practitioners and businesses alike need to look no further back than March to take in the FTC’s historic enforcement action (and ensuing consent settlement order to pay nearly $250 million) against Epic Games, maker of the game Fortnite, as evidence of the huge monetary risk and regulatory penalties faced by businesses that utilize dark patterns, ignore customer data deletion requests, or make unwanted and unauthorized charges.

Most recently, the FTC has taken on Big Tech and dark patterns in yet another enforcement action against Amazon, alleging that the company used dark patterns to trick consumers into enrolling in automatically renewing Amazon Prime subscriptions, while also making it difficult for customers to cancel their membership. This signals that an emboldened FTC is serious about the potential harms of dark patterns in digital markets beyond monetary injury, and that the agency feels confident in regulating and policing Big Tech’s potential misuse of consumer data. The regulator’s enforcement actions also indicate that it views the intersection of AI, privacy, and dark patterns as risky for consumers, and that businesses are on notice of potential privacy violations.

Under Chair Lina Khan’s dynamic and aggressive leadership, the FTC has found privacy enforcement success especially in cases where it has married hefty, tailored federal mandates (on child protection under COPPA or against data breaches with its Health Breach Notification Rule, for example) with its general authority under Section 5 of the FTC Act. It follows that the FTC will utilize this flight plan going forward to navigate privacy violations and lax data security practices amid emerging AI tech industries.

Thus, we come full circle to what the FTC endeavors to accomplish in 2024: AI privacy enforcement. And if that fails, to borrow from an old television series: These are the voyages of the starship FTC—to explore strange new worlds and to encounter the final frontier (rapidly evolving AI technology).

Access additional analyses from our Bloomberg Law 2024 series here, covering trends in Litigation, Transactions & Contracts, Artificial Intelligence, Regulatory & Compliance, and the Practice of Law.


To contact the reporter on this story: Mary Ashley Salvino in Washington at msalvino@bloombergindustry.com

To contact the editor responsible for this story: Robert Combs at rcombs@bloomberglaw.com