FTC’s AI Crackdown Pushes Boundaries of Its Oversight Power

Sept. 27, 2024, 9:05 AM UTC; Updated: Sept. 27, 2024, 2:01 PM UTC

The Federal Trade Commission’s actions against companies it said were deceiving customers about the value of their AI tools exemplify the agency’s efforts to rein in overhyped claims about the technology, while also raising questions about how far it can stretch its enforcement powers.

DoNotPay, which offers customers AI-powered legal services, settled with the agency for $193,000 over claims it misled users about how well its AI could replace a real lawyer. The FTC announced the settlement on Sept. 25 as part of its “Operation AI Comply,” which also included investigations of three AI companies that now face court orders halting their businesses while they await trial.

“It’s bread and butter to go after businesses that are making false claims and lying to customers, misleading consumers,” said Lee Merreot, an attorney at The Beckage Firm. “Whether they’re doing it through AI or not, the FTC is going to insert itself.”

The sweep also highlights uncertainty about how far the agency’s authority under the FTC Act extends when it comes to enforcement against AI companies, as it has signaled possible actions targeting the use of personal data and moves to promote competition.

‘Aggressive’ Stance

The commission divided 3-2 along party lines in a fifth announced action, approving a proposed complaint and settlement accusing the firm Rytr of providing its subscribers with the “means and instrumentalities” to produce false and deceptive AI-generated content for consumer reviews.

The two Republican commissioners dissented, saying the agency had no evidence that fake reviews created by the tool had ever been posted.

“This really goes beyond where the FTC has gone before,” said Arnold & Porter’s Peter Schildkraut. “It’s questionable, had Rytr not decided to settle, that the FTC really would have been able to prove its case based on the facts alleged.”

Neil Chilson, a former acting chief technologist at the FTC, said the Rytr action represents an “extremely aggressive” use of agency authority.

“It worries me about the effect it would have,” said Chilson, now the head of AI policy at the Abundance Institute, a tech nonprofit associated with the Charles Koch-backed Center for Growth and Opportunity. “The legal uncertainty of, ‘If someone does something bad with my product, is the FTC going to sue me?’”

The FTC’s proposed order would ban Rytr from advertising, promoting, marketing, or selling any service promoted as generating consumer reviews or testimonials.

The FTC’s Rytr action could push other companies to take additional steps to rein in user behavior, such as implementing “Know Your Customer” regimes similar to those required of financial firms, to avoid the agency’s scrutiny. Such efforts can be costly, Schildkraut said, especially for emerging AI startups.

That “might be enough to dissuade other companies from being willing to risk it,” he said.

In a dissenting statement, GOP Commissioner Andrew Ferguson called the action a “dramatic extension of means-and-instrumentalities liability.”

Khan’s AI Priority

The FTC’s enforcement actions weren’t surprising to those following the agency, multiple attorneys said, pointing to numerous public statements regarding AI. In 2023, Chair Lina Khan joined other agency leaders in affirming they would use existing laws to rein in AI harms, a refrain they have continued to repeat.

“The FTC has been very clear about its intentions to police these issues,” said Filipp Kofman, a partner at Davis Wright Tremaine who leads the firm’s AI team.

Deceptive AI practices aren’t the only area the agency is focused on.

In July, Khan voiced support for open-weight AI models, saying they’d bolster competition in the space. That same month, the commission launched an inquiry into how companies are using advanced algorithms and AI in their pricing decisions.

The FTC is also investigating Microsoft Corp.’s ties with OpenAI Inc., the startup behind the generative AI-powered bot ChatGPT.

In February 2023, amid the sudden surge of generative AI developments, the agency issued a business blog post warning companies to keep their “AI claims in check.”

Such publications are often a predecessor to enforcement, said Wiley Rein partner Duane Pozza, who was formerly assistant director of the Division of Financial Practices at the FTC.

The agency followed up with another blog post earlier this year warning companies against quietly changing their terms of service to collect more training data. That could signal future enforcement action on that front, said Pozza.

“If you look at the FTC’s general history, they do tend to put out public statements to describe their investigation priorities,” said Pozza.

Among other actions, the FTC will likely continue to focus on the “tie between AI and privacy,” said Jami Vibbert, chair of Arnold & Porter’s privacy, cybersecurity and data strategy group.

“What I see as common themes, I would say, between the actions that were announced in this press release and the privacy cases, is a real focus on providing transparency to consumers,” Vibbert said.

The FTC declined to comment on any open investigations or the future of Operation AI Comply.

The agency is just one of many entities with authority to investigate deceptive AI practices. Pozza and Kofman noted that state attorneys general have similar consumer-protection powers and could wield them against bad actors in the AI space. Texas Attorney General Ken Paxton recently settled with Texas-based artificial-intelligence health-care technology firm Pieces Technology to resolve claims the company misled and deceived the public about its AI capabilities.

While the FTC has so far primarily targeted scam companies with its enforcement sweep, attorneys say all companies should be assessing what claims they make about AI products.

“There’s just been so limited guardrails on AI that I’m sure a lot of businesses thought they could get away with it and were trying to for as long as they could,” said Merreot, who specializes in data privacy, security, and incident response. “With new state laws coming on board, and with FTC taking some action, hopefully that will deter some other businesses from engaging in these types of activities.”

To contact the reporters on this story: Tonya Riley in Washington at triley@bloombergindustry.com; Cassandre Coyer in Washington at ccoyer@bloombergindustry.com; Justin Wise at jwise@bloombergindustry.com

To contact the editors responsible for this story: James Arkin at jarkin@bloombergindustry.com; Adam M. Taylor at ataylor@bloombergindustry.com
