To complement Bloomberg Law’s In Focus: Executive Orders and Actions page and Executive Orders & Related Developments Tracker, Bloomberg Law’s legal analysts are exploring issues, data, and trends regarding the Trump administration’s executive orders. “Executive Orders: Assessing the Impact,” a new report currently available to subscribers and soon to be released to the public, features five EO-related legal analyses, including this one.
One of President Trump’s first official actions was to revoke President Biden’s executive order on the “Safe, Secure, and Trustworthy” development of artificial intelligence and to replace it with an executive order that mandates “Removing Barriers” to development of the US AI industry.
The policy shift in the executive branch, as reflected in the titles of the two EOs, is from a policy of caution about the potential risks of AI to one of removing perceived regulatory obstacles to a “national champion” industry. Trump’s EO (14179) states that US policy is “to sustain and enhance America’s global AI dominance” and instructs advisers to develop an AI action plan to further that policy within 180 days.
But is federal regulation—or even just skepticism—of AI really what’s hindering AI innovation? And what are the costs of mandating that the industry get special treatment?
More than most agencies, the Federal Trade Commission is likely to wind up in the crosswinds of any attempt to balance the risks and rewards of this fast-evolving industry and to enhance its “dominance.” As the agency charged with consumer protection and protecting competition, the FTC has its work cut out for it in a US embracing unregulated development and deployment of AI.
A Different Tone
The shift in policy is evident in how FTC commissioners talk about AI. In January 2024, the FTC hosted a Tech Summit on AI and how policy could shape its trajectory and address the risks posed by the rapid implementation of AI technology across the digital economy.
Introducing the summit, FTC Chair Lina Khan noted that AI “could open up the door to breakthroughs across fields ranging from science to education,” but also could be used to “turbocharge fraud, automate discrimination, and entrench surveillance.” Khan also expressed concerns about governance, and whether “a handful of dominant firms” would be able to “concentrate control over these key tools.”
In September 2024, Commissioner Alvaro Bedoya gave a speech, composed almost entirely of questions, at a data privacy conference; about half of those questions concerned the future of AI. They included concerns about the use of AI to deny people basic services, appropriation of human-made content and private data, monetized chatbots in relationships with teens, strain on the energy grid, and the potential for the industry to become ubiquitous and highly concentrated.
In contrast, in a speech on Jan. 30, Commissioner Melissa Holyoak said that the US has “a vested interest in keeping America first when it comes to AI technology.” The Commission “should be looking for ways to promote dynamic competition and innovation in AI,” she said, and criticized the Biden administration for instead hampering it “by pursuing unclear regulations or misguided enforcement actions.”
Rubber/Road
It’s too soon to say whether the difference in tone means a difference in action at the FTC, however. Less than six months into the Trump administration, any public enforcement at the FTC is an extension of work done in the previous administration, and any new initiatives aren’t yet public.
And the FTC is still voicing concerns about the consumer impact of AI technologies, at least in some situations. New FTC Chair Andrew Ferguson echoed the national interest in dominating tech innovation in his keynote for an FTC workshop on June 4, titled “The Attention Economy: How Big Tech Firms Exploit Children and Hurt Families.” But Ferguson added that “troubling secondary effects” from technological innovation require regulation, at least where children are involved. And the FTC has continued cases under an initiative against AI schemes that are alleged to be fraudulent.
But EO 14179 calls for an “AI Action Plan” to facilitate US AI dominance, and work to formulate that plan is underway. A published request for comment on what actions should be prioritized in the new plan garnered 8,755 responses. The FTC will need to tailor its policy and enforcement approach to the overall plan once that is articulated.
There’s a lot of uncertainty about what such a plan entails for antitrust and consumer protection. The US doesn’t have a modern history with national champions. On the contrary, the Sherman Act was written to take down a series of companies that grew to exceed the power of the government itself, and antitrust regulators have a long history of enforcement against players in strategic industries—including digital platforms.
What will be crucial to watch is how the FTC strikes a balance between its enforcement mandates and an explicit executive policy to reduce regulation and “friction” for a broad digital market. Reducing friction isn’t inherently incompatible with hearty competition and consumer protection: Overregulation, in fact, can easily favor big incumbents. But where the rubber meets the road will be highly enlightening about industrial policy in the Trump administration, the next several years of antitrust enforcement (including merger control), and the power of the data barons.
Federalism at Work?
As we have witnessed with privacy law, a vacuum at the federal level invites a patchwork of state regulation. That means Trump’s intended “wild west” for AI might not materialize: without a single regulatory framework at the federal level, AI might instead face a host of potentially incompatible regulations at the state level.
Congress is seeking to preclude that outcome with H.R. 1, the “One Big Beautiful Bill,” which contains a moratorium on all state regulation of AI. Section 43201 of the bill bans any state regulation “limiting, restricting, or otherwise regulating artificial intelligence models, artificial intelligence systems, or automated decision systems entered into interstate commerce” for 10 years.
If the legislation passes, and assuming federal legislation regulating AI doesn’t materialize, the US would essentially ban new regulation of AI at all levels of government. Of course, that would face a court challenge—and the resulting uncertainty wouldn’t help legitimate AI development or consumers while the battle played out.
Nor would a federal ban mean that AI would face no regulation: Just as the FTC continues actively enforcing consumer protection law against alleged frauds that involve an AI angle, AI misconduct that falls under existing state laws of general application (be they consumer protection or privacy or licensing) might not be impacted by the AI ban in H.R. 1. Again, only a flurry of litigation will tell.
AI Meets World
A regulatory void could put the US at a further disadvantage because US-built AI systems will continue to be regulated elsewhere—including in the EU, Japan, Canada, Australia, and the UK. In other words, AI systems will still face costly regulation, but the benefits of that regulation may not accrue to US consumers.
And the stark differences among regulatory frameworks will still add complexity to AI development and implementation. Against that backdrop, how the FTC decides to play its regulatory hand also has implications for its international relationships. It’s a narrow path for success.
Other analyses of the executive orders featured in the report cover immigration, DEI, transgender health, and disparate impact liability (publishing soon).
Bloomberg Law subscribers can find a variety of Practical Guidance documents, workflow tools, and reference materials for artificial intelligence on our In Focus: Artificial Intelligence page, and in-depth information on Trump administration actions on our In Focus: Executive Orders & Actions page.