The standoff between the Pentagon and artificial intelligence company Anthropic PBC over the past several weeks has focused attention on the government’s use of AI—particularly its use to monitor Americans.
Anthropic has insisted its models shouldn’t be used for mass surveillance. The Pentagon, however, has refused this request and demanded contractual language that would allow it to use Anthropic’s technology “for all lawful uses.” As a result, the Trump administration is seeking other AI partners willing to allow broader use.
This moment demands that lawmakers set aside immediate political incentives and embrace their role as stewards of our system of checks and balances. This means establishing guardrails for what will best serve the country far into the future—not basing decisions on who is sitting in the Oval Office.
After news of the Pentagon standoff became public, many observers sided with Anthropic, emphasizing the privacy risks that AI presents. Those concerns sound something like: “The use of AI technology to actively monitor millions of Americans’ private transactions, bank accounts, and related financial information—without any legal process—is highly concerning” and “raises serious doubts” about “respect for Americans’ fundamental civil liberties.”
Did this particular warning come from an AI company that shares Anthropic’s concerns, a nonprofit urging ethical limits on AI, or a vocal critic of President Donald Trump or Defense Secretary Pete Hegseth? No—the statement came from an oversight letter written two years ago by Republican leaders of the House Subcommittee on the Weaponization of Government.
The letter was directed at the Biden administration and raised concerns about whether the IRS was using AI to monitor Americans’ “private transactions and bank accounts.” At the time, I was serving as IRS commissioner.
These were important questions then—and may be even more urgent today. Two years ago, the IRS’s use of AI was in its relative infancy, limited to things such as chatbots on its phone lines, enhanced fraud filters reviewing incoming returns, and faster computer coding.
Treasury Secretary Scott Bessent has signaled, however, a broader deployment of AI. Specifically, he indicated that through “smarter IT and the AI boom,” the IRS can operate with a smaller staff and still ensure people pay the taxes they owe.
This plan may prove effective, but it underscores the ongoing relevance of the question House Republicans raised two years ago. Of note, a recent Government Accountability Office report confirms the expanded use of AI by the IRS, but also calls out current gaps in governance and internal controls.
The current debate on the Pentagon’s use of AI for national defense presents different considerations than its use to enforce tax laws. But there’s also striking commonality in a core underlying question: What guardrails should be applied to AI when the government analyzes vast amounts of data about its citizens?
Demands for answers about the government’s use of AI came largely from Republican lawmakers during the Biden administration but are coming from Democratic lawmakers today. On the bright side, this means both political parties are asking important questions about AI and privacy. Less encouraging—the timing of those concerns being voiced depends on who currently occupies the White House.
That is a familiar pattern when questions arise about the expansion or contraction of executive authority. When one party holds power, its opponents tend to decry broader authorities. When control changes hands, the vocal critics of yesterday become largely silent today.
This silence may be simply the result of partisan alignment. Or it may be a political calculation that authorities granted to the president today can be scaled back in the future. When it comes to AI, however, I recommend caution in placing such a bet.
Once embedded across government operations, AI may be difficult—if not impossible—to fully unwind. While temporary enforcement priorities can be relatively straightforward to reverse, AI is different.
AI spreads and embeds quickly through systems, processes, and decisions. Case in point: After the Pentagon provided a six-month window for employees to remove Anthropic’s tools from use, it recently clarified that such use can extend beyond that period when essential to national security. Bottom line: AI can be difficult to untangle.
That makes the current policy debate even more critical and time sensitive. A recent discussion draft of AI legislation introduced by Senator Marsha Blackburn (R-Tenn.) focuses largely on consumer protection, innovation, and competition. It also seeks to ensure the government only procures “unbiased” tools.
Those are important issues. But the framework doesn’t directly address guardrails for the government’s use of AI. Nor does it clarify or confront what many have long assumed and what a court appeared to uphold last week in a ruling that favored Anthropic: that private contractors can place meaningful limits on how the government uses their intellectual property or technology and, when challenged, can generally decline the work altogether without consequence.
Whether the AI revolution turns out to be swift or gradual, it is here now and its impact will be profound. As Congress debates how to govern this technology, long-term checks and balances merit bipartisan attention.
Danny Werfel has twice served as IRS commissioner, most recently from 2023 to 2025. He is now executive in residence at the Johns Hopkins School of Government and Policy and a distinguished fellow at the Polis Center for Politics at Duke University, writing about the intersection of tax and policy.