Accounting software companies are promoting AI-powered tools to taxpayers while sidestepping responsibility for errors and passing liability to clients. Last month, I sent a letter to Treasury Secretary and IRS Acting Commissioner Scott Bessent calling for comprehensive federal guidance on artificial intelligence use in tax preparation. Absent this guidance, a patchwork of conflicting state rules will undermine business compliance and CPA professionalism.
While the IRS has warned taxpayers about using AI-generated tax advice, businesses and CPAs lack sufficient guidance on transparency and accountability to distinguish responsible use from reckless practice. Federal leadership and clear standards are essential to protect taxpayers and prevent regulatory chaos.
California recently introduced legislation regulating AI use by professionals, including attorneys and arbitrators, and other states will follow. If Congress waits much longer to act, expect more state regulation of AI in tax preparation, with each state writing different rules and leaving CPAs and businesses scrambling to comply. Unregulated AI-powered providers, meanwhile, would exploit the gaps.
Startups are using automated systems to prepare complex tax filings, especially for credits that US businesses rely on to stay competitive, such as the research and development tax credit. AI-powered providers promise speed and savings over a traditional CPA. But they don’t provide transparency, accountability, or professional judgment.
When errors happen, the business owner—not the algorithm—faces IRS audits, fines, and potentially criminal liability. This issue mirrors the employee retention credit scheme, which earned its place on the IRS’s “Dirty Dozen” list. Unregulated AI in tax preparation threatens to become that list’s next entry. The IRS should act to protect US businesses by setting clear rules while allowing responsible AI innovation.
Calculating complex credits is rarely a check-the-box exercise. A CPA must exercise independent professional judgment, verify facts, and take responsibility for the work. An algorithm can't do this alone.
If a provider entirely automates the process, CPAs risk losing their professional licenses. State licensing boards and the IRS rightfully expect CPAs to exercise independent judgment. If a CPA approves whatever an AI produces without thorough review, they’re abandoning the professional responsibility their license demands. The IRS must establish standards that clearly define responsible use versus professional negligence rather than leave this up to 50 different state interpretations.
Consider International Examples
Other advanced tax systems draw this line clearly. Ireland’s tax authority has stated that while it uses AI to support operations, AI isn’t a decision-maker. Human officials remain fully accountable. Leading UK professional tax bodies have issued guidance urging responsible and ethical use, risk awareness, data protection, and continued professional judgment.
These examples show how federal clarity can set the standard for professional accountability. The lesson for the US is clear: AI should assist, not replace, licensed professionals, and regulators should make that expectation explicit.
The US government recognizes AI’s limitations. The IRS is using AI only for basic administrative tasks such as document summaries and routine questions, not tax analysis. Given that federal regulators are this cautious, private companies that automate sophisticated credit filings without oversight are reckless.
Consider Canada’s experience. A survey of Canadian tax professionals found that businesses are losing money after relying on AI tools for financial and tax advice, with tax professionals regularly spotting mistakes. This problem is materializing now, and US firms and regulators shouldn’t wait to respond.
Businesses face audit risk, substantial penalties, and potentially criminal liability for tax fraud, even if the errors were generated by an AI-first provider that they hired. Company executives can be held personally liable. And small businesses, attracted by low-cost services, are most vulnerable because they lack resources to independently verify AI recommendations.
Establish Clear Standards
The IRS has the authority and the responsibility to act. In my letter to Bessent, I urged the agency to immediately issue guidance establishing clear standards for AI use. These include mandatory disclosure when AI systems are used in tax preparation, prohibited uses in complex tax matters, mandatory human oversight requirements, and heightened audit scrutiny for inadequately supervised AI work.
Responsible innovation can thrive within clear boundaries. The Big Four firms prove this daily, using AI to enhance efficiency while maintaining genuine professional oversight. But without IRS guidance, unscrupulous providers will continue exploiting the regulatory gap.
US businesses deserve protection from misleading AI services. CPAs deserve regulatory clarity. The tax system deserves federal guardrails before it becomes a proving ground for unproven algorithms. The IRS must provide that guidance now before the damage is done.
This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law, Bloomberg Tax, and Bloomberg Government, or its owners.
Author Information
Ryan Costello is a former US congressman and currently leads Ryan Costello Strategies, a consulting firm advising companies on complex public policy matters.