AI Will Never Replace a Human Tax Pro’s Judgment and Skepticism

December 8, 2025, 9:30 AM UTC

The world seems to be buying the hype around using artificial intelligence in tax preparation, but I’m skeptical. I don’t believe AI can substitute for the professional judgment and human insight that form the bedrock of financial accounting and tax practices.

The fundamental question isn’t whether AI will be a valuable tool for accounting practice and tax compliance, but whether we can harness this technological genie without it burning down the reliability and ethics of our entire tax system in the process.

Promise and Peril

AI excels at repeatable processes with predictable outcomes, processing vast amounts of structured data without complaining about overtime or demanding better health benefits. For a nation plagued by a shortage of accounting professionals, this sounds like a dream come true.

But it also means users must understand and grapple with the implications of AI-assisted decision-making, especially in finance, tax, audit, and legal matters.

New technology can become a tool to lure unsuspecting taxpayers into questionable, often fraudulent practices. We saw this pattern with the pandemic-era employee retention credit, where unscrupulous promoters used sophisticated “software” analysis to mass-produce claims for businesses that didn’t qualify.

These system-generated, impressive-looking calculations masked the lack of documentation and flawed eligibility determinations. The result: billions of dollars’ worth of improper claims made in the name of well-intentioned but ill-informed business owners, many of whom were subjected to significant scrutiny from the IRS.

Today’s AI-powered tax services may be setting the same trap. As more tax practitioners adopt AI to streamline their processes, some may overstate its abilities to clients. Whether knowingly or not, they may use AI to justify aggressive tax positions and incentive claims that lack genuine merit.

Claiming AI can do everything a legitimate tax professional does, only cheaper, is both incorrect and dangerous. Yet, the confidence and apparent expertise with which AI can already answer tax questions means that practitioners and their clients can easily be led astray if they don’t stay vigilant.

As AI use in the tax profession deepens, tax practitioners as well as the IRS should make the public more aware of fly-by-night shops that exploit AI’s complexity as a smokescreen, passing off questionable schemes as legitimate tax work.

AI-Driven Compliance Risks

The reliability problem. AI systems are only as reliable as the data on which they’re trained. If that data is incomplete, biased, or incorrectly interpreted, AI will analyze it with unearned confidence, potentially amplifying the underlying mistakes.

When most businesses seek a research and development credit, for example, the most they have is a list of projects. They rarely have data on who worked on each project, the hours spent, or what the process of experimentation looked like. It takes a seasoned tax professional with industry knowledge to ask the questions necessary to build a narrative that satisfies the IRS.

With just a list of projects, AI has no context for qualification, so it gives a binary result: either everything qualifies or nothing does. That’s not a risk anyone should be taking.

The hallucination problem. AI “hallucinates,” offering plausible but incorrect information with such confidence that it may make one question oneself. This can be incredibly problematic when preparing taxes.

This is why professional tax practitioners won’t sign off on a tax study or filing without double-checking it if AI was used at any point in the process, even for mere arithmetic.

OpenAI, the industry leader in AI, has released two studies: one showing it is a mathematical certainty that large language models will hallucinate, and another showing it is equally certain that such models will deliberately lie to tell users what they want to hear.

It’s no surprise, then, that OpenAI updated its policy to explicitly prohibit people from using its services for “provision of tailored advice that requires a license, such as legal or medical advice, without appropriate involvement by a licensed professional.” If this is what the industry leader in AI is saying, why take anyone who claims otherwise seriously?

The courtroom problem. AI-generated information isn’t legally reliable. It can’t take an oath because it doesn’t understand what a lie is. It can’t testify because it doesn’t really know how it’s generating information, and AI-generated content can’t be admitted as evidence without proper foundation and authentication.

The fundamental problem tax practitioners using AI face is how to make its output legally defensible. If they can’t, using AI for any complex tax, audit, or compliance-related task is futile.

The evolutionary problem. Tax laws, rules, and regulations change year over year and administration to administration. No matter how fast one trains an AI system, it will always be a few steps behind the latest standards.

A tax practitioner who over-relies on AI and hasn’t kept up with recent developments exposes both themselves and their clients to significant risk.

Humans Among Machines

Having worked alongside tax policymakers, CPAs, tax experts, and IRS agents for the better part of my life, I’ve learned that the most critical breakthroughs come from seemingly meaningless conversations.

Consider a tax adviser preparing an R&D credit study. AI can scan thousands of pages of documentation and maybe categorize expenses with impressive speed. But it can’t sit with the lead engineer and ask, “Walk me through what you were actually trying to solve here” or “How was this any different from your other projects?” The engineer might mention the three months spent on a seemingly minor technical challenge that turned out to be fundamental to the entire project’s qualifications.

During audits, the smoking gun doesn’t always appear in financial statements. It comes when the auditor has the instinct to ask, “So what happened in those weekly after-hours meetings?” Professional skepticism lies not just in questioning numbers, but in questioning the people behind them.

Doubt, skepticism, critical thinking, and judgment aren’t just nice to have; they’re the most critical traits of a tax professional. These qualities are also human—something that AI can’t replicate.

When it comes to how best to use AI for tax services, the answer lies in finding the right balance. Use AI for what it’s good at, reducing human effort in routine, repetitive tasks such as processing large data sets, but don’t rely on it for determinations, judgment, and conclusions.

This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law, Bloomberg Tax, and Bloomberg Government, or its owners.

Author Information

Dean Zerbe is national managing director at Alliant’s Washington office and former senior counsel and tax counsel to the US Senate Committee on Finance.


To contact the editors responsible for this story: Rebecca Baker at rbaker@bloombergindustry.com; Melanie Cohen at mcohen@bloombergindustry.com
