ANALYSIS: What Is ‘AI Washing’ and How Can Lawyers Prevent It?

Aug. 13, 2025, 2:00 PM UTC

In advertising, “puffery” is widely accepted as a persuasive marketing tactic. Courts have long recognized that exaggerated or boastful advertising statements that are not presented as factual claims or misrepresentations are legally permissible. While we would like to believe that an energy drink can give you wings, or that a sovereign nation runs on coffee and donuts, these assertions are so obviously implausible that no reasonable consumer would take them as literal fact.

But bold exaggerations can quickly veer into material misrepresentations. And when it comes to a hot product like artificial intelligence, companies are showing a troubling willingness to blur the line between claims that constitute savvy marketing and those that draw charges of outright deceptive misrepresentation.

Too often, what’s marketed as state-of-the-art AI is, in reality, little more than smoke, mirrors, and human labor—a modern-day equivalent of the Mechanical Turk, an elaborate 18th-century hoax that purported to be a chess-playing machine but was actually operated by a hidden human chess master. In today’s terms, this is known as AI washing, an emerging practice, similar to greenwashing, of making misleading representations about a company’s AI products, capabilities, or ownership. This includes overstating, fabricating, or concealing the true extent of AI automation, capabilities, human involvement, or ownership of the technology.

In today’s hyper-competitive AI marketplace, companies are increasingly embellishing their AI technology and marketing basic automation using AI buzzwords. But unlike playful exaggeration, terms like “AI-powered” or “driven by machine learning” are material technical claims that imply specific, verifiable functionality and have legal implications. While the legal line between mere puffery and actionable statements can, in practice, be difficult to discern, claiming a platform is “fully AI-automated” when it’s largely driven by, say, rule-based logic and manual processes goes well beyond that line.
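
To see what that can look like in practice, consider a minimal, hypothetical sketch in Python. Everything below is invented for illustration; the function and its rules do not describe any real company’s product.

    # Hypothetical illustration: a feature marketed as an "AI-powered
    # recommendation engine" that is, in reality, ordinary rule-based logic.
    # There is no model, no training data, and no learning involved.

    def recommend_plan(customer_age: int, monthly_spend: float) -> str:
        # Hard-coded business rules, not machine learning.
        if customer_age < 30 and monthly_spend > 100:
            return "premium"
        if monthly_spend > 250:
            return "enterprise"
        return "basic"

Calling logic like this a “proprietary machine-learning engine” would be a verifiable, and false, technical claim rather than puffery: There is no model, training process, or learned behavior to substantiate if a regulator asks.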

The motivation behind the marketing hype is obvious: Companies seeking market relevance know that investors are looking to capitalize on the AI boom, while consumers, who often have limited literacy in complex technologies, are liable to take AI claims at face value. Beyond market concerns, AI washing raises ethical concerns and risks undermining public trust, especially as AI rapidly advances while regulation struggles to keep pace. By fueling investor skepticism, it can also chill innovation and, in turn, stifle the development of critical technologies.

Regulatory Pressure

Regulators have begun scrutinizing AI-related disclosures that are material to a company’s valuation or product capabilities, and are paying closer attention to misrepresentations about the sophistication of companies’ AI tools.

The Federal Trade Commission has warned that existing consumer protection and advertising laws apply to AI-related claims and, as a result, materially misleading AI assertions constitute deceptive practices. The Securities and Exchange Commission has similarly pursued enforcement actions against publicly traded companies for material and unsubstantiated misrepresentations about AI capabilities in investor materials and public statements.

Until recently, the focus has largely been on publicly traded companies, but a shift appears to be underway: The SEC and Department of Justice are now also turning their attention to privately held companies. Recent enforcement actions illustrate where puffery ends and material misrepresentation begins, signaling regulators’ waning patience with companies that cross this line.

Three Enforcement Examples

In January 2025, the SEC issued an order finding that Presto Automation violated the Securities Act and Securities Exchange Act by making “materially false and misleading” claims about “critical aspects” of an AI product in its SEC filings. The company claimed to have developed an innovative AI-assisted speech recognition tool that removed the need for human intervention in drive-through order-taking. In fact, the AI technology was developed and owned by a third party, which the company failed to disclose, and human involvement remained a key part of the tool’s functionality.

In ongoing litigation against Albert Saniger, founder and CEO of Nate Inc., the DOJ and SEC allege that Saniger pitched his startup’s mobile app to investors as fully AI-powered and “scalable,” capable of independently placing orders across multiple e-commerce platforms. In reality, the regulators claim, the purported AI automation capabilities were more aspirational than tangible: Behind the scenes, transactions were completed manually by humans and bots, not by any form of intelligent automation. Although Nate Inc. is a privately held company with no obligation to file public disclosures, the DOJ charged Saniger with criminal counts of securities and wire fraud, and the SEC alleged violations of Section 17(a) of the Securities Act and of Section 10(b) and Rule 10b-5 of the Securities Exchange Act, based on AI claims made in the company’s marketing materials, investor presentations, and press coverage. The DOJ and SEC deemed these voluntary statements materially misleading, despite the company’s private status.

Settled in March 2024, the Global Predictions matter is particularly instructive on the role of social media in AI washing. The SEC found that the investment advisory firm willfully violated the Amended Marketing Rule of the Investment Advisers Act by making false and misleading public communications about its AI technology on its website, social media platforms, and client emails. In addition to marketing itself as the “first regulated AI financial advisor” and claiming to use expert AI-driven forecasts, Global Predictions failed to follow its own compliance manual and safeguards governing the use of social media platforms, according to the SEC.

Former SEC Chair Gary Gensler, who once described AI as “the most transformative technology of our time,” cautioned in early 2024 that exaggerated claims about AI capabilities could constitute fraud if they are materially misleading and relied upon by investors. These enforcement actions not only echo Gensler’s earlier warnings, but also show regulators’ continued focus on AI-related misrepresentation.

Indeed, Gensler’s departure hasn’t slowed the agency’s efforts against AI washing. Initiated under the Biden administration, enforcement efforts to address AI washing have continued under the current administration, despite President Trump signaling a more permissive and less restrictive approach to AI regulation. Notably, charges against Saniger, which expanded AI washing enforcement to privately held companies, were filed as recently as April 2025.

Legal Team Role: Gatekeepers of Trust

The aforementioned enforcement actions illustrate the evolving standard of materiality and the importance of vetting AI disclosures carefully. They signal that voluntary public statements, even from non-reporting companies, can trigger enforcement if they cross the materiality threshold.

Law firms and in-house legal teams, as gatekeepers of trust, can play a critical role in mitigating risks that could invite claims of AI washing. Legal’s role goes well beyond merely reviewing marketing materials; it includes designing and enforcing robust compliance policies that govern how AI capabilities are communicated across public-facing platforms. Legal teams must proactively take ownership of early collaboration rather than treating compliance as a post-approval checkbox brought out solely for after-the-fact review.

In-house Teams

Given their proximity to the tech and the teams building it, in-house counsel has a front-row seat, and arguably a front-line duty, to act long before AI claims hit the public. That can mean any or all of the following:

  • Embed compliance steps early in product and marketing cycles.
  • Collaborate early with relevant stakeholders including engineering, marketing, IT, and risk teams to establish robust internal approval protocols. This not only positions counsel as a key partner early in the tech development process, but also expands their skill set and deepens their technical knowledge by exposing them to areas beyond their traditional legal expertise.
  • Manage disclosures across financial filings, marketing materials, press releases, and social media platforms, including vetting AI-related language for accuracy and compliance.
  • Implement AI claims review processes and cross-team sign-off protocols.
  • Define internal standards for what qualifies as “AI,” using industry best practices, and set parameters around what claims the company is willing to defend, not just promote.
  • Maintain thorough documentation to substantiate tech functionality, capabilities, updates, and ownership.
  • Engage regulatory agencies and lean on outside counsel for compliance reviews.
  • Anticipate how public statements could be interpreted, misconstrued, or scrutinized down the line.

Law Firms

External counsel can provide independent oversight, bring the distance needed to challenge assumptions, and ask probing questions informed by regulatory insights. Because they work across industries, law firms are well positioned to spot trends and help shape responsible standards. Their contributions can include the following:

  • Audit AI assertions for potential misrepresentation and reputational risk.
  • Advise on evolving AI disclosure obligations across jurisdictions.
  • Help clients establish internal governance structures, policies, and protocols.
  • Advise clients on investor documents and disclosures, and assist with vetting public-facing marketing materials for AI-related messaging.
  • Ensure thorough due diligence in M&A and vendor negotiations, and assess whether clients’ AI claims are backed by evidence.
  • Verify AI tech ownership.
  • Make sure that contracts with clients’ third-party AI vendors include the necessary warranties and clauses to safeguard clients against vendor AI washing.
  • Provide litigation and enforcement support.
  • Engage with regulatory agencies to clarify ambiguities and seek guidance on standards and compliance issues.

As the AI boom continues, we are sure to see more companies blur the line between savvy marketing and willful misrepresentation. Unless legal teams step in and take an active role in interrogating AI narratives before they reach the public, companies will risk legal and reputational liability, and the credibility gap between what companies say and what their technology can actually do will only widen.

The new report, Artificial Intelligence: The Impact on the Legal Industry, is available to subscribers here. Non-subscribers can click here to download the report.

Bloomberg Law subscribers can find related content on our In Focus: Artificial Intelligence resource and our AI Legal Issues Toolkit.

To contact the reporters on this story: Linda Masina at lmasina@bloombergindustry.com; Janet Chanchal in Washington at jchanchal@bloombergindustry.com

To contact the editor responsible for this story: Robert Combs at rcombs@bloomberglaw.com