- Akin partner Brian Daly explores a new threat to AI tools
- Small amounts of bad data can mean catastrophic outcomes
Asset managers adopting generative artificial intelligence tools to supercharge their investment decision-making process are facing a new threat that might be even greater than compliance departments imagined.
Instead of scrambling to implement appropriate controls that assumed the integrity of their large language models, managers now have to consider sifting through and siphoning out poisoned data that their tools are learning from.
“Data poisoning” occurs when a malicious actor introduces corrupt information into likely areas of generative AI training, causing an LLM to reach faulty conclusions. Because the notion that any one entity could corrupt enough data—given the vastness of the internet—to cause downstream errors by generative AI seems ludicrous on its face, most asset managers’ compliance policies don’t address this risk.
These new concerns are piled on top of a general shift in tone about the optimistic future of AI.
Over the past few weeks, the public’s view of AI has darkened dramatically, with generative AI in particular being recast as a threat. Vice President JD Vance voiced this change in a Feb. 11 speech in Paris, where he warned the world of “foreign hostile adversaries” who “have weaponized AI software to rewrite history, surveil users, and censor speech.”
The vice president’s warning closely followed allegations that DeepSeek, an LLM with a penchant for whitewashing Chinese history, contains hidden code capable of transmitting user data to a Chinese state-owned company. This rapid shift is also reflected in the financial media, where in the space of weeks articles warning of LLM security threats have become commonplace.
A study published earlier this year in Nature showed that even a small amount of poisoned data can produce catastrophic results for many users.
Focusing on medical-use LLMs, researchers found that “replacement of just 0.001% of training tokens with medical misinformation results in harmful models more likely to propagate medical errors.” In other words, pushing a tiny amount of misinformation into the right places could damage drug development and testing models, possibly with life-threatening results.
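To make the scale of that figure concrete, a back-of-the-envelope calculation can help; the corpus size and document length below are assumed, illustrative numbers, not figures from the study.

```python
# Illustrative arithmetic only; the corpus size and tokens-per-document
# figures are assumptions, not numbers taken from the Nature study.
training_tokens = 1_000_000_000_000   # assume a 1-trillion-token training corpus
poisoned_fraction = 0.00001           # 0.001% expressed as a decimal
tokens_per_document = 1_000           # assume ~1,000 tokens per planted web page

poisoned_tokens = training_tokens * poisoned_fraction
poisoned_documents = poisoned_tokens / tokens_per_document

print(f"Poisoned tokens needed: {poisoned_tokens:,.0f}")       # 10,000,000
print(f"Roughly {poisoned_documents:,.0f} planted documents")  # ~10,000
```

Under those assumptions, an attacker would need on the order of ten thousand planted pages out of roughly a billion, a volume well within the reach of a determined actor running content farms.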
Data poisoning also poses an existential threat to investment advisers who increasingly rely on LLMs and must also satisfy a fiduciary “duty of care.” A poisoned LLM trained on falsified earnings reports or manipulated social media posts might trigger financial losses, reputational damage, and enforcement actions from regulators such as the Securities and Exchange Commission.
Unfortunately, traditional compliance measures simply aren’t up to this challenge. SEC and Financial Industry Regulatory Authority requirements focus on data protection and operational resilience—not the integrity of training datasets. The (voluntary) National Institute of Standards and Technology AI Risk Management Framework contemplates the need to “validate” AI models and mentions “data quality,” but lacks specific actionable guidance.
While there is no fixed list of policies that managers can rely on to respond to this growing threat, investment advisers should consider adopting several measures:
Data Validation. The kneejerk response to data poisoning is to recommend implementing data validation protocols, but this is easier said than done as it’s impossible to simply compare two copies of “the internet.” However, an adviser could test an approach on multiple LLMs provided by different vendors or curate and preserve a specific dataset with validated information to test against more recently trained LLMs. The downside to this approach is that it will be an adviser- and strategy-specific exercise that will have to be revisited frequently.
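As a rough illustration of the cross-vendor approach, the sketch below assumes a curated file of validated question-and-answer pairs and a generic `query_model` wrapper standing in for each vendor's API; both are hypothetical placeholders, not references to any particular product.

```python
import json

# Hypothetical wrapper: in practice this would call each vendor's own API.
def query_model(vendor: str, prompt: str) -> str:
    raise NotImplementedError("Replace with the vendor-specific API call.")

VENDORS = ["vendor_a", "vendor_b", "vendor_c"]  # placeholder vendor names

def cross_check(validated_path: str) -> list[dict]:
    """Compare each vendor's answer against a curated, pre-validated reference set.

    Returns the prompts where any vendor diverges from the reference answer;
    those items become candidates for human review as possible poisoning artifacts.
    """
    with open(validated_path) as f:
        reference = json.load(f)  # e.g. [{"prompt": ..., "expected": ...}, ...]

    flagged = []
    for item in reference:
        answers = {v: query_model(v, item["prompt"]) for v in VENDORS}
        disagreeing = [v for v, a in answers.items()
                       if a.strip() != item["expected"].strip()]
        if disagreeing:
            flagged.append({"prompt": item["prompt"],
                            "disagreeing_vendors": disagreeing})
    return flagged
```

Exact string comparison is too brittle for free-form LLM output, so in practice a firm would likely layer on semantic similarity scoring or human review of flagged items, and the reference set itself would need to be refreshed as strategies evolve.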
Output Reviews. The SEC’s 2011 settlement with Barr Rosenberg underscored the need for systematic managers to rigorously test actual performance against the a priori expected range of outcomes. However, part of the appeal of AI is its propensity to produce unexpected (or “orthogonal”) results, requiring a more thoughtful policy approach.
As a proxy for backtesting on market data, an adviser might focus more on reverse engineering and attempting to validate a smaller sample of LLM-generated transactions, instead of coding up a portfolio statistics report.
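One way such a sampling review could be operationalized is sketched below; the field names, sample size, and tolerance threshold are illustrative assumptions, not a prescribed methodology.

```python
import random
from dataclasses import dataclass

@dataclass
class Trade:
    ticker: str
    model_rationale: str        # the LLM's stated reason for the trade
    expected_return_bps: float  # the model's a priori expectation
    realized_return_bps: float  # what actually happened

def sample_for_review(trades: list[Trade], sample_size: int = 25,
                      tolerance_bps: float = 150.0, seed: int = 0) -> list[Trade]:
    """Draw a reproducible random sample of LLM-generated trades and flag those
    whose realized outcome falls well outside the expected range, so a human
    reviewer can reverse engineer the model's rationale for each flagged trade."""
    rng = random.Random(seed)
    sample = rng.sample(trades, min(sample_size, len(trades)))
    return [t for t in sample
            if abs(t.realized_return_bps - t.expected_return_bps) > tolerance_bps]
```

The point of the sample is not statistical rigor but tractability: a reviewer can plausibly dissect a few dozen flagged trades each quarter, whereas reverse engineering every model output would be impossible.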
Security. If understanding a target’s information dependencies increases a bad actor’s likelihood of success, limiting the circulation of that information provides protection. In addition to confidentiality agreements, advisers might consider separating research teams and limiting access to research and files across teams. Training on these measures will be essential, and electronic communications reviews could support enforcement efforts.
One tradeoff of this increased security might be a radical culture shift toward less transparency, because a simple and effective prophylactic is to carefully limit the information about the investment process that is shared with brokers, counterparties, and investors.
AI Agents. It may also be possible to train a semi-autonomous AI agent to crawl the general internet seeking poisoned data, which would then be avoided by the investment LLM. (This may sound a bit fanciful in 2025, but President Ronald Reagan was ridiculed when he championed the idea of a missile defense system in the 1980s.)
Just as nation-states must respond to the threat of weaponized AI, individual advisory firms need to incorporate AI defenses into their mission. Structuring workable and effective operational measures for advisers will require truly out-of-the-box thinking on both the risk and compliance side, more technical input, and greater collaboration across business units than has ever been the case. This is a tall order, but one that is as necessary as it is inevitable.
This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law and Bloomberg Tax, or its owners.
Author Information
Brian T. Daly is a partner at Akin, advising fund managers and investment advisers.