State attorneys general are asserting themselves in areas once governed primarily by federal securities regulators. These state-level initiatives extend beyond the familiar guardrails of the Investment Company Act of 1940 and related federal securities laws, reaching conduct firms wouldn't reasonably anticipate under established compliance frameworks.
The result could be an evolving and unpredictable environment in which antitrust and consumer-protection theories are entering “securities compliance,” and at least one court has let those claims through the gate.
ESG Test Case
One early testing ground has been environmental, social, and governance initiatives. On Aug. 1, the US District Court for the Eastern District of Texas denied the motion to dismiss in Texas v. BlackRock, allowing antitrust and consumer-protection claims to proceed against three major asset managers and holding that the Clayton Act’s passive-investor safe harbor doesn’t protect investors who use proxy voting or engagements to lessen competition.
The court tied public climate commitments and initiative memberships to subsequent voting and engagement, treating that combination as circumstantial evidence of coordinated efforts that could reduce coal output.
State AGs also have paired antitrust claims with consumer-protection theories, alleging that firms misled investors by marketing exchange-traded funds as “non-ESG” while simultaneously engaging in climate stewardship. These arguments blend securities-style disclosure scrutiny with state unfair-practice laws, widening the enforcement toolkit.
The same playbook state AGs have applied to ESG could be repurposed for artificial intelligence. As industries develop AI governance frameworks, shareholder coalitions that press companies to adopt uniform standards—through voting or engagement across competitors—could be cast as coordinated conduct and invite regulatory action.
Opportunity or Safety
Vice President JD Vance made clear at the AI Action Summit in Paris in February that the new administration is focused on “AI opportunity,” not “AI safety.” Since returning to office, President Donald Trump has signed seven AI-related executive orders and released the July 23 AI Action Plan, which aims to “turbocharge” AI innovation and eliminate regulations that “hinder” or “burden” AI development.
That deregulatory posture was coupled with more targeted messaging. Trump has called “diversity, equity, and inclusion” one of the “most pervasive and destructive” ideologies that “poses an existential threat to reliable AI”—signaling that DEI-related AI initiatives could face higher regulatory scrutiny.
Against this backdrop of AI acceleration, shareholders are showing interest in greater transparency. Shareholder proposals on AI more than quadrupled in 2024, and a recent report found that shareholders are increasingly worried about AI oversight, with resolutions to enhance AI oversight or transparency drawing average support of 30% in the 2024 proxy season.
The next proxy season will reveal whether investors double down on transparency or align with White House policies.
At the same time, institutional investors are recalibrating their proxy voting guidelines to address AI oversight. Pension funds in San Francisco, Vermont, and Florida have been updating their proxy voting guidelines to hold directors accountable for weak AI oversight—an approach that echoes how AGs scrutinized proxy voting in BlackRock.
Boards are responding in parallel: Nearly half of Fortune 100 companies now seek AI expertise in director qualifications, and the number of S&P 500 companies assigning AI oversight responsibilities to a designated committee more than tripled between 2024 and 2025.
Looking Ahead
Together, the White House’s AI agenda and the BlackRock decision provide a roadmap for navigating an otherwise uncertain environment.
Asset managers should, at minimum, scrutinize the AI Action Plan and recent executive orders to understand the administration’s priorities, as they are poised to shape the contours of enforcement.
Republican-led AG coalitions are likely to act as an enforcement arm of Trump’s agenda, targeting companies perceived as “burdening” AI progress. That means stewardship practices asset managers may view as responsible—favoring companies with AI governance standards, launching “responsible-AI” or “exclusive-AI” ETFs, requiring renewable energy for AI operations, or signing AI-responsibility pledges—could be reframed as anticompetitive conduct.
Likewise, efforts to mitigate racial or gender bias in algorithms may be recast as advancing “woke AI,” depending on how enforcement actors frame the narrative.
But the risks aren’t limited to red states. For blue states, the same reasoning could support claims against asset managers or companies accused of downplaying risks such as algorithmic bias, discrimination, or consumer harm.
In that context, stewardship activities framed around “responsible AI” or bias mitigation could be characterized as anticompetitive efforts to shape markets or exclude competitors under the banner of ethics or safety.
Public companies likely will be on the front lines of these risks, but asset managers are uniquely exposed, given their proxy voting and stewardship activities. They will need to balance regulatory pressure with shareholder demands. Proactive compliance and careful framing of stewardship priorities are essential to reducing litigation risk.
For now, three points stand out:
- Federal compliance is no longer sufficient. Meeting Securities and Exchange Commission and 1940 Act standards may not shield against state-level activism.
- State agendas cut both ways. Republican AGs have advanced antitrust and consumer-protection claims in ESG, and similar theories could be applied next to AI. Democratic AGs may adapt the same playbook to pursue their own priorities.
- Stewardship can be cast as anticompetitive. Voluntary initiatives—whether climate commitments, responsible AI pledges, or bias-mitigation practices—may be reframed as coordination to restrain competition.
Compliance strategies built solely on federal benchmarks aren’t enough. Companies should work with counsel to develop flexible frameworks that anticipate state-level activism—whether tied to ESG, AI, or the next politically charged initiative such as DEI.
The rise of state AG activism marks a turning point in fund regulation. BlackRock demonstrated how ESG commitments can be recast as antitrust or consumer-protection violations—arguably even without clear evidence of collusion. The same reasoning could extend to AI as companies adopt “responsible AI” pledges and governance frameworks.
For asset managers, the key lesson is that compliance requires more than adherence to federal securities laws. Firms must plan for regulatory volatility and build adaptive compliance frameworks that can withstand shifting state priorities, rather than relying on the familiar guardrails of the past.
This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law, Bloomberg Tax, and Bloomberg Government, or its owners.
Author Information
Peter Saparoff is a member and chair of Mintz’s Institutional Investor Class Action Recovery Practice.
Sofia Nuño is a litigation associate at Mintz.