AI is changing the practice of tax law. This series examines the ethical, legal, and practical implications of AI across key areas of tax practice.
This is a two-part article on the current and possible future effects of the AI boom on single-employer, multiple-employer, and multiemployer retirement plans.
The AI boom is usually framed in terms of semiconductor demand, hyperscale capital expenditure, and the race to build ever-larger models. Retirement plans are unlikely to encounter the story first in any of those forms, but rather through administration, fiduciary process, and workforce change. That makes the subject less dramatic than the public debate, but in legal and economic terms no less important. For retirement plans, AI is not simply another technology trend. It’s a force that may reshape how plans are administered, how long participants remain in the workforce, how contributions accumulate, and how retirement promises are financed.
The effect is also unlikely to be uniform across plan structures. Single-employer plans will feel AI first through service providers, committee oversight, cybersecurity, and participant-facing tools. Multiple-employer arrangements, especially pooled employer plans, are more likely to absorb AI through centralized providers that can spread technology and compliance costs across many employers. Multiemployer plans are in another position, because their economics depend not just on plan administration but on the continued flow of collectively bargained employer contributions. In other words, the same technology boom may present a governance problem for one plan, a scale opportunity for another, and a funding question for a third.
Part 1 of this two-part article discusses the current effects of AI on retirement plans: effects that are operational and fiduciary in nature, and uneven across plan types.
The most important present-tense consequence therefore isn’t hypothetical automation sometime down the road. It’s that fiduciaries are already dealing with AI-assisted operations, whether or not they label them that way. For single-employer plans, especially larger 401(k) plans and corporate pension plans with formal committees and deep vendor rosters, AI first arrives as a governance problem. Foley & Lardner’s 2024 analysis of AI in 401(k) plans stresses that the key ERISA issue is process: AI can produce unreliable or biased outcomes when data are limited, outdated, or skewed; it may struggle with black-swan events and qualitative judgment; and lawsuits are likely to focus less on whether fiduciaries adopted AI than on whether they followed a documented process for evaluating it. Michael Abbott, Aaron K. Tantleff, and Cullen J. Werwie, Generative Artificial Intelligence (AI) and 401(k) Plan Fiduciary Implications, Foley & Lardner (Apr. 17, 2024). That point deserves emphasis. ERISA committees don’t get credit merely for being technologically current. They get judged on whether they can explain what a tool does, what risks it introduces, and why its use was prudent in the first place.
That immediately broadens the due-diligence agenda for single-employer plans. Jackson Lewis advises fiduciaries to understand how AI-based decisions are made, to avoid uncritical reliance on black-box systems, to validate outputs regularly, and to build AI risk management into plan governance and vendor oversight. Nixon Peabody similarly says employers should understand how models are built and maintained, what data sources are used to train them, how accuracy is validated, and how exceptions are reviewed, while structuring vendor relationships to permit ongoing monitoring and documentation. In practical terms, AI turns ordinary service-provider review into something closer to model-risk review. A plan sponsor that once asked whether a vendor was competent and reasonably priced must now ask whether the vendor’s system is explainable, auditable, and appropriately bounded.
Cybersecurity is the second immediate effect, and it may become the most expensive one. Foley's 2026 discussion of AI and employee-benefits cybersecurity notes that the Employee Benefits Security Administration's updated cybersecurity guidance makes clear that the Department of Labor views cybersecurity as an ERISA fiduciary responsibility. Kelsey O'Gorman and Iris Grossman, Cybersecurity in the Age of AI: Best Practices for Employee Benefits Administration, Foley & Lardner (Mar. 18, 2026). AI may improve fraud detection and anomaly spotting, but it also expands data exposure, system interconnections, and the attack surface around sensitive participant information. The result is not a simple pro- or anti-AI story. It's a fiduciary tradeoff: Plans may face more scrutiny both for adopting AI without sufficient controls and for failing to consider AI tools that materially strengthen account security.
Litigation risk is also changing now. Foley warned in 2024 that AI may be used to sue plan sponsors and fiduciaries, with claims attacking the fiduciary decision-making process or the opacity of AI-driven recommendations. Nixon Peabody adds a related caution on record creation, noting that AI-generated records of committee meetings may be discoverable and may contain inaccuracies or statements taken out of context, making manually prepared minutes the safer course. For single-employer plans, then, the first legal effect of AI isn’t a new statutory duty. It’s the intensification of old duties under conditions where more operational choices are data-driven, more vendors are embedding AI into core services, and more potential plaintiffs can use the same tools to scrutinize plan conduct.
Multiple-employer arrangements, especially pooled employer plans, or PEPs, are likely to experience AI differently because of their centralized provider structures. Ropes & Gray explains that a PEP allows an employer to outsource most plan-related administrative tasks and fiduciary responsibilities to third-party professionals, especially the pooled plan provider, while also using scale to reduce fees and administrative burden. Allie Alperovich and David A. Kirchner, Pooled Employer Plans (PEPs): Putting a Little PEP in a 401(k) Retirement Plan Could Help to Protect Your Portfolio Companies, Ropes & Gray (Apr. 19, 2022); David A. Kirchner and Elliot Saavedra, Revisiting Pooled Employer Plans (PEPs): A Cost-Effective, Low-Risk Solution for Providing Retirement Plan Coverage (Part I), Ropes & Gray (Jan. 22, 2024). Mayer Brown similarly notes that participating employers may be attracted to the fiduciary protections of a PEP, even though they still retain responsibility for selecting and monitoring the pooled plan provider and any other named fiduciary. Erin K. Cho, Richard E. Nowak, Hillary E. August, and Susan P. Carlson, US Department of Labor Solicits Feedback and Provides Guidance to Smaller Employers on Pooled Employer Plans, Mayer Brown (Aug. 11, 2025). The natural implication is that AI will often reach smaller and midsize employers through the pooled provider's operating stack rather than through bespoke employer-level experimentation.
That does not make the employer passive. Ropes & Gray's 2025 discussion of DOL guidance on PEPs says participating employers can substantially reduce exposure when the pooled plan provider assumes full responsibility for selecting and retaining an investment manager, but the employer still must prudently choose and monitor the provider structure itself. Allie Alperovich, David A. Kirchner, Joshua A. Lichtenstein, and Jonathan M. Reinstein, DOL Sheds Light on the Fiduciary Responsibilities That Arise with Pooled Employer Plans (PEPs), Ropes & Gray (July 30, 2025). In other words, AI may make pooled arrangements more attractive because scale can support better participant communications, recordkeeping workflows, fraud detection, and data analysis. But the basic legal discipline remains recognizable: The employer must still ask whether the provider deserves trust, whether the provider's incentives are aligned, and whether the technology actually improves outcomes without creating opaque new risks.
Multiemployer plans stand apart because their central vulnerability is not only administrative error but deterioration in the contribution base. Proskauer describes withdrawal liability as the obligation that can arise when an employer stops contributing to a multiemployer pension plan, and it explains that these plans are collectively bargained arrangements funded by contributions under collective bargaining agreements. Lowenstein Sandler similarly explains that a multiemployer plan is maintained by various unrelated employers that contribute on behalf of a unionized workforce, and Morgan Lewis’s 2026 analysis of AI in labor relations identifies workforce intelligence, collective-bargaining strategy, compensation monitoring, and employee-engagement tools as high-impact AI use cases. Proskauer Benefits Brief: Withdrawal Liability—What It Is and Why It Matters, Proskauer Rose (Jan. 13, 2026); Andrew E. Graw, Taryn E. Cannataro, and Jessica I. Stewart, Multiemployer Pension Plans: Mitigating Risk in the Context of a Business Transaction, Lowenstein Sandler (Feb. 22, 2024); Harry I. Johnson, Nicole A. Buffalano, Kelcey J. Phillips, and John F. Ring, How AI Will Fundamentally Reshape Work in Labor Relations, Morgan Lewis (Mar. 20, 2026). For multiemployer plans, the current AI story is therefore less about chatbots and more about whether AI begins changing the organization, bargaining, and deployment of covered labor.
This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law, Bloomberg Tax, and Bloomberg Government, or its owners.
Author Information
Samuel W. Krause is a partner at Hall Benefits Law.
To contact the editors responsible for this story: Soni Manickam at smanickam@bloombergindustry.com.