AI Reshapes Employer Group Health Plans for Fiduciaries—Pt 2

May 7, 2026, 8:31 AM UTC

AI is changing the practice of tax law. This series examines the ethical, legal, and practical implications of AI across key areas of tax practice.

This article, Part 2 of a three-part series, aims to educate plan administrators and fiduciaries about what artificial intelligence is and how large language models fit into the evolving compliance and governance landscape for employer-sponsored group health plans. Part 1 explained the rapid emergence of generative AI following the public release of ChatGPT in late 2022 and situated AI as a transformative and now ubiquitous technology.

Part 2 explains that ERISA plan fiduciaries remain fully responsible for ensuring that AI is used prudently, loyally, and solely in the interest of plan participants, even when AI is deployed by third-party service providers. Because AI systems function as opaque “black boxes,” fiduciaries face challenges in monitoring their use. Fiduciaries can mitigate risk by reviewing vendor AI policies, requesting audits, requiring compliance with emerging standards (such as those issued by the International Organization for Standardization (ISO) and the National Institute of Standards and Technology (NIST)), and negotiating robust AI-specific contract and request-for-proposal terms addressing transparency, bias, cybersecurity, compliance, and accountability.

Particular caution is required when AI is used in claims adjudication or prior authorization, especially for adverse benefit determinations, where human oversight, audit trails, and participant disclosures are essential to satisfy fiduciary duties and reduce litigation risk. The article also highlights the risk of bias embedded in historical claims data used to train AI models, noting limitations and distortions in machine-readable files despite recent transparency rules. Finally, it emphasizes the growing importance of formal fiduciary governance structures and warns that the use of AI tools for tasks such as meeting minutes may create discoverability risks, suggesting that fiduciaries proceed cautiously until clearer regulatory or legal guidance emerges.

Core Fiduciary Concerns

The provisions of Title I of ERISA, which impose obligations on plan fiduciaries, are commonly referred to as the ERISA fiduciary rules. A person is a “fiduciary” for this purpose if, among other things, the person exercises any discretionary authority or control with respect to the management of the plan; exercises any authority with respect to the management or disposition of plan assets; or has any discretionary responsibility in the administration of the plan.

For purposes of this article, fiduciaries are typically the individuals or duly appointed members of committees who are responsible for the maintenance and operation of employer-sponsored group health plans.

Title I of ERISA prescribes four basic standards of conduct for fiduciaries:

1. a duty of loyalty, under which fiduciaries must discharge their duties “solely in the interest of the participants and beneficiaries” and for the “exclusive purpose” of providing benefits to participants and beneficiaries and defraying reasonable expenses of administering the plan;

2. a duty of prudence, under which fiduciaries must act “with the care, skill, prudence, and diligence under the circumstances then prevailing” that a prudent person would use “in the conduct of an enterprise of a like character and with like aims”;

3. a duty to diversify investments (which is more relevant in the pension context); and

4. a duty to act in accordance with plan documents insofar as they are consistent with ERISA.

Fiduciaries can, and routinely do, hire service providers to handle fiduciary functions. Where group health plans are concerned, service providers include brokers, consultants, third-party administrators, pharmacy benefit managers, actuaries, lawyers, data analysts, and many others. The duty of prudence includes an obligation to monitor these service providers periodically to assure that they are acting prudently.

Ultimately, it’s the job of the fiduciaries to ensure that both the plan sponsor and plan service providers use AI tools prudently and for the exclusive benefit of plan participants and beneficiaries. This means that fiduciaries must become well-acquainted with the plan sponsor’s use of AI for plan administrative functions. Fiduciaries should also proactively look for ways AI tools might be used to improve plan participant outcomes and experience.

The duty to monitor—and the “black box” problem. The fiduciary duty to monitor applies to the use of AI by any service provider to a plan, which presents a problem: AI programs and data are “black boxes,” which are ill-suited to monitoring. Fiduciaries can’t gain access to AI source code or training data. Nor is it even clear whether such access would be of much value. With the possible exception of fiduciaries of the largest plans with the greatest bargaining power, the vast majority of fiduciaries won’t be able to gain access to plan service providers’ AI development models, source code, inputs/outputs, and testing outcomes.

There are steps that a fiduciary can take, however.

  • AI use policies. Many service providers, including all the large, national carriers that operate in an administrative-services-only capacity, maintain comprehensive, enterprise-level AI use policies that establish guidelines for the responsible and secure use of AI tools by and within the organization. These policies typically address ethical, legal, and security considerations, among others. Ideally, AI policies should define acceptable practices and responsibilities, while also minimizing risks. Fiduciaries should obtain and review these policies and require that they be provided with timely, written notice of any changes or updates. Fiduciaries may find it necessary to engage experts to interpret AI policies.
  • AI use committee charters. Fiduciaries should also, where appropriate, investigate whether a vendor’s AI use committee charter is compatible with the plan sponsor’s AI use committee charter or any other relevant fiduciary policy or procedure (such as a fiduciary committee charter).

Request periodic audits. Plans with sufficient leverage should insist on periodic audits of their service providers’ AI use. In an ideal world, this would mean an audit by an independent third party, but if that isn’t possible, an internal audit may provide some comfort. This is also an area that fiduciaries should watch as AI audit standards evolve in response to widespread adoption of AI tools. Ideally, the fiduciary should be able to get some insight into AI methodologies and decision outcomes.

Compliance with independent standards. Fiduciaries can request copies of certifications from recognized standard-setting organizations. Here are some of the currently available options:

  • ISO/IEC 42001:2023 is an international standard that specifies requirements for establishing, implementing, maintaining, and continually improving an artificial intelligence management system within organizations.
  • ISO/IEC 42005:2025 provides guidance for organizations conducting AI system impact assessments.
  • NIST has developed an artificial intelligence risk management framework (AI RMF 1.0).

As with AI audit standards, fiduciaries should watch for evolving guidance and certification standards.

Contract terms and requests for proposals. Increasingly, new vendor contracts and renewals of existing contracts include AI-specific terms and conditions, and requests for proposals are insisting on AI-specific requirements and accommodations. By way of example, a contract for administrative services might include the following:

  • Scope of services/AI use. Include a clear definition of which administrative services involve AI, such as claims processing, utilization management, fraud detection, predictive analytics, or participant communications. Address transparency by requiring the vendor to disclose what its AI is being used for and what tools are being used; whether the AI involves automated decision-making that affects participant benefits or claims; and when and how human review will occur before final determinations, especially for adverse benefit determinations.
  • Data governance and HIPAA compliance. The vendor must ensure that it will comply with HIPAA, which requires, among other things, a compliant business associate agreement.
  • Algorithmic accountability and bias mitigation. Address testing and validation. The vendor should be required to provide documentation of model testing to ensure fairness, accuracy, and lack of discriminatory bias in outcomes (a minimal sketch of such a disparity check appears after this list). Audit rights should be included (see above discussion). The vendor should warrant and represent that it will comply with all existing and emerging federal and state AI laws (such as the European Union Artificial Intelligence Act and state algorithmic accountability acts) and potential guidance issued by the US Department of Health and Human Services (HHS), the US Department of Labor (DOL), or other regulators with appropriate jurisdiction.
  • ERISA/fiduciary duty issues. Clarify whether AI tools perform fiduciary functions under ERISA. In the case of an administrative-services-only (ASO) arrangement, the ASO provider is often a claims fiduciary. Thus, the use of an AI tool to adjudicate claims is a fiduciary function. The same is true if an AI tool is used to engage with the independent dispute resolution process for resolving payment disputes between out-of-network providers and insurers under rules prescribed by the No Surprises Act. Title I of Division BB of the Consolidated Appropriations Act, 2021, Pub. L. No. 116-260 (2020).
  • Cybersecurity and incident response. Address AI-specific risks such as data poisoning and model inversion attacks; require NIST- or ISO-compliant security frameworks; and require prompt notification and cooperation in response to any security incidents involving AI systems.
  • Intellectual property and model rights. Clarify whether the vendor retains IP rights to the AI; whether the plan sponsor gains a license or access to model outputs; and set out restrictions on using plan data to train models for other clients.
  • Performance metrics and service-level agreements. Define service levels specific to AI tools, such as error rates, response times, and appeals processing times, and require clear documentation of how AI recommendations are generated to support regulatory inquiries or litigation. Where applicable, determine what controls are in place to ensure compliance with existing state law. Performance guarantees should also include compliance with the growing number of state laws governing the use of AI by carriers, especially with respect to adverse benefit determinations (state laws governing AI use are discussed below).
  • Indemnification and liability. Include indemnification provisions covering incorrect or biased AI outputs, violations of privacy or discrimination laws, and breaches of fiduciary duty caused by AI tools.
  • Regulatory evolution and flexibility. Require periodic contract updates to reflect new guidance from HHS, DOL, or AI regulatory bodies. Vendors should be required to provide evidence of employee training on AI ethics, HIPAA, and plan administration.
  • Dispute resolution and record retention. AI-related decision logs should be retained and made accessible for audits, appeals, and litigation. A process should also be established for resolving disputes involving AI-generated outputs, ensuring consistency with ERISA claims regulations.
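The bias-testing documentation described in the algorithmic accountability item above can be made concrete. What follows is a minimal sketch, in Python, of the kind of disparity check a fiduciary might ask a vendor to run and document: it compares denial rates across participant cohorts and flags any cohort whose rate diverges materially from the overall rate. The cohort labels, claim counts, and 10% threshold are all invented for illustration and are not a regulatory standard.

```python
# Illustrative, aggregated adjudication counts; the cohort labels and
# numbers are invented for this example.
cohorts = {
    "cohort_a": {"total": 1000, "denied": 80},  # 8% denial rate
    "cohort_b": {"total": 400, "denied": 96},   # 24% denial rate
}

def flag_disparities(cohorts, tolerance=0.10):
    """Flag cohorts whose denial rate exceeds the overall denial rate
    by more than `tolerance` (an illustrative threshold, not a legal one)."""
    total = sum(c["total"] for c in cohorts.values())
    denied = sum(c["denied"] for c in cohorts.values())
    overall = denied / total
    return {
        name: {"rate": round(c["denied"] / c["total"], 3),
               "overall": round(overall, 3)}
        for name, c in cohorts.items()
        if c["denied"] / c["total"] - overall > tolerance
    }

print(flag_disparities(cohorts))
# {'cohort_b': {'rate': 0.24, 'overall': 0.126}}
```

A real engagement would use richer statistics, but even a simple check of this kind, run periodically and documented, gives fiduciaries something auditable to point to.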

The considerations raised by the contract terms described above can apply with equal force to requests for proposals. Critically, both in the request-for-proposal process and for purposes of contract terms, AI shouldn’t be the final adjudicator for claim denials.

AI use for adverse benefit determinations. AI has increasingly been used by health insurers and third-party administrators to automate claims adjudication. These systems scan medical codes, compare them to benefit plan coverage and eligibility, and issue determinations of payment or denial in a fraction of the time it might take a human clinician. Similarly, AI tools can rapidly process pre-authorization requests to determine whether proposed treatments align with plan rules and clinical guidelines.

For all their promise of vastly increased efficiency, however, AI tools inevitably risk substituting opaque algorithmic determinations for individualized human assessments. As a result, the use of AI for the adjudication of group health plan benefit claims, prior authorization, or anything else that rises to the level of an “adverse benefit determination” raises special concerns. For this purpose, an “adverse benefit determination” means:

“A denial, reduction, or termination of, or a failure to provide or make payment (in whole or in part) for, a benefit, including any such denial, reduction, termination, or failure to provide or make payment that is based on a determination of a participant’s or beneficiary’s eligibility to participate in a plan.” 45 CFR 147.136(a)(2)(i).

Where a group health plan is concerned, an adverse benefit determination also includes a denial, reduction, or termination of, or a failure to provide or make payment (in whole or in part) for, a benefit resulting from the application of any utilization review; a failure to cover an item or service for which benefits are otherwise provided because it’s determined to be experimental or investigational or not medically necessary or appropriate; and any rescission of coverage.

There is clearly a role for AI in the process of making benefit determinations. This is especially true in the case of claims adjudication, which has traditionally relied heavily on manual paperwork and which, as a result, can be complicated and inefficient. AI holds out the promise of vastly increased efficiencies. AI algorithms can review medical codes, patient histories, and claim patterns to assess claims’ legitimacy, root out fraud, spot human error, and enable faster access to care.

From the perspective of plan fiduciaries, the use of AI in adverse benefit determinations comes with an important caution: Fully autonomous AI-based claims processing may or may not be coming, but we are not there yet. Human oversight remains an essential ingredient of the claims adjudication and prior authorization process to the extent the process yields an adverse benefit determination. The savings and efficiencies are too important to ignore, but there needs to be a human in the loop for many reasons, including data reliability and inherent AI errors or “hallucinations,” which can distort outcomes. Even the best currently available AI algorithms experience material error rates.

There is also the prospect of lawsuits by plan participants whose claims have been denied without the involvement of a human clinician, thereby raising questions of fairness. AI systems making or overtly influencing clinical determinations may be subject to ERISA-based challenges, particularly under the fiduciary duties of loyalty and prudence and the exclusive benefit rule.

Although the use of AI in claims adjudication offers the prospect of more efficient workflows and significantly lower administrative burdens, fiduciaries for group health plans should insist that AI be cast in the role of decision support, not final authority. This includes insisting on human involvement in any adverse benefit determination. To the extent possible, fiduciaries should demand that any use of AI for claims adjudication purposes produce an audit trail that can later be interrogated. It also means that participants should be informed when AI tools influence any aspect of their benefits experience.
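What such an audit trail might capture can be sketched briefly. The following Python fragment is a minimal illustration, not a prescribed format: every field name is an assumption, and a production system would also need tamper-evident storage, access controls, and HIPAA-compliant handling of any protected health information.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class AIDecisionRecord:
    """One audit-trail entry for an AI-assisted benefit determination.
    Field names are illustrative assumptions, not a prescribed schema."""
    claim_id: str
    model_id: str             # which model and version produced the output
    model_output: str         # e.g., "recommend_deny"
    output_rationale: str     # the stated basis for the recommendation
    human_reviewer: str       # essential for adverse benefit determinations
    final_determination: str  # the human-confirmed outcome
    timestamp: str

def log_decision(record: AIDecisionRecord, path: str = "ai_decision_log.jsonl") -> None:
    """Append the record as one JSON line; an append-only log preserves
    the trail for later audits, appeals, and litigation holds."""
    with open(path, "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")

log_decision(AIDecisionRecord(
    claim_id="CLM-0001",                     # hypothetical identifier
    model_id="utilization-review-model-v3",  # hypothetical model name
    model_output="recommend_deny",
    output_rationale="service coded as experimental under plan terms",
    human_reviewer="clinician-042",
    final_determination="denied_after_human_review",
    timestamp=datetime.now(timezone.utc).isoformat(),
))
```

The essential point is in the paired fields: the model’s recommendation and the human reviewer’s final determination are recorded separately, so an auditor can later verify that a human actually stood between the algorithm and the adverse benefit determination.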

Exhibiting bias. AI tools can routinely exhibit bias based on their training data, leading to unfair outcomes. Group health plans are particularly prone to bias when AI is applied to claims processing and adjudication for two reasons: inability to access carrier data and unreliability of the data produced in machine-readable files. Before 2019, group health plan claims data was largely opaque, as the vendors (principally the large, national carriers that held the data) claimed it as proprietary, and there was no central repository. The Transparency in Coverage (TiC) rule, proposed in 2019 and finalized in 2020, required plans to make claims data available in “machine readable files.”

There are three separate sets of federal rules governing machine-readable files.

  • The 2019 Hospital Price Transparency Rule (as amended). This rule was established by regulation (84 Fed. Reg. 65,602 (Nov. 27, 2019)). The rule requires hospitals to publish their pricing in a machine-readable file containing a comprehensive set of standard charges. It also requires a consumer-friendly display that makes at least 300 common “shoppable services” easy for patients to compare. Machine-readable files under this rule must include gross charges, discounted cash prices, payer-specific negotiated rates, minimum and maximum negotiated charges, and service descriptions, billing codes, and related identifiers.
  • The TiC Rule. This rule was also established by regulation (85 Fed. Reg. 72,158 (Nov. 12, 2020)). The rule requires group health plans to display certain health care price information via machine-readable files on a publicly available website. Machine-readable files must include negotiated rates with in-network providers and allowed amounts for out-of-network providers. These files must be accessible at no charge on a public website that anyone can access without restriction, and they must be updated monthly.
  • Title I (the No Surprises Act) and Title II (Transparency) of Division BB of the Consolidated Appropriations Act, 2021, as clarified by the Consolidated Appropriations Act, 2026 (collectively, the CAA). The CAA imposed rules similar to the TiC rule, generally requiring insurers and self-funded group health plans to publicly post machine-readable files that include negotiated rates for a health plan’s in-network providers and historical allowed amounts and billed charges for out-of-network providers.

The latter two rules, TiC and CAA, are referred to as the TiC rules. The information available under all three rules is welcome and even necessary. Among other things, these rules give plan fiduciaries access to the information essential to carry out their duty to monitor plan service providers.
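For a sense of what these files actually contain, the sketch below reads a TiC in-network machine-readable file and extracts negotiated rates. The field names follow the CMS-published price-transparency JSON schema as we understand it, and the file name is hypothetical; treat both as assumptions. Real files are updated monthly and are often enormous, so production work typically requires a streaming parser rather than loading the file whole.

```python
import json

# Hypothetical file name; real TiC in-network files are posted on a
# public website and updated monthly.
with open("in_network_rates.json") as f:
    mrf = json.load(f)  # fine for a small sample; huge files need streaming

# Field names follow the CMS price-transparency schema as commonly
# published; they are assumptions, not guaranteed for every file.
for item in mrf.get("in_network", []):
    code = item.get("billing_code")            # e.g., a CPT code
    code_type = item.get("billing_code_type")  # e.g., "CPT"
    for group in item.get("negotiated_rates", []):
        for price in group.get("negotiated_prices", []):
            print(code_type, code,
                  price.get("negotiated_type"),  # e.g., "negotiated"
                  price.get("negotiated_rate"))  # the dollar amount
```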

There is, however, a problem: None of these rules addresses the embedded biases that are inevitably found in decades of previously adjudicated US medical claims. These datasets reliably contain inaccuracies, inconsistencies, and systemic biases such as skewed provider pricing. The problem isn’t hard to see in the following example:

Carrier X adjudicates claims for millions of covered lives across the country, acting as the third-party administrator for employer-sponsored, self-funded group health plans under ASO contracts. The ASO provider promises to provide access to a robust network of providers with which it has negotiated favorable rates. In practice, however, the carrier/third-party administrator may not always pay its network providers the negotiated rate. One reason for this is revenue guarantees, under which the carrier/third-party administrator promises to pay certain network providers a minimum amount of revenue per year, regardless of the amount the provider billed for actual services performed.

The carrier/third-party administrator might use the assets of a self-funded plan, rather than its own fully insured plan reserves, to meet these guarantees. For the claims data to be useful, the distorting effects of provider guarantees must first be rooted out, which likely would require a plan-by-plan audit.
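The distortion is easy to see in a toy calculation. All of the numbers below are invented for illustration: a year-end revenue-guarantee top-up inflates the average paid amount per claim, so the paid-claims data no longer reflects the negotiated rate on which an AI model might otherwise be trained.

```python
# Invented numbers for illustration only.
negotiated_rate = 100.0        # contracted rate per service
services_performed = 800       # services the provider actually delivered
revenue_guarantee = 100_000.0  # minimum annual revenue promised by the TPA

billed_revenue = negotiated_rate * services_performed  # $80,000
top_up = max(0.0, revenue_guarantee - billed_revenue)  # $20,000 shortfall payment
effective_rate = (billed_revenue + top_up) / services_performed

print(f"negotiated rate: ${negotiated_rate:,.2f}")                    # $100.00
print(f"effective rate in the claims data: ${effective_rate:,.2f}")   # $125.00
# A model trained on the paid-claims data sees $125 per service, not
# the $100 negotiated rate, which is exactly the kind of distortion a
# plan-by-plan audit would need to unwind.
```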

There is the related, though separate, question of the use of plan claims data for training purposes. While it should be possible to allow de-identified data to be used for training, fiduciaries, at least those with sufficient bargaining power, may want to insist that their plan’s data not be used for training purposes. (The interaction of HIPAA and AI use is discussed in Part 3.)

Fiduciary governance. Beginning with the wave of claims against 401(k) retirement plans in 2007 regarding excess fees and other related fiduciary violations, fiduciary governance has become more formal and consultant-driven. Where retirement plans are concerned, the establishment of a fiduciary committee through formal action and proper documentation is widely accepted as a best practice. Increasingly, the concept is being adopted by employers establishing formal welfare plan committees, which mitigate exposure on the part of a company’s board of directors and senior management. It’s commonplace for minutes to be kept of committee meetings, and it’s increasingly common for committee minutes to be generated by an AI notetaking “assistant.”

Fiduciaries should expect that committee minutes would be discoverable in litigation. A careful review of minutes is always important, of course, but there is a legitimate worry that the original AI-generated transcript may itself be discoverable. It may be better practice to forgo these tools altogether and instead have minutes manually prepared, at least until the matter is clarified by agency guidance or litigation.

This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law, Bloomberg Tax, and Bloomberg Government, or its owners.

Read more in this series.

Author Information

Alden Bianchi is General Counsel of Client Services at SBMA, LLC, in San Diego, California.


To contact the editors responsible for this story: Soni Manickam at smanickam@bloombergindustry.com; Katharine Butler at kbutler@bloombergindustry.com
