AI Reshapes Employer Group Health Plans for Fiduciaries—Pt 3

May 8, 2026, 8:30 AM UTC

AI is changing the practice of tax law. This series examines the ethical, legal, and practical implications of AI across key areas of tax practice.

This article is the final installment of a three-part series intended to educate plan administrators and fiduciaries about what artificial intelligence is and how large language models fit into the evolving compliance and governance landscape for employer-sponsored group health plans. Part 1 explained the rapid emergence of generative AI following the public release of ChatGPT in late 2022 and situated AI as a transformative, now-ubiquitous technology. Part 2 discussed how ERISA plan fiduciaries remain fully responsible for ensuring that AI is used prudently, loyally, and solely in the interest of plan participants, even when AI is deployed by third-party service providers.

Part 3 examines the unsettled and contested landscape of AI regulation, noting the absence of comprehensive federal legislation, shifting administrative priorities, and a cautious congressional approach that largely favors voluntary guidelines, even as states adopt widely divergent and sometimes comprehensive AI laws and other nations, such as the EU member states and China, pursue more assertive regulatory models. It explains how the growing use of AI in group health plans intersects with existing federal laws that, while not designed specifically for AI, are nonetheless highly consequential. Chief among these is HIPAA, which raises significant issues around the use, disclosure, minimization, and de-identification of protected health information in AI systems and the obligations imposed on business associates. The discussion highlights unresolved questions about whether training AI on claims data qualifies as permissible healthcare operations and underscores the need for stricter contractual, technical, and governance controls. Part 3 further analyzes mental health parity compliance, emphasizing that even amid paused enforcement of newer rules, the use of AI-driven algorithms that function as nonquantitative treatment limitations remains subject to documentation requirements and presents challenges in demonstrating comparability across benefit types. It also addresses Affordable Care Act §1557, which continues to require federally funded entities to identify and mitigate discriminatory effects of AI-based patient care decision-support tools. Part 3 closes with the claim that AI adoption raises profound ethical concerns, such as bias, transparency, privacy, and fairness, that ERISA fiduciaries must actively manage under their duties of prudence and loyalty, often with the support of specialized advisors.

Federal and State Law and Policy

The proper role of the federal government and the states in regulating AI, though not yet settled, is already a source of contention. To date, no federal legislation has been enacted establishing broad regulatory authority over the development or use of AI or prohibitions on AI, and different administrations approach the subject with different priorities. While the Biden administration focused on security concerns, the current Trump administration is more concerned with ensuring the competitiveness of domestic AI technologies. Much of the legislation proposed in the 118th and 119th Congresses has emphasized the development of voluntary guidelines, best practices, and reporting of industry-conducted evaluations of AI systems. This reflects a generally cautious approach that defers substantially to the industry.

The House version of the One Big Beautiful Bill Act included a provision that would bar state regulation of AI for 10 years, which was stripped out before final passage. In the meantime, a handful of states have enacted legislation regulating AI in vastly diverse ways. Montana H.B. 178, for example, simply limits the use of AI by state agencies. In contrast, Tennessee's "ELVIS Act" broadly prohibits the unauthorized use of an individual's voice or likeness through AI. In a similar vein, Illinois' Artificial Intelligence Video Interview Act (AIVIA) regulates the use of AI by employers who conduct video interviews of applicants for positions based in Illinois. Colorado S.B. 169 restricts insurers from using consumer data or predictive models in a way that unfairly discriminates against individuals based on protected grounds, such as race, gender, or sexual orientation. California has adopted the most comprehensive AI-related regulation to date: the California Consumer Privacy Act (CCPA) and California Privacy Rights Act (CPRA) together impose obligations on companies that use personal data in AI systems, including transparency, opt-out rights, and data minimization.

Other countries also are moving to regulate AI—in some cases aggressively. The European Union, for example, has enacted a broad regulatory approach through the EU AI Act, which classifies AI systems into risk categories with different degrees of requirements and obligations. China, in contrast, has mandated that the government take a leading role that heavily influences private sector AI development with an eye toward enhanced national security. A comprehensive review of the regulation of AI, whether by the federal government, the states, or foreign countries, is beyond the scope of this article. Regrettably, the debate over the regulation of AI is often framed as a choice between regulation and innovation. One wonders whether this characterization presents a false dichotomy. It should be apparent to even the most casual observer that some regulation is required.

Compliance With Other Laws

AI’s ubiquity ensures that its use will run up against, and potentially collide with, any number of federal and state laws, starting with the above-described efforts by federal and state legislators to govern its development and implementation. There are also laws that, although not directed at AI, will raise important and sometimes difficult compliance issues. These are discussed in turn below.

The Health Insurance Portability and Accountability Act of 1996, or HIPAA.
HIPAA’s core purpose is to wrap a protective bubble around “protected health information,” or PHI. For this purpose, PHI generally means any information in a medical record or designated record set that can be used to identify an individual that was created, used, or disclosed in the course of providing a healthcare service such as diagnosis or treatment. HIPAA regulates “covered entities,” which include health plans (including most employer-sponsored group health plans and health insurance issuers).

Our focus here is on HIPAA's impact on group health plans. In general, as a HIPAA covered entity, a group health plan must not use or disclose protected health information except as the privacy rule permits or requires, or as an individual who is the subject of the information (or the individual's personal representative) authorizes in writing. A covered entity is permitted, but not required, to use and disclose protected health information, without an individual's authorization, for certain designated purposes—the most important of which for purposes of this article is "treatment, payment, and health care operations." A covered entity group health plan's use of PHI is further complicated by a requirement to use only the minimum amount of PHI necessary for its intended purposes.

Covered entities are permitted to retain business associates to assist with their HIPAA compliance. A “business associate” is a person or organization, other than a member of a covered entity’s workforce, who performs certain functions or activities on behalf of, or provides certain services to, a covered entity that involve the use or disclosure of individually identifiable health information. Business associate services to a covered entity are limited to legal, actuarial, accounting, consulting, data aggregation, management, administrative, accreditation, or financial services.

There is an important class of health information, referred to as “de-identified health information,” on which there are no use or disclosure restrictions. De-identified health information neither identifies nor provides a reasonable basis to identify an individual.

AI Impact: HIPAA
The use of AI in claims adjudication raises no shortage of HIPAA concerns. After all, claims data is a quintessential source of PHI. So, for example, no plan or business associate can use a publicly available AI tool to process PHI. It also means that plan fiduciaries must exercise heightened scrutiny over their business associates—starting with third-party administrators and pharmacy benefit managers. This means, among other things, that business associate agreements will need to impose robust requirements governing compliance with HIPAA, transparency and accountability, data retention, and usage policies. In addition, only employees of the business associate who have had the proper training and satisfy appropriate access control requirements should be permitted to work with the plan's PHI. Contracts should specifically require business associates to adhere to explicit requirements governing the access, collection, use, and disclosure of PHI. They also should clearly specify the scope of any permitted use of PHI for treatment, payment, or healthcare operations.

There is also an important and currently unanswered question: Does the use of claims data for training an AI tool qualify as "treatment, payment, or healthcare operations"? If not, then it may be necessary to use only de-identified health information for training purposes. This means stripping from the data a list of some 18 identifiers, a task that may prove difficult. In addition, for purposes of the "minimum necessary" rule, even in instances where PHI may be provided to an AI tool, the data will first need to be scrubbed of any information that is not necessary.
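To make the de-identification task concrete, the following is a minimal, illustrative sketch of a deny-list filter applied to a claims record before it reaches an AI tool. The field names are hypothetical, and a real Safe Harbor program must address all 18 identifier categories (names, geographic subdivisions smaller than a state, dates, phone numbers, Social Security numbers, and so on) plus any other uniquely identifying data; this sketch shows only the basic mechanic.

```python
# Hypothetical subset of HIPAA Safe Harbor identifier categories.
# A production deny-list must cover all 18 categories in the regulation.
IDENTIFIER_FIELDS = {
    "name", "street_address", "phone", "email", "ssn",
    "medical_record_number", "health_plan_beneficiary_number",
    "account_number", "ip_address", "full_face_photo",
}

def strip_identifiers(record: dict) -> dict:
    """Drop fields that fall into identifier categories (deny-list filter)."""
    return {k: v for k, v in record.items() if k not in IDENTIFIER_FIELDS}

claim = {
    "name": "Jane Doe",            # identifier: removed
    "ssn": "123-45-6789",          # identifier: removed
    "diagnosis_code": "F41.1",     # retained, subject to further review
    "allowed_amount": 240.00,      # retained, subject to further review
}

scrubbed = strip_identifiers(claim)
# The retained fields still require a "minimum necessary" review: data not
# needed for the tool's specific purpose should also be removed.
```

Note that removing named identifiers is only the mechanical first step; Safe Harbor also requires that the covered entity have no actual knowledge that the remaining data could identify an individual.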

Mental Health Parity.
The Paul Wellstone and Pete Domenici Mental Health Parity and Addiction Equity Act of 2008, or MHPAEA, is a federal law that generally prevents group health plans and health insurance issuers that provide mental health or substance use disorder, or MH/SUD, benefits from imposing less favorable benefit limitations on those benefits than on medical/surgical benefits. MHPAEA generally provides that financial requirements (such as coinsurance and copays) and treatment limitations (such as visit limits) imposed on MH/SUD benefits cannot be more restrictive than the predominant financial requirements and treatment limitations that apply to substantially all medical/surgical benefits in a classification. In addition, MHPAEA prohibits separate financial requirements and treatment limitations that apply only to MH/SUD benefits. MHPAEA also imposes several important disclosure requirements on group health plans and health insurance issuers.

A final rule issued in 2024 broadly required plans to identify and document, in the required comparative analyses of their nonquantitative treatment limitations, or NQTLs, the processes, strategies, evidentiary standards, and other factors used in designing or applying an NQTL to MH/SUD benefits. While nothing in the final regulation requires the use of an algorithm, where an algorithm (including an AI tool) is used, documentation is required to the extent that the algorithm itself creates an NQTL.

In May 2025, the government announced a non-enforcement policy regarding the 2024 final rule. The announcement came in response to a lawsuit filed by the ERISA Industry Committee in the US District Court for the District of Columbia challenging the rule's NQTL requirements. The effect of the non-enforcement policy is to revive the earlier 2013 final regulation along with a handful of subsequent items of sub-regulatory guidance. The pause in enforcement principally affects standards associated with the particulars of a plan's NQTL comparative analysis relating to content requirements and the fiduciary certification requirements.

AI Impact: MHPAEA
A plan's use of algorithms (including AI tools) that create a separate NQTL must still be documented despite the government's non-enforcement policy. It takes little imagination to foresee a worrisome challenge: How does one interrogate an AI to demonstrate that the AI's "processes, strategies, evidentiary standards, and other factors" used in applying the NQTL to MH/SUD benefits are comparable to those applied to a plan's medical/surgical benefits?

Affordable Care Act §1557.
Section 1557 of the Affordable Care Act, or ACA, prohibits discrimination on the basis of race, color, national origin, sex, age, or disability, or any combination thereof, in a health program or activity any part of which receives federal financial assistance. On May 6, 2024, the Department of Health and Human Services, or HHS, issued a final rule, which among other things prohibited federally funded covered entities from restricting an individual's ability to receive medically necessary care, including gender-affirming care, from their healthcare provider solely on the basis of their sex assigned at birth or gender identity. The final regulations also define "health program or activity" to include all the operations of an entity that is principally engaged in providing or administering health services or health insurance coverage.

Entities subject to ACA §1557 include employer-sponsored group health plans that receive funding from HHS. While many and perhaps even most employers do not fit this description, employers that receive retiree drug subsidies are subject to the rule. Also subject to the rule are carriers that receive federal financial assistance (most do, by virtue of selling Medicare Advantage plans), even if acting only in an administrative-services-only capacity. Under the final rule, the HHS Office for Civil Rights will determine whether a TPA subject to §1557 is responsible for a plan's discriminatory benefit design as follows: If the design originated not with the TPA but with the plan sponsor, the Office for Civil Rights will refer the matter to the EEOC or Department of Justice; but if the design originated with the administrative-services-only provider, the latter is liable under §1557.

AI Impact: ACA §1557.
On January 20, 2025, the Trump administration rescinded portions of the final §1557 rule but did not rescind the portions governing the use of AI. The §1557 final rule requires that covered entities make reasonable efforts to identify and mitigate discrimination caused by the use of patient care support tools. "Support tools," or more accurately "patient care decision support tools," means "any automated or non-automated tool, mechanism, method, technology, or combination thereof used by a covered entity to support clinical decision-making in its health programs or activities." The definition is intentionally broad and includes tools used to assess health status, recommend care, provide disease management guidance, determine eligibility, and more. The definition does not include tools that are unrelated to clinical decision-making, such as tools used for patient scheduling, supply chain management, automated medical coding, or staffing-related activities. Entities subject to ACA §1557 must make "reasonable efforts" to identify tools that use input variables that measure race, color, national origin, sex, age, or disability.
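A first pass at those "reasonable efforts" might scan a decision-support tool's input variables for fields that directly measure a protected trait. The sketch below is illustrative only, with hypothetical feature names; a real effort would also probe proxy variables (a ZIP code correlating with race, for example) and rely on vendor documentation of the model's inputs.

```python
# Terms suggesting an input variable measures a protected trait under §1557.
PROTECTED_TERMS = {"race", "color", "national_origin", "sex",
                   "gender", "age", "disability"}

def flag_protected_inputs(features: list[str]) -> list[str]:
    """Return input variables whose names suggest a protected trait.

    A name-based scan only; it cannot detect proxies, so flagged fields
    are the start of a review, not its conclusion.
    """
    return [f for f in features
            if any(term in f.lower() for term in PROTECTED_TERMS)]

# Hypothetical input variables reported by a vendor for its tool.
model_inputs = ["age_at_admission", "diagnosis_code", "prior_visits",
                "sex", "zip_code"]

flagged = flag_protected_inputs(model_inputs)
```

For these hypothetical inputs, the scan flags `age_at_admission` and `sex` but not `zip_code`, which is exactly why proxy analysis must supplement any name-based review.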

Ethical Considerations

AI and machine learning technologies offer unprecedented opportunities to enhance the maintenance and operation of group health plans, but their integration into plan administration raises significant ethical considerations that must be carefully addressed to ensure responsible and equitable deployment. The technology raises concerns about privacy and data security, algorithmic bias, transparency, clinical validation, vendor quality, and professional responsibility, among many other issues. Plan fiduciaries are the plan's first line of defense. The ERISA duties of prudence and loyalty require them to pay particular attention to these ethical dimensions, all for the exclusive benefit of plan participants. Fiduciaries must embrace ethical best practices in a new, and sometimes unfamiliar, era of personalized, data-driven healthcare.

Among other things, fiduciaries must take account of bias (and bias mitigation), dataset and content curation, privacy, and, ultimately, fairness. These considerations will require the assistance (and monitoring) of outside advisers in many cases.

Takeaways

Employer-sponsored group health plans, which cover more than half of the non-elderly American population, are increasingly integrating AI technologies into claims management, utilization review, fraud detection, benefit design, and participant engagement. These technologies offer significant advantages: efficiency gains, cost containment, improved clinical accuracy, and personalized participant experiences. However, they also raise profound challenges relating to fairness, transparency, data integrity, and fiduciary responsibility under ERISA.

The job of a fiduciary or fiduciary committee member charged with the administration of an employer-sponsored group health plan grows more complicated and demanding with each passing year. The increased transparency required by the No Surprises Act carries with it the need for ever more granular attention to claims data, and the increase in class action litigation puts plan fiduciaries under the proverbial microscope in ways not previously imagined. The rise of AI and its application to the maintenance and operation of group health plans raises the fiduciary bar higher still. Fiduciaries must, among other things, engage in robust (human) oversight of AI use in general and AI-assisted clinical decisions in particular; exercise rigorous vendor due diligence; and otherwise ensure that AI is at all times used for the benefit of plan participants. Clearing this higher bar will require attention to an entirely new set of issues and the development of concomitant skills.

This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law, Bloomberg Tax, and Bloomberg Government, or its owners.

Read more in this series.

Author Information

Alden Bianchi is General Counsel of Client Services at SBMA, LLC, in San Diego, California.


To contact the editors responsible for this story: Soni Manickam at smanickam@bloombergindustry.com; Daniel Xu at dxu@bloombergindustry.com
