This three-part article examines the legal, compliance, and fiduciary landscape for sponsors and administrators of group health plans, and their service providers, as they seek to navigate the emergence of artificial intelligence. The articles aim to educate plan administrators and fiduciaries of ERISA-governed, employer-sponsored group health plans, including members of duly appointed welfare plan committees, about what AI is and how large language models fit into the evolving compliance and governance landscape for such plans. This knowledge is essential to carrying out their respective statutorily imposed duties.
Part 1 explains the rapid emergence of generative AI following the public release of ChatGPT in late 2022 and situates AI as a transformative and now ubiquitous technology.
Part 3 examines the unsettled and contested landscape of AI regulation, considering the absence of comprehensive federal legislation, shifting administrative priorities, and a cautious congressional approach that largely favors voluntary guidelines while states adopt widely divergent and sometimes comprehensive AI laws—and as other nations such as the EU and China pursue more assertive regulatory models. See, e.g., National Council of State Legislatures, Artificial Intelligence 2025 Legislation (July 10, 2025).
Background
On Nov. 30, 2022, OpenAI released ChatGPT for public use. Billed as a “generative artificial intelligence chatbot,” it had, by Jan. 2023, become the fastest-growing consumer software application in history, gaining over 100 million users in two months. In short order, a broad consensus emerged that something fundamental had changed: For the first time an AI with apparent human, or near-human, intelligence became widely available. AI systems have become ubiquitous, and their true impact may take decades to fully grasp. This article examines how AI affects employer-sponsored group health plans.
Employer-sponsored group health plans are a central feature of the US healthcare landscape, covering more than 150 million Americans. These plans exist at the intersection of healthcare delivery, insurance risk pooling, and employment law.
AI has emerged as a transformative technology in health administration. AI-enabled tools promise improved efficiency, cost savings, better clinical outcomes, and streamlined administrative processes. Employers and their service providers are increasingly leveraging AI to augment human decision-making relating to their group health plans and improve their administration.
Although the use of AI for group health plan purposes offers much promise, it also raises complex regulatory, fiduciary, and ethical questions, particularly under the Employee Retirement Income Security Act of 1974, which governs virtually all private sector employee benefit plans, programs, and arrangements. AI will almost certainly come to occupy a central role in the context of employee benefits, including basics like crafting personalized communications to plan members, detecting fraud, managing open enrollment, analyzing data, administering leave, and otherwise assisting in the maintenance and operation of benefit plans, programs, and arrangements of all stripes.
Generative Pre-Trained Transformer—GPT
The overarching goal of AI is “artificial general intelligence,” which refers to an AI able to rival human intelligence. We’re not there yet, nor is it clear whether we ever will be. Present-day systems are instead “generative” AI (the “G” in GPT), which leverages “large language models,” or LLMs. LLMs are systems designed to understand and generate human-like text, pre-trained (the “P” in GPT) on massive datasets to recognize patterns.
At bottom, LLMs process text by modeling relationships between words and their context. This allows them to predict the next word in a sequence by analyzing the input text, sound, or images—enabling them to generate coherent and meaningful responses. For purposes of this article, it’s sufficient to understand that something in the world of computer science has fundamentally changed, enabling computational capabilities of previously unimaginable scale, with consequences to match.
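The core task—predicting the next word from what came before—can be illustrated with a deliberately tiny sketch. The corpus and function below are our own toy example, not anything an actual LLM uses: a real model considers vastly richer context than a single preceding word.

```python
from collections import Counter, defaultdict

# Toy "bigram" predictor: count which word follows each word in a tiny
# corpus, then predict the most frequent follower. Real LLMs perform the
# same next-token task with billions of parameters and long context windows.
corpus = (
    "the plan covers dependents the plan covers spouses "
    "the plan pays claims"
).split()

# Tally how often each word follows each preceding word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("plan"))  # "covers" (seen twice, vs. "pays" once)
```

Even this crude counting scheme generates plausible continuations; scaling the same idea up, with far more sophisticated statistics, is what gives LLMs their fluency.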
There is no shortage of AI “origin” stories, nor is there any shortage of commentary. There are, however, a handful of landmarks, the most commonly cited of which is a 2017 research paper entitled “Attention Is All You Need,” which introduced a new deep learning architecture known as the “transformer” (the “T” in GPT). As of April 2025, the paper has been cited more than 150,000 times. The paper’s special attention mechanism was the breakthrough that allowed LLMs to seamlessly navigate the quirks of context that bedevil human language. The same mechanism also represented a significant leap in language translation.
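For readers who want a glimpse under the hood, the paper’s central operation—scaled dot-product attention—can be sketched in a few lines. This is our simplified illustration of one building block, not the paper’s full architecture: each token’s “query” is scored against every token’s “key,” and the resulting weights blend the “value” vectors, letting every word draw on context anywhere in the sentence.

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention over one sequence of tokens."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # similarity of each query to each key
    # Softmax: convert scores to weights that sum to 1 across each row.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V               # context-weighted mix of value vectors

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))  # 4 tokens, 8-dimensional vectors
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))
out = attention(Q, K, V)
print(out.shape)  # (4, 8): one context-aware vector per token
```

The output for each token is a blend of information from every other token, weighted by relevance—the mechanism that lets a model resolve, say, which noun a pronoun refers to.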
Transformer models are often described as being based on “neural network” architecture, and the claim is sometimes made that computer-based neural networks mimic the complex functions of the human brain. What can be said reliably is more modest: The nodes and layers of computer-based neural networks have an analog in the neurons that are an integral feature of human brains. This distinction isn’t trivial. It’s intended as a caution to fiduciaries and others when thinking about or interacting with generative AI models: Resist projecting complex human behavioral traits onto what is nothing more than a superbly (though not perfectly) capable, pattern-seeking class of algorithms.
At bottom, generative AI enables machines to perform tasks that historically required human intelligence, such as making decisions, recognizing speech, and learning from experience. Nothing more, nothing less.
The Language of AI
Generative AI has its own vocabulary, and some familiarity with this will be helpful to plan administrators and fiduciaries, at the least so they will be able to follow the presentations and grasp recommendations of plan consultants and other advisers. Some key AI-related concepts include:
- AI use case. An AI use case simply refers to the application of AI techniques to solve specific problems or address particular needs within a given domain. A properly defined AI use case should leverage appropriate AI techniques, provide a clear problem statement, establish well-defined objectives, and provide measurable outcomes that deliver value to the organization or end-users. An AI use case may broadly address all aspects of plan maintenance and operation or may be defined more narrowly to target specific administrative functions such as prior authorization.
- Bias. The term “bias” is commonly encountered in the context of LLM-based AI tools. It’s best understood as an impediment to fairness (concerns for equity and equality in the context of harmful bias and discrimination). LLMs can exhibit bias based on the data on which they are trained, which can lead to biased or unfair outcomes. Even systems in which harmful biases are mitigated aren’t necessarily fair, however: A system whose predictions are balanced across demographic groups may still be inaccessible to individuals with disabilities or to those affected by the digital divide.
- Generative AI. Also known as “gen AI” (the “G” in GPT mentioned above), generative AI is a subset of AI that can create original content such as text, images, videos, audio, or software code in response to user prompts. The AI tools addressed in this article are generative AI.
- GPT. As previewed above, a generative pre-trained transformer, or GPT, is a type of AI model developed by OpenAI. GPT excels in tasks like language understanding, translation, and text completion. Its ability to generate coherent and contextually relevant text has made it a powerful tool in natural language processing and various applications across diverse domains. The commercial GPT is one of many LLMs currently available to the public, either free of charge or behind a paywall. There are also specialty or industry-specific LLMs that aren’t widely available to the public but are sequestered by government, academic, industry, or other commercial organizations.
- Large language models/neural networks. The term “large language model,” introduced above, refers to the type of AI algorithm that uses neural network techniques with extensive parameters to process and understand human languages or text. LLMs are trained using self-supervised learning techniques on vast amounts of diverse textual data. As a result, they are able to perform a wide range of tasks such as text generation, machine translation, summarization, and more.
- Neural networks are the underlying structures of LLMs. Each network node computes a single output from its inputs, which are scaled by “weights” and offset by a “bias”; adjusting these weights and biases during training is what enables the model to learn. The outputs pass through a series of layers that progressively produce the final output.
- LLMs are currently being used in various applications across myriad industries thanks to their ability to understand and generate human-like text. Some of the key uses include chatbots, language translation, content generation, summarization, sentiment analysis, and code generation. In short, AI use cases are limited only (at least for now) by human imagination.
- Pre-training/training. It’s common to hear references to how LLMs learn in steps or phases. The initial phase, pre-training, is the most time-consuming and expensive. There follows a fine-tuning phase, which, as the name suggests, further refines the model’s capabilities. Training proceeds iteratively: The model produces outputs using a technique referred to as “forward propagation,” and mistakes are corrected by comparing actual to intended outputs and applying a technique referred to as “back propagation.”
- Natural language processing. Natural language processing, or NLP, is a branch of AI that focuses on enabling computers to understand, interpret, and generate human language. It involves the development of algorithms and models that allow machines to process and analyze text or speech data. Like LLMs, NLP is used in various applications, such as language translation, sentiment analysis, and chatbots, to enhance human-computer interaction. By extracting meaning from language, NLP enables computers to perform tasks that involve understanding and generating natural language, bridging the gap between human communication and machine capabilities.
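The forward- and back-propagation loop described in the pre-training entry above can be made concrete with a deliberately minimal sketch of our own devising: a one-weight “network” that learns the rule y = 2x. Real models repeat the same three-step cycle across billions of weights.

```python
# Toy training loop: (1) compute an output (forward propagation),
# (2) compare it to the intended output, and (3) nudge the weight to
# shrink the error (back propagation, here via simple gradient descent).
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (input, intended output)
w = 0.0    # the single trainable weight; starts knowing nothing
lr = 0.05  # learning rate: the size of each corrective nudge

for _ in range(200):  # 200 passes over the training data
    for x, target in data:
        pred = w * x           # forward propagation
        error = pred - target  # actual vs. intended output
        w -= lr * error * x    # back propagation: adjust the weight

print(round(w, 3))  # converges to ~2.0, i.e., the model "learned" y = 2x
```

The expense of pre-training an LLM comes from running exactly this kind of loop at staggering scale—trillions of training examples and billions of adjustable weights rather than three examples and one.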
This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law, Bloomberg Tax, and Bloomberg Government, or its owners.
Author Information
Alden Bianchi is General Counsel of Client Services at SBMA, LLC, in San Diego, California.
To contact the editors responsible for this story: Soni Manickam at smanickam@bloombergindustry.com;