- Automation tools helping to reduce workplace burden
- Lack of AI regulation creates health privacy minefield
Health providers using artificial intelligence tools to automate clinical visit notes can reduce administrative burdens and ease staffing shortages, but they risk violating health privacy laws in doing so.
The lack of regulatory oversight for generative AI and unresolved questions about patient data authorization raise particular concerns, attorneys say, as a growing number of healthcare facilities incorporate AI into their practices.
“In the last couple years we’re seeing more and more clients interested in leveraging AI,” said Bonnie Odom, a member specializing in healthcare and telehealth regulation at Epstein Becker & Green. “It’s definitely not going away anytime soon.”
Health Care Efficiency
A large reason for interest in AI is overburdened health care staff. The US faces a shortage of as many as 139,000 physicians by 2033, driven by demographic trends and burnout from administrative tasks, according to a report from the Association of American Medical Colleges.
“Administrative burden and staff shortages are major reasons why clinicians are leaving the profession,” Vidya Raman-Tangella, chief medical officer at Teladoc Health, said in a statement.
ChatGPT, released last November by OpenAI, has created expectations that generative AI technology will transform healthcare, and major companies and institutions are leading the way.
Generative AI refers to technology based on algorithms that can create new content such as text, images, and audio from training data. Proponents believe it will revolutionize the healthcare industry, upending long-standing practices in areas like drug discovery, clinical trials, diagnosis, and physician decision support, as well as summarizing consultations. A Boston Consulting Group study projected the technology would grow in healthcare at an 85% compound annual growth rate through 2027, the fastest of any industry.
The US health system could use help with paperwork. It ranks last in administrative efficiency among 11 countries, due to excessive time spent on paperwork, redundant medical testing, and insurance disputes, according to a 2020 Commonwealth Fund report. That has a direct effect on care, as US physicians have to spend two hours on administrative tasks for each hour of care provided, according to the American Medical Association.
“We need to transform and digitize our healthcare organization,” said James F. Jordan, Distinguished Service Professor of Healthcare and Biotechnology Management at Carnegie Mellon University.
One way AI systems could help, he said, is by addressing inconsistent terminology in healthcare: identifying different words used for the same situation and suggesting more specific or standardized terms, “making the process more efficient and bringing intelligence to it.”
While providers are trying to use new technology to help with staff shortages, automation could also result in job losses, as AI makes inroads into the health industry.
Teladoc laid off 300 employees, nearly 6% of its nonclinician workforce, earlier this year, citing restructuring and a challenging economic environment.
Those layoffs, and the accompanying turn to artificial intelligence, likely stem from the surge in demand for online and technical services that tech companies saw as work went remote during the Covid-19 pandemic, said J. Malcolm DeVoy, a partner at Holland & Hart in Las Vegas.
“I think it had to do with overbuilding these companies based on the fact that the demand that was seen in 2021 was pulled forward several years because people were at home and people had no option to do anything else,” DeVoy said. “I think it’s sort of a natural correction.”
Administration or Patient Care?
Despite the enthusiasm for AI, legal risks remain.
The regulatory status of generative AI for transcribing and summarizing patient information is unclear, said Leeann Habte, a partner at Best Best & Krieger LLP who practices health information technology law.
“If we look at it on its face, where it’s doing nothing more than transcribing patient information or summarizing that information, we would say it’s performing a service for a covered entity,” she said. “And under the Health Insurance Portability and Accountability Act, it would be regulated as a business associate, and there’s no need for patient consent or anything of that nature.”
But hospitals using patient data to train their AI tools must address concerns about patient data ingestion and disclosure for product-development purposes, she said.
“I think the question about whether patient authorization is required for use of information is certainly a consideration,” she said.
Most healthcare providers require patients to sign broad disclosures allowing their data to be used for general operations, DeVoy said. Because of this, providers might not need to tell patients when they are using AI.
“I would argue the way that the information would be used to improve operations with an AI tool would fall within the scope of normal healthcare operations,” he said.
Still, healthcare providers should obtain consent before using patient data for AI purposes, considering the risk of a data breach, DeVoy said. Failure to do so could leave providers open to damaging lawsuits.
Lack of Regulations
The use of an AI scribe in patient visits could spare doctors from time-consuming administrative tasks, allowing more direct interaction with their patients, said Nicholson Price, a professor at the University of Michigan Law School.
“This strikes me as a less risky situation than using generative AI to determine a patient’s diagnosis,” Price said.
Price said there’s a real concern that the automated notes “get rubber-stamped rather than actually evaluated,” with the danger that physicians may not recognize the AI has generated false information—through a poorly understood phenomenon called hallucination. So far this kind of AI is mostly unregulated by the Food and Drug Administration or any other federal agency.
The FDA regulates software intended for medical purposes under its device regulations and has released an action plan on AI and machine learning.
While the agency has the power to regulate the use of large language models, that authority will depend on whether they are intended to treat or diagnose a condition or disease.
Technology is always “several steps ahead” of regulators, Odom said, and companies are piecing together what they expect compliance programs to look like while they await official regulations.
In the meantime, Odom suggested, healthcare providers should hire a compliance officer, create an avenue to report concerns and audit the technology, and provide training for employees using the software.
“It’s been a little bit of piecing together what is generally expected in a compliance program, and then applying that to the specific risks that integrating AI entails,” she said.