A Food and Drug Administration push to bring more artificial intelligence-based medical devices to market could open the door for product liability lawsuits if the technology changes the device after initial approval, attorneys say.
If medical devices change significantly as they learn, they could technically become a different product from the one the FDA originally approved. Some manufacturers could then lose the immunity from liability they had for the approved device, giving consumers more leeway to sue if the altered product becomes defective.
“There’s a gray area as to what would be preempted,” Lawrence J. Centola, a products liability attorney and principal of Martzell Bickford & Centola in New Orleans, said. “Once you get out of those lanes, then you open yourself up to state tort liability.”
The FDA is looking to update its regulations to help companies such as GE Healthcare and Medtronic develop innovative health-care technologies that can learn from real-world use to improve performance. Global spending on AI hardware and software applications in the health-care sector, including AI-powered medical devices, is estimated to soar from $2.1 billion in 2018 to $36.2 billion in 2025, according to Sachin Garg, associate vice president of emerging technologies at research firm MarketsandMarkets.
The FDA must approve any medical device before it hits the market, and it offers several approval processes depending on the type of device. Federal law generally preempts state personal-injury suits over medical devices that have gone through the agency’s most rigorous review, premarket approval.
The FDA currently requires a premarket review of certain devices each time the device’s algorithm undergoes a major change, which could mean a lot of red tape for makers of devices that continuously learn and evolve. But the agency is mulling a “life cycle-based” regulatory framework that would allow a device’s algorithms to adapt to actual experiences without undergoing further reviews, as long as the device remained safe and effective, it said in an April discussion paper. The FDA is accepting public comments on the proposal until June 3.
It’s unclear whether the FDA’s eventual regulations would be “rigorous enough to support preemption,” David L. Ferrera, chair of Nutter McClennen & Fish LLP’s product liability litigation practice group in Boston, said.
Choosing a Path
High-risk devices such as pacemakers and breast implants must have premarket approval. That category covers devices that support or sustain human life, are substantially important in preventing health impairment, or present unreasonable risks of illness or injury.
Manufacturers of other AI devices, depending on intended use and risk, could apply for premarket approval or pursue one of the FDA’s two other routes to market: clearance under Section 510(k) of the Food, Drug, and Cosmetic Act for devices substantially equivalent to devices already on the market, or de novo classification for new, low-to-moderate-risk devices.
Device makers that win premarket approval from the FDA rely on a U.S. Supreme Court interpretation of a decades-old law in asserting preemption from product liability claims.
The high court has held that the Medical Device Amendments of 1976 to the federal Food, Drug, and Cosmetic Act preempt state-law tort claims against makers of medical devices that go through the FDA’s stringent premarket approval process. Otherwise, the court reasoned, lay juries would end up deciding safety questions the agency has already resolved.
It’s not clear what type of FDA approval would be needed for non-high-risk AI devices. Centola said it’s unclear whether an AI-based device would be similar enough to another device to qualify for 510(k) clearance, or at what point a machine learning algorithm essentially becomes a new product, requiring a manufacturer to reapply for approval.
Nathan A. Brown, an FDA attorney at Akin Gump Strauss Hauer & Feld LLP in Washington, said preemption should apply to continuously learning devices requiring premarket approval, as long as any changes fall within the scope of the FDA’s approval. The FDA would be approving the methodology of the device as much as the device itself, he said.
“New medical technologies are becoming more of a process than a static thing,” Brown said.
But if algorithmic changes expand a device’s scope beyond that for which it was approved, preemption becomes an “open question,” John Sullivan, a products liability defense lawyer and member of Cozen O’Connor in New York, said.
Evolving Algorithms
The FDA has so far cleared AI-based devices with “locked” algorithms that don’t continually change each time they’re used.
The proposed framework sets the stage for machine learning algorithms that can automatically change to incorporate learning from user data. An algorithm that detects cancer lesions, for example, could learn and evolve from real-world feedback to be more confident in identifying such lesions.
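To make the locked-versus-adaptive distinction concrete, here is a minimal, hypothetical sketch in Python (using scikit-learn) of a model that keeps updating as labeled real-world feedback arrives. The features, labels, and update scheme are invented for illustration and do not reflect any actual FDA-reviewed device.

```python
# Hypothetical sketch of a continuously learning classifier; not any
# actual device's algorithm. A "locked" model would stop after the
# initial fit, while this one keeps updating from real-world feedback.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

# Stand-in for the model as initially reviewed: trained once, offline.
X_initial = rng.normal(size=(200, 10))          # e.g., imaging features
y_initial = (X_initial[:, 0] > 0).astype(int)   # e.g., lesion / no lesion
model = SGDClassifier(random_state=0)
model.partial_fit(X_initial, y_initial, classes=[0, 1])

# Post-market: each batch of clinician-confirmed cases nudges the
# weights, so the deployed model drifts from the reviewed version.
for _ in range(5):
    X_feedback = rng.normal(size=(20, 10))
    y_feedback = (X_feedback[:, 0] > 0).astype(int)
    model.partial_fit(X_feedback, y_feedback)
```

It is that drift, batch by batch, that raises the legal question of when the deployed product stops being the product the FDA approved.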
“Being able to iterate quickly, within parameters that assure safety and effectiveness, will enable the best, most up-to-date technologies to be delivered to and potentially benefit providers and patients more quickly,” GE Healthcare Chief Innovation Officer Terri Bresenham said.
Device makers, under the proposed framework, would have to maintain “good machine learning practices” throughout a product’s life cycle. Such practices would include providing algorithmic transparency and ensuring any acquired data comports with a device’s intended use.
The framework would let manufacturers submit a modification plan during the initial premarket review that spells out anticipated changes to a device’s performance, data inputs, or intended use. The plan would also include methods for controlling anticipated changes.
Depending on the type of modification, manufacturers may be able to simply document a change if it falls within the scope of their plan. Under the framework, manufacturers would also commit to collecting and monitoring actual performance data to understand how their products are being used, identify ways to improve them, and respond to safety concerns.
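The discussion paper does not prescribe any code, but the logic of a pre-specified modification plan can be sketched abstractly: an update is merely documented if it stays within bounds set at initial review, and flagged for re-review if it does not. In this hypothetical Python illustration, every field name and threshold is invented.

```python
# Hypothetical "modification plan" gate; field names and thresholds
# are invented for illustration, not drawn from the FDA proposal.
MODIFICATION_PLAN = {
    "intended_use": "lesion detection",
    "min_sensitivity": 0.92,          # performance floor from initial review
    "min_specificity": 0.85,
    "allowed_inputs": {"ct", "mri"},  # data types covered by the approval
}

def change_within_plan(candidate: dict) -> bool:
    """Return True if a candidate update falls inside the pre-specified plan."""
    return (
        candidate["intended_use"] == MODIFICATION_PLAN["intended_use"]
        and candidate["sensitivity"] >= MODIFICATION_PLAN["min_sensitivity"]
        and candidate["specificity"] >= MODIFICATION_PLAN["min_specificity"]
        and set(candidate["inputs"]) <= MODIFICATION_PLAN["allowed_inputs"]
    )

# An update that holds performance and intended use within bounds could
# simply be documented; one that expands the inputs could not.
print(change_within_plan({"intended_use": "lesion detection",
                          "sensitivity": 0.94, "specificity": 0.88,
                          "inputs": ["ct"]}))                  # True
print(change_within_plan({"intended_use": "lesion detection",
                          "sensitivity": 0.94, "specificity": 0.88,
                          "inputs": ["ct", "ultrasound"]}))    # False
```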
Bresenham supports the FDA’s efforts to create a framework aimed at bringing continuous learning devices to market while ensuring patient safety.
“You want to be able to improve care through continuously learning,” she said. “On the other hand, how do you ensure the algorithm has a surveillance and validation approach so that it doesn’t steer off course?”
To contact the editor responsible for this story: Keith Perine at kperine@bloomberglaw.com