Artificial intelligence (AI) is “adaptive,” meaning its algorithms continuously learn from new data; for this reason, it is often referred to as machine learning (ML). Newly designed medical devices that incorporate AI/ML therefore do not, by definition, have a final “locked” design that the Food and Drug Administration can evaluate in a single review.
The FDA’s stance on a regulatory framework for AI/ML software as a medical device is continuously evolving.
In April 2019, the FDA issued a white paper that proposed four general principles for a new regulatory approach to balance the benefits and risks of medical devices that continuously change. A broad variety of stakeholders responded to the FDA’s request for feedback, including professional and industry groups (e.g., AMA, PhRMA, AdvaMed); software manufacturers (e.g., GE Healthcare, Microsoft, IBM, Intel); pharmaceutical companies (e.g., Novartis, Sanofi); and policy institutes (e.g., Duke-Margolis Center for Health Policy).
As a result, in January 2021 the agency issued an action plan outlining five points:
- further developing the proposed regulatory framework, including through issuance of draft guidance on a predetermined change control plan (for software’s learning over time);
- supporting the development of good machine learning practices;
- fostering a patient-centered approach, including device transparency to users;
- developing methods to evaluate and improve ML algorithms; and
- advancing real-world performance monitoring pilots.
The fifth action point, real-world performance monitoring, raises potential legal issues in both cybersecurity and product liability.
Cyber Threats
As we previously discussed, medical device makers must continue to guard against cyberattacks. Makers of AI/ML medical devices will need to be especially vigilant about security, particularly where devices connect to the internet or otherwise transmit health data remotely in order to comply with potential new regulatory requirements for real-world performance monitoring.
The gathering and transmission of personal data exposes medical devices to significant cyber threats. Devices must be designed for security at every stage, especially where intercepted or altered data could affect how a device operates, as will be true for virtually all AI/ML applications.
Product Liability Issues
The framework proposed by the FDA also raises interesting questions about its potential impact on traditional product liability defenses, which presume a fixed design that cannot incorporate future real-world performance data.
For example, some medical devices come to market via an FDA determination that their design is “safe and effective.” These so-called “pre-market approved” products enjoy legal preemption, that is, a bar against state law tort claims to the contrary.
If a design is constantly changing due to AI/ML, can courts rely on the FDA’s original determination and continue to dismiss claims based on the traditional legal rules governing preemption?
Similarly, a manufacturer’s legal duty to warn of known risks is often fulfilled by providing that warning not to the patient directly, but rather to the patient’s treating physician as “learned intermediary” between the patient and the product manufacturer.
But if a medical device is no longer controlled by the human “learned intermediary” physician, but instead by the AI/ML, does the manufacturer now owe a duty to warn the patient directly, thus eviscerating the traditional learned intermediary defense?
Courts will need to contend with these and other novel legal questions about AI/ML as the FDA builds out its guidance.
AI and ML present intriguing opportunities for device makers, but come with real risks to security, patient care, and legal liability. Regulators, manufacturers, and health care providers will continue to grapple with these issues in the years to come.
This column does not necessarily reflect the opinion of The Bureau of National Affairs, Inc. or its owners.
Author Information
Seth P. Berman leads Nutter’s Privacy and Data Security practice group. He advises corporations on the legal, technical, and strategic aspects of data privacy and cybersecurity risk, and on how to prepare for and respond to data breaches, hacking, and other cyber attacks. He teaches a cybercrime law class at Harvard Law School.
David L. Ferrera leads Nutter’s Product Liability Litigation practice group. His expertise includes presenting complex product liability issues to lay juries and courts on behalf of Fortune 500 medical device and pharmaceutical companies. He also has expertise in many fields of medical science, including orthopaedics, biomechanics, biomaterials, epidemiology, and pathology.