Artificial intelligence tools permeate the health care landscape, even though many health care practitioners don’t realize that they’re using such tools in their everyday practice.

The American Medical Association, in a report from its 2018 Annual Meeting, described AI as: “a host of computational methods that produce systems that perform tasks normally requiring human intelligence. These computational methods include, but are not limited to, machine image recognition, natural language processing, and machine learning.”

The AMA emphasized in its report, however, that in the health care setting AI is often referred to as “augmented intelligence,” because these tools generally are designed to “enhance the capabilities of human clinical decision making” rather than supplant them.

In other words, AI is a tool that, at its best, helps humans make better decisions and complete tasks more efficiently and effectively. We all use AI tools every day, although we may not recognize them, and they are particularly prevalent in the health care sector. As such, we should understand not only how these tools can assist us in our work, but also the important issues associated with how they handle individuals’ personal information.

How Is AI Used in Health Care?

While many health care providers and employees of health care facilities may recognize that pharmacy or surgery robots use AI tools, these practitioners may not realize that the clinical decision support, claims review, and voice-to-text transcription tools they use also include AI. Health care system IT staff also rely heavily on AI tools to detect and combat cyber threats to the information that health care providers need to deliver quality care.

To take just one example, health care providers use clinical decision support (CDS) tools widely and for many different purposes, including to help identify patients who may be at particular risk for certain diseases, or who actually have diseases like cancer. These tools apply natural language processing to patient records and medical research, along with other AI techniques such as image recognition.

In other words, CDS tools help doctors identify patients who may have risk factors for diseases like diabetes or heart disease, or whose test results may indicate cancer or a genetic disease, based on the information available to the tools from patient records and other public sources.
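To make this concrete, consider a deliberately simplified sketch of rule-based risk flagging. This is purely hypothetical: no real CDS product works this simply, and the record fields and numeric cutoffs below are invented for illustration, not clinical guidance.

```python
# Hypothetical illustration only -- not a real clinical decision support
# product. The field names and thresholds below are invented for this sketch.

from dataclasses import dataclass, field
from typing import List

@dataclass
class PatientRecord:
    age: int
    hba1c_pct: float                 # hemoglobin A1c, percent (illustrative)
    systolic_bp: int                 # systolic blood pressure, mmHg
    family_history: List[str] = field(default_factory=list)

def diabetes_risk_flags(p: PatientRecord) -> List[str]:
    """Return human-readable flags for a clinician to review.

    A rule like this only *augments* clinical judgment; it does not
    diagnose, and the cutoffs here are placeholders, not medical advice.
    """
    flags = []
    if p.hba1c_pct >= 5.7:
        flags.append("elevated A1c -- consider diabetes screening")
    if p.systolic_bp >= 130:
        flags.append("elevated blood pressure")
    if "diabetes" in p.family_history:
        flags.append("family history of diabetes")
    return flags

record = PatientRecord(age=52, hba1c_pct=6.1, systolic_bp=135,
                       family_history=["diabetes"])
for flag in diabetes_risk_flags(record):
    print(flag)
```

Real CDS tools replace these hand-written rules with models trained on large volumes of records and literature, but the basic role is the same: surface a signal for a human clinician to evaluate.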

Data Regulations and Privacy Trends

Important state, federal, and international legal requirements apply to information that is identifiable to individuals, which is precisely the information many AI tools need to function properly. There are also important ethical and policy questions that should be considered in any discussion about the use of AI tools.

At the end of the day, health care practitioners of all types should understand not only the privacy and security concerns, but also the ethical implications of using AI tools, particularly given how prevalent they are now, and will become, in the health care industry.

First, health care practitioners should consider who developed their AI tools and how those tools use data. Significant privacy requirements and security controls are necessary to ensure not only that patient and employee information is used or disclosed only as permitted by state data privacy and security laws, HIPAA and other federal laws, and the GDPR and other international laws, but also that patients and employees can exercise the rights those laws give them in their information.

As such, health care providers must ensure that the developers of their AI tools build in the necessary security requirements from the beginning, and must understand how these tools use their data, particularly patient and employee information.

Second, AI tools are only as good as their programming and the data they use to achieve their goals. There are many documented cases in which AI tools designed for a particular purpose failed to produce the expected outcomes because of flaws in their programming.

For example, a CDS tool may incorrectly identify individuals as having a particular disease, or recommend incorrect treatments for individuals with certain conditions, because of flaws in its programming. Further, many studies have shown that when AI tools draw from limited pools of data, including pools skewed by age, ethnicity, or gender, they can likewise produce incorrect results.
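This second failure mode is easy to see in miniature. The hypothetical sketch below, with entirely invented numbers, shows how a cutoff learned from a training pool drawn from a single demographic group can generate a flood of false positives for a group that was absent from training.

```python
# Hypothetical illustration of how a limited training pool skews results.
# All distributions and numbers are invented; real models and biomarkers
# are far more complex.

import random

random.seed(0)

# Training pool: group A only. Healthy values ~N(10, 2); diseased ~N(20, 2).
healthy_a  = [random.gauss(10, 2) for _ in range(500)]
diseased_a = [random.gauss(20, 2) for _ in range(500)]

# A naive learned rule: a cutoff halfway between the two group-A means.
threshold = (sum(healthy_a) / len(healthy_a) +
             sum(diseased_a) / len(diseased_a)) / 2

# Group B was absent from training; suppose its healthy baseline simply
# runs higher, ~N(16, 2), for reasons unrelated to the disease.
healthy_b = [random.gauss(16, 2) for _ in range(500)]

false_pos_a = sum(x >= threshold for x in healthy_a) / len(healthy_a)
false_pos_b = sum(x >= threshold for x in healthy_b) / len(healthy_b)

print(f"learned threshold:       {threshold:.1f}")
print(f"false-positive rate, A:  {false_pos_a:.0%}")   # near zero
print(f"false-positive rate, B:  {false_pos_b:.0%}")   # dramatically higher
```

The tool is not “wrong” about group A; it simply never saw group B, and its single cutoff misfires for everyone whose baseline differs. That is the practical risk behind the studies cited above.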

Best Practices

The advantages that AI tools promise to health care practitioners, health systems, and, ultimately, patients should not be underestimated!

However, health care providers and other employees of health care entities must prioritize understanding how their AI tools work and safeguarding against risks to the privacy and security of patient and employee information when such tools are used in practice. Some best practices in this regard include:

  • Understanding the security risks a particular AI tool poses to data, including by undertaking a risk analysis or assessment and implementing the appropriate administrative, technical, and physical controls to reduce those risks to a reasonable and appropriate level. This is especially important with regard to networked devices, which can be particularly vulnerable, and with regard to access controls for data.
  • Ensuring that AI tools don’t access or collect data that they don’t actually need (see the data-minimization sketch after this list). Large amounts of data in data repositories create large amounts of risk!
  • Considering who built your robot (or AI tool)! It is important to use AI tools developed by reputable companies that provide good vendor support and that will work with you on data privacy and security issues and concerns.
  • Safeguarding against unintended outcomes that raise ethical questions about the use of AI, including by considering how the tools perform in practice for individuals of different ages, ethnicities, and genders.
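As promised above, here is a minimal sketch of the data-minimization idea from the second bullet. It is hypothetical: the record fields and the allow-list of “needed” fields are assumptions for illustration, not a compliance recipe.

```python
# Hypothetical sketch of data minimization before data reaches an AI tool.
# The field names and the allow-list below are invented for illustration.

FIELDS_NEEDED_BY_TOOL = {"age", "diagnosis_codes", "lab_results"}

def minimize(record: dict) -> dict:
    """Strip every field the AI tool does not actually need.

    Direct identifiers such as name, SSN, and address never leave the
    system, which shrinks both the repository the tool accumulates and
    the blast radius of any breach.
    """
    return {k: v for k, v in record.items() if k in FIELDS_NEEDED_BY_TOOL}

full_record = {
    "name": "Jane Doe",
    "ssn": "000-00-0000",
    "address": "123 Main St.",
    "age": 52,
    "diagnosis_codes": ["E11.9"],
    "lab_results": {"hba1c_pct": 6.1},
}

print(minimize(full_record))
# {'age': 52, 'diagnosis_codes': ['E11.9'], 'lab_results': {'hba1c_pct': 6.1}}
```

The design point is that minimization happens before the data crosses the boundary to the tool or vendor, so the question “what does this tool collect?” has a short, auditable answer.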

This column does not necessarily reflect the opinion of The Bureau of National Affairs, Inc. or its owners.

Author Information

Iliana Peters is a shareholder with Polsinelli in Washington, D.C., where she focuses her practice on health care, health information privacy and security, public policy, health care technology, and data security.

Liz Harding is a shareholder with Polsinelli in Denver, where she focuses her practice on privacy and cybersecurity, licensing, technology, health care, and trademark and copyright.

Lindsay Dailey is an associate with Polsinelli in Chicago, where she focuses on health care, health information privacy and security, health care technology, and privacy and cybersecurity.

Polsinelli provides this material for informational purposes only. The choice of a lawyer is an important decision and should not be based solely upon advertisement.