NIST Framework Can Nudge Companies Toward Trustworthy AI Use

Aug. 30, 2023, 8:00 AM UTC

Companies large and small are assessing how they can harness artificial intelligence to be more competitive and profitable. We have already seen legal briefs submitted with AI-hallucinated case citations, and situations where confidential proprietary code was revealed to the public through generative AI use.

As AI use proliferates worldwide, US regulation struggles to keep pace. Experts talk about aspiring to trustworthy and transparent AI with few details on exactly what that means or how to get there.

On Jan. 26, 2023, the National Institute of Standards and Technology, which is part of the Department of Commerce, released its AI Risk Management Framework to help organizations develop an approach to AI development and operations. Though NIST is a federal agency and the framework is voluntary, many state, local, and private sector organizations now make use of the RMF.

What is this framework? How can it help companies understand AI risk and ask the right questions so they develop and use AI responsibly?

The NIST AI RMF is offered as a resource to help organizations manage AI risk as they develop and deploy AI systems.

The framework considers AI risk from several perspectives: measuring the risk, setting risk tolerance, prioritizing risk (identifying the most important AI risks and ranking them so mitigation proceeds in priority order), and integrating AI risk into an organization's wider risk management program.
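To make prioritization concrete, here is a minimal sketch of a likelihood-times-impact risk register in Python. The risk entries and the 1-to-5 scoring scale are our own illustration, not something the framework prescribes.

```python
from dataclasses import dataclass

@dataclass
class AIRisk:
    """One entry in a hypothetical enterprise AI risk register."""
    name: str
    likelihood: int  # 1 (rare) to 5 (near certain) -- illustrative scale
    impact: int      # 1 (negligible) to 5 (catastrophic)

    @property
    def score(self) -> int:
        # Simple likelihood-times-impact score; real programs often
        # use richer, organization-specific scoring models.
        return self.likelihood * self.impact

# Hypothetical risks an organization might log and rank.
register = [
    AIRisk("Hallucinated output reaches a customer", 4, 4),
    AIRisk("Confidential code pasted into a public model", 3, 5),
    AIRisk("Training data triggers privacy obligations", 2, 5),
]

# Rank highest-scoring risks first so mitigation effort follows priority.
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"{risk.score:>2}  {risk.name}")
```

Feeding such a register into the enterprise risk program, rather than keeping it in a silo, is what the integration step asks for.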

All AI risks should be tracked as part of enterprise risk management, and accountability should be set throughout the AI lifecycle. AI risk tolerance and prioritization look very different for a retailer or e-commerce company than for a health-care provider.

In health care, in particular, the data used to train AI raises HIPAA concerns, and treatment based on AI hallucinations could have catastrophic consequences for patient care. That’s why it’s critical to understand how your AI tool works. This may also require creation of new processes or tools to confirm your AI results before they’re used in a real-life scenario.
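As a rough sketch of what such a confirmation tool could look like, the gate below blocks any AI-generated answer that lacks a named reviewer and logs every decision for later audit. The function, field names, and sample strings are hypothetical, not a prescribed control.

```python
from datetime import datetime, timezone

audit_log = []  # retained so accountability can be traced across the AI lifecycle

def release_for_care(ai_answer: str, reviewer: str | None) -> str | None:
    """Release an AI-generated answer for use in patient care only
    after a named clinician signs off; everything else is blocked."""
    approved = reviewer is not None
    audit_log.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "answer": ai_answer,
        "reviewer": reviewer,
        "released": approved,
    })
    return ai_answer if approved else None

# Unreviewed output is held back; reviewed output is released and logged.
assert release_for_care("draft dosage guidance", reviewer=None) is None
assert release_for_care("draft dosage guidance", reviewer="Dr. Lee") is not None
```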

Metrics raise similar questions: what does a health-care organization use to measure accuracy, privacy, and security risk? Those risks may vary greatly between AI used in Medicare billing and an AI chatbot that answers basic patient medical questions.
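One way to operationalize that difference is to set explicit tolerances per use case and test measured metrics against them. The use cases, metric names, and numbers below are placeholders for illustration, not values NIST recommends.

```python
# Hypothetical per-use-case tolerances; an organization would set its own.
TOLERANCES = {
    "medicare_billing":  {"min_accuracy": 0.999, "max_pii_leak_rate": 0.0},
    "patient_chatbot":   {"min_accuracy": 0.995, "max_pii_leak_rate": 0.0},
    "internal_drafting": {"min_accuracy": 0.90,  "max_pii_leak_rate": 0.01},
}

def within_tolerance(use_case: str, accuracy: float, pii_leak_rate: float) -> bool:
    """Compare measured metrics against the tolerance set for one use case."""
    t = TOLERANCES[use_case]
    return accuracy >= t["min_accuracy"] and pii_leak_rate <= t["max_pii_leak_rate"]

# A chatbot measured at 97% accuracy fails a 99.5% tolerance.
print(within_tolerance("patient_chatbot", accuracy=0.97, pii_leak_rate=0.0))  # False
```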

AI trustworthiness is at the core of the framework, which highlights attributes such as validity and reliability, safety, security and resilience, privacy enhancement, fairness, accountability, and transparency.

These factors help companies appreciate various AI risks and the harmonization required to find the right balance. A transparent and safe AI that's often wrong but never in doubt demonstrates the consequences when these attributes are out of balance. Although the terms "explainable AI" and "transparency in AI" are frequently used synonymously, the former emphasizes making a model's reasoning understandable to humans, while the latter concerns openness about how the system is built, trained, and used. In a way, it's like the evolution of the automobile from no safety features to seat belts to airbags.

Based on initial marketplace reaction, current AI algorithms need greater transparency and privacy. The hallucinations we're seeing also indicate that reliability is an issue. Since human beings aren't perfect either, how do we measure our tolerance for sometimes-unreliable answers? In a health-care context, does that mean a medical chatbot fielding basic medical questions requires a much lower tolerance for unreliable answers?

We should hope so, since a large portion of the population will rely on the answers they get. These trustworthiness attributes help an organization ask the right questions and guide it toward an organizational approach to managing AI risk.

The framework's core is organized into four functions: govern, map, measure, and manage.

Seventy-two subcategories of steps support the four functions. Govern is the foundation block: it's where an organization creates the guiding principles, policies, procedures, and practices that help it map, measure, and manage AI risk. This function requires that the company identify, support, and hold accountable those responsible for AI risk.
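At its simplest, that accountability can be recorded as an ownership map that's revisited on a schedule. In the brief sketch below, the roles and review cadences are illustrative assumptions, not framework requirements.

```python
# Hypothetical ownership map for the four RMF functions; roles and
# review cadences are illustrative, not prescribed by NIST.
RMF_OWNERS = {
    "govern":  {"owner": "Chief Risk Officer",        "review": "quarterly"},
    "map":     {"owner": "Product and data teams",    "review": "per project"},
    "measure": {"owner": "Model validation team",     "review": "per release"},
    "manage":  {"owner": "Enterprise risk committee", "review": "quarterly"},
}

for function, entry in RMF_OWNERS.items():
    print(f"{function:>7}: {entry['owner']} (reviewed {entry['review']})")
```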

As with privacy, an organization will have to consider where this role should sit. Privacy roles have sat in the legal department, under the chief information officer, or as a separate role entirely. But do the people in these roles have the necessary technical background to understand the potential risks?

The govern function forces an organization to grapple with this question rather than make an expedient decision without much thought for long-term consequences. Understanding and addressing the threats of AI is necessary to promote its acceptance. It's important to weigh social, financial, and moral risks in addition to performance, security, and control concerns.

AI sits at the intersection of opportunity and risk. The first step in managing AI risk is to understand the facets of that risk. The NIST AI RMF is a great place to start so your organization can frame the right questions. The purpose is to better understand what trustworthy AI looks like and what risk mitigation means to your organization in the context of your use case and industry. This approach beats having to explain to the board why the management team relied on information from an AI tool that wasn't properly vetted when it was purchased from a third-party vendor.

This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law and Bloomberg Tax, or its owners.

Author Information

Justin Daniels is a shareholder at Baker Donelson, providing corporate advice to growth-oriented and middle-market domestic and international technology businesses.

Amy Chipperson is general counsel to Axtria, Inc., a global software and data analytics provider to the life sciences industry.
