Bloomberg Law
Feb. 9, 2023, 9:00 AM

Federal Guidance Offers Framework to Minimize Risks in AI Use

Duane Pozza
Wiley Rein

These days, artificial intelligence is capable of a range of tasks, from answering online questions to ghost-writing term papers to helping with critical medical diagnoses.

As AI promises new benefits, concerns remain about the risks if AI-powered innovations are not properly managed.

The federal government has begun to weigh in more actively on AI risk management, releasing two frameworks that outline key considerations for companies that develop and use the technology.

These frameworks are voluntary, but provide a roadmap for how companies can get ahead of potential issues while maximizing AI’s benefits.

NIST and White House Frameworks

On Jan. 26, the National Institute of Standards and Technology, part of the Department of Commerce, released version 1.0 of its AI Risk Management Framework. The AI RMF is voluntary and was developed with input from industry and other stakeholders.

The AI RMF follows the White House’s release of the Blueprint for an AI Bill of Rights late last year, and the two share overlapping foundational principles. Both frameworks lay out key issues for companies and other organizations to address as they take steps to implement AI.

Both recognize AI’s potential to benefit and improve lives, but also focus on steps that can be taken to address risks.

They lay out key guiding principles and characteristics of “trustworthy” AI, which include reliability, safety, transparency, privacy, and other safeguards. Taken together, the frameworks map an approach to addressing risks in certain categories.

Protecting Against Discrimination

A key consideration when deploying AI is to identify and take steps to counteract potential harmful bias, which can result in discriminatory outcomes.

Suggested approaches in this area include conducting proactive bias assessments, performing ongoing disparity testing, and ensuring that data sets are diverse, robust, and free from proxies for demographic features.
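
To make disparity testing concrete, the sketch below shows one way a company might compute selection-rate gaps across groups from a decision log. The log data, group labels, and the 0.8 cutoff (echoing the familiar four-fifths rule of thumb) are illustrative assumptions, not requirements drawn from either framework.

```python
# A minimal sketch of ongoing disparity testing across demographic groups.
# The decision log and the 0.8 cutoff are illustrative assumptions.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group_label, approved: bool) pairs."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += ok
    return {g: approved[g] / totals[g] for g in totals}

# Hypothetical audit log of AI-assisted decisions.
log = [("group_a", True), ("group_a", True), ("group_a", False),
       ("group_b", True), ("group_b", False), ("group_b", False)]

rates = selection_rates(log)
ratio = min(rates.values()) / max(rates.values())
if ratio < 0.8:  # a large gap between groups warrants closer review
    print(f"Disparity flagged: rates={rates}, ratio={ratio:.2f}")
```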

The frameworks also suggest seeking broad and diverse input to help identify and combat potential bias.

Promoting Safety, Security, Resiliency

Companies also need to monitor AI systems closely to ensure that they operate safely, that they do not cause unintended outcomes, and that potential vulnerabilities are identified and addressed.

Recommendations include pre-deployment testing, ongoing monitoring and reporting, use of high-quality data, and evaluations to mitigate risks.
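
As one illustration of what ongoing monitoring can look like in practice, the sketch below flags batches of live model scores whose average drifts from a pre-deployment baseline. The baseline figures, sample scores, and alert threshold are all illustrative assumptions.

```python
# A minimal sketch of post-deployment monitoring: flag live score batches
# whose mean drifts from a pre-deployment baseline. All numbers are
# illustrative assumptions.
import statistics

BASELINE_MEAN = 0.52   # assumed mean score from pre-deployment testing
BASELINE_STDEV = 0.10  # assumed spread from the same tests
ALERT_Z = 3.0          # alert when a batch mean drifts beyond 3 sigma

def batch_is_stable(scores):
    """Return True if a batch of live model scores matches the baseline."""
    se = BASELINE_STDEV / len(scores) ** 0.5  # standard error of the mean
    z = abs(statistics.fmean(scores) - BASELINE_MEAN) / se
    return z <= ALERT_Z

live_scores = [0.61, 0.66, 0.59, 0.72, 0.68, 0.64, 0.70, 0.63]
if not batch_is_stable(live_scores):
    print("Score drift detected: route this batch for human review")
```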

Addressing Transparency and Explainability

Companies using AI should consider how to convey the use and outcomes of AI decisions, particularly high-impact decisions.

Depending on the risks involved, they should consider how best to answer questions about “what happened” in the system, “how” a decision was made, and “why” the system made that decision, including what it means for the user in context.
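
One way to picture this analysis is a simple decision record that stores the “what,” “how,” and “why” of each AI-assisted decision so they can be conveyed later. The sketch below is illustrative; its field names and values are assumptions rather than fields prescribed by either framework.

```python
# A minimal sketch of a decision record capturing the "what," "how," and
# "why" of an AI-assisted decision. Field names and values are
# illustrative assumptions.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    outcome: str            # "what happened"
    model_version: str      # "how": which system produced the decision
    top_factors: list       # "why": the main inputs driving the result
    user_explanation: str   # plain-language meaning for the affected user
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

record = DecisionRecord(
    outcome="application_denied",
    model_version="credit-model-v2.3",
    top_factors=["debt_to_income_ratio", "recent_delinquencies"],
    user_explanation="Denied primarily due to a high debt-to-income ratio.",
)
print(record)
```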

This kind of analysis also helps address other categories of risk, because operators and users who understand AI results are better positioned to catch problems and correct them.

Protecting Data Privacy

AI often uses large amounts of data, and companies should take steps to protect privacy in data usage, collection, and access. They need to assess what existing laws and regulations might apply to data used in connection with AI.

The frameworks also recommend that AI developers and users promote privacy via methods such as privacy-enhancing technologies and minimizing personally identifiable data through de-identification or aggregation.
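
As a simple illustration of de-identification through aggregation, the sketch below reports counts by group while suppressing small cells that could single out individuals. The suppression threshold of five is an illustrative assumption, not a legal standard.

```python
# A minimal sketch of privacy-protective aggregation: report counts by
# group while suppressing small cells that could identify individuals.
from collections import Counter

SUPPRESSION_THRESHOLD = 5  # illustrative: hide any group smaller than this

def aggregate(zip_codes):
    """Collapse individual records into counts, hiding small groups."""
    counts = Counter(zip_codes)
    return {z: (n if n >= SUPPRESSION_THRESHOLD else "<5")
            for z, n in counts.items()}

# Hypothetical records, already minimized to the one field needed.
records = ["20001"] * 12 + ["20002"] * 3 + ["20003"] * 8
print(aggregate(records))  # {'20001': 12, '20002': '<5', '20003': 8}
```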

Incorporating Human Review and Accountability

The frameworks recognize that humans have a key role in overseeing AI uses and evaluating AI-generated outcomes. Companies should assess the points where human involvement and review are best deployed to mitigate risks.
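
One common pattern for deploying human review is to apply high-confidence AI outputs automatically and escalate the rest to a person. The sketch below assumes a hypothetical confidence score and threshold.

```python
# A minimal sketch of one oversight pattern: apply confident AI outputs
# automatically and queue low-confidence ones for a human reviewer. The
# 0.9 threshold and the function names are illustrative assumptions.
REVIEW_THRESHOLD = 0.9

def route(decision, confidence):
    """Decide whether an AI output needs a human in the loop."""
    if confidence >= REVIEW_THRESHOLD:
        return f"auto-applied: {decision}"
    return f"queued for human review: {decision} ({confidence:.2f})"

print(route("approve", 0.97))  # applied without review
print(route("deny", 0.64))     # escalated to a person
```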

Other considerations include remedial actions if an AI system fails and adequate training for those administering and reviewing these systems.

Both the AI RMF and AI Bill of Rights are meant to be voluntary approaches, and in the short term are most likely to influence agency use of AI and government contracting. However, they are also intended for private sector use and adaptation.

Looking forward, companies using AI and algorithmic technology will grapple with regulatory efforts at federal agencies such as the Federal Trade Commission and in states such as California and Colorado.

The frameworks are not meant to substitute for regulatory compliance, but will help any company or organization assess key risks in AI use and get ahead of the curve. And with new uses of AI technology growing by the day, now is the time to implement effective strategic approaches.

This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law and Bloomberg Tax, or its owners.

Author Information

Duane Pozza is a partner with Wiley Rein and co-chair of the firm’s Federal Trade Commission regulatory practice. He advises clients on emerging technology, consumer protection, and data governance.

Lauren Johnson, Wiley Rein, contributed to this article.