Bloomberg Law
Jan. 26, 2023, 5:17 PM

New AI Framework Offers Guide on Bias, IP, Data Privacy Risks

Skye Witley
Reporter

Developers of artificial intelligence should be cognizant of potential biases perpetuated by their systems and address risks of intellectual property infringement, according to finalized AI guidance by the National Institute of Standards and Technology.

The AI Risk Management Framework released Thursday by the non-regulatory agency emphasized the importance of trustworthy AI systems and outlined general considerations that users of the technology could implement in their work to build public trust.

The guidance comes as the use of AI to create art, hire people, and write code has heightened concerns over copyright violations, employment bias, and cybersecurity. The framework is intended to help those creating and implementing AI address the complex and often unique risks posed by the technology.

“AI technologies can drive inclusive economic growth and support significant scientific advances that improve our world, but these same technologies also pose risks for negative impacts,” NIST Director Laurie Locascio said at a launch event for the framework.

“If we’re not careful, and sometimes even when we are, AI can exacerbate biases and inequalities that already exist in our society. The good news is that understanding and managing the risks of AI systems will help to enhance their trustworthiness,” Locascio said.

Four Focuses

The first version of the framework focuses on four “high-level functions”: govern, map, measure, and manage.

These broad categories include suggestions on how to evaluate AI for legal and regulatory compliance, collect information about how the technology uses third-party software and data, quantify and track AI risks over time, and allocate resources to mitigate any potential risks highlighted during the process of assessing the technology.

In the “govern” category, for example, the framework suggests ensuring a diverse team is involved in managing AI, and that policies are in place to evaluate data “incidents” and address risks such as IP infringement.

The framework was accompanied by a draft playbook that offers examples on how to implement the guidance.

NIST plans to release a revised framework in a few months and publish subsequent revisions every six months after that, Locascio said.

The institute’s creation of the voluntary framework was mandated by the FY21 National Defense Authorization Act, which included the National Artificial Intelligence Initiative Act as an amendment. The finalized publication follows two draft versions for which the agency sought comments and feedback from the public.

White House

Two US House leaders heralded the agency’s AI work in a letter to President Joe Biden on Jan. 20, but expressed concern that the White House’s own efforts to release AI guidance were “inconsistent and duplicative.”

The Biden administration’s Blueprint for an AI Bill of Rights outlines five core principles to follow when designing and using AI, including providing clear notice when the technology is used in a system and protections against algorithmic discrimination. It was developed by the administration’s Office of Science and Technology Policy.

“We are concerned that the release of the Blueprint, and subsequent public statements by OSTP, are sending stakeholders, the American public, and the international community, conflicting messages about U.S. federal AI policy,” wrote House Science, Space, and Technology Committee Chairman Frank Lucas and House Oversight and Accountability Chairman James Comer.

The guidance documents conflict on issues such as defining AI and establishing principles for trustworthiness in AI systems, the letter said.

The two pieces of guidance are “complementary frameworks,” as OSTP provided insight on NIST’s framework and the agency was at the table as OSTP worked on the AI blueprint, said the office’s Deputy Director Alondra Nelson.

“What’s clear from every engagement is that AI presents a set of challenges that is bigger and broader than any one effort or any single agency,” Nelson said.

NIST previously published a critical infrastructure cybersecurity framework in 2014 that outlined best practices to reduce cyber risk for private-sector companies.

To contact the reporter on this story: Skye Witley at switley@bloombergindustry.com

To contact the editor responsible for this story: Jay-Anne B. Casuga at jcasuga@bloomberglaw.com