- Bipartisan coalition calls for standardized risk assessments
- Commerce Department sought comments as it mulls policies
The US government’s regulatory guidance for artificial intelligence should be risk-based to ensure the technology is reliable and secure, a bipartisan coalition of 23 state attorneys general told the Biden administration Tuesday.
In a letter to the National Telecommunications and Information Administration, responding to its request for comment on AI policies, the group urged federal agencies to promote a framework that prioritizes mitigating the potential risks AI poses to consumers.
The NTIA is seeking public feedback on how best to audit AI systems and ensure they’re trustworthy as it develops recommendations for responsible AI innovation. The Commerce Department agency’s focus on the topic follows the release of an AI risk management framework by the White House, and comes as the already fast-growing tech sector is poised to expand even further.
“Artificial intelligence is being developed and deployed faster than our ability to regulate and understand, and that’s a danger to consumers everywhere,” said Connecticut Attorney General William Tong in a press release.
“At a minimum, any use of AI should be clearly disclosed—there should be zero confusion as to when and whether we are dealing with real people or AI,” Tong added.
The group—led by the attorneys general from Colorado, Connecticut, Tennessee, and Virginia—called for a standards and oversight mechanism akin to the NTIA’s guidance on consumer data privacy protections, but acknowledged that evaluating risk might require more nuance due to the complexities of AI technology.
As the technology is leveraged to draft lawsuits, generate art, automate hiring, and write code, AI has already raised concerns over legal accuracy, copyright violations, employment bias, and cybersecurity.
The government should establish independent standards that help users and developers of AI systems assess potential risk levels based on the sensitivity of the data being processed, the letter said.
It also suggested that entities relying on high-risk data—such as sensitive medical information—to support their AI tech should be subject to periodic third-party audits.
The attorneys general cited legislation as a key avenue for regulating the AI industry moving forward and asked that any federal attempts to do so provide the states with concurrent enforcement authorities.