- Commitments to expire when Congress enacts regulation
- Guidelines follow May meeting with CEOs of leading AI firms
Leading US artificial intelligence companies are set to publicly commit Friday to safeguards for the technology at the White House’s request, according to people familiar with the plans.
Companies including Microsoft, OpenAI and Google are expected to make the pledges.
But the fact that the commitments are voluntary illustrates the limits of what President Joe Biden can do without legislation from Congress.
Friday’s list of commitments from the White House is expected to be matched by pledges from top AI companies that participated in a May meeting with Vice President Kamala Harris.
“The regulatory process can be relatively slow, and here we cannot afford to wait a year or two,” White House Chief of Staff Jeff Zients said in a podcast interview last month.
Read More: White House Says It Backs New Rules for AI After Harris Meeting
The companies’ commitments will expire when Congress passes legislation addressing the issues, according to a draft of the White House document. The guidelines are focused on generative AI, such as OpenAI’s popular ChatGPT, as well as the most powerful existing AI models and even more capable future models, according to the draft.
The document is subject to change before Friday, according to the people familiar with the matter. A White House spokesperson declined to comment.
Even the developers of AI technology — while enthusiastic about its potential — have warned it presents unforeseen risks. The Biden administration has previously offered guidelines for its development, including the Risk Management Framework from the National Institute of Standards and Technology that emerged from months of engagement with industry leaders and others.
In the document set to be issued Friday, the White House will suggest eight commitments focused on safety, security and social responsibility, according to the draft document. They include:
- Allowing independent experts to try to push models into bad behavior — a process known as “red-teaming.”
- Sharing trust and safety information with government and other companies.
- Using watermarking on audio and visual content to help identify content generated by AI.
- Investing in cybersecurity measures.
- Encouraging third parties to uncover security vulnerabilities.
- Reporting societal risks such as inappropriate uses and bias.
- Prioritizing research on AI’s societal risks.
- Using the most cutting-edge AI systems, known as frontier models, to solve society’s greatest problems.
Spokespeople for Microsoft, OpenAI and Google all declined to comment.
Governments around the world have called for global AI governance akin to the agreements in place to prevent nuclear war. Group of Seven countries, for example, committed to coordinate their approach to the technology in Hiroshima, Japan, earlier this year, and the UK plans to hold an international AI summit before the end of the year.
All of these efforts, however, lag far behind the pace of AI developments spurred by intense competition between corporate rivals and by the fear that Chinese innovation could overtake Western advances.
That leaves Western leaders, for now, asking companies to police themselves.
Even in Europe, where the EU’s AI Act is far ahead of the incipient regulatory efforts of the US Congress, leaders have recognized the need for voluntary commitments from companies before binding law is in place. In meetings with tech executives over the past three months, Thierry Breton, the European Union’s internal market commissioner, has called on AI developers to agree to an “AI Pact” to set some non-binding guardrails.
To contact the reporters on this story:
Jillian Deutsch in Brussels at jdeutsch24@bloomberg.net
To contact the editors responsible for this story:
Mario Parker, Alex Wayne
© 2023 Bloomberg L.P. All rights reserved. Used with permission.