The US government doesn’t have a great track record of keeping up with emerging technology: Look no further than Washington’s stumbling attempts to oversee social media. The complex new field of artificial intelligence raises legal, national security and civil rights concerns already drawing the interest of would-be government regulators. Even ChatGPT creator Sam Altman came out in favor of “regulatory intervention by governments” to “mitigate the risks” of AI. As Congress debates whether and how to impose binding regulation, President Joe Biden has called on leading AI companies to meet voluntary transparency and security standards as a first step.
1. What has Biden done?
Seven leading US artificial intelligence companies, at Biden’s request, agreed to put new AI products through internal and external tests before their release. Executives from the companies, including Amazon.com Inc., Alphabet Inc. and Meta Platforms Inc., also promised to allow outside teams to probe for security defects and risks to consumer privacy. Biden’s team is separately developing a list of further actions his administration can take to minimize AI’s flaws before Congress acts.
2. Where do things stand in Congress?
Senate Majority Leader Chuck Schumer is laying the groundwork for legislation to regulate AI. Any bill should promote US innovation, ensure national and economic security, tackle copyright concerns and set transparency standards for AI companies, he said in June. Schumer is also hosting a series of learning sessions for lawmakers to hear from AI experts to inform legislation. Congressional committees have held hearings on AI issues, including one featuring Anthropic CEO Dario Amodei and another on artists’ concerns about how the technology might be used, especially with copyrighted material. Among bills proposed by senators, one would prohibit the US government from using an automated system to launch a nuclear weapon without human input; another would require that AI-generated images in political ads be clearly labeled.
3. Do any US regulations apply to AI currently?
Some do, on a piecemeal basis. The Federal Trade Commission says existing antitrust laws can promote fair competition among AI developers and current advertising laws can be used to punish exaggerated claims about what AI-based products can do. The Securities and Exchange Commission wants to restrict how brokerages and money managers use AI in recommending trades, managing assets and lending. The Department of Health and Human Services says it can regulate AI applications that “result in discriminatory outcomes” or involve the exchange of health information. The Justice Department warned that companies that sell algorithms to screen potential tenants are liable under the Fair Housing Act if they discriminate against Black applicants. But no single agency, nor any specific law, governs AI in a holistic way.
4. Which agency might take charge?
The Commerce Department, which calls itself “the voice of business in the federal government,” is taking steps toward regulating AI. In January, Commerce Department scientists proposed voluntary standards for organizations designing or deploying AI.
5. What’s happening on the state level?
Generally speaking, state officials are moving faster than national leaders in placing limits on AI, particularly with regard to civil rights. New York in July began enforcing requirements to tackle racial bias in the algorithms employers use to filter job applicants. Connecticut officials who use AI systems must verify that those systems don’t put women at a disadvantage, under a law enacted in June. In California, lawmakers have slowed their effort to put guardrails around artificial intelligence, in part because of a poor fiscal climate and opposition from business interests. As of late July, at least 275 measures mentioning AI had been introduced in 36 states and the District of Columbia in the most recent legislative session, according to Bloomberg Government. New York and Virginia had the most pending bills referencing AI.
6. How are other countries approaching this task?
In June, the European Parliament approved a draft version of a proposed law that would set boundaries on how AI technology can be used and require companies that build it to perform risk assessments. The so-called AI Act is backed by serious penalties: Violations could bring fines worth 6% of a company’s annual sales. European Union member states, the European Commission and the parliament are negotiating the specifics of the law, and its rules could take effect for companies by 2026. Europe’s biggest companies have warned that overly strict regulations might smother innovation; Altman, too, has expressed reservations about Europe’s approach even as he calls broadly for government regulation. In China, AI operators will be required starting Aug. 15 to complete a security review and register their services with the government. Foreign companies that provide generative AI tools aimed at Chinese residents must also comply.
7. Why does AI need regulating, anyway?
Americans are already using AI to write speeches, plan workouts and (sometimes incorrectly) answer questions on professional exams. Other uses of AI have had broad consequences. A fake AI photo of an explosion near the Pentagon spread on social media, briefly pushing US stocks lower. Scammers are using AI to impersonate distressed grandchildren and con older Americans out of thousands of dollars. And child predators are exploiting generative AI technologies to share falsified child sexual abuse material online. “We have seen a really rapid advancement in capabilities of large language models and other foundation models that both can do a lot of things but also pose the potential for serious harm if misused,” said Daniel Ho, a Stanford University professor who studies AI.
The Reference Shelf
- A cheat sheet on AI buzzwords and their meanings.
- Bloomberg Opinion columnist and law professor Noah Feldman on what AI regulation should look like.
- Bloomberg Law’s video explainer on laws that apply to AI.
- One concern: AI is making it easier to disseminate misinformation in political campaigns.
--With assistance from Jillian Deutsch, Sarah Zheng and Anna Edgerton.
To contact the reporter on this story:
Courtney Rozen in Arlington at crozen4@bloomberg.net
To contact the editors responsible for this story:
Bernard Kohn at bkohn2@bloomberg.net
Laurence Arnold
© 2023 Bloomberg L.P. All rights reserved. Used with permission.