- Tech industry skeptical of reporting requirements for AI models
- Defense Production Act use spans national security, pandemic
President Joe Biden’s sweeping executive order establishing the first set of rules governing development and deployment of artificial intelligence applies a 73-year-old national security law to oversee the booming industry.
The order invokes the Defense Production Act to require companies building especially large, widely applicable AI models to disclose details to the government about their training and safety testing process. That means companies like OpenAI Inc., Microsoft Corp. and Alphabet Inc.'s Google will have to disclose activities relating to model training, cybersecurity protections, and the results of safety tests for future large AI models.
Although most of the president’s order is directed at the federal government’s own use of AI, the reporting requirements under the DPA represent the Biden administration’s most significant step toward regulation of the AI industry.
That law, first passed at the start of the Korean War in 1950, provides the president with broad authority to require private industry to prioritize certain production orders during wartime and national emergencies. The emergency in 2023—Biden’s cause for invoking the DPA—is the threat of attack against American infrastructure posed by criminals and nation-state hackers using AI-enabled cyberweapons.
The president’s use of the DPA in this case is certainly novel. The law is most often used to procure vital military equipment. But Congress has expanded the law’s authority over the decades to include nonmilitary situations. Biden has used it to combat the Covid-19 pandemic, produce more infant formula, and boost green technology manufacturing.
The recent AI order’s use of the security law has drawn criticism from the tech industry, which has called the reporting requirement a major overreach. Among the critics is the Information Technology Industry Council, a tech industry trade group.
Biden’s executive order also comes as Congress is considering legislation to provide clearer guardrails on the quickly evolving technology. Sen. Chuck Schumer (D-N.Y.) has hosted a series of AI information forums, and lawmakers are likely to unveil AI legislation next year.
“Maybe we should wait on this issue and not try to shoehorn a broad new technology regulation into this old statute,” said Megan Brown, a cybersecurity attorney at Wiley Rein LLP.
Evolving Law
The DPA, which grew out of the War Powers Acts of World War II, originally allowed the president to nationalize parts of the economy to prioritize defense preparedness.
Congress has reauthorized the law more than 50 times and has amended it to include nonmilitary threats such as cybersecurity, critical infrastructure, and emergency preparedness. President Donald Trump invoked the law to step up production of ventilators and personal protective gear during the Covid-19 pandemic, and Biden used it to ramp up vaccines and tests.
Biden’s AI order requires companies developing or intending to develop large AI models to report their cybersecurity measures and activities around model training. They also have to disclose the results of AI safety testing, called “red-teaming” in the order.
The red-teaming tests will be based on future guidance developed by the National Institute of Standards and Technology. Before that agency releases guidance, companies must report on any internal safety tests relating to the development of biological weapons and cybersecurity threats.
The order defines an initial computing threshold for large AI models that triggers the reporting requirement, but it directs the Commerce secretary to create an updated set of technical conditions within 90 days. The current threshold covers models trained using a quantity of computing power greater than 10 to the power of 26 floating-point operations, a measure of the total computation consumed in training a model.
“The next generation models will most likely meet that threshold; they’re using more and more processing power as we speak,” said Alexei Klestoff, an attorney at the technology-focused law firm ZwillGen LLP.
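To see why next-generation models are expected to cross the line, the threshold can be checked with back-of-the-envelope arithmetic: total training compute is roughly chips, times per-chip speed, times sustained utilization, times training time. The sketch below illustrates that calculation; the cluster figures are purely hypothetical assumptions for illustration, not numbers from the order or from any company.

```python
# Rough check of whether a hypothetical training run crosses the
# executive order's reporting threshold of 1e26 floating-point operations.
# All cluster figures here are illustrative assumptions.

THRESHOLD_FLOP = 1e26  # total training compute named in the order


def training_compute(num_gpus, flop_per_s_per_gpu, utilization, days):
    """Total floating-point operations:
    chips x per-chip rate x sustained utilization x seconds of training."""
    seconds = days * 24 * 3600
    return num_gpus * flop_per_s_per_gpu * utilization * seconds


# Hypothetical run: 25,000 accelerators at 1e15 FLOP/s peak each,
# 40% sustained utilization, training for 120 days.
total = training_compute(25_000, 1e15, 0.40, 120)
print(f"{total:.2e} FLOP -> reportable: {total > THRESHOLD_FLOP}")
```

Under these assumed figures the run lands just above 10^26 operations, which is why observers expect frontier-scale training runs to fall within the reporting regime even before Commerce updates the technical conditions.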
The order doesn’t explain how the reporting requirement will be enforced, but under the DPA, willful violations can carry a fine of up to $10,000, imprisonment for up to one year, or both.
Novel Use
Although the DPA is most often used to direct contractors to prioritize orders from the federal government, Section 705 of the law provides broad authority to collect reports and records from the private sector to perform studies of the country’s defense base. Those records would remain confidential if requested by the company.
The AI order “falls squarely within” that section, said James Baker, a security policy professor at Syracuse University and former chief judge of the US Court of Appeals for the Armed Forces.
“The order is not directing the government to use the DPA to gather miscellaneous information, but to provide insight and understanding regarding matters of potential vital national security concern,” Baker said.
If the Department of Defense and other government agencies intend to acquire powerful commercial AI models, they would want to know the results of safety testing and other technical information, said Jeff Gordon, a Yale Law School research fellow who studies industrial policy.
“If that’s the intent, this is actually pretty similar to traditional uses of the DPA, except in the software realm as opposed to the physical goods realm,” he said.
Tech Industry Criticism
The largest AI developers, including OpenAI Inc., have welcomed regulation in general, but the Biden executive order’s reporting requirement has stirred controversy in an industry that is continuing to debate the merits of developing open or closed AI systems.
Kristian Stout, director of innovation policy at the International Center for Law and Economics, said the burden of complying with reporting requirements could ultimately discourage companies from using open-source development.
AI companies now have an incentive to withhold information from the public about their models until the technology enters the deployment phase, he argued.
“I don’t see how startups, small organizations, and individual developers could possibly comply with what’s required under this order,” Stout said.
NetChoice, another big tech industry group, criticized the AI order as a “back-door regulatory scheme” that would stifle competition in the market. The group’s vice president, Carl Szabo, said invoking a law meant for genuine crises will undermine the DPA and could be struck down by the courts.
“In a universe where everything is an emergency, then nothing is an emergency,” he said.
But Baker, the former judge, said the DPA can be used to strengthen the relationship between government and industry. Whether that can happen depends on the details of implementation and the response by private companies, he said.
“If you get to the point of issuing legal documents back and forth, neither the government nor companies are going to get to a place where we should all want them to be,” Baker said. “Litigation is a terrible way to make policy.”