US States Target AI With a Medley of Regulatory Measures

Aug. 2, 2023, 9:00 AM UTC

Artificial intelligence has no desire to buy property or sue anyone anytime soon, but North Dakota is not taking any chances.

A bill banning legal personhood for AI became law in April. A month later, lawmakers in the Peace Garden State approved subsequent legislation empowering them to examine how the burgeoning technology might affect everything from grade school education to the 2024 elections.

“I don’t think we have to look very far to see how AI has expanded in the past year to be something that nobody could have anticipated,” state Rep. Cole Christensen (R), who sponsored the personhood bill, said in an interview. “AI is going to be the future,” he said, and if steps weren’t taken to protect the rights of human beings, “then all the other partisan stuff doesn’t even matter.”

North Dakota’s AI-related measures were among more than a dozen that have passed legislatures across the US in 2023, covering matters ranging from road maintenance to examining potential racial discrimination by automated systems. While state-level regulations remain in the nascent stage, some experts say that is a good thing—at least for now.

While the EU is moving forward with plans to put an AI regulatory framework into place as early as this year, the US government—wracked by partisan squabbling and cautious about taking action that might hobble national tech champions—seems unlikely to make any significant rules in the near term. That gives states free rein to experiment.

“Even though we don’t have a lot of national or state legislation addressing AI head-on, I think these issues do take time to fully understand,” Bloomberg Law analyst Peter Karalis said in an interview.

‘Keep AI in Its Lane’

A California bill demonstrated the difficulty of confronting issues like digital bias and privacy before it stalled in July.

Tech lobbyists claimed the bill was too broad. Business groups opposed allowing people to sue companies over algorithmic discrimination. And with the Golden State facing a $32 billion budget deficit, the proposal’s projected $20 million implementation cost slowed the bill further.

The bill’s expected failure stands as a cautionary tale for the 25 states that pursued AI proposals this year, especially Connecticut, Louisiana, Minnesota, Rhode Island, Texas, and Washington.

All six of those states, along with Puerto Rico, approved measures this year directing state officials to study AI and its potential effects to varying degrees. Similar proposals are before legislatures in states as varied as Missouri and New York, according to a July analysis of AI-related legislation by the National Conference of State Legislatures.

“There’s going to be many issues to using AI to replace human decision-making and judgment in every field of human endeavor,” said New York Assemblyman Clyde Vanel (D), who sponsored legislation to establish an AI commission that awaits approval by Gov. Kathy Hochul. “We have to try to be prospective in this stuff.”

Meanwhile, a handful of states enacted laws this year embracing the new technology for a variety of niche purposes.

Maryland added AI to the list of technologies eligible for its manufacturing grant program. Mississippi lawmakers earmarked $15 million for an innovation hub at the University of Mississippi. West Virginia is getting ready to pilot the use of AI for monitoring and maintaining roads.

Ongoing fears about the potential threats of automated systems, however, have inspired lawmakers across the US to propose preemptive limits on the use of AI in political advertisements, gambling, hiring, law enforcement, auto insurance, medical decisions, and even subsidies for Hollywood.

The flurry of ideas reflects widespread suspicions that the technology will prove ever more disruptive and difficult to regulate, as has been the case with social media over the past decade.

“States need to make sure that good policies are put into place to keep AI in its lane, to keep it from going rogue, and to make sure bad actors don’t take advantage of it,” Idaho state Sen. Tammy Nichols (R), who sponsored the first enacted ban on AI personhood last year, said in an interview. “Once the monster becomes the master, it is very difficult to switch that back.”

‘Take a Deep Breath’

AI experts argue that policymakers should see the technology less as an alien force gunning to dominate humanity and more like a very useful—but potentially dangerous—tool.

“Unfortunately, academics as much as many others, including Silicon Valley folks, are culpable for spreading this kind of fear and anxiety in the society,” said Hamid Ekbia, director of the Autonomous Systems Policy Institute at Syracuse University. “Let’s stop for a second, take a deep breath, and see what is really possible in both directions, in terms of risks, but also in terms of the promises.”

There are widespread concerns that AI will replace people in a variety of professions, with unforeseeable consequences for society. More extreme fears include the chance that an AI system could start a war if given enough independence. For now, addressing matters like students cheating with ChatGPT is arguably more pressing than worrying about a superhuman intelligence conquering the world a la “The Matrix.”

AI systems as a whole are nowhere close to approaching human intelligence, according to Lisa Shay, an associate dean and AI expert at the Cooper Union.

“If we consider intelligence to be more than just knowledge, but understanding and wisdom and empathy and the ability to create new things, then these systems are not intelligent, and it’s unlikely in my mind that they ever will be,” she said in an interview.

Simon Johnson, a professor at the MIT Sloan School of Management, said in an interview that AI would only get more difficult to control absent action by states, the federal government, and the international community. But he added that these are familiar challenges, ones AI itself could help address if it receives the right guidance from humans.

“There’s been a real pickup in speed in the past 12 months and we’re not ready for it in terms of many, many applications,” he said. “The industry wants to go faster. It’s going to really be a struggle to catch up.”

To contact the reporter on this story: Zach Williams at zwilliams@bloombergindustry.com

To contact the editor responsible for this story: Bill Swindell at bswindell@bloombergindustry.com
