Confront AI Threats With More Consistent Federal Legislation

Oct. 19, 2023, 8:00 AM UTC

Artificial intelligence has become a hot-button issue for policymakers and legislators in Washington. This focus coincides with a rapid rise in publicly available generative AI models such as ChatGPT and Stable Diffusion.

Many experts concede the US is at the beginning of a long road toward establishing a concrete set of rules that would shape and govern AI. Those rules may never become concrete, but it won’t be for lack of trying.

Although businesses have used AI tools since the 1990s, the rapid growth of AI companies and the coinciding development of large language models—which mimic human intelligence by synthesizing vast quantities of text and data to generate novel responses—have raised significant concerns. These challenges include intellectual property ownership, cybersecurity and data privacy risks, permission of use, workforce displacement, and harmful algorithmic bias.

Policymakers and industry experts now face the difficult task of confronting and mitigating these risks without hampering the innovative potential these technologies can provide to the US economy.

There’s no comprehensive federal AI legislation in the US. However, Congress, the states, and the Biden administration have proposed regulatory frameworks, policy guidelines, and draft legislation that could shape the discussion around AI in the months and years to come.

The White House

The Biden administration has taken numerous steps to address AI challenges. In October 2022, it released the Blueprint for an AI Bill of Rights, outlining guidelines for the design, use, and deployment of automated systems to better protect consumers.

Following a White House event in July, seven major tech companies committed to additional principles to make their AI technologies safer, including third-party security checks and watermarking AI-generated content. A few months later, eight more companies that deploy AI technologies signed on to the commitment.

However, some advocacy and consumer groups have expressed disappointment because these commitments are voluntary and reflect practices many major AI developers already employ.

Following up on these efforts, the White House has indicated it’s preparing an executive order on AI for release later this fall. Although details are unclear, Arati Prabhakar, director of the White House Office of Science and Technology Policy, said the order would look to existing law to better manage risks and use of the technology. She also emphasized that it has become apparent that “there will be places where new legislation is required.”

Recently, some details of the executive order—although unconfirmed by the White House—have emerged. According to these reports, the order will likely direct the National Institute of Standards and Technology to strengthen industry guidelines for evaluating AI systems. It’s also expected to require cloud computing companies to monitor users who might be developing influential AI systems, attempt to control the export of critical chips used to run AI programs, and streamline the recruitment and retention of AI workers from overseas.

The States

As is often the case with complex emerging technologies, Congress can be slow to act, resulting in a patchwork of state laws that vary in stringency and scope. AI monitoring appears to be following the same path as states begin taking a hands-on approach.

According to the National Conference of State Legislatures, at least 25 states, Puerto Rico, and the District of Columbia introduced AI bills in 2023, and 15 states and Puerto Rico adopted resolutions or enacted legislation this year. Some examples include:

New York City. A local law addressing automated employment decision tools regulates the use of AI in hiring, specifically requiring employers to notify candidates about the use of AI tools in the hiring process and to conduct an annual bias audit of those tools.

Connecticut. Legislation passed in June creates a working group to study AI and develop an AI bill of rights.

Delaware. The Personal Data Privacy Act, effective Jan. 1, 2025, will allow consumers to opt out of profiling that facilitates solely automated decisions. It will also require a data protection assessment for activities that pose a “heightened risk of harm.”

Washington, D.C. Legislation was introduced to prevent algorithms from making decisions based on an individual’s protected personal traits.

Illinois. Legislation has been introduced that would require certification of any algorithm used to diagnose patients. It would give patients the right to know when an algorithm has been used in their diagnosis and the ability to opt out of its use.

This momentum from the states will continue into 2024. As more states enact legislation to regulate AI, companies that employ these technologies must be prepared for evolving compliance costs and requirements.

Policymakers’ desire to regulate this rapidly evolving technology will only continue to grow. Consistent federal guidance is needed to ensure effective AI implementation across industries.

This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law and Bloomberg Tax, or its owners.

Author Information

Jane Lucas is a partner with Alston & Bird’s Legislative & Public Policy team.

Evan Collier is a policy adviser with Alston & Bird’s Legislative & Public Policy team.
