Agentic AI Is New—But We Already Have the Tools To Regulate It

Sept. 23, 2025, 8:30 AM UTC

Agentic artificial intelligence has exploded into headlines this year. Proponents predict that autonomous AI agents will revolutionize industries and our daily lives, including how we work, shop, and otherwise engage with the world. Skeptics question whether AI technology is ready for that autonomy—and whether we are.

But although the phrase “agentic AI” may be new, the risks presented by this emerging technology aren’t. It would be a fool’s errand to attempt to draft a new body of law for every iterative step forward in AI technology—at least if we want laws that are coherent and capable of being implemented and followed, as a practical and technical matter, across jurisdictions.

Instead, legislators and regulatory enforcers can best focus their efforts on determining how to apply existing laws to prevent and remedy potential harms to their constituents from agentic AI.

Defining Agentic AI

Agentic AI refers to AI systems that perform tasks (including potentially complex tasks) and pursue goals either autonomously or with limited human intervention. Agentic AI is indeed new, but it’s best understood as the latest signpost on a decades-long path of using computers to automate functions previously performed without them.

Autonomous computing at scale has long been available to pursue human ends, which range from laudable to malicious to incomprehensible. Inherent in all computing at scale is the possibility of disastrous consequences if the technology isn’t well-designed and well-deployed.

Agentic AI moves the needle on risk, because the absence or minimization of human judgment means these systems may use unforeseen methods to pursue their assigned goals, increasing the range of unanticipated and potentially harmful outcomes.

Regulating AI

Both existing laws of general application and those tailored specifically to address AI provide rights and remedies to combat agentic AI’s risks.

Consumer protection laws prohibiting unfair and deceptive acts or practices, privacy and data security laws, IP laws, eavesdropping and wiretapping laws, products liability law, tort law, antidiscrimination laws, and more all continue to apply with equal force to agentic AI.

Further, states have been moving quickly to pass laws on AI that aren’t limited to agentic AI.

State legislatures in California, Colorado, Utah, and most recently Texas have enacted legislation focused on regulating AI technology. Many more states have also enacted omnibus privacy laws that provide consumers with enumerated rights if companies seek to use their personal data in furtherance of decisions that produce legal or similarly significant effects. These include decisions involving financial or lending services, housing, insurance, education enrollment or opportunity, criminal justice, employment or independent contracting opportunities or compensation, health-care services, or essential goods or services.

Although these laws don’t specifically use the term “agentic AI,” many of their provisions still regulate risks from agentic AI, including:

  • Algorithmic discrimination
  • The absence of human judgment and transparency in consequential decisions
  • Consumers being deceived into believing they are interacting with a human when they are interacting with an AI system
  • Manipulation of human behavior to encourage harm or criminal activity

Other state legislatures are likely to follow suit.

The larger question on the table is who is responsible for the actions of agentic AI in a business model often spanning many companies across the AI stack—from original development to an interaction on behalf of or with an end user. But this question will largely be answered through contractual allocations.

Allocating Risk

A key thread running through legal developments from the pre-industrial era to today is the shift of responsibility for avoiding harm away from the end user and onto the companies that create, make available, or deploy technologies. For agentic AI, the existing legal framework defines harms and risks and provides remedies, but it doesn’t necessarily address who will shoulder the responsibility for that risk.

Many different entities are likely to be involved in the development, deployment, and use of agentic AI. How those entities allocate risk is—and will continue to be—a hotly negotiated issue. In addition, the black-box nature of agentic AI may make it difficult to understand factually what happened if harm occurs.

For example, say a consumer uses an agentic AI tool that exceeds the consumer’s instructions and purchases out-of-scope products or too many of a given product. Does the consumer, the merchant, the company offering the agentic AI shopping tool, or the upstream developer of the underlying AI model bear the risk?

In the first instance, the answer to who is responsible for harms in such situations (and others) will likely be found in the contracts between the entities involved in developing and deploying the agentic AI technology. And regulators concerned about harms to downstream consumers have tools available to encourage or require companies to include protective terms in those contracts, to develop internal processes for vetting contractual partners, or both.

Given how quickly technological developments outpace the law, it makes little sense for lawmakers to craft a new “law of agentic AI”—particularly while the technology is still nascent. Instead, many of the key questions, including how entities across the agentic AI tech stack will allocate existing legal risk among themselves, are matters for private contractual negotiation, not legislation.

This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law, Bloomberg Tax, and Bloomberg Government, or its owners.

Author Information

Erin K. Earl is a partner in Perkins Coie’s privacy and data security group.

Rebecca S. Engrav is co-chair of Perkins Coie’s AI and machine learning practice.

Sumedha Ahuja is co-chair of Perkins Coie’s AI and machine learning practice.


To contact the editors responsible for this story: Max Thornberry at jthornberry@bloombergindustry.com; Daniel Xu at dxu@bloombergindustry.com
