Use Agentic AI Thoughtfully for Greatest Benefit in Legal Ops

December 8, 2025, 9:30 AM UTC

Most in-house attorneys are familiar with predictive artificial intelligence and generative AI. However, agentic AI represents the next evolutionary step in AI technology.

Unlike generative AI, which typically is confined to a chat interface, agentic systems can autonomously develop plans, retrieve data, use tools, and execute tasks across integrated applications.

Navigating the landscape of agentic AI can be challenging. The term may evoke apocalyptic imagery from science fiction, in which humans are subjugated by robots with catastrophic results.

Complicating matters further is the overuse and misapplication of the term "agentic AI" in marketing materials. Many tools branded as "agentic" are, in practice, advanced automation systems that lack true autonomy. This creates confusion for legal teams attempting to evaluate risk accurately and can slow or prevent adoption.

The current handwringing around agentic AI resembles the general sentiment in early 2023 regarding generative AI: distrust, skepticism, and blanket prohibitions. Yet legal departments, perpetually constrained by limited resources yet asked to do more with less, stand to benefit significantly from thoughtful automation strategies.

While more than 30% of US legal professionals now use generative AI to support their work, with even higher adoption among respondents at law firms with more than 50 lawyers, this technology has its limitations. Research conducted earlier this year shows that nearly 80% of companies report that their adoption of generative AI has failed to materially impact their bottom line.

Agentic AI, however, has the potential to deliver return on investment in legal operations by moving beyond drafting documents to autonomously executing multistep workflows, such as contract review, compliance checks, and case management. Agentic AI can build on existing capabilities to drive efficiency gains where generative AI alone can’t.

To fully leverage these benefits, legal departments should look to build, buy, or partner with agentic AI systems. At the same time, agentic AI presents distinct legal risks compared to generative AI because it doesn't just create content; it takes autonomous actions in the real world.

This raises questions around liability and contract formation that go beyond traditional concerns involving intellectual property, privacy, and hallucinations. These risks include new regulatory compliance obligations tied to automated and consequential decision-making, new data security vulnerabilities, and potential liability for unauthorized, unexpected, or inadequately supervised autonomous actions.

This creates a paradox for legal departments. The staff and budget constraints that make agentic AI assistance most valuable also limit their capacity to properly understand the technology and pilot appropriate tools. The answer lies in understanding the autonomy spectrum and asking the right questions, so risk assessments are grounded in the actual functioning of these tools rather than their labels.

To assess the potential risk associated with an agentic AI system, legal teams should anchor evaluations in two fundamental questions.

How much autonomy does the agent have? Does it merely suggest actions, or can it execute multistep workflows across integrated systems without human approval?

How much control does the human retain? Are there guardrails, override mechanisms, and audit trails? Can humans intervene before irreversible actions occur?

These questions enable legal teams to map a system’s position on the autonomy spectrum. This ranges from assistive AI, which is characterized by low autonomy and high human control, to fully agentic AI, which is marked by high autonomy and minimal human oversight.

As an agentic AI system’s autonomy increases, particularly its ability to execute actions without human intervention, the potential magnitude and scope of adverse outcomes grow disproportionately. A system that merely recommends contract terms presents minimal risk; one that autonomously executes binding agreements introduces substantial liability exposure.

By understanding where a tool sits on this spectrum, legal departments can calibrate their risk management strategies, allocate resources more effectively, and avoid both overregulation of low-risk tools (your organization’s AI-powered email spam filter) and underestimation of high-risk ones (your organization’s AI interview bot).
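For teams that want to operationalize this mapping, the sketch below illustrates one way the two fundamental questions could be turned into a simple triage rubric. It is a minimal, hypothetical example; the field names, scoring scale, and thresholds are assumptions for illustration, not a standard drawn from this article or any regulation.

```python
# Hypothetical sketch: encoding the two-question assessment as a triage rubric.
# All names, scales, and thresholds are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class AgentAssessment:
    # Question 1: how much autonomy does the agent have?
    # 0 = only suggests actions, 3 = executes multistep workflows
    # across integrated systems without human approval
    autonomy: int
    # Question 2: how much control does the human retain?
    # 0 = no guardrails or override, 3 = guardrails, overrides,
    # audit trails, and review before irreversible actions
    human_control: int


def risk_tier(a: AgentAssessment) -> str:
    """Map an assessment onto a coarse position on the autonomy spectrum."""
    exposure = a.autonomy - a.human_control
    if exposure <= 0:
        return "assistive: low autonomy, high control - routine review"
    if exposure == 1:
        return "semi-autonomous: heightened review and monitoring"
    return "fully agentic: high autonomy, low control - full AI risk assessment"


# Example: a tool that executes contract workflows with limited human oversight
print(risk_tier(AgentAssessment(autonomy=3, human_control=1)))
```

A rubric like this does not replace legal judgment; it simply forces the evaluation to document, in consistent terms, how autonomous a tool actually is and what controls remain, so low-risk and high-risk systems are treated differently.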

This mapping is also critical for AI risk assessments, which form a core component of any responsible AI governance program and are increasingly mandated under the growing body of data privacy and AI-specific laws.

By anchoring evaluations in the autonomy spectrum and asking fundamental questions about agent autonomy and human control, legal teams can make informed decisions that balance innovation with risk management.

The future of legal operations doesn't mean avoiding these technologies; it means thoughtfully integrating them where they can deliver the greatest value.

This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law, Bloomberg Tax, and Bloomberg Government, or its owners.

Author Information

Goli Mahdavi is a partner at Bryan Cave Leighton Paisner, a founding member of the firm’s AI working group, and co-leader of the AI service line.


To contact the editors responsible for this story: Melanie Cohen at mcohen@bloombergindustry.com; Jada Chin at jchin@bloombergindustry.com
