Over the past year, we trained more than 3,000 lawyers from in-house legal teams and law firms through a generative artificial intelligence upskilling program built around real legal work, including contracting, compliance, and litigation. A clearer picture emerged of what “AI fluency” looks like in legal—and what still holds many teams back.
Early insights from Factor’s 2026 Benchmarking survey of 200 in-house and law firm professionals reveal a persistent gap between access and confidence. More than 80% of legal teams report broad access to AI tools, but less than a third say they are very confident using AI for legal work.
In practice, that gap shows up as uneven adoption, isolated experimentation, and lawyers who are unsure when AI fits, how to verify outputs, and how to use it responsibly on real matters.
That gap is why we launched this training initiative across regions and practice groups spanning in-house teams and law firms. Organizations included Atlassian Corp., Workday Inc., Nasdaq, and CrowdStrike Holdings Inc. Participants completed pre- and post-program assessments designed to measure judgment: knowing where AI helps, where it doesn’t, and how to supervise it in day-to-day legal work.
The pattern was consistent across cohorts. Lawyers didn’t just learn a set of prompts. They built intuition, the ability to spot where AI could help without being told step by step, and the discipline to verify outputs before relying on them. The data and feedback point to six lessons that help explain why AI adoption in legal remains uneven and what changes behavior at scale.
1. Use cases must be concrete. Engagement rose sharply when training was anchored on real legal work lawyers recognized immediately as relevant to their roles, using familiar documents and tasks such as drafting a first-pass nondisclosure agreement, redlining clauses, and analyzing regulatory guidance. Abstract messaging about AI “transforming legal” did far less to change behavior.
Before training, many participants framed AI as useful mainly for summaries or rewriting. After practicing on realistic documents, one participant at a global bank described using AI to turn an unstructured client request into a first-pass legal analysis, helping them identify key issues and pressure-test their initial response before refining it.
Relevance drove adoption. When lawyers saw AI working on problems they already cared about, skepticism fell away.
2. Trust comes from knowing AI’s limits. One of the most consistent surprises was how limited many lawyers’ understanding was of where generative AI struggles in practice, particularly with hallucinations and numeric reasoning. While participants knew outputs needed to be checked, few had a clear sense of when models were most likely to fail or how to interrogate those weaknesses effectively.
Confidence increased when limits were made explicit, and lawyers were given practical ways to verify outputs. Instead of treating AI as something to either trust or avoid, participants learned how to supervise it. They learned to cross-check assumptions, review outputs critically, and build verification into their workflows.
This resulted in more disciplined use rather than looser risk tolerance. Lawyers remained accountable, and trust became a function of judgment.
3. Most lawyers are underusing AI’s capabilities. Time savings on routine work were real, but the bigger shift came when lawyers began to use AI as a thought partner rather than a basic drafting assistant. Many participants arrived thinking of AI as an “intern,” useful for first drafts and summaries. Hands-on practice expanded that view.
Lawyers began using AI to brainstorm negotiation positions, identify weak points in draft agreements, and structure analysis around competing arguments. The gains weren’t only about speed; they were about improving how legal problems are framed and tested.
4. Leadership signaling it’s safe to experiment drives adoption. Adoption accelerated most where leaders clearly signaled that experimentation was expected rather than optional or risky.
In teams where partners or general counsel endorsed hands-on use, uptake was higher and more consistent. Where leadership stayed silent, hesitation lingered even when tools were available.
Psychological safety mattered because the fastest way to build capability is practice, and practice requires permission to try, get it wrong, and iterate. In legal, that represents a cultural shift as much as a technical one.
5. Training is a journey. One-off sessions can spark interest, but sustained capability only emerges when teams treat AI learning as ongoing.
In many teams, the instinct is to jump quickly to advanced use cases, hackathons, or prompting tips. But without a shared understanding of what generative AI is, where it struggles, and how it fits into legal judgment, those tactics rarely stick.
The strongest progress came from pattern recognition rather than memorizing isolated use cases. Lawyers learned how to translate small examples into broader applications and adapt as tools evolved, creating shared language and standards that can scale.
6. A new “AI-fluent” lawyer profile is emerging. Across cohorts, these lawyers developed an intuition for where AI does and doesn’t fit, even though few arrived with specific technical expertise.
These lawyers learned how to think about legal work when AI is available and were comfortable experimenting without fear. They integrated AI into workflows rather than treating it as an extra step.
Most importantly, they treated AI as support for judgment rather than a substitute for it. There’s a practical implication for organizations trying to scale adoption: Identify these lawyers early, give them room to model good practice, and reward them for raising the baseline across the team.
Generative AI is no longer a niche experiment in legal; it’s becoming a core professional competency. Training at scale has allowed us to move beyond anecdotes and see what reliably changes behavior: practical use cases, a clear understanding of limits, leadership signals that reduce fear, and learning paths that build durable judgment over time.
The next phase is already visible. As confidence grows, teams want to move from individual experimentation to consistent ways of working. The organizations that succeed will focus less on tool access and more on capability and the human skills that allow AI to be used productively, safely, and at scale.
This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law, Bloomberg Tax, and Bloomberg Government, or its owners.
Author Information
Alex Denniston is director of insights and innovation at Factor, where he focuses on generative AI’s impact on legal operations.
Peter Duffy is founder and CEO of TITANS, where he advises leading law firms and corporates on AI strategy, technology investment decisions, and large-scale transformation.