The legal field is no longer just watching artificial intelligence from the sidelines. Eighteen months ago, concerns about AI errors and confidentiality were common, but now AI is changing the profession’s core systems. The American Bar Association’s Task Force on Law and Artificial Intelligence focuses on this shift in its “Year 2 Report.”
The report marks the end of the task force, which will pass its work on to the ABA Center for Innovation. This change reflects the shift in the conversation. The question is no longer whether AI should be used in law, but how to manage its role in the justice system.
Agentic Workflows
A year ago, state bars rushed to issue ethics alerts about the most obvious risks of generative AI, such as fake citations and accidental loss of privilege. Recent reviews show that regulations are struggling to keep up with fast-changing technology. The task force notes that while early AI use focused on summarizing and extracting documents, the field is now moving toward “agentic” AI, which can handle complex tasks with little human help.
For lawyers, this shift means moving past simple chat tools to more automated workflows. That could make work far more efficient, but it also raises tough questions about supervision. Lawyers are increasingly expected to spot AI misuse by others, yet the complexity of these systems makes that hard to do without strong technical skills.
Widening Digital Divide
One of the report’s more serious findings is the growing divide in the legal field. Big firms can afford secure, advanced AI systems and have staff to review them. By contrast, solo lawyers and small firms may not be able to pay for these tools and might have to use less reliable, consumer-level products that don’t offer the same protections.
This gap isn’t just about competition; it’s also a regulatory problem. If state bars require certain AI skills or security standards, they could exclude many lawyers who can’t afford the best legal technology.
Judicial Integrity
The judiciary is perhaps the most vulnerable frontier. The report details the newly minted “Guidelines for U.S. Judicial Officers Regarding the Responsible Use of Artificial Intelligence,” which lean heavily on a single maxim: AI may assist, but not replace, judicial judgment.
However, the threat to the courts extends beyond the judge’s bench. The rise of AI-generated disinformation and deepfakes presents a serious challenge to the rules of evidence. U.S. Supreme Court Chief Justice John Roberts has already flagged this as a direct danger to public trust.
If a litigant can plausibly claim that an incriminating video or audio recording is a deepfake, the burden on judges to authenticate evidence becomes a Herculean task. The task force suggests that our existing rules of evidence may be insufficient to handle a world where seeing is no longer believing.
Access to Justice
The report does offer some hope for access to justice. Generative AI is already used in over 100 legal aid projects, helping people who represent themselves with complex paperwork. Programs such as Everlaw’s “Everlaw for Good” initiative show how companies and public groups can work together to close the gap.
Still, the ABA’s report adds a warning to this optimism. If the best AI tools are too expensive, the justice gap won’t close; it will just look different. The task force says that making AI affordable for the access-to-justice community should be a main goal, not an afterthought.
Educational Lag
For those entering the profession, the landscape is even more volatile. While over 80% of law schools now offer hands-on AI labs or clinics, the curriculum is in a state of perpetual obsolescence. The report quotes educators who admit that half of what is taught in a 1L AI course may be irrelevant by the time the student takes the bar exam.
This leads to a key question for state bars: How can you test for competence in a field that changes every six months? Some schools—such as Vanderbilt and Stanford—now require AI certifications, but there’s still no single national standard.
Governance and Liability
One of the biggest challenges the report examines is assigning responsibility when things go wrong. If an AI-driven process causes a mistake or violates someone’s rights, who is at fault: the developer, the data provider, or the lawyer who approved the work?
The task force believes these questions will be answered incrementally through court decisions, not by broad federal legislation. But this wait-and-see approach could be risky. As AI systems act more autonomously, traditional concepts of fault and intent become harder to apply.
The New Normal
The task force’s dissolution doesn’t mean the work is done. It just means the job is now too big for one group. By assigning the task to the Center for Innovation, the ABA is indicating that AI is no longer a special project. It’s now an integral part of everyday legal work.
For a modern lawyer, AI has become a core utility that requires immediate, rigorous governance. The transition from “whether” to “how” is complete. The profession must figure out how to live with the consequences of that integration.
This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law, Bloomberg Tax, and Bloomberg Government, or its owners.
Author Information
Joe Stephens is director of legal solutions at Steno and chief public defender and clinical professor at Texas Tech University School of Law.