ANALYSIS: How to Keep AI From Upsetting Counsel–Client Harmony

Oct. 16, 2025, 1:41 PM UTC

The marriage between corporate legal departments and law firms has always been founded on trust—a trust reinforced by contracts, billables, and shared professional duties. But the rise of generative artificial intelligence has introduced a new challenge to that trust: the question of accountability for a technology that can be easily misused by lawyers, with costly results for their clients.

The challenge is multifaceted. Law firms must have and enforce robust AI governance frameworks of their own, but they also must ensure that their lawyers comply with the specific AI usage standards, policies, and requirements set forth by their clients. Corporate legal departments, meanwhile, must determine whether oversight responsibility for AI should fall primarily on law firms to self-police or on the corporations themselves, through active monitoring and enforcement.

Acceleration of AI-Related Risks

Recent results from Bloomberg Law’s State of Practice Survey show that law firms and in-house legal departments are increasingly adopting AI into their workflows. According to the survey data, 63% of lawyers now use generative AI in some capacity. Half of the surveyed lawyers said that their organization is already developing AI systems internally, while 20% said that their organization has plans to develop or purchase AI in the future.

As adoption accelerates, legal teams are sprinting to develop AI governance policies to effectively balance the technology’s benefits against its risks, which include threats to data privacy and cybersecurity, violations of regulatory and professional duties, and ethical pitfalls.

Amid this race, a worrying trend of unsanctioned AI use has emerged. A survey conducted in June by security company CalypsoAI asked US-based office staff about their AI usage behaviors in the workplace. The results revealed that more than 50% of workers were willing to disregard company AI governance policies if an AI tool made their job easier. What’s more, 28% of surveyed employees admitted to submitting proprietary company information into AI tools to complete a task, despite company policies. One-quarter of respondents also admitted to using AI without verifying whether the tools were permitted by company policy. The trend was common even among employees in highly regulated industries, such as healthcare and financial services.

While this survey was not specific to lawyers, the results reinforce a critical point: AI-related risks often stem from internal actors, not solely external threats. The growing number of sanctions and reported instances of AI misuse by legal professionals, despite well-established rules of ethical and professional conduct, makes these findings equally relevant and instructive for the legal sector.

How It Goes Wrong

A recent Arizona federal district court case highlights the nightmare scenario that unfolds when the law firm in the attorney-client marriage strays from its AI vows. The plaintiff’s law firm hired an outside contract attorney with “strong qualifications” to author the plaintiff’s opening brief. The contract attorney acknowledged receipt of a court announcement, forwarded by the plaintiff’s law firm, warning attorneys about the use of AI in drafting filings. The announcement also reminded attorneys of their Rule 11 obligation to ensure that legal contentions made to the court are supported by existing law.

However, in clear contravention of Rule 11 and established firm policies, plaintiff’s counsel signed and filed the opening brief drafted by the contract attorney without properly verifying citations or conducting a thorough review. The plaintiff’s brief contained 19 citations, 12 to 14 of which were misleading or unsupported AI-generated fabrications. The court responded by striking the filing, revoking the counsel’s pro hac vice admission, and referring the signing attorney to disciplinary authorities.

As this case shows, merely having an AI policy is not enough; there must also be proper oversight and compliance enforcement. The pursuit of AI efficiency at the expense of ethical duties and policy guidelines, whether by law firms or corporate legal teams, can quickly lead to sanctions and malpractice claims and trigger reputational damage, financial loss, and the erosion of trust.

To Have and to Hold Accountable

Corporate legal teams, under pressure from boards, regulators, and consumers to ensure responsible AI use across the business enterprise, are demanding greater transparency about external counsel’s use of AI. Law firms, in parallel, are expected to leverage AI to reduce legal costs as a condition of retaining corporate clients’ business. These twin pressures will inevitably lead to a proliferation of AI in legal workflows, and with it a surge in “client AI policy” violations.

Even though the AI-related challenges are similar for both parties, the obstacles each side faces are markedly different, and they contribute to the difficulty of maintaining a trusting relationship. Corporate clients, for instance, often have limited visibility into law firm operations and cannot monitor firm compliance in real time. They also must avoid overburdening firms with conflicting requirements while balancing cost control against compliance assurance, which makes enforcement difficult. On the other side of the equation, law firms must keep abreast of varying AI expectations and requirements across their client base, and client requests to pre-approve all AI-generated work can cause operational inefficiencies and delays.

To illustrate the complexity of AI governance in practice, consider a scenario where an in-house team demands that the firm submit all AI-generated work product for pre-approval. The firm pushes back, citing workflow disruption. The standoff delays a critical filing. Or, consider a law firm that uses a generative AI tool to draft a litigation memo without disclosing its origins. The client later discovers this and claims a breach of their AI policy, but the firm argues that the tool was used only for formatting and posed no risk. What might the balance between oversight and operational autonomy look like in such situations?

In the absence of clear industry standards, legal teams and law firms alike must recognize AI governance and policy enforcement as a shared responsibility. Corporate legal departments, in particular, have an obligation to safeguard their organization’s financial resources and to ensure that spending on outside counsel does not lead to reputational harm or increased legal exposure. Their purchasing power carries both the muscle to enforce compliance and the responsibility to protect the organization’s financial interests by limiting further liability. But law firms have a lot at stake as well: They have professional duties to uphold, and they risk judicial sanctions, malpractice claims, loss of client confidence, and reputational ruin.

What Law Firms Can Do

To build trust, there are actions that both sides can take that will improve their accountability in the time of AI.

Here are eight ways that law firms can meet their challenges and put their clients at ease.

  • Embed AI policies into workflows, onboarding materials, and trainings. Law firms can, for example, design AI Playbooks that help lawyers quickly assess compliance (a machine-readable sketch of such a playbook appears after this list) and include:
    • structured intake and classification rules for client materials and role-based permissions and access controls to prevent unauthorized submission of sensitive data;
    • pre-use approvals with defined sign-off requirements, including client consent protocols before client data is processed with AI;
    • escalation procedures for when unusual or novel use cases fall outside standard policy;
    • a living inventory of approved AI platforms, including their permitted use cases; and
    • verification protocols.
  • Invest in monitoring systems that send automated alerts for attempted uploads of restricted content into AI platforms (see the filtering sketch after this list).
  • Continuously monitor and audit, in real time, AI tools, research logs, and AI-assisted work product to catch compliance gaps early.
  • Embed version tracking systems that automatically keep a document history log to maintain audit trails (a tamper-evident logging sketch follows this list).
  • Appoint technically proficient compliance officers, AI governance committees, or designated “AI champions” to monitor AI use, approvals, and exceptions.
  • Incorporate AI terms into engagement letters with clients, certifying which AI tools the firm uses, what safeguards exist, and how clients can audit the process.
  • Ensure dedicated oversight and incident response protocols, and put in place clear, immediate reporting channels for any incident caused by AI use. Firms can integrate legal-specific or practice-management AI tools that provide built-in compliance protections.
  • Partners can provide clients with quarterly certifications on AI use or periodic compliance reports, giving clients assurance that policy adherence is not just aspirational but verifiable.
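
To make the playbook idea concrete, here is a minimal, machine-readable sketch in Python of how a firm might encode approved tools, permitted use cases, data classifications, and role-based limits. All tool names, roles, and classification labels are hypothetical placeholders, not a reference implementation.

    # Minimal, hypothetical sketch of a machine-readable AI playbook.
    # Tool names, roles, and classification labels are illustrative only.

    APPROVED_TOOLS = {
        "summarizer-v1": {"permitted_uses": {"summarize_public_filings"},
                          "max_data_class": "public"},
        "draft-assist":  {"permitted_uses": {"format_citations", "draft_memo"},
                          "max_data_class": "confidential"},
    }

    # Data classifications, ordered from least to most sensitive.
    DATA_CLASSES = ["public", "confidential", "privileged"]

    # Role-based limits: the most sensitive data class each role may submit.
    ROLE_LIMITS = {"paralegal": "public", "associate": "confidential",
                   "partner": "confidential"}  # privileged data is never submitted

    def is_use_permitted(role: str, tool: str, use: str, data_class: str) -> bool:
        """Return True only if the tool is approved, the use case is listed,
        and neither the tool's nor the role's data ceiling is exceeded."""
        spec = APPROVED_TOOLS.get(tool)
        if spec is None or use not in spec["permitted_uses"]:
            return False
        rank = DATA_CLASSES.index
        ceiling = min(rank(spec["max_data_class"]),
                      rank(ROLE_LIMITS.get(role, "public")))
        return rank(data_class) <= ceiling

    # A paralegal may summarize public filings with an approved tool...
    assert is_use_permitted("paralegal", "summarizer-v1",
                            "summarize_public_filings", "public")
    # ...but no role may run privileged material through any tool.
    assert not is_use_permitted("associate", "draft-assist", "draft_memo",
                                "privileged")

A structure like this lets the escalation procedure kick in automatically: any request the function rejects is, by definition, a case for human review rather than quiet workaround.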
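
The automated-alert item can likewise be illustrated with a simple pre-submission filter. In this sketch, the restricted-content patterns and the logging call are placeholders for whatever detection rules and alerting channel a firm actually deploys.

    # Hypothetical pre-submission filter: block and alert on restricted content
    # before a prompt leaves the firm's environment. Patterns are placeholders.
    import logging
    import re

    logging.basicConfig(level=logging.INFO)
    log = logging.getLogger("ai-usage-monitor")

    RESTRICTED_PATTERNS = [
        re.compile(r"\bprivileged\s+and\s+confidential\b", re.IGNORECASE),
        re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # US Social Security number shape
        re.compile(r"\bmatter\s+no\.?\s*\d+", re.IGNORECASE),  # internal matter IDs
    ]

    def screen_prompt(user: str, prompt: str) -> bool:
        """Return True if the prompt may be sent; log an alert and block otherwise."""
        for pattern in RESTRICTED_PATTERNS:
            if pattern.search(prompt):
                # In production, this alert would feed a compliance queue or SIEM.
                log.warning("Blocked upload by %s: matched %s", user, pattern.pattern)
                return False
        return True

    if not screen_prompt("jdoe", "Summarize this privileged and confidential memo"):
        print("Submission blocked pending compliance review.")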
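
As for version tracking, one way a document history log can support a reliable audit trail is to chain entries by hash, so any after-the-fact alteration of the record is detectable. The sketch below is illustrative only; the field names and action labels are assumptions, not a standard.

    # Hypothetical tamper-evident audit trail: each entry hashes the previous
    # entry, so any later alteration of the history breaks verification.
    import hashlib
    import json
    from datetime import datetime, timezone

    class AuditTrail:
        def __init__(self):
            self.entries = []

        def record(self, author: str, action: str, doc_version: str) -> None:
            prev_hash = self.entries[-1]["hash"] if self.entries else "genesis"
            body = {
                "ts": datetime.now(timezone.utc).isoformat(),
                "author": author,
                "action": action,  # e.g., "ai_draft", "human_review"
                "doc_version": doc_version,
                "prev": prev_hash,
            }
            body["hash"] = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            self.entries.append(body)

        def verify(self) -> bool:
            """Recompute the chain; False means the log was altered after the fact."""
            prev = "genesis"
            for entry in self.entries:
                expected = dict(entry)
                claimed = expected.pop("hash")
                recomputed = hashlib.sha256(
                    json.dumps(expected, sort_keys=True).encode()).hexdigest()
                if expected["prev"] != prev or recomputed != claimed:
                    return False
                prev = claimed
            return True

    trail = AuditTrail()
    trail.record("draft-assist", "ai_draft", "v1")
    trail.record("jdoe", "human_review", "v2")
    assert trail.verify()

A log of this kind also gives the quarterly certifications described above something concrete to certify: the chain either verifies or it does not.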

What Corporate Legal Departments Can Do

To ensure compliance on both sides of the relationship, corporate legal departments and in-house teams can consider these seven actions.

  • Require outside counsel to disclose and certify all AI use, list approved tools, and agree to audits or periodic review of compliance documentation in engagement letters or contracts.
  • Request copies of law firms’ AI policies, training protocols, and AI oversight committee structure.
  • Request specific quality assurance checklists or similar documentation for any AI-generated deliverable relevant to their matters.
  • Consider requesting external audits or certifications where high risk or regulatory sensitivity exists.
  • Build in reporting requirements for any AI-related error or data breach, with remedies specified in advance.
  • Implement an escalating scale of corrective actions for AI-related violations, beginning with fee write-offs and then moving to acts such as suspending the firm from panels.
  • Maintain an open and continuous dialogue with outside counsel on AI use, risks, and mitigation steps.

AI governance is no longer a theoretical concern but an urgent issue shaping the client-counsel relationship, and it requires practical solutions from both sides. Clients and law firms must collaboratively address the complexities of AI compliance and construct a new model of accountability that reflects the realities of an evolving digital age.

For corporate legal departments, this shift means reframing compliance as an essential ethical and operational priority, not just a procedural formality. This involves codifying AI compliance expectations, integrating these standards into core functions, and leveraging purchasing power to enforce compliance.

For law firms, it requires operationalizing AI policies and embedding them into every aspect of matter management, implementing sophisticated risk mitigation tools and playbooks, regularly updating workflows to reflect best practices, and providing clients with verification reports.

By taking these steps toward accountability, counsel and clients alike can build and maintain trust as they demonstrate their ongoing commitment to compliance and responsiveness, even as AI continues to evolve—for better and for worse.

Bloomberg Law subscribers can find related content in our AI for Legal Ops & Firm Management resource.

To contact the reporter on this story: Linda Masina at lmasina@bloombergindustry.com

To contact the editor responsible for this story: Robert Combs at rcombs@bloomberglaw.com
