Bloomberg Law
Sept. 26, 2017, 1:39 PM

What Are the Ethical Implications of Artificial Intelligence Use in Legal Practice?

Artificial intelligence has long been a source of fascination. However, interest in the application of AI to legal services has spiked in recent years. The legal trade media has published a series of stories debating whether AI will end the legal profession as we know it, whether regulators should take steps to safeguard the public and the profession from this future, or whether regulators should take steps to facilitate its arrival on the scene.

With no definitive answer to these questions on the immediate horizon, AI technology continues to progress, and we step ever-closer to AI no longer being an academic issue, but a reality. As we will see, several ethical duties are implicated by the use of AI in the law. This article will examine the principal ethical duties involved and discuss how lawyers can prepare themselves for compliance in an AI future.

AI Definitional Concepts and Industry Posture

We begin with a baseline definition of AI. Broadly, AI is the ability of a machine to perform tasks that normally require the human mind. AI seeks to use an automated computer-based means to process and analyze large amounts of data and reach rational conclusions, much as the human mind does. It is projected that AI will eventually outpace the human mind in terms of speed, accuracy and consistency.

That day, however, has not yet arrived. Today, sophisticated software can help lawyers cull through massive amounts of electronic information in e-discovery, assist with creating forms, compare contract language in contract review, facilitate due diligence in transactions, assist in legal research in natural language, predict outcomes based on comparisons with past results, and even contribute to sophisticated brief writing. Today’s software can even learn: as a human user interacts with initial results identifying what is helpful and what is not, the computer then takes that information and further refines the data set to return an even more pinpointed result. That cycle can occur more than once, and the results improve over time.
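The feedback cycle described above can be illustrated with a toy sketch in Python. This is a hypothetical, simplified illustration of relevance feedback generally (a Rocchio-style weight update), not any vendor's actual algorithm: documents are ranked by weighted term overlap, a reviewer marks results as helpful or unhelpful, and the term weights are adjusted so the next pass ranks similar documents higher.

```python
# Toy illustration of a relevance-feedback loop (hypothetical, not a
# real product's algorithm): rank documents, collect reviewer feedback,
# refine term weights, and rank again.
from collections import Counter

def score(doc, weights):
    """Sum the learned weight of each term appearing in the document."""
    return sum(weights.get(term, 0.0) for term in doc.split())

def refine(weights, helpful_docs, unhelpful_docs, step=1.0):
    """Boost terms drawn from helpful documents; penalize terms
    drawn from unhelpful ones (a Rocchio-style update)."""
    updated = dict(weights)
    for doc in helpful_docs:
        for term, count in Counter(doc.split()).items():
            updated[term] = updated.get(term, 0.0) + step * count
    for doc in unhelpful_docs:
        for term, count in Counter(doc.split()).items():
            updated[term] = updated.get(term, 0.0) - step * count
    return updated

docs = [
    "breach of contract damages",
    "contract formation offer acceptance",
    "patent infringement damages",
]
weights = {"contract": 1.0, "damages": 1.0}  # initial query terms

ranked = sorted(docs, key=lambda d: score(d, weights), reverse=True)
# The reviewer marks the two contract documents helpful, the patent one not.
weights = refine(weights, helpful_docs=docs[:2], unhelpful_docs=docs[2:])
reranked = sorted(docs, key=lambda d: score(d, weights), reverse=True)
```

After one round of feedback, the patent document, which initially scored as well as a contract document because it shares the term "damages," drops to the bottom of the ranking. Repeating the cycle sharpens the result set further, which is the "learning" behavior described above.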

Some describe such software as AI, even though it does not yet fully think like a human. For the purposes of our discussion, these distinctions do not make a large analytical difference. The ethical challenges a practitioner faces may become more contextually complicated when we have invented machines that can equal or surpass human thinking, but the underlying policy and concepts are likely to stay the same, at least in the immediate future.

The Duties of Competence and Independent Judgment

A lawyer’s duty of competence is set forth in Model Rule 1.1, which states:

A lawyer shall provide competent representation to a client. Competent representation requires the legal knowledge, skill, thoroughness and preparation reasonably necessary for the representation.

This black letter competence rule has existed in its current form since 1983. The practice of law, however, has changed dramatically since 1983. Computers and the internet have revolutionized the way lawyers practice. Today, mobile devices offer multiple methods for access to tools to practice law on the go. Legal research that used to take days can now be accomplished in a fraction of the time; computer networks and files can be accessed remotely; large amounts of information are transmitted through the internet; and communications are routinely handled electronically. All of these factors have reduced the overall number of lawyers and staff needed to do the job. In a sense, the legal industry has already undergone one massive technology transformation.

In recognition of the impact of technology on the practice of law, in 2012, the ABA adopted the technology amendments to the Model Rules of Professional Conduct. Comment 8 to Model Rule 1.1 now reads:

To maintain the requisite knowledge and skill, a lawyer should keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology, engage in continuing study and education and comply with all continuing legal education requirements to which the lawyer is subject.

(Emphasis added). Today, 27 states have adopted amended Comment 8. Erika Kubik, Tennessee Becomes 27th State to Adopt Ethical Duty of Technology Competence (Mar. 22, 2017).

These amendments demonstrate that while the context in which we provide legal services changes over time, the core ethical duty of competence has remained the same. The specific form of technology a lawyer chooses to use has no effect on the lawyer's ethical duties with respect to technology. This approach is necessary, given the rapid pace of technological change. Tying competence regulations to specific forms of technology risks the rule becoming anachronistic very quickly, if not immediately. It is the steps that lawyers must take to meet this same competency standard that will evolve as new technologies develop.

Further, Model Rule 2.1 requires that lawyers exercise independent judgment in representing a client.

Due to its nature, AI is likely to be significantly more transformative than prior forms of technological innovation, and this raises unique challenges for practitioners. As Ed Walters, CEO of Fastcase, says, humans often overestimate what a machine can do, and have historically held inflated expectations of what AI should be able to do, even when the technology is not there yet. Ed Walters, Sorting Through the Hype: A Practical Look at the Impact of Artificial Intelligence on the Legal Profession, Legal Malpractice & Risk Management Conference, Mar. 3, 2017, Chicago, Illinois. When faced with a largely invisible process going on within the machine, lawyers can easily fall into over-reliance on the technology and a false sense of security that the software will get it right. And herein lies the ultimate danger of the failure of competence in AI use.

Take, for example, the lawyer who enters the wrong data into a machine or asks the wrong question. A wrong answer will result. But will the lawyer know the answer is wrong? The wrong answer and the right answer can look and feel the same at the endpoint when the software returns it.

The duty of technology competence means that lawyers who use technology must understand it well enough to be confident that they are using it in a way that complies with their ethical duties, and that the advice given to the client is the product of the lawyer's independent judgment. This obligation does not require a lawyer to become an IT expert in order to use technology in the practice of law. Competency is determined by many factors, including the lawyer's general experience and the lawyer's training and experience in the field in question. Model Rule 1.1, Comment 1. Sometimes, this is adequate. Comment 1 also permits a lawyer to acquire the necessary learning, or to associate or consult with experts.

Whatever factors the lawyer relies on to meet competency will be sufficient if the result is that the lawyer is able to recognize anomalies in results, test answers, ask a different question, adjust the data, or make other refinements where required. The lawyer needs to understand enough to recognize what she knows or what she does not know, and then take necessary action to fill any gaps, so that the ultimate legal advice given is ethically compliant and independently hers. These duties cannot be abdicated to a machine.

The Duty of Confidentiality

The other core ethical duty of a lawyer is the duty of confidentiality. Model Rule 1.6 states:

  • (a) A lawyer shall not reveal information relating to the representation of a client unless the client gives informed consent, the disclosure is impliedly authorized in order to carry out the representation or the disclosure is permitted by paragraph (b).
  • …
  • (c) A lawyer shall make reasonable efforts to prevent the inadvertent or unauthorized disclosure of, or unauthorized access to, information relating to the representation of a client.

Model Rule 1.6(c) was another technology-driven amendment. Not all AI will need to interface with client confidential information. When it does, a lawyer must make reasonable efforts to protect client confidential information from inadvertent or unauthorized disclosure or access.

The duty of confidentiality requires due diligence in vendor selection, review of how the AI in the system works, and review of what security measures are in place to protect client confidentiality. In a world of increasing cyberthreats to lawyers, the risk of unauthorized or inadvertent access to, or disclosure of, client confidential information must always be a paramount concern in every lawyer's mind when using technology. (Note: the specifics of what a lawyer should do to address cybersecurity are beyond the scope of this article.)

The duty of confidentiality would also require attention to any unique terms in a vendor’s contract that may threaten client confidentiality, such as: vendor claims of ownership or possession of client data; what happens to client information after termination of the business relationship with the vendor; and what triggers termination. Just as we would not hand over client confidential information to another human vendor without addressing how that vendor handles the information going forward (even after the relationship ends), lawyers should not do so with a machine-based vendor.

Equally important, lawyers must train themselves on how to use security measures within the software so that security protections that are available are actually employed. It is of little use for software to have complex security capabilities if a lawyer never implements them.

The Duty to Supervise

A partner in a law firm, and a lawyer who individually or together with other lawyers possesses comparable managerial authority in a law firm, is required to make reasonable efforts 1) to ensure that the firm has in effect measures giving reasonable assurance that all lawyers and staff in the firm conform to the ethics rules; and 2) to ensure that lawyers and staff under their direct supervision conform to the ethics rules. Model Rules 5.1, 5.3.

Reasonable measures in this context encompass reasonable efforts and due diligence to select appropriate software, hardware and vendors consistent with a lawyer’s ethical obligations, and then reasonable efforts to train lawyers and staff in safe and competent use. This can be one of the most challenging of management’s responsibilities. Training is non-billable, new technology can be intimidating, and lawyers are busy. Lawyers should consider implementing policies and procedures that deny access to AI technology to those who refuse the training, or who, after training, refuse to take the steps necessary to comply with their ethical duties. Firms should also consider policies and procedures that block lawyers from unilaterally downloading outside programs onto firm systems, so lawyers cannot bypass the firm’s due diligence protocols in vendor selection. Lawyers should also monitor compliance with all of the above and take remedial action as necessary.

The Unauthorized Practice of Law (UPL)

UPL is one of the most hotly debated of the ethics issues in AI. AI in legal services could take two forms: in one, AI delivers legal services directly to the consumer; in the other, AI serves as a tool for lawyers in their practice. Model Rule 5.5 mandates that a lawyer shall not practice law in a jurisdiction in violation of the regulation of the legal profession in that jurisdiction, or assist another in doing so. However, significant confusion currently exists in how UPL is defined across American jurisdictions due to the lack of a uniform definition. (The American Bar Association’s list of state definitions of UPL is 31 single-spaced pages.) This confusion makes it difficult to ascertain clear lines between what is UPL and what is not. And this lack of clarity only gets murkier when applied in a software setting.

Legal AI Direct to Consumer

For legal AI that goes directly to a consumer, the UPL debate is vigorous. On one side of the debate, AI is seen as a lifeline to address a vast and growing access-to-justice gap by providing easily available, understandable, and affordable legal help. Advocates argue that consumers are fully capable of deciding for themselves whether to accept the risk of a non-lawyer legal service, and that it is harmful and patronizing to deny them desperately needed access to justice in the name of protecting them from themselves. These consumers cannot afford lawyers in today’s market and are often left with no recourse. Some proponents even argue that it is impossible for a computer to commit UPL, because it is not human.

The other side of the debate focuses on public protection. Non-lawyer consumers, untrained in the law, may not know what facts need to be fed into the computer and may not know how to ask the right questions, let alone recognize or fix potential mistakes. They may simply rely on what they take to be a promise that the computer will apply the law to their factual situation and provide an answer to their legal problems at a price they can afford. When the consequence of taking a wrong legal step may prove disastrous for a consumer victim of UPL, regulators intervene in the human world. By definition, AI mimics a human. When a machine mimics a human to provide legal services, why should it not be considered capable of practicing law, just like a human?

When a non-lawyer human purports to promise that she will apply specific facts to the law, we unreservedly call it the unauthorized practice of law. We do not tell the consumer who chooses a UPL human that he or she assumed the risk of problems by choosing a nonlawyer practitioner. Why should a computer purporting to mimic a human be handled any differently? No consumer will ever accept the loss of significant legal rights as a trade-off for cheap advice. And no regulation should abandon client protection in the name of convenience.

However, these dangers to public protection do not mean that regulators should ignore the access-to-justice problem or turn a blind eye to the promise of AI technology to alleviate the gap. Similarly, innovators should not ignore the very real dangers to the public they are seeking to assist. There are too many real-life instances of the victimization of vulnerable communities by UPL to claim credibly that the public faces no risk. Ultimately, there are no clear answers. But the legal industry and/or the AI industry, and preferably both, need to address these issues going forward, to provide clear guidelines, quality control, public protection, and commercial certainty, which would benefit everyone.

Case law demonstrates how the courts have struggled with applying UPL definitions in a software setting. Many cases draw a distinction between the sale of a product, or where a computer simply fills in what the consumer tells it to, on the one hand, and the sale of a service, such as where the computer is applying the law to specific facts and making choices for the consumer, on the other. See, e.g., Janson v. LegalZoom.com, Inc., W.D. Mo., Case No. 2:10-CV-04018-NKL, 8/2/11; LegalZoom.com, Inc. v. N.C. State Bar, N.C. Super. Ct., 11 CVS 15111, 3/24/14.

Another approach was that taken in Texas, where the Texas Legislature revised its UPL statutory definition to state that the practice of law does not include the design, creation, publication, distribution, display or sale of computer software or similar products if the products clearly and conspicuously state that the products are not a substitute for the advice of an attorney. The Unauthorized Practice of Law Comm. v. Parsons Tech., Inc., 5th Cir., 99-10388, 6/29/99. This struggle in the case law suggests that strong guidance is unlikely to come from the case law or the legislatures in the near future.

AI as a Lawyer’s Tool

Model Rule 5.5 states that a lawyer may not assist others in UPL. Many states have statutory prohibitions on committing and/or assisting UPL as well. A host of ethical authorities on lawyer outsourcing permit a lawyer’s use of non-lawyer human outsourcing without violating UPL laws/rules, so long as the lawyer ultimately reviews the final work product, supervises the work, and takes responsibility for it, and as long as no one holds the non-lawyer out as able to practice law. (See, e.g., ABA Formal Ethics Opinion 08-451; New York City Bar Formal Ethics Opinion 2006-3 (foreign lawyers); Los Angeles County Bar Association Formal Ethics Opinion 518 (out-of-state brief writing company); Orange County Bar Association Formal Ethics Opinion 2014-1 (ghostwriting by out-of-state lawyer).)

When a lawyer uses AI as a tool, takes the steps necessary to review and test the AI tool’s results as her duty of competence requires, and reviews the final work product and supervises the work of the AI tool, as her duty of independent judgment and supervision requires, many UPL concerns should fall away. This is good news. AI as a tool can potentially be exactly what the profession needs as it enters into its next transformative phase. The past decade or so has seen an increasing reduction in the amount of outside legal services needed. Today, many clients demand that lawyers do more with less–better, faster and cheaper being the mantra. Some clients refuse to pay for routine tasks. Some clients also seek predictability in legal spending through alternative fee arrangements.

Lawyers and law firms have been struggling to adjust traditional practice models to meet these new demands, while still maintaining profitability. AI technology is projected to allow lawyers to provide more efficient services with increasing predictability, to take care of some of the more routine tasks, and to free up lawyers to use those skills that are unique to individual lawyers and focus on more complex legal issues that are not amenable to automation. Because AI technology should ultimately be affordable, using it should help lawyers meet client demands, maintain profitability, and gain a competitive advantage.

AI could also enable lawyers to provide services at less cost—which could make lawyers more accessible to the thousands of clients out there who cannot currently afford lawyers and are often denied meaningful participation in the justice system. AI could help lawyers access new markets to offset any loss suffered by tasks being shifted to machines, and at the same time, address that access to justice gap.

Despite these potential benefits, the big question remains. Does facilitating AI mean we will help usher in the day when a host of sentient AI robots eventually replaces lawyers altogether? This is unlikely, at least in the immediate future. Anyone who has ever experienced the difference among a bad lawyer, a good lawyer, and a great lawyer knows that there are distinguishing qualities, unique to lawyering, that are not easily replicable, even if everyone has passed the same bar exam and has access to the same resources. Humans can reason, empathize, exercise judgment, make inferences, deploy interpersonal skills, and immediately react to observed nuances in a client, a witness, a judge, or a jury. AI technology cannot currently take on these tasks with the agility of the human mind, and may never be able to achieve some of them. AI technology still needs human programmers and trainers, and someone to ask the right questions and give it the right data. Thus, in the immediate future, an AI tool, correctly and ethically used, will function most effectively as a tool in the hands of a skilled lawyer.

As helpful a tool as AI may be in the short term, there are long-term implications for the industry that we would do well to plan for now. If routine tasks currently being used to train new lawyers are delegated to an AI tool, how does a new lawyer acquire the expertise that clients will eventually pay for? This is a long-term question that needs to be addressed as the profession marches into the AI era.


AI technology is poised to be the tool that helps the profession address the needs of the future. Yet, if it is used incorrectly, a lawyer could find herself violating some of the core ethical rules that govern us, rules that exist for good reason. Lawyers should take reasonable steps to familiarize themselves with the AI technology that is coming, and make the adjustments necessary to bring themselves into ethical compliance with respect to any technology they adopt.