
ANALYSIS: AI, Few Guardrails—The Lawyer’s Response

Nov. 4, 2019, 11:30 AM

Results from Bloomberg Law’s 2019 Legal Operations & Technology survey showed that 23% of law firm and in-house attorneys are using artificial intelligence or machine learning technology in their practices.

The other 77% of lawyers must simply be unaware of the AI behind much of the technology they use every day: even a simple natural language search in a legal research tool relies on AI.

Increased deployment of AI has led to concerns about the ills and implications of unchecked algorithmic bias: companies determining ad displays based on users’ race, for instance.

Some memorable disasters have occurred when a neutral algorithm meets skewed data. Tay, Microsoft’s chatbot launched in 2016, was designed to answer questions on Twitter and other social platforms in a human-influenced, conversational style. After less than 24 hours of exposure to the Internet’s raw humanity, however, Tay turned into a vile, racist troll-bot, and Microsoft had to take her offline. Microsoft now restricts what its chatbots will absorb and constantly monitors the behavior of its existing bots.

Regulatory guardrails around the use of AI are not yet well-developed. This should be particularly concerning for lawyers, who have ethical obligations to vet the quality of AI outputs and discuss legal applications of AI technology with clients.

Developments in AI Regulation

2019 saw some movement on the regulation of AI.

In February, President Trump signed an executive order directing the National Institute of Standards and Technology (NIST) to issue a plan for developing technical standards in support of “reliable, robust, and trustworthy AI systems.” NIST responded in August with its plan, which outlines nine areas for AI standards, including trustworthiness. “Trustworthiness standards include guidance and requirements for accuracy, explainability, resiliency, safety, reliability, objectivity, and security,” according to NIST.

Some pending U.S. legislation would address algorithmic bias: The Algorithmic Accountability Act of 2019 would direct the Federal Trade Commission to conduct assessments of “automated decision systems” for fairness, bias, and data protection impact. The Commercial Facial Recognition Privacy Act of 2019 would require consent for the use of facial recognition technology and prevent its use for discriminatory purposes.

Harry Surden, a University of Colorado Law School professor who focuses on law and technology and is affiliated faculty at the Stanford Center for Legal Informatics (CodeX), doesn’t expect these bills to gain much traction.

“People are hesitant to regulate in the AI space, in part because they don’t really understand it, especially on the regulatory and policy end,” Surden said. “There is also this rhetoric that AI is an engine for the economy, so there’s some hesitancy to disrupt that.”

Privacy Regs’ Impact

The California Consumer Privacy Act, which takes effect Jan. 1, 2020, regulates the collection of personal data; that could affect the large datasets AI needs to function, according to Surden.

Europe’s General Data Protection Regulation similarly affects AI: it governs the collection and use of personal data, and Article 22 specifically limits companies’ ability to make solely automated decisions that significantly affect individuals.

In April, the EU released its Ethics Guidelines for Trustworthy AI. The guidelines set forth seven requirements: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination, and fairness; societal and environmental well-being; and accountability. The European Commission is conducting a piloting process, seeking feedback through Dec. 1 on how the guidelines could actually be implemented within an organization.

Lawyers Navigating AI

For U.S. lawyers, the American Bar Association this year called for courts and lawyers to address the emerging ethical and legal issues related to the use of AI in the practice of law.

The ABA resolution is both an endorsement of AI and a warning.

“[I]t is essential for lawyers to be aware of how AI can be used in their practices … . AI allows lawyers to provide better, faster, and more efficient legal services to companies and organizations. The end result is that lawyers using AI are better counselors for their clients,” the resolution states.

“There are some tasks that should not be handled by today’s AI technology, and a lawyer must know where to draw the line. At the same time, lawyers should avoid underutilizing AI, which could cause them to serve their clients less efficiently.”

The ABA also notes several potential ethical issues triggered by AI.

The ABA Science & Technology Law Section has said it will study and possibly create a model standard for ethical AI usage by courts and lawyers.

When Will AI Regulation Get Real?

It is unclear exactly how AI will be regulated going forward: Should the focus be on the AI data inputs, the design and function of the algorithms themselves, or the fairness of the AI’s outputs?

“All of those are important,” Surden said. “You wouldn’t want to regulate one and not the others.”

In the next couple of years, according to Surden, regulations could focus on “opening up the black box” and making algorithmic decision-making more transparent.

There’s also the potential for increased private enforcement, in the form of lawsuits employing new legal theories that would hold algorithms—or the humans who deploy them—responsible. There is a growing landscape of litigation over autonomous vehicles, for example. And a Hong Kong real estate tycoon’s lawsuit against investment firm Tyndaris in the U.K.’s Commercial Court over $23 million lost in trades made by a supercomputer could be a harbinger of more litigation over investment losses attributable to algorithms.

State regulation could also lead to litigation. An Illinois law that takes effect in 2020 will require employers that use AI to analyze video interviews to obtain applicants’ consent, and it will restrict the sharing of applicant videos. The law is aimed at technologies like HireVue, which analyzes candidates’ facial expressions, gestures, and word choices.

In 2019, AI use and AI regulation both gained momentum. In 2020 and beyond, lawyers will need to track both the substantive developments in AI regulation and their evolving ethical obligations to clients.

Read about other trends our analysts are following as part of our Bloomberg Law 2020 series.

With assistance from Mindy Rattan and Tom Shen.