High ethical standards and trustworthiness must be upheld when implementing artificial intelligence-infused tools in legal settings, law firm, government, and business leaders said.
The 80-plus participants of the “AI and the Rule of Law Roundtable” in Greece debated how to ensure that AI tech works as intended—and that those who implement and operate such systems are competent to do so.
The Sept. 21-22 Athens roundtable came as law firms are increasingly adopting legal tech software systems powered by AI to help streamline operations and save money.
International groups are now also beginning to develop standards for the use of such tools.
The goal is “to move from principles to practice,” said a roundtable organizer, Nicolas Economou, chairman and CEO of e-discovery and data analytics consultancy H5.
The roundtable was co-hosted by the Institute of Electrical and Electronics Engineers, known as IEEE, law firm Covington & Burling, the Future Society, and the European Law Observatory on New Technologies.
Regulators in the U.S. and Europe are recognizing that with the advent of a far-reaching technology like AI, safeguards need to be in place to protect people from misuse.
In recent months, the Council of Europe and the IEEE set standards for using AI in law. The effort aims to recognize how “smart” legal tech can affect privacy and human rights, including when processing and analyzing personal data relating to criminal proceedings.
The IEEE concluded that four principles should be upheld when adopting AI in a legal setting. It said AI systems and tools should be effective; those who design and maintain them should be competent; people must be held accountable when the work of AI tools results in biased or other “undesirable outcomes”; and AI-enabled processes need to be transparent.
With conferees largely agreeing that certain standards need to be met, the Athens roundtable more broadly addressed how far governments, or legal oversight bodies such as national or state bar associations, should go in issuing regulations, or whether something closer to free-market, “self-certification” systems should be allowed.
“Although the Roundtable participants came from different geographic regions and different sectors, there was a lot of consensus around the trustworthiness principles and the need for practices and an appropriate legal framework to implement them,” said Covington partner Lee Tiedrich, co-chair of the firm’s global AI practice.
“There also was general consensus that collaboration among stakeholders is important for developing these practices and for considering the legal frameworks,” she said.
Mark Lyon, chair of Gibson, Dunn & Crutcher’s AI and automated systems practice group, had a slightly different takeaway.
In general terms, Lyon said, “some EU stakeholders seemed to want to legislate or regulate AI-based systems now, without waiting for the technology to mature further, whereas some U.S. stakeholders seemed to think that premature regulation was potentially worse than no regulation and that we should allow the technology and standards processes to move closer toward realization before defining the legal framework.”
Tiedrich and Lyon did agree on one thing: the need for those affected by AI in the legal arena to have a basic understanding of how the technology works.
Efforts should continue to implement the “fairness principle” designed to protect individuals, Tiedrich said, including those subject to criminal proceedings, “against harmful or unintended bias in the use of AI.”
Fairness also means full explanations of AI to those affected by its use, she said.
There was also clear consensus among roundtable participants, which included representatives of about 20 countries, that such conferences should continue as public and private adoption of AI tools in legal settings moves forward, said Economou.
“There was enormous energy that this should be annualized,” he said.