- Task force says bar should educate judges, lawyers on AI
- Attorneys should ensure AI tools are secure, used judiciously
Attorneys who use artificial intelligence should be careful about sharing information and data that could accidentally breach client confidentiality, the New York State Bar Association’s AI task force wrote in its report released Saturday.
The report, approved by the House of Delegates, lays out guidelines for attorneys using AI and recommends the bar focus on education and advocating for comprehensive AI legislation to close gaps in regulating the technology.
AI has broad benefits: it can help reduce human error, increase efficiency, and augment human intelligence. But it also poses data privacy and cybersecurity risks, the report says.
AI has the potential to increase access to justice for underserved communities, but the Task Force on Artificial Intelligence warns that myriad availability and education gaps must still be addressed before AI tools can be used by everyone. The task force also warns that increased use of AI could further burden an already-overwhelmed court system.
Privacy Risks
Confidentiality concerns arise when information is entered into chatbots and then used to train the AI, the report says. Lawyers should follow the rules of professional conduct and be mindful of a client’s privacy when using AI tools and get client consent before using them. Even with informed consent, attorneys should make sure client information will be protected.
Attorneys also shouldn’t rely solely on content generated by AI tools and should make sure any work produced by AI is accurate and complete.
“Further, you should periodically monitor the Tool provider to learn about any changes that might compromise confidential information,” the report says.
Using closed AI systems, which aren’t accessible to the public but do learn from public data, can also help alleviate client confidentiality and privacy concerns, according to the task force.
Attorneys need to be knowledgeable about the technology they’re using or ask for help from lawyers or IT personnel. If that’s not an option, “then the attorney should not utilize such technologies until they are competent to do so per the duty of competency,” the report says.
“AI can enhance the delivery of legal services,” said New York State Bar Association President Richard Lewis. “It obviously has enormous potential because it can already draft documents, conduct research, predict outcomes, and help with case management. However, we have an obligation as attorneys to be aware of the potential consequences from its misuse that can endanger privacy and attorney-client privilege.”
Education, Legislation
Laws and regulations have failed to keep pace with AI development, according to the task force. Among other things, the law currently struggles with who should be held liable when AI causes damage or harm. The courts are also grappling with issues involving intellectual property, including disputes over using copyrighted data and information to train AI.
The task force says the bar should prioritize educating judges, lawyers, law students, and regulators on the use of AI and how to apply existing law to regulate it.
“Furthermore, many risks are mitigated through understanding the technology and how AI will utilize data input into the AI system,” the report says.
Legislators and regulators should also identify risks associated with AI that aren’t addressed by existing law and regulations, and examine how the law can be used to govern AI.
The task force has reviewed but doesn’t endorse any specific pending legislation on AI.
Access to Justice
AI can help facilitate greater access to justice, the report says, but attorneys must resist viewing AI tools through a lens of technosolutionism, the belief that technology can solve every social, political, and access problem.
Courts, the report says, would have to spend additional time and resources researching, verifying, and challenging incorrect AI-generated legal opinions, and that could lead to even longer wait times for litigants.
“Coming at a time when many courts are already stretched thin with unacceptably long waiting times in some jurisdictions for a hearing, adding to this strain could lead to more injustice,” the task force wrote.
Underserved populations might not have access to computers or the internet and have limited understanding of how to use AI. They might also distrust government institutions, the law, and legal professionals.
AI could potentially “broaden the availability of legal services to the ‘haves,’ leaving the ‘have nots’ worse off than they are now,” the report says. For example, in a dispute between a landlord and a tenant, the landlord likely could use AI to increase enforcement action against tenants while the tenants might not be able to access AI to respond.
“While many proclaim that AI is the solution to democratization of justice, an equally powerful contingent claim AI may create a ‘two-tiered legal system,’” the report says. “Some anticipate that individuals in underserved communities or with limited financial means will be relegated to inferior AI-powered technology.”