ANALYSIS: Five Ways AI Will Abate Lawyers’ Cyberdefense Optimism

Nov. 6, 2023, 2:00 AM UTC

Lawyers have become increasingly confident in their organizations’ ability to handle cyberattacks from bad actors, even as the number of these attacks steadily grows. But that certainty could prove premature: as cybercriminals employ emerging generative artificial intelligence technology to make their assaults more creative, lawyers’ cyberdefense confidence may begin to wane.

Trust in Cybersecurity Gains Momentum

Certainty among both in-house counsel and law firms about their readiness for digital onslaughts from online bad actors is rising, according to results from Bloomberg Law’s Legal Ops & Tech 2023 survey.

Nearly half of all lawyers (46%) said they strongly agree with the statement, “My organization is prepared to respond to a confirmed cybersecurity threat (e.g., ransomware, data breach).” That reflects a 10 percentage point increase from the previous year’s survey, when just over one-third of lawyers (36%) strongly agreed with a similar statement, suggesting that a growing number of lawyers trust their organizations’ cyberdefense infrastructure.

The discrepancy between 2022 and 2023 is even more stark among in-house counsel. In 2023, more than two-thirds of in-house respondents (68%) strongly agreed that their organization was prepared for cyberthreats, up from less than half (46%) the year before, a 22 percentage point increase. The percentage of law firm lawyers who chose “strongly agree” rose by 13 percentage points, from 30% to 43%.

These statistics are startling, considering that successful cyberattacks are becoming routine. To be frank, cybercrime is nothing new. The first generally recognized ransomware attack occurred in 1989, and large-scale cyber assaults on law firms and corporations go back nearly a decade; just ask Anthem Inc. and DLA Piper about the digital security breaches they faced in 2015 and 2017, respectively.

Just this year, we’ve seen law firms such as Kirkland & Ellis; K&L Gates; Orrick, Herrington & Sutcliffe LLP; and Bryan Cave Leighton Paisner fall victim. And while we’re at it, add companies like Harvard Pilgrim Health Care, T-Mobile, Rite Aid, and Clorox to the list. If online criminals’ success in hacking law firms and corporations is steadily becoming business as usual, where does lawyers’ growing optimism in cyberdefense spring from?

Perhaps becoming more fluent in the cyberattack threat landscape, and in the strategies used to counter those threats, is simply a natural progression for lawyers as they embrace their obligation under Rule 1.1 of the American Bar Association’s Model Rules of Professional Conduct to maintain a baseline level of technical proficiency.

However, the more likely reason is a growing trust in their organizations’ technology. Lawyers may feel that their IT department has erected the necessary digital infrastructure protections, such as firewalls and encryption software, to effectively check the cybersecurity box. An alternative explanation is that their organization has simply outsourced cybersecurity to a cloud-based Security as a Service (SECaaS) provider, removing the fear of successful ransomware and malware attacks from the equation.

It’s not necessarily a bad thing that strong confidence in cybersecurity frees lawyers to focus on serving clients. But it may prove costly if they aren’t factoring in the expansion of generative AI, an evolving form of AI that creates original text, code, audio, images, and video from pre-existing data.

AI-Powered Cyberthreats Looming on the Horizon

Lawyers who believe their organization is cyberattack-ready should reconsider their defensive posture in light of this non-exhaustive lineup of ways that criminals are currently weaponizing, or will soon weaponize, generative AI in attacks that could expose rifts in an organization’s digital bulwark:

Password-Cracking

Password guessing powered by generative AI algorithms can be a potent weapon for cybercriminals seeking to infiltrate law firm and corporate targets, and the methods are getting creative. One AI-powered approach guesses passwords with more than 90% accuracy by learning to identify and distinguish the unique acoustic “fingerprint” of individual keyboard keys. Although multi-factor authentication (MFA) provides added layers of protection, it’s by no means fail-safe, and will be even less so as cybercriminals become adept at leveraging AI to attack each layer.

More Powerful Malware

The very thought of malware, an invasive software program designed to infiltrate data networks and wreak havoc, is sobering enough. Now envision cybercriminals deploying generative AI-powered malware built on a large language model (LLM) that acts independently, acclimating itself to its digital surroundings and evading surveillance to steal a law firm’s client data or sensitive corporate information such as trade secrets or ePHI (electronic protected health information).

Ransomware on Steroids

Ransomware, a form of malware in which bad actors encrypt a victim’s files and demand payment (often in cryptocurrency) to restore access, threatening to publicly release the victim’s data for non-compliance, is a time-honored favorite of criminals. Automating ransomware with generative AI would let them turbo-boost their online extortion efforts at a larger scale, widening the net of law firm and corporate victims whose data they could hold hostage for hefty sums.

Phishing Attacks

Generative AI-powered phishing attacks can produce emails in flawless English, easily mistaken for messages written by a trusted human colleague. Unsuspecting victims, such as a distracted intellectual property paralegal organizing exhibits for multiple clients or a tired litigation associate drafting a motion to compel discovery late into the night, could inadvertently open a figurative Pandora’s box that costs hundreds of thousands of dollars (if not millions) to rectify.

Vishing Attacks

Vishing, the voice-based variant of phishing that uses phone calls instead of emails, can deceive an in-house attorney into thinking they’re speaking with a colleague in the finance department two floors down, not a cybercriminal in a foreign country.

A Momentary Feeling of Safety?

The widespread use of artificial intelligence will undeniably alter the landscape of law practice, leading to added efficiencies, more streamlined workflows, better training and education, elevated profits, and ultimately greater client satisfaction. But the sobering prognostication the legal industry must brace for is that cybercriminals will likewise gain access to this cutting-edge technology and will use it, with heightened aggression and no remorse, to advance the nefarious exploits they have already set in motion against law firms and corporations.

Lawyers’ ascending cybersecurity confidence is by no means etched in stone, and future Bloomberg Law surveys will track this important metric. As the roll of law firms and companies assailed by online criminal syndicates continues to grow in 2024, the confidence that many lawyers now enjoy, feeling safely protected behind their organizations’ cybersecurity protocols, will likely give way to a heightened sense of apprehension and vigilance.

Special thanks to the lawyers and legal technology experts who shared their ideas with me on this subject: John Brewer of HaystackID, Bruce Markowitz of Evolver Legal Services, Kristin Meister of IDEMIA, Corey Brooks Pace of the American Cleaning Institute, Hitesh Patel of Cooley LLP, and Stephen Reynolds of McDermott Will & Emery.

Access additional analyses from our Bloomberg Law 2024 series here, covering trends in Litigation, Transactions & Contracts, Artificial Intelligence, Regulatory & Compliance, and the Practice of Law.

Bloomberg Law subscribers can find related content on our In Focus: Artificial Intelligence (AI) resource.

If you’re reading this on the Bloomberg Terminal, please run BLAW OUT <GO> in order to access the hyperlinked content, or click here to view the web version of this article.

To contact the reporter on this story: Robert Brown in Washington, DC at rbrown@bloombergindustry.com

To contact the editor responsible for this story: Robert Combs at rcombs@bloomberglaw.com
