By the Numbers: Six AI Questions for In-House Counsel in 2024

Jan. 2, 2024, 10:00 AM UTC

Just over a year after generative AI entered the mainstream, in-house counsel are continuing to come to terms with its use inside their companies, while they prepare for the growing number of regulations and requirements aimed at the technology.

The EU planted the flag in December, when negotiators reached political agreement on the landmark AI Act. Full details are still forthcoming, but the legislation will establish broad regulations and requirements for developers and deployers of the technology, largely based on the riskiness of use cases. It will affect many US companies.

The US doesn’t yet have federal AI legislation, but in a sweeping executive order in October, President Joe Biden called for standards and testing for AI models. Laws are also cropping up at the state and local level, some state bar associations are writing ethics guidelines for lawyers using the technology, and courts are weighing AI’s copyright implications.

In-house counsel are also still wrestling with questions about AI's use that emerged in 2023: figuring out who will run their AI governance efforts internally, and how they and their firms should use the technology. They're looking for 2024 to bring answers to some of the biggest questions about AI and the law.

1. How will the US regulate or legislate AI?

President Joe Biden’s Oct. 30 executive order calls on federal agencies to establish standards and testing, focusing on safety issues like cybersecurity and biosecurity and paying particular attention to foundation models—which are trained on enormous amounts of data and underpin more specific uses of the technology. Companies across industries will start “red-teaming” to test security, working toward identifying AI-generated material, and addressing biases.

Federal agencies are already acting on AI; the Equal Employment Opportunity Commission, for example, is cracking down on AI-driven employment bias.

Many US lawmakers are also calling for Congress to act. Without legislation, the executive order’s effects could be short-lived if Biden leaves office after the 2024 election.

2. How will state and local AI regulation evolve?

More than half of state legislatures have introduced AI-related bills in the last year, and legislation is also cropping up at the local level, creating an increasingly complex regulatory landscape.

For example, companies in New York City using AI in employment decisions must test for bias. The state is also contemplating legislation that would limit how employers use AI in hiring decisions. Michigan in November joined a group of states cracking down on AI-generated deepfakes ahead of the 2024 election.

California is moving to regulate AI on multiple fronts. In November, its privacy agency released a draft of far-reaching rules that would allow consumers to opt out of having their data processed.

3. How will the IP law questions around generative AI be resolved?

Generative AI’s implications under intellectual property law remain among the technology’s biggest legal questions for companies. They’re concerned they could be sued for using material generated partially or wholly by AI that inadvertently infringes copyrighted works, and about threats to their own IP.

A growing number of authors are suing the companies that develop AI models for copyright infringement, saying the models were “trained” on data that includes copyrighted works. Microsoft, Adobe, and others have indemnified customers against copyright infringement claims for some AI outputs. But companies could still face copyright risks to their own IP.

The US Copyright Office is exploring another fundamental question: How does the principle that only human-authored works are copyrightable apply to human-AI collaborations? A 2023 call for feedback, which also included questions about training data and fair use, elicited over 9,000 responses.

4. Who should lead companies’ AI governance?

Companies are looking for the person or team to lead their generative AI governance efforts internally—a need that’s only going to become more pressing as regulatory and compliance requirements grow. The role calls for someone who’s well-versed in the technology and its legal implications, and can work across the organization. That person isn’t always easy to find.

“Here’s the challenge: The domains of risk created by AI are so broad as to really make it a unicorn hunt to try and find a single person or a single set of skills that can be expert in all of them,” J. Trevor Hughes, president and CEO of the International Association of Privacy Professionals, told Bloomberg Law in October.

Many companies have tapped their chief privacy officer to take the role. But AI poses challenges far wider than the privacy sphere, and almost anyone stepping into an AI governance role must acquire new knowledge and skills.

5. How can AI help in-house counsel do their job?

Although many lawyers are still wary about AI’s risks, including its threat to confidential information and penchant for inaccuracies, the industry is increasingly embracing the technology. For in-house counsel, AI promises to help them save time and money by working more efficiently and even bringing some work in-house.

According to a Bloomberg Law survey conducted from September to October, lawyers across the profession are already using AI for many common tasks—with 53% reporting they used it for legal research, 42% for summarizing legal narratives, 34% for reviewing legal documents, and 21% for due diligence. Those are increases across the board in AI use since the summer.

In-house lawyers are ahead in adoption rates: 25% of in-house respondents to the Bloomberg Law survey said their company had purchased or invested in a generative AI tool, while only 12% of attorneys at firms said the same.

6. Do in-house counsel want law firms to use AI, and disclose its use?

Generative AI is still new, and potentially risky. Some clients are putting the brakes on how their outside counsel use generative AI, and others say they want their firms to disclose its use.

On the other hand, the technology’s time-saving efficiencies—potentially resulting in lower bills—may lead some clients to demand that their firms deploy generative AI. And as generative AI is increasingly baked into many legal technologies, including popular legal research tools lawyers touch every day, disclosing every use of such technologies may not be feasible.

Some state bar associations are giving guidance on the disclosure question as part of ethics rules they’re developing for generative AI. Terms of a firm’s AI use may eventually appear in engagement letters.

“Ultimately, you follow your client’s instructions,” Katherine Forrest, a partner at Paul Weiss and former US District Court judge, told Bloomberg Law in November.

To contact the reporter on this story: Isabel Gottlieb in New York at igottlieb@bloombergindustry.com

To contact the editor on this story: Alessandra Rafferty at arafferty@bloombergindustry.com
