Licensing Generative AI Tools Gains Momentum But Pitfalls Abound

December 29, 2023, 9:30 AM UTC

As the year of generative artificial intelligence comes to a close, a recent survey found that more than half of organizations have generative AI tools in the pilot or production stage.

Given the clear productivity benefits, it could be considered irresponsible not to leverage generative AI for certain business functions. Yet the procurement and licensing of third-party generative AI technology present unique risks that must be considered and, in some cases, mitigated.

Even absent new laws regulating AI, contracts that are silent on allocation of risks and responsibilities for things such as bias testing and ensuring adequate rights in training data could leave businesses vulnerable to litigation and regulatory enforcement.

Unique Licensing Risks

There are several underlying reasons why licensing generative AI from third parties is riskier than licensing other technologies, such as software-as-a-service platforms or cloud services.

For one, because the foundational models for generative AI are trained on such vast amounts of data, there are immense due diligence challenges when it comes to ensuring that the vendor has adequate rights to the data; that the data is a true or appropriate representation of the context or intended use of the AI system; and that the AI system’s use of such data doesn’t violate third-party intellectual property rights or applicable privacy laws.

Second, generative AI learns and evolves over time, which requires additional layers of oversight, monitoring, and auditing. Most software is routinely updated; when you license a SaaS product, updates are typically installed automatically. But because of their scale and complexity, generative AI systems may require more frequent maintenance to address data, model, or concept drift.
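
To make that monitoring obligation concrete, the following is a minimal sketch of one common check: flagging possible data drift by comparing production inputs against a training-time baseline. The feature values, significance threshold, and escalation message are illustrative assumptions, not any vendor’s actual tooling.

```python
# Minimal data-drift check: compare a production feature distribution
# against the training-time baseline with a two-sample KS test.
# All names, thresholds, and data here are illustrative placeholders.
import numpy as np
from scipy.stats import ks_2samp

def drift_detected(baseline: np.ndarray, live: np.ndarray, alpha: float = 0.01) -> bool:
    """Return True if the live distribution differs significantly from the baseline."""
    _statistic, p_value = ks_2samp(baseline, live)
    return p_value < alpha

# Stand-in data: the training baseline vs. the most recent production inputs.
baseline = np.random.normal(0.0, 1.0, size=10_000)
live = np.random.normal(0.4, 1.0, size=2_000)  # shifted mean simulates drift

if drift_detected(baseline, live):
    print("Possible data drift: escalate for model review per the contract's testing terms.")
```

In a licensing deal, the agreement should specify who runs checks like this, how often, and what happens when one fails.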

These issues are now playing out in courts and in conversations between regulators and industry leaders. Cases filed this year include claims that companies used “data lakes” containing unlicensed works to train AI tools, as well as suits from several visual artists alleging copyright infringement.

Best Practices

In the absence of federal regulation, the Federal Trade Commission’s letter to OpenAI provides some useful guidance for deploying generative AI. The letter asks the company to explain how it sources and vets its training data, and how it tests whether its models generate false or misleading statements or reveal personally identifiable information.

To that end, businesses looking to license generative AI should consider the following best practices.

Understand specific use cases and desired outcomes. Given the buzz surrounding generative AI, it can be tempting to rush into an investment. Yet being thoughtful about how your company intends to use the technology will inform which tool you license and which contract terms you’ll want included in the agreement.

For instance, if you’re using generative AI technology for voice authentication and fraud prevention, you’ll want to ensure the vendor complies with biometric privacy laws—and that there are contractual remedies in the event of noncompliance.

Similarly, if generative AI technology is being used for data analytics, any personal or confidential information put into the AI system shouldn’t be shared with the vendor or with any other users of the AI system, and shouldn’t become part of the training data set of the foundational model.
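
One practical way to honor that commitment is to scrub obvious identifiers before a prompt ever leaves the company’s environment. The sketch below is a hypothetical pre-processing step; the regex patterns are deliberately simple and aren’t tied to any particular vendor’s API.

```python
# Hypothetical intake filter: redact obvious personal identifiers before
# text is sent to a third-party generative AI service. The patterns below
# are simplified examples; production systems would use a vetted PII library.
import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace matches with typed placeholders so the text stays analyzable."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Customer jane.doe@example.com (555-867-5309) disputes invoice 4412."
print(redact(prompt))  # Customer [EMAIL] ([PHONE]) disputes invoice 4412.
```

Redaction of this kind complements, but doesn’t replace, contractual commitments that the vendor won’t train on customer inputs.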

Conduct thorough due diligence. Due diligence processes must recognize the scale and complexity of generative AI systems, and the rapid emergence of AI laws and regulations. Depending on the type of AI system and proposed application, due diligence areas could include AI governance and oversight, intellectual property rights, data privacy compliance and privacy by design, cybersecurity risks and mitigation, the potential for bias or disparate impacts, and litigation and regulatory enforcement risks.

Assign responsibilities and allocate liability. It’s important for both licensors and licensees of generative AI technology to mitigate risk by entering into a robust written agreement governing the relationship between the parties.

At a minimum, the contract should address restrictions on external data sets and other inputs used within the AI system, including inputs subject to privacy or IP constraints. It should also address requirements around transparency and explainability of the AI system, security and resiliency standards, and responsibility for ongoing testing and monitoring.

The contract should also establish the rights and responsibilities of the parties with respect to the customer’s inputs into the generative AI system. For example, the terms and conditions for Microsoft Corp.’s generative AI services expressly provide that Microsoft doesn’t use the customer’s input to train, retrain, or improve the foundational models used by the generative AI technology, and that Microsoft doesn’t own the customer’s output content.

Apply senior management-level oversight and governance. Given the complexity of AI systems, governance should start at the highest level within a company, such as the board of directors.

The board (or equivalent governing body) should oversee the effective implementation of policies, procedures, and processes to manage AI risk, including independent reviews and assessments, as well as internal controls and accountability procedures consistent with industry protocols, such as the National Institute of Standards and Technology’s AI Risk Management Framework.

The governance framework should include a written AI policy that establishes guardrails for the use and deployment of AI technology. This could include forming a cross-functional committee composed of representatives from appropriate disciplines to review and approve use cases.

Looking Ahead

We can expect more regulation around the use of generative AI, plus new licensing and royalty models (particularly as smaller players in this space gain traction).

Whatever the case, there are steps businesses, vendors, and their counsel can take now to mitigate licensing risks—and put generative AI to good (and legal) use.

This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law and Bloomberg Tax, or its owners.

Author Information

Rachel Reid is a partner in the data privacy, security, and technology practice at Eversheds Sutherland.
