AI Offers an Emerging Yet Challenging Path for Patent Prosecution

Oct. 23, 2023, 8:00 AM UTC

In the legal world, patent prosecutors may be uniquely poised to benefit from artificial intelligence use. AI language models generate human-like text using deep learning algorithms. These models, commonly referred to as large language models (LLMs), are trained to generate text that's responsive to user prompts. An LLM "predicts" the correct response based on a large set of documents from which it learns about human language.

LLMs have the potential to improve patent drafting when used as content generation and technical discovery tools. When LLMs first launched almost a year ago, their use within patent prosecution was met with skepticism regarding their accuracy, the legal standing of their output, and the privacy implications of engaging with third-party entities. However, with the proper mindset, professional standards, and ethical constraints, the use of LLMs as part of a practitioner's toolbox can reap a wide variety of benefits.

As tools, LLMs can aid patent practitioners in researching the underlying concepts related to an invention. This helps ensure a practitioner is familiar with a technological area before starting a patent application. Such research can range from exploring topics to fully understand the underlying technology to developing questions that facilitate engagement with inventors.

LLMs can also assist in drafting boilerplate or background sections of an application. Patent prosecutors can realize these benefits without disclosing confidential information, and the rapid feedback and intelligence such tools provide can elevate the quality of a patent application through a deeper understanding of the supporting technical concepts.

While LLMs offer the promise of increased knowledge at lightning speed, practitioners should approach such tools with open eyes.

Importantly, the output of LLMs can’t be presumed correct. Patent practitioners must take due care in confirming whether the generated answers are accurate—both in isolation and in the context of an entire document. Errors, sometimes referred to as “hallucinations,” may occur. Further, some LLMs are retrained based on chat sessions, and operators of such models may review chat sessions at their discretion.

When using a public LLM or an LLM not confined to a secure workspace, practitioners should be careful not to include confidential or otherwise non-public information to avoid inadvertent disclosure—including the risk of being incorporated into the model’s training.

In addition to these general concerns, thornier issues regarding prior art and inventorship are unique to the use of LLMs in patent prosecution.

First, LLMs are trained on existing data and can be viewed as the equivalent of a well-read but uninventive expert. In this context, the output of an LLM can be viewed as an assemblage of potential prior art, namely its written training data. The text may be wholly or partially identical to (that is, derivative of) prior art documents used for training. If a patent practitioner includes such output in a patent application, doing so is arguably equivalent to incorporating known prior art into the application.

A more challenging problem arises with the opposite viewpoint: that AI language models may be capable of processing existing datasets and arriving at potentially novel and non-obvious answers to prompts.

Consider, for example, an invention of a system that uses a machine learning (ML) algorithm as one step of a larger inventive process. If prompted to provide specific examples of this ML algorithm, AI language models may provide viable and inventive options not considered by the inventors. Should one or more of these alternatives be claimed, the question then is, who “invented” the use of this specific ML algorithm?

This raises gray areas of inventorship; many jurisdictions, however, have found that AI can't be an inventor. If an LLM contributes to an inventive element, there may be no means to identify the "inventors" of the combination.

LLMs can play a valuable role in supporting patent practitioners. However, practitioners must tread lightly and consider the impacts of such usage on patent prosecution.

This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law and Bloomberg Tax, or its owners.

Author Information

Nicholas Martin is of counsel at Greenberg Traurig.

George Zalepa is a shareholder in the intellectual property and technology practice at Greenberg Traurig.
