- Legality of AI training will influence industry growth
- Jurisdictions divided over copyrightability of AI works
Government regulators will be tasked in 2024 with pursuing definitive answers to questions posed by powerful generative artificial intelligence technology, including a pair of issues that could dictate the next phase of AI adoption and the treatment of the intellectual property these models use and produce.
Namely: Will the likes of US-based OpenAI Inc. and UK-based Stability AI Ltd. require licenses to train their models on copyrighted material? And will the outputs of these models receive their own copyright protections?
The answers will shape the technology industry’s investment priorities in the booming global AI market, and help determine where creative industries, such as film and graphic design, set up operations to protect their IP. So far, the global spectrum of answers is expansive, spanning ongoing copyright litigation and explicit legislative solutions.
One matter is clear, though: The AI industry is keenly awaiting formal regulations to establish guardrails. Perhaps no company has been as explicit in tying regulation to progress as Microsoft Corp., which told the US Copyright Office that there will be no next phase of AI without regulations.
“Countries that provide the greatest clarity to support AI development will enable the greatest adoption of responsible AI technology,” Burton Davis, Microsoft’s Deputy General Counsel for IP wrote to the Copyright Office in October. “Without clarity, no company will be able to confidently develop AI systems.”
The comment was in response to the US office’s ongoing rulemaking and plan to release a study in the first half of 2024 examining the relationship between copyright law and AI.
Here is a look at where regulators around the world stand on the intersection of AI and intellectual property law:
Copyrighting AI Content
With only a few keywords and the click of a button, AI text-to-image models such as Midjourney and Stable Diffusion can produce intricate and sophisticated artistic renderings that would ordinarily receive copyright protection if they were produced entirely by a human artist.
Some argue that AI art generators are more akin to the advent of the camera, technology that is now recognized as a tool used to capture creative human expression. Others take the view that generative AI tools are more like autonomous programs that make artistic decisions without human involvement.
In the United States, courts have determined that only human authors can receive copyright protection for their work. A federal district court in Washington, D.C., this summer ruled that an AI-generated image called “A Recent Entrance to Paradise” couldn’t receive a copyright registration because it lacked human authorship; the computer scientist behind the AI, Stephen Thaler, claimed the machine created the work autonomously. Thaler has appealed the ruling, which will be heard in the new year.
The US Copyright Office has taken a strict stance, requiring applicants to disclose the use of AI in their registrations. It has rejected at least three applications for AI-generated images, even when the human artist described using dozens of text prompts and making additional edits. In another application, for a comic book containing AI-generated images, the office rejected registration for the images themselves but granted registration for the author’s arrangement of the images into a book.
Other countries have taken a more lenient approach. The Beijing Internet Court in China ruled in November that an AI image of a woman created with Stable Diffusion could receive copyright protection, meeting requisite originality and intellectual achievement standards under Chinese law.
“This stands in direct contrast to our US policy right now and I think this puts the US at a major disadvantage in terms of its creative industry,” said attorney Ryan Abbott of Brown Neri Smith & Khan LLP, who represents Thaler.
In the United Kingdom, a decades-old law could inadvertently support the copyrightability of modern AI-generated works. The Copyright, Designs and Patents Act 1988 provides explicit protection for computer-generated works.
The law was a result of concerns that any creative work touched by a computer would lose copyright protection, according to Andres Guadamuz, who researches intellectual property at the University of Sussex.
“It’s a curious provision of the law,” said Chris Mammen of Womble Bond Dickinson. “It’s been sitting there for almost 35 years and has never really been litigated but it has new salience in this era of generative AI.”
Guadamuz said the US’s seemingly stricter policy toward the AI copyrightability question stands out and is partially a result of the US’s requirement that artistic works be registered before suing for infringement. That adds an additional layer of gatekeeping that could slow down creative uses of AI by artists, he said.
“I’m worried that lots of works that have a lot of human creativity are now being rejected just because there is a whiff of artificial intelligence,” Guadamuz said.
Training AI
Regulators are similarly divided over whether training AI models on copyrighted content is legal.
US courts haven’t reached a definitive conclusion about whether that falls under the “fair use” standard that allows copying without permission. With litigation still in the early stages, a final answer could be years away. AI companies and legal experts have argued that existing case law on fair use supports training in most cases.
In the UK, the High Court of Justice ruled in early December that Getty Images’ copyright case against Stability AI over model training could proceed to a trial overseen by a specialist judge.
Other countries with competitive tech industries have been more proactive in creating exemptions for AI training.
Japan amended its copyright law in 2018 to explicitly allow data scraping for the purpose of training machine learning models. The country has continued to endorse copyright policies that would favor AI developers.
Singapore adopted a new copyright law in 2021 that creates a similar exemption when copying for the purpose of “computational data analysis.”
The Israeli Ministry of Justice released guidance this summer recognizing that training a machine learning model would generally fall under the country’s fair use standards, citing US fair use court cases. The exemption wouldn’t apply in cases where an AI model was trained exclusively on the work of a single artist in an attempt to compete with the artist, the opinion said.
A draft version of the European Union’s forthcoming AI Act incorporates the exemptions in EU copyright law for text- and data-mining operations, which include opt-out provisions for rights holders.
Shelley McKinley, chief legal officer at Microsoft’s GitHub Inc., said she is optimistic that jurisdictions will recognize the principle that AI training is about extracting “uncopyrightable elements” from data. “I think the prevailing thought is to protect the ability for training on public data,” she said.
It’s not entirely clear whether an AI program trained in a country with lenient copyright rules can be deployed in a country with stricter rules. Regardless, AI companies will be keeping close watch on jurisdictions with favorable laws.
“Countries that have more lenient rules may become more welcoming to some AI companies,” Guadamuz said. “All it takes is a couple of rulings in favor of the companies.”
