Copyright Office Seeks Public Input on AI Protections, Liability

Aug. 29, 2023, 8:24 PM UTC

The US Copyright Office is asking for comments on policy issues raised by artificial intelligence, including the legal status of AI-generated works and whether AI systems infringe when they use copyrighted works to train models.

The office is also seeking input on how copyright liability principles could apply to material created by generative AI systems, according to a notice of inquiry and request for comments scheduled to publish in Wednesday’s Federal Register.

The notice comes within weeks of a district court decision affirming the office’s refusal to register artwork created by artificial intelligence, holding such work isn’t eligible for copyright protection because it lacks human authorship.

While the office said it believes “the law is clear that copyright protection in the United States is limited to works of human authorship,” questions remain about “where and how to draw the line between human creation and AI-generated content.” Answering such questions will affect future registration decisions, the notice said.

Comments will inform the office’s study of AI issues and be used to advise Congress on potential areas for legislative and regulatory action, the notice said.

Further, the Copyright Office raised the possibility of compensation when copyrighted works are used to develop datasets for training AI models. It’s seeking views on what kind of remuneration system or systems would be feasible and effective.

Written comments are due by Oct. 18, and reply comments are due by Nov. 15.

Training AI Models

The Copyright Office acknowledged that there is disagreement about whether and when the use of copyrighted materials to train AI models is infringing.

The notice cited Getty Images’ ongoing lawsuit against Stability AI Inc., where the AI company is accused of using 12 million Getty photos without permission or compensation to train one of its tools. Stability, which also faces a potential class action from a group of artists over similar claims, has argued the training qualifies as fair use and isn’t infringing.

The office is seeking information about the collection and curation of AI datasets, how those datasets are used to train models, the sources of the materials, and whether permission from, or compensation for, copyright owners should be required when their works are included. It also asked whether developers of AI models should be required to retain training materials, and whether such materials should be disclosed.

Liability of AI Users, Developers

The office is also interested in how copyright principles could be applied to infringing material created by AI systems.

“If an output is found to be substantially similar to a copyrighted work that was part of the training dataset, and the use does not qualify as fair, how should liability be apportioned between the user whose instructions prompted the output and developers of the system and dataset?” the notice asked.

The notice asked respondents to consider whether the substantial similarity test is adequate to address claims of infringement stemming from AI-generated content, and how a copyright owner could prove copying if the AI model’s developer doesn’t maintain or make available its training materials.

Other issues identified in the notice include the labeling or identification of AI-generated materials and potential protections for artists whose style can be imitated by AI systems.

To contact the reporter on this story: Annelise Gilbert at agilbert1@bloombergindustry.com

To contact the editors responsible for this story: James Arkin at jarkin@bloombergindustry.com; Adam M. Taylor at ataylor@bloombergindustry.com
