AI ‘Authorship’ Muddies the Waters of Copyright Law Claims

Aug. 3, 2023, 8:00 AM UTC

Millions of people this spring enjoyed a new single, "Heart on My Sleeve," that appeared to feature Canadian musicians Drake and The Weeknd. But the song came from a pseudonymous TikTok user who had generated the viral vocals using artificial intelligence.

The video was pulled from social media sites on a copyright claim from the artists' label, even though technically nothing in the deepfake track had been copied from existing recordings. Rather, the AI that generated the audio had been trained on many hours of recordings owned by the label.

Is it infringement to use copyrighted works to train AI? Does an AI-created work even have an author, as the Constitution requires for copyright protection? Must an author be a person, and what if the author uses a machine to create?

An author writing a book or a musician writing a song pours something unique and personal onto the page or screen, but also hopes to earn a living from that work. At the base of copyright law is the moral intuition that there is something unique about human creation worth incentivizing and protecting. Copyright thus serves a dual role: as an economic engine of creativity and as a way for authors to control how others use their work.

Artificial intelligence is already unavoidable. Microsoft Corp.'s Copilot system is being integrated into Microsoft Office, and within a few years a sizable share of documents created with that ubiquitous software suite will presumably include some form of AI-generated content.

As these innovations continue and accelerate, they'll provide financial windfalls to those who build them, but rarely (at least so far) to the creators whose works are the raw materials for training these AI systems, and whose works may ultimately be supplanted by AI-generated output.

The AI revolution risks further gutting creators' ability to earn a living from their work while making others rich off it. The rapidly evolving AI sector calls for a prudent and cautious approach to regulation: we should be cognizant of the technology's benefits but also ensure that its progress isn't built on the backs of uncompensated creators.

The author of a work is the “mastermind” behind the work, not necessarily the person literally putting pen to paper, or light to film. And photographs, added to copyright law in 1865, have long been a point of contention.

Photographer Napoleon Sarony sued Burrow-Giles Lithographic Co. for its unauthorized reproduction of a publicity photo he took of the playwright Oscar Wilde, who was embarking on a US lecture tour. Burrow-Giles argued that the photograph wasn't subject to copyright because it was produced by a camera rather than by a person. In 1884, the Supreme Court ruled for Sarony, stating that he made the photograph "entirely from his own original mental conception," and thus was its author.

AI developers generally assume that using copyrighted material for AI training, and the resulting output, is a "transformative," and thus fair, use, relying on cases including one involving Google Images. However, the Supreme Court's recent decision in Warhol v. Goldsmith narrowed the scope of transformative use and made clear that even if the use of a work for training is fair, the content the AI outputs may not be.

The US Copyright Office's position has remained consistent for decades, including in its 2023 guidance on AI and copyright. The office first faced questions about what we now call computer-generated works in 1965.

Its position reflects the Burrow-Giles decision and states: “The crucial question appears to be whether the ‘work’ is basically one of human authorship, with the computer merely being an assisting instrument, or whether the traditional elements of authorship in the work (literary, artistic, or musical expression or elements of selection, arrangement, etc.) were actually conceived and executed not by man but by a machine.”

This position is bolstered by a case involving a monkey, which climbed wildlife photographer David J. Slater's tripod in 2011 and managed to take a selfie. Slater argued that he deserved copyright protection because his arrangement of the camera settings and equipment made the selfie possible, and indeed had been planned for exactly that purpose.

The Copyright Office stated that because the photo wasn't taken by a human, it wasn't subject to copyright. This doesn't bode well for robot wannabe copyright holders, nor for those who would purport to speak for them.

As for new AI works, it remains to be seen whether the intuitions about human uniqueness that underpin our approach to authorship will evolve as AI technology matures. Who owns the photograph nobody took?

This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law and Bloomberg Tax, or its owners.

Author Information

Zvi Rosen is an assistant professor of law at Southern Illinois University School of Law with a focus on intellectual property and copyright law.
