Adobe senior vice president Karen Robinson says generative AI can be a boon for creators, but only if the policies regulating it can protect intellectual property.
Creators around the world have emerged as economic powerhouses, fueling cultural movements with a single post and reshaping how brands connect with their customers. As creative expression becomes central to how we work, learn, and play, another force is simultaneously transforming the creative landscape: generative artificial intelligence.
Generative AI holds extraordinary promise. It can supercharge creativity, accelerate productivity, and unlock entirely new economic frontiers. But for many creators, it’s also setting off alarm bells. In today’s digital landscape, where content can be replicated, remixed, and monetized in seconds, the absence of thoughtful safeguards means creators risk becoming collateral damage in a technological gold rush.
Creators face a growing threat of unauthorized reproductions of their style, likeness, or work, along with misattribution and loss of control once their content goes online.
As a transformative technology, generative AI is reshaping the creator economy—and it’s important that policies evolve in tandem.
Innovation and Oversight
As part of the broader tech industry, we share the collective responsibility to build transformative technology ethically. The scale and speed of generative AI’s impact make it critical to consider how policies can complement responsible innovation. If we want to ensure that generative AI strengthens the foundation of creative industries, governments have an important role to play in supporting innovation through clear, balanced policies that protect creators while encouraging progress.
Regulation doesn’t have to mean overregulation. Rather than narrowly tailored policies that struggle to keep pace with rapid technological change, the goal should be to establish clear, flexible frameworks that promote fair competition, protect creators, and evolve alongside innovation.
What’s needed is a forward-looking approach—one that provides guidance and guardrails while remaining responsive to new use cases and modalities. This isn’t only in the interest of innovation; it’s also critical to ensuring a level playing field for companies of all sizes.
Addressing Inputs and Outputs
As more countries move to set AI policy, many are developing frameworks that focus heavily on either inputs (such as training data) or outputs (such as content impersonation), but rarely both. In practice, effective legal frameworks must consider both dimensions and take a balanced approach.
On the input side, we need clear guidance on how AI models are trained, starting with the data.
A major unresolved legal question is whether using copyrighted content to train generative AI constitutes “fair use” under current US copyright law. While broad data access fuels innovation, creators deserve meaningful agency over how their work is used.
For example, the UK government and the EU are currently exploring opt-out mechanisms in the context of text and data mining exemptions that would allow creators to prevent their content from being used in training data.
Since 2019, Adobe has advocated for the widespread adoption of “Content Credentials”: secure metadata that anyone can attach to their work to provide transparency into how a piece of content was created, who created it, and how it was edited. More importantly, Content Credentials can be used by creators to signal to generative AI models that their content shouldn’t be used for training.
Supporting the widespread, cross-industry adoption of technologies such as Content Credentials would provide creators with a powerful mechanism to assert control over their work. Governments have a key role to play here in encouraging enforceable standards and ensuring that creators’ preferences are respected across the AI development lifecycle.
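To make that concrete, here is a minimal sketch of what a "do not train" preference can look like under the open C2PA standard that underpins Content Credentials. The entry labels follow the published C2PA training-and-data-mining assertion, but treat the exact field names as illustrative; a real credential must also be cryptographically signed and embedded with a C2PA toolkit (such as the open source c2pa SDKs), not attached as bare JSON.

```python
import json

# Illustrative sketch of a C2PA-style "do not train" assertion.
# The entry labels below follow the C2PA training-and-data-mining
# assertion as published; exact names should be checked against the
# current spec before use.

def do_not_train_assertion() -> dict:
    """Build the assertion a creator could attach to opt out of AI training."""
    return {
        "label": "c2pa.training-mining",
        "data": {
            "entries": {
                "c2pa.ai_generative_training": {"use": "notAllowed"},
                "c2pa.ai_training": {"use": "notAllowed"},
                "c2pa.data_mining": {"use": "notAllowed"},
                # Inference (e.g. running a model on the image) can still
                # be permitted even when training is opted out.
                "c2pa.ai_inference": {"use": "allowed"},
            }
        },
    }

if __name__ == "__main__":
    print(json.dumps(do_not_train_assertion(), indent=2))
```

Enforceable standards of the kind described above would, in effect, require AI developers to read and honor entries like these before ingesting content into a training set.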
On the output side, US copyright law today doesn't cover unauthorized imitation of an artist's style or digital likeness.
As the company that powers creativity for billions around the world, we know that creators' unique artistic styles aren't just a form of expression; they're integral to their livelihoods. When AI is used to replicate those styles for profit without consent, it undercuts the very people at the heart of the creative economy.
While generative AI outputs may not constitute direct copyright infringement, they can still undermine a creator’s livelihood when used to mimic their work without consent. Here, governments must act to modernize the legal framework for creator protection in the AI era. Legislation such as the Preventing Abuse of Digital Replicas Act, which aims to curb unauthorized digital impersonation, is an important step toward protecting the individuals and industries that fuel the creative economy.
Law to Policy
In the US, efforts to establish legal precedent around generative AI are largely playing out in courtrooms. While litigation can take time, these cases play an important role in clarifying how current laws apply to new and emerging technologies—and in shaping future policy discussions.
Recent rulings—such as Andy Warhol Foundation v. Goldsmith—underscore both the potential and limitations of relying solely on courts to define fair use in the context of generative AI training.
As new generative AI modalities continue to emerge—first images, then language models, now video—litigation alone can’t provide the speed or scope needed to keep pace. Courts are an important part of the process, but meaningful progress will require proactive policy frameworks that protect creators and enable the transformative innovation AI can deliver.
Generative AI has the potential to become a foundational part of how we create, collaborate, and communicate. But its success depends on trust, transparency, and fairness.
Creators deserve clear pathways to assert control over how their work is used by generative AI. Businesses, in turn, need clarity on what responsible development looks like. Only through balanced, adaptive, and forward-thinking governance can we ensure that generative AI delivers on its full potential—while protecting the people who make creativity possible.
This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law, Bloomberg Tax, and Bloomberg Government, or its owners.
Author Information
Karen Robinson is the senior vice president and deputy general counsel at Adobe, where she manages global litigation, patent, trademark, copyright, anti-piracy, fraud, and trust & safety.