Imagine a software engineer at a bank typing the final lines of code for a new system that launches the next day. The build has taken months, with a team that iterated on and tested the system many times over.
That picture is now ancient history. Today’s creators are, and are expected to be, fusing human talent with artificial intelligence. That fusion may be a talent itself, but do these creators deserve the same intellectual property protection?
Copyright law has long shown a degree of flexibility, adapting from a framework centered on individual human creation to protect less traditional, functional works. To manage this evolution, legal systems have tended to operate on a sliding scale: rewarding deeper human creativity with long-term protection, while granting shorter, specialized terms of protection to works rooted more in capital investment or utility.
AI, however, represents a more fundamental shift. While IP law in some jurisdictions has “flexed” to accommodate new types of works, it traditionally has maintained a tether to the human creator. As AI begins to outperform human scientists and creators, that tether is being tested.
High Stakes
Three areas demonstrate where the stakes are highest for intellectual property law.
Life Sciences: AI has evolved from supporting research to driving it. In drug discovery, AI is playing a central role in identifying targets, designing molecules and optimizing clinical research, compressing drug development timelines.
Beyond the lab, AI optimizes clinical trials, excels at medical diagnostics and has moved into personalized medicine, integrating genomic sequences with real-time patient data to create bespoke treatments. This machine-led shift threatens the industry’s core model. The pharmaceutical industry relies heavily on market exclusivity, primarily through long-term patent protection.
Yet patent law still requires a human inventor. If an AI automatically generates a compound, and no human can honestly be named as inventor, the resulting patent application may be rejected. This might leave a drug exposed to generic competition as soon as regulatory exclusivity runs out.
Software: AI is now embedded across the software lifecycle, from code generation to testing, monitoring and incident analysis, often with limited human involvement. This type of shift raises some serious IP protection questions.
A rogue employee who walks out with, or a competitor who obtains, an AI-written trading algorithm or proprietary codebase may face limited infringement liability. To the extent copyright falls away, businesses may seek to rely on trade secrets and contracts to fill the gap—but trade secrecy provides little protection once a work is made public or lawfully reverse-engineered.
Marketing: The marketing and advertising industry has pivoted to an AI-first strategy, leveraging automated data analytics for better-targeted campaigns and predictive analytics to retain or draw in customers. In parallel, the industry has been using AI for content, from taglines, social media posts, and blog articles to images and entire campaigns.
In many jurisdictions, this creates legal vulnerability. AI-generated content used in marketing campaigns may lack copyright protection if created without substantial human creative input. The risk is compounded by prompt similarity: AI tools may generate comparable outputs from similar prompts, so competing brands may inadvertently deploy near-identical assets, leading to brand dilution.
No Easy Fix
Courts and patent offices in the US, UK, Europe, China, and Japan all agree that AI can’t be named as an inventor, and a patent application with no human inventor fails. But inventions developed with AI—where a human can still be identified as having made the inventive contribution—remain patentable, and the courts have been careful to preserve that distinction.
With copyrights, the US, EU, and Japan all require a human author or human creative contribution. The US Supreme Court indicated its agreement with this position in 2025 by refusing to hear a challenge to a pro-human-authorship ruling in Thaler v. Perlmutter.
Beyond that baseline, a few legal systems are pulling in different directions:
- The UK offers a statutory safety net: a 50-year copyright assigned by legal fiction to the person who made the necessary arrangements for the creation of a computer-generated work. Jurisdictions such as New Zealand and Hong Kong have similar systems. In an AI context, this could benefit a person who wrote a prompt, defined parameters, or curated key training data. The provision has yet to be tested before the UK courts in a generative AI setting, with a possible repeal on the table.
- Ukraine, by contrast, sidestepped the authorship question entirely in 2023 by creating a standalone sui generis right for AI-generated outputs lasting 25 years—a niche solution, but a potential model for others.
- China’s courts, in certain cases, have recognized copyright subsistence in AI-generated outputs when the human user proved that they exercised sufficient creative control, such as through iterative prompting, parameter tuning, and selection.
These differences mean a protection strategy that works in one country may fail elsewhere, complicating cross-border exploitation and enforcement and creating challenges for intellectual property practitioners.
Looking Ahead
For businesses, part of the answer is pragmatic: keep humans visibly—and evidentially—in the creative and inventive AI-enabled loop.
Contracts require close attention. Employment terms, agency agreements, and AI platform licenses should address ownership of AI-assisted outputs expressly. Similarly, warranties and similar protections regarding ownership and protection of rights should be scrutinized.
Businesses shouldn’t assume that legacy work‑for‑hire or IP assignment clauses will automatically cover machine‑generated material. Where copyright and patents fall short, alternative protection strategies become more important. Trade secrets, trademarks, and design rights all can play a role, but each has limits, and each fits some outputs better than others.
For policymakers, IP and AI sit at a crossroads. Given the rate at which AI is being adopted, it seems unlikely to be sustainable for governments to take the purist (human-only) approach to protecting new creations. But while legal systems catch up, businesses and their legal advisers need to bridge the gap through contractual and governance arrangements while keeping one eye firmly on the reform horizon.
This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law, Bloomberg Tax, and Bloomberg Government, or its owners.
Author Information
Giles Pratt heads Freshfields’ global IP, data, and technology practice and co-heads the firm’s cross-practice international AI group.
Anna Gressel is Freshfields’ global co-head of AI.
Theresa Ehlen co-leads Freshfields’ German data and tech team and German media practice.