AI Copyright Hacking Exemption Would Boost Trust, Advocates Say

Jan. 5, 2024, 10:06 AM UTC

Allowing independent hackers to legally circumvent digital security measures to probe artificial intelligence models for bias and discrimination would boost transparency and trust in the nascent, polarizing technology, policy groups told the US Copyright Office.

The proposal to permit circumvention of digital copyright protections on AI models in limited circumstances is under consideration as part of the Copyright Office's triennial review of proposed exemptions to the 1998 Digital Millennium Copyright Act. That law prohibits bypassing access controls put in place by copyright holders, such as encryption, password protection, and digital locks, unless the circumvention is exclusively for an exempted, non-infringing use.

Past exemptions have included allowing the display of short portions of copyrighted movies for educational purposes and permitting researchers to break into computer programs for security research. Now the office is contemplating a new exemption—proposed by a graduate student—that would allow researchers to circumvent safeguards and access secure AI models solely for the purpose of studying bias.

If adopted in some form, the proposal could ultimately allow researchers to access the models underlying generative AI products like the chatbots and image generators offered by OpenAI Inc., Microsoft Corp., Google, and Meta Platforms Inc. Hackers could manipulate AI systems to see whether the systems can be drawn into engaging in racial discrimination or producing synthetic child abuse material, one group, the Hacking Policy Council, said in its comments.

The question is “whether or not we want to rely entirely on the providers of AI systems to ensure that our systems are trustworthy and fair, and don’t produce harmful content,” Venable LLP counsel Harley Geiger said. He framed the office’s consideration of the proposal as “whether it should be a hacking crime, to forbid independent researchers from testing those AI systems to help ensure that they are trustworthy and fair.”

The Copyright Office advanced seven proposed exemptions in October in the first step of the review process. In addition to the AI bias proposal, others would allow access to repair computer programs within commercial products and to preserve video games. Three groups submitted comments in support of the generative AI exemption ahead of a Dec. 22 deadline, and the office will hold virtual public hearings in the spring after further comment periods close in March.

The exemption would align with President Joe Biden’s Oct. 30 executive order on AI, which included “red teaming”—structured testing to find flaws and vulnerabilities—as a key safeguard in AI development, according to the comments in support of the exemption.

“Given President Biden’s recent Executive Order on AI, this exemption may be something that the Copyright Office will seriously contemplate, especially when you consider that the President specifically mentioned issues of bias in the Executive Order,” Saul Ewing LLP partner Darius Gambino said in an email.

Generative AI Research

The proposed AI exemption would allow “researchers” to access “copyrighted generative AI models, solely for the purpose of researching biases” and permit sharing research and methodologies that “expose and address biases.”

Jonathan Weiss, a graduate student at the University of California, Berkeley, said he was encouraged to submit the petition after attending a workshop at DEFCON, an annual hacker convention.

“If these models are going to be used to make more and more important decisions as they become more powerful and as we place more trust in them, it’s going to be important that they aren’t biased in any certain direction,” he said in an interview in October after his proposal was advanced.

The Copyright Office initially noted issues with the petition—it doesn’t define “researchers” or discuss how protection measures prevent researchers from accessing the software within the generative AI models. But the comments in support of the proposal have “added some legitimacy to the initial request by fleshing out the issues, and the need for an exemption, in more detail,” Gambino said.

Security measures include automatic blocks on certain inputs that could result in harmful outputs, as well as terms of service that users must accept to create an account and that prohibit bypassing those controls, according to the Hacking Policy Council, which submitted comments supporting the proposal. HPC, whose members include Google and Microsoft, “aims to make technology safer,” according to its website.

Cybersecurity company HackerOne and OpenPolicy, a platform that describes itself as working to “democratize and simplify access to policy engagement,” also submitted comments in support of the exemption.

The exemption would empower independent, good-faith researchers to expose AI systems’ susceptibility to bias and discrimination, leading to changes that would result in more trustworthy algorithms and systems, the groups said. Without it, they warned, there could be a chilling effect on researchers who fear lawsuits.

The exemption shouldn’t be limited to “bias,” OpenPolicy argued, but should encompass “broad sets of undesirable social impacts, and other harmful or undesirable unintended outputs in AI systems, from discrimination to ‘untrustworthy’ behavior.”

“We want to work with all sorts of outsiders, whether they’re researchers or academia, civil society” and “leverage their ability to test the AI models for not just security vulnerabilities, but other types of unintended consequences,” said Amit Elazari, a lecturer at UC Berkeley and founder of OpenPolicy.

Some corporations have already hired outside parties, including law firms such as DLA Piper, to red-team AI models for bias. OpenAI recently established a “preparedness” team to “protect against catastrophic risks posed by increasingly powerful models.”

Possible Opposition

The office hasn’t yet received comments opposing the exemption—those responses are due Feb. 20.

“I would expect to see some opposition to the proposed AI exemption,” Benjamin Marks of Weil, Gotshal & Manges LLP said. “There’s a divide in the world of generative AI between those who are releasing open source models and those who consider their models highly confidential and proprietary,” said Marks, who represents Getty Images in its copyright lawsuit against Stability AI. “Some members of the latter group may oppose the proposed exemption, notwithstanding the guardrails included to limit misuse of it.”

The exemption differs from the majority of other requests the Copyright Office received, Gambino said. Other proposed exemptions, such as for creating archival copies of motion pictures and video games, are more akin to traditional concepts of fair use.

Geiger said he didn’t want to speculate on whether companies with large language models would oppose the petition, but he pointed to previous opposition to the security research exemption by the US Justice Department, among others. “If that pattern holds, then there may well be some opposition,” he said. The Justice Department later came to support the security research exemption.

Representatives for OpenAI, Google, and Meta didn’t respond to Bloomberg Law’s requests for comment.

Weiss acknowledged in October, after the Copyright Office advanced his petition, that it “will definitely need to be elaborated upon to ensure that people who are bypassing security controls on the models for malevolent purposes aren’t protected.”

Geiger, who authored HPC’s comments in favor of the exemption, said it shouldn’t be confused with a get-out-of-jail-free card for independent hackers.

“There are still other laws that apply like the Computer Fraud and Abuse Act,” he said.

Though independent security research was once viewed as a threat, Geiger said, companies later came to embrace it.

“The AI hacking space is going to go through a similar process of maturity that security research went through,” Geiger said, “where they come to accept, embrace, and then leverage independent research into the fairness and trustworthiness of the systems.”

To contact the reporter on this story: Annelise Gilbert at agilbert1@bloombergindustry.com

To contact the editors responsible for this story: James Arkin at jarkin@bloombergindustry.com; Adam M. Taylor at ataylor@bloombergindustry.com
