Ask Che Chang, the legal chief for OpenAI, what keeps him up at night, and his first answer follows a general counsel’s boilerplate script: how best to advise and guide other teams to meet the company’s mission and goals.
But then, after reflecting, Chang adds something weightier, capturing the more cosmic and pressing questions about a company, and an industry, that aims to reshape generations to come.
“To me, that is the overarching goal here,” Chang told Bloomberg Law in an interview last week from a conference room in OpenAI’s unassuming San Francisco office. “It’s not like, ‘How do we build the strongest model?’ It’s, ‘How do we make sure our generation and future generations and our grandkids and so on are in a world that we are proud of and we don’t feel bad about in hindsight?’”
Chang joined
Chang said OpenAI operates in an “interesting middle ground” in the broader AI ecosystem. At just 3,000 employees, it’s much smaller than its Big Tech competitors and possesses the high risk tolerance of a startup. But it’s also at the epicenter of an era-defining industry, with a $500 billion valuation and legal threats commensurate with its status as a Silicon Valley behemoth.
To balance that tension, Chang said he tries to instill in his team a culture of acknowledging they’re not going to get everything “a hundred percent perfect.”
“We move quickly,” Chang said. “This entire industry changes every three months.”
New Products, Old Stories
Perhaps nothing better illustrates OpenAI’s tension between growth and legal risk than the release of its latest product, Sora 2.
The app, featuring a TikTok-like feed of AI-generated short-form videos, initially required copyright owners to “opt out” if they wanted to prevent content featuring their protected characters from appearing in the app. The launch drew swift backlash, including from groups representing movie studios that had recently filed lawsuits against AI image and video generators. The Motion Picture Association criticized OpenAI, saying “it remains their responsibility—not rightsholders’—to prevent infringement.”
Three days after the debut, OpenAI CEO Sam Altman said Sora 2 would switch to an “opt-in” framework for copyright holders.
Chang insisted there was a “thoughtful, deliberative process” that involved consulting industry rightsholders both before and after Sora 2’s release.
Several lawyers said after the app’s release that they were struck by the decision not to unveil Sora 2 with an opt-in framework, especially considering OpenAI’s position as a defendant in myriad infringement suits. Ed Lee, a law professor at Santa Clara University, wrote that Sora 2’s launch, like other OpenAI product launches, was accompanied by “highly orchestrated gaslighting to maximize the media coverage.”
Chang laughed off the criticism that releasing the app with the opt-out posture was a PR strategy and said it was “not intentional in the slightest.”
“It was absolutely not a press stunt,” he said. “If we were running a press stunt, I would not have done it this way.”
“We had internal conviction that Sora would be good,” he said. “But you never know, right? Just like we had no idea that ChatGPT was going to be big right when we released it.”
It’s not the only time an OpenAI product introduction has sparked a frenzied reaction from the copyright community. In May last year, the company released voice options for ChatGPT, including one that sounded eerily like movie star Scarlett Johansson; OpenAI pulled the voice after she objected. In March, the company drew criticism when a new version of its GPT-4o image generator quickly produced a torrent of images in the style of Studio Ghibli, a popular Japanese animation studio. Animation styles, though, unlike a copyrighted character or an actress’ likeness, aren’t protectable IP.
Responding to those reactions as a legal team requires relationships with OpenAI’s technical staff, Chang said. He noted that lawyers who understand the technology are embedded in product and research divisions.
“They’ll listen to us,” Chang said of technical teams and product leaders. “They know we’re going to be practical on things, and we’ll put our foot down when we need to.”
Powering Up
Building out OpenAI’s legal team has been an iterative process that started off as reactive, Chang said.
The first issues facing the company weren’t IP problems, but privacy-related investigations, he said. For example, the Garante, Italy’s privacy regulator, said OpenAI unlawfully processed users’ personal data to train its models, forcing the company to take ChatGPT offline in the country in March 2023.
That year, the company was hit with the first wave of copyright lawsuits—six within roughly six months—filed by groups of authors and
“The first year was kind of just reactively just trying to survive and keep our head above water,” Chang said. “After that was being a little more intentional about, ‘Okay, here’s where problems are going to pop up.’”
OpenAI is continuing to push its legal team into new areas, Chang said—the company has openings for associate general counsel roles in antitrust and its hardware portfolio, for example.
Renny Hwang, a deputy general counsel who also spoke with Bloomberg Law, was the first litigation attorney the company hired, joining in 2023 just after the first authors’ lawsuits landed.
“It’s foresight by Che and the team, but I think there’s an expectation that this was a place that we’d have to focus energy on, and that’s turned out to be the case,” Hwang said, referring to the copyright litigation.
When it came to selecting firms to represent OpenAI in those lawsuits, Hwang offered a straightforward rule: “We don’t hire jackasses or jerks.”
Latham & Watkins, Morrison & Foerster, and Keker, Van Nest & Peters are the three main firms representing OpenAI in its copyright suits. Hwang worked with Keker when he was at
“We’ve both been around, fortunately, long enough in the industry. It’s a small set of lawyers, you kind of know who’s good for what,” Hwang said.
While OpenAI insists training AI models on protected material is fair use, it’s nevertheless negotiated a range of licensing deals with content companies including The Washington Post and Shutterstock.
“We have a vision for where the world is going to be with AI and AGI and all these different things,” Chang said. “We don’t know how that’s going to impact everybody, but we want to make sure to take as smooth a journey as possible.”