AI’s ‘Unicorn Hunt’ Gives Privacy Officers a Key Governance Role

Oct. 10, 2023, 9:00 AM UTC

As companies navigate the murky legal implications and risks of artificial intelligence, they’re also facing an immediate practical problem: Who should take the lead?

There’s no playbook, but corporate AI oversight calls for someone versed in technology and the law who’s able to spot looming risks and tap expertise across the company. It’s often the privacy officer, who has “an obviously transferable set of skills,” who’s tasked with the role, said J. Trevor Hughes, president and CEO of the International Association of Privacy Professionals. But companies can’t rely on their existing privacy structures alone to tackle AI.

“Here’s the challenge: The domains of risk created by AI are so broad as to really make it a unicorn hunt to try and find a single person or a single set of skills that can be expert in all of them,” Hughes said.

According to a January IAPP report, more than half of the companies surveyed that were building new AI governance approaches were doing so on top of existing privacy programs. Separately, an Ohio State University team surveyed 75 companies about who is leading their AI governance this year, and the primary answer was the privacy team, said Dennis Hirsch, professor of law and director of the Program on Data and Governance at The Ohio State University, who led the research.

But generative AI raises such an array of legal and ethical questions that looking solely through a privacy lens leaves a company vulnerable to missing important risks. Those risks include cybersecurity threats, dangers to intellectual property, uncertainty regarding the IP law status of generative AI’s outputs, and bias.

Privacy officials are well-positioned to take on AI governance because they have a deep understanding of data, balance different rights and interests, and work across their organizations, said Caroline Louveaux, chief privacy and data responsibility officer at Mastercard.

“It’s a great starting place, but effective governance cannot rely just on the knowledge of privacy,” said Louveaux, who is on IAPP’s board of directors. “Where privacy professionals can really play a role is to be the convener, the orchestrator,” bringing together different functions across the company.

Larger companies, and those that have been thinking about AI the longest, often have cross-disciplinary teams evaluating risks and considering legal and ethical implications.

But even for those companies, the question of what to emphasize in governance may remain only a best guess until regulation and legal precedent draw clearer boundaries.

Now companies and institutions are figuring out how to find or create people and teams with the skills to oversee AI. The IAPP is moving to fill the need for AI governance professionals with a new training and certification program. The group expects to train 1,500 people in the coming months and thousands more next year, Hughes said.

Similar Skills

Combining privacy and AI roles makes sense for organizational efficiency, said Robin Andruss, chief privacy officer at Skyflow.

Top privacy officials are embedded in and have relationships across the entire organization, including human resources, marketing and engineering, she said.

Managing personal data calls for privacy officers to understand and create relationships across their organizations, said Christina Ayiotis, associate general counsel, cybersecurity and privacy, at Lumen Technologies. “So it’s logical they’d look at all the other data governance issues.”

Privacy officers also have experience building privacy law compliance frameworks, which can be adapted for AI governance, Hughes said. For example, a privacy impact assessment can be adapted into an AI impact assessment or fundamental rights impact assessment, data inventories and data flow audits can become AI inventories and audits, and data subject access rights “start to look like explainability and transparency,” he said.

And there are significant privacy risks with generative AI, which is built on massive quantities of data. For example, it’s not yet clear how European privacy law will view how platforms scrape public data from across the web to train their models, but data protection authorities are already scrutinizing generative AI.

A ‘Different Beast’

Kimberly Zink, global data strategist at Applied Materials, noted that privacy’s established frameworks for thinking about risks, particularly to personal data, make a useful starting point for building an AI governance program.

“But you have to be cognizant that generative AI is not like personal information,” she said. “It is a completely different beast.”

A straightforward compliance approach could lead companies to overlook some of generative AI’s biggest dangers, Zink warned.

“If generative AI becomes a check-the-box compliance function, like privacy is for some organizations, then I can almost guarantee that that company is going to run into unforeseen risks,” she said. “That’s why the vision really has to be much broader. Get people with the right qualifications, get them all in a room together, ensure quality in the conversation, and start the conversation early.”

Companies shouldn’t rely on privacy frameworks alone, said Hirsch at Ohio State University. He’s also on the advisory board of the AI Governance Center that IAPP launched this year.

“The privacy framework is largely about notice, consent, and purpose limitation,” he said. “Those privacy protections by and large don’t protect people in the world of AI. If you try to make this a privacy compliance issue, you’re going to be missing the point.”

Cross-Disciplinary Approach

Companies need a cross-disciplinary approach to AI, combining privacy, data analytics and legal into a “more holistic viewpoint that encompasses governance, risk management, model management, legal judgments, etc.,” Hirsch said.

Microsoft, one of the biggest players in generative AI thanks to its investment in ChatGPT maker OpenAI, “recognized that a single team or discipline tasked with responsible AI would not be enough,” Natasha Crampton, the company’s chief responsible AI officer, said in an email.

The company has nearly 350 staff working part- and full-time on responsible AI, she said, with roles embedded in the product, engineering, and sales teams. That number includes about 25 people in the Office of Responsible AI who report to Crampton and coordinate across the company. There’s also responsible AI oversight from the company’s executive leadership, including policy and technology leaders, and the board, she said.

Microsoft also brings together a multi-disciplinary team to review high-impact, “sensitive uses” of AI, Crampton said.

Mastercard has published internal principles around data responsibility, and has an AI governance team reporting to Louveaux and an AI governance council comprising the chief privacy, chief data and chief security officers, and other executives, Louveaux said.

Regulation’s Impact

Forthcoming AI regulation—including the EU AI Act currently under negotiation—may raise awareness of AI’s risks among corporate leadership and boards, as the General Data Protection Regulation did for privacy, Louveaux said.

Which regulators will take on AI—whether data protection authorities or another body—may also shape corporate AI governance, she added.

AI regulation could also create clearer boundaries in a space many companies currently refer to as ethics or responsibility.

“I think it actually will be a help to many organizations to have some guardrails, have some lines on the field, so that they know when they’re in play and out of play when it comes to AI,” Hughes said.

To contact the reporter on this story: Isabel Gottlieb in New York at igottlieb@bloombergindustry.com

To contact the editor responsible for this story: David Jolly at djolly@bloombergindustry.com
