- Companies designate leaders specifically to oversee AI
- Hiring for AI jobs spiked in 2024, data shows
Many large companies are on the hunt for a top executive with artificial intelligence chops to integrate the tech into their business operations. How they tap into that expertise, however, runs the gamut.
Dell Technologies Inc. and PricewaterhouseCoopers hired chief artificial intelligence officers to oversee how they use and build AI tools. Biogen Inc. brought on a board member with AI expertise. Other businesses are turning to consultants from firms such as IBM to help write AI policies and plans.
Responsibilities vary widely depending on the company.
“It’s really important for management to have clarity as to what are they trying to do with AI,” said Jesús Mantas, global managing partner at IBM Consulting. “What are you using AI for?”
Mantas has a unique vantage point leading a team that advises clients about emerging technology. He’s also held a board seat at global biotech firm Biogen since 2019. The company’s other board members and executives have looked to him to be the AI expert, Mantas said.
AI governance is intended to help companies and their employees guard against misuse while mitigating the risk of issues like bias and privacy breaches.
In some ways, companies are using the same playbook deployed for other risks they face, like cybersecurity and fraud. But while governance guidelines for executive misbehavior and digital threats are well-established, strategies for AI are complicated by rapid changes in the technology and how companies are using it.
“You may have to keep remodeling or redesigning your processes, your controls, and governance because it’s changing that fast,” said Frank Fenello, who helps companies develop AI governance strategies as national managing director of UHY Consulting.
Oversight is especially important as companies add new AI roles to their workforces. More than half of 2,500 chief executive officers surveyed in April said their businesses are hiring for generative AI-related roles that didn’t exist in 2023, data from IBM’s AI consulting group shows.
Many businesses are adopting AI to improve existing processes and stay competitive. The technology brings the potential for companies across industries to unlock cost savings and productivity gains.
AI is an increasingly popular topic in discussions of corporate financial results on earnings calls, Bloomberg data for the S&P 500 index shows.
AI Leaders
On the headhunting front, there’s a global craze to find people with AI expertise who are dually qualified to lead organizations, said Katie Tucker, a consultant for executive search firm Spencer Stuart.
Tucker said the “adoption curve”—or the pace at which companies are using AI—has been one of the shortest she’s seen for a new technology. Some companies are ready for “moonshot” ideas, while others are just looking to optimize their operations. Tucker said she fields calls every day from companies asking for C-suite- or board-ready candidates. But the combination of skill sets they want is “scarce” and only beginning to emerge, she said.
“You don’t often see someone who has led enterprise technology,” and also brings experience with business infrastructure and an ability to “get the whole organization educated around using technology overall,” Tucker said.
For some companies, a chief AI officer may be a good fit if they use the technology across many business lines, according to Paul Pallath, vice president of applied AI at tech consulting firm Searce.
Almost a quarter of companies were looking to hire a chief AI officer last year while 15% already had one, according to a 2023 survey of global technology leaders by tech, marketing, and research company Foundry.
Some companies are installing a director with AI expertise to provide board-level oversight or help develop oversight guidelines. But this approach appears less common so far, corporate disclosures show. About 15% of companies in the S&P 500 index provide some disclosure about board oversight of AI, according to an Institutional Shareholder Services analysis of companies’ 2024 proxy statements. Only 13% of S&P 500 companies have at least one director with AI-related expertise.
Risk Oversight
Corporate oversight of AI is likely to focus on identifying and mitigating risks stemming from use, or misuse, of the technology.
Data privacy has emerged as a potential concern already. Generative AI tools like OpenAI’s ChatGPT are trained on troves of data, which can include personal information gathered online. Users of generative AI tools may also share information in prompts that could feed into the dataset and become discoverable to other platform users.
Without guardrails, businesses may risk leaking client information or other sensitive details.
“It’s very important that there’s consistent, responsible use of data,” including client or proprietary data, said Dan Priest, who was named PwC US’s first chief AI officer in July. The firm has policies and controls around how data is used, he said.
Another risk is ethnic bias, where AI can unintentionally exclude or misrepresent people of color.
Alphabet Inc.'s Google, for example, rolled out a critical software upgrade for its Pixel phone to better detect people of all skin tones in photos.
The phone camera’s software had a tough time recognizing people with brown and black skin tones, a problem the company quickly realized and tried to fix. The software, known as Real Tone, trained on thousands of artificial intelligence-generated images of darker-toned people to close the gap in the camera’s capabilities.
Google turned to Washington, DC-based Creative Theory, a Black-owned and Black-operated digital marketing and creative agency, to make the software.
Artificial intelligence needs to be trained with diversity in mind in order to be effective, said Gary Williams, Creative Theory’s chief creative officer. If the teams training the software don’t have a diverse perspective on what a person’s face typically looks like, the software won’t know who to recognize as people.
“Being able to use a tool responsibly means the tool has to be made responsibly,” Williams said.