- Shareholder bids highlighted misinformation, labor risks
- First-time proposals secured rare double-digit backing
Companies from social media giant Meta to Microsoft and Apple are responding to shareholder pressure over artificial intelligence.
Meta recently updated its relatively new AI labeling policy to be even clearer about content generated by the technology, aiming to tackle concerns about the potential spread of misinformation across Facebook and its other platforms. Microsoft released an inaugural responsible AI report in May. And Apple announced that it would disclose more about its AI plans after a proposal seeking more AI-related business and ethics information received 37.5% support from shareholders in February.
The businesses are among a half-dozen targets that shareholders have pressed to divulge the risks that the AI tools they're developing to stay competitive pose to their finances and operations, as well as to their employees and society more broadly. In addition to technology companies, the entertainment industry has become a focus of shareholder efforts after use of the emerging technology galvanized labor concerns during last summer's Hollywood strikes.
An AI bid at streaming company Netflix came closest to passing.
The pressure is not going to stop companies from moving forward with AI. Businesses are banking on AI as a monumental financial opportunity and touting their AI focus in filings to investors: Bloomberg Law reported in February that over 40% of S&P 500 companies mentioned AI in their most recent annual report—an uptick since 2018 when AI was rarely mentioned.
The campaigns are, however, starting to prompt some businesses to modify their behavior. Earlier this year, the AFL-CIO said it withdrew some of its AI-related bids.
Pressure from investors and others is going to continue to push companies across industries to divulge more information about their AI use, said Beena Ammanath, global and US technology trust ethics leader at Deloitte.
“There is enough awareness now that we’re going to see that shift to be more transparent,” Ammanath said. “I get to speak to a lot of boards and CEOs and their leadership teams, and I can tell you that the level of awareness or activity that is happening at a board level—something like this hasn’t happened in a long time.”
Big Tech
Microsoft released its inaugural responsible AI report in May, explaining how it builds generative AI systems to mitigate misinformation and disinformation. The tech giant committed to the US government one year ago that it would produce the report.
That commitment wasn’t enough to satisfy investors. Microsoft was the first of several companies to face a shareholder proposal late last year urging it to detail its AI risk and plans to remediate any potential harms. Even though it had already promised its responsible AI report for the US government, 21.2% of investors still supported the bid from Arjuna Capital in December asking the company’s board to produce an additional report.
“We believe Microsoft’s multi-faceted program to address the risks of misinformation and disinformation is longstanding and effective,” the company said in its proxy statement.
The height of AI investor pressure came at Alphabet.
Any result in the double digits is considered enough to potentially sway company behavior, even if the proposal does not pass.
Alphabet’s AI products include the Gemini chatbot, formerly known as Bard, which can be used for writing, research and other language-related tasks. The tech giant told shareholders that it’s committed to “applying Alphabet’s resources responsibly as it continues to unlock the growth potential of AI across its products and services.”
Ultimately, investors want big tech to be more transparent and careful about how rapid AI development could pan out in the long run.
“How fast is too fast, and how much are you willing to sacrifice society for profit?” asked Jonas Kron, chief advocacy officer of Trillium Asset Management, which brought the AI governance proposal at Alphabet.
Meta updated its AI policy in July to warn users about manipulated media. The social media giant launched a “Made with AI” label in April, but it recently changed the tag to say “AI info” instead, which users can click to get more information. Meta said the update—which it originally rolled out after pressure from an independent oversight board that called for a revamp of its policies—is intended to provide more context, because the previous label wasn’t always aligned with users’ expectations.
Meta, which has launched a new digital assistant feature that can answer user questions and generate images, faced a proposal from Arjuna Capital in May that received 16.7% of investor support. Arjuna pointed out that the vote result was significant considering that Mark Zuckerberg, who controls over half of the company's voting power, voted against the bid.
Arjuna is going to continue to urge more businesses to make changes. “The risks aren’t going away, so our engagements aren’t going away,” said Julia Cederholm, senior associate of ESG research and shareholder engagement at Arjuna.
Meta said in its proxy statement that it has already “made significant investments” in safety and security to tackle misinformation and disinformation.
Entertainment Pressure
The proposal that almost passed at Netflix raised concerns about potential hiring discrimination, mass layoffs and facility closures, and argued that ethical guidelines for AI use could help avoid labor disruptions. The investor effort followed entertainment industry worker concerns that AI could take credit from or replace writers or be used to replicate actors’ likenesses.
Netflix said in its proxy statement that it’s already subject to collective bargaining agreements with entertainment industry unions that include AI provisions. Netflix also said the type of report the proposal sought “may require disclosure of strategic initiatives, confidential research and development activities, and other information that may harm our competitive position.”
Carin Zelenko, director of capital strategies for the AFL-CIO, said the entertainment industry strikes demonstrated what happens when businesses don’t engage workers in thinking through how the use of technology could impact jobs and the future of the industry.
“I really believe it’s important that, as companies are introducing these technologies, that they engage the workforce in how the technology can be used,” Zelenko said.
Some employees feel the same. Ylonda Sherrod, an AT&T sales consultant in Ocean Springs, Mississippi, and a member of the Communications Workers of America, is speaking up about her concerns over how AI affects worker empowerment and transparency.
“I feel like we should have a say in how it’s implemented in the workplace, because it could be implemented better,” she said in an interview, adding that there should be restrictions and policies in place to make workers feel more secure across industries.
No Fixed Playbook
As the AI race ramps up, companies are going to continue wrangling with how best to navigate ethical, legal and regulatory issues covering a range of topics from data privacy to the environmental impact of the technology.
AI risk is heightened by new rules like the EU's AI Act, which takes effect on Aug. 1. That law—like the shareholder proposals—aims to make sure AI systems are governed by safe and ethical principles. The Act bans practices it deems an "unacceptable risk," such as AI systems that could manipulate individuals or exploit them because of their age or disability.
The law will apply to providers and developers of AI tools that are used in the EU even if the companies are based elsewhere.
The US has been slower to adopt any laws on AI use, but the White House issued an executive order late last year with sweeping security and privacy measures and other directives, including a requirement that developers share their safety test results with the US government. Securities and Exchange Commission Chair Gary Gensler also gave companies a stern warning in December about misleading investors about their AI capabilities, a phenomenon that he called “AI washing.”
With inconsistent and incomplete guidance from regulators so far, some companies are working to set up their own risk mitigation infrastructures.
“I think organizations are experimenting, right now there is no fixed playbook for it,” said Deloitte’s Ammanath.
Some companies have created new high-level roles to tackle the mammoth issue, including chief AI ethics officer, chief tech ethics officer, and even chief trust officer. They’re also setting up committees on AI or tech ethics either with internal or external members.
But as businesses divulge more about their AI plans, it’s important to be clear and tailor the release of information to the right stakeholders, Ammanath said.
“The way you communicate about or explain how a model works to a data scientist would be different to how you explain it to your board or customer or investor,” she said.
