Brown Rudnick attorneys Matthew Richardson and Joel Todd say companies have to adopt AI technology to stay competitive, but must be careful to let regulators know how they’re using it.
As artificial intelligence transforms the way businesses operate—from process automation and personalized customer experiences to predictive analytics and fraud detection—corporate boards and regulators are scrambling to provide appropriate oversight and guard against the technology's risks.
Public companies should pay close attention to their public disclosures regarding their use of AI and the financial and operational results attributable to its use.
Companies that have their stock traded on a national securities exchange, such as the NYSE or Nasdaq, or are otherwise subject to the Securities Exchange Act of 1934 must provide ongoing public disclosures on Form 10-K (annual report), Form 10-Q (quarterly report), and Form 8-K (current report), among other required filings.
These reports update investors and potential investors on the financial results and operations of a business; assess and convey the risks associated with an investment in a company; and make statements about corporate goals, future possibilities and plans, and assessments of strategy.
Public companies must provide updated corporate information on a schedule set by regulation. It is therefore important for companies subject to these rules to represent current corporate information accurately and to explicitly address the items or conditions that may change that information.
AI is in vogue with investors, and the Securities and Exchange Commission has stated it will be watching how companies disclose their relationship with AI to the public.
Business Disclosures
Direct integration and application of AI in business operations should be disclosed where required, and where helpful to supplement other disclosures, in corporate reports filed with the SEC. Disclosure of indirect AI exposure, such as a third-party service provider’s increased efficiency attributable to AI as a passed-through benefit realized by the company, also may be helpful.
Unlike cybersecurity incidents, which have their own Form 8-K disclosure item, AI disclosure may be placed in different sections of corporate reports. Context and accuracy are important when discussing AI’s benefits to a company in public disclosures. Context can help the company avoid claims that the disclosure was misleading, while accuracy can help the company avoid claims that the disclosure was false.
Risk Factor Disclosures
Due to AI’s broad scope and the ways it can affect a company’s business and financial performance, all public reporting companies should review AI’s potential effects.
Risk factors provide investors with important insights into the company’s operations and can help identify areas where performance may be negatively affected by internal and external forces. Risk factors also provide an opportunity for the company to protect itself from liability by fully disclosing known risks.
Drafting sufficient risk factors related to AI may be particularly challenging, given the rate of change in the space combined with general opacity of its features and flaws. A good risk factor should strike the balance between being broad enough to cover unforeseen circumstances as AI changes but specific enough to inform the reader of known issues.
AI-Drafted Disclosures
While there is some risk involved, a company can contemplate using AI to write about itself in corporate reports.
Consider using data analytics technology to compare the disclosures of peer companies. Benchmarking against peer companies is a common practice for quarterly and annual reporting of risk factors and for reporting executive compensation data in proxy statements. Data analytics tools may also allow companies to learn from their peers’ mistakes by analyzing and comparing SEC enforcement actions and comment letters.
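For companies exploring this kind of benchmarking in-house, the text-comparison step can be sketched with nothing more than Python's standard library. The disclosure passages below are invented for illustration; a production workflow would instead pull peer filings from the SEC's EDGAR system and apply more robust text-analytics tooling.

```python
# Hypothetical sketch: comparing a company's draft risk-factor language
# against a peer's filed disclosure using the standard-library difflib.
# Both passages are invented examples, not real filings.
from difflib import SequenceMatcher

our_risk_factor = (
    "Our use of artificial intelligence may produce inaccurate outputs, "
    "and reliance on those outputs could harm our operating results."
)
peer_risk_factor = (
    "Our use of artificial intelligence may produce unreliable outputs, "
    "and reliance on those outputs could adversely affect our results."
)

def similarity(a: str, b: str) -> float:
    """Return a 0.0-1.0 similarity ratio between two disclosure passages."""
    return SequenceMatcher(None, a, b).ratio()

score = similarity(our_risk_factor, peer_risk_factor)
print(f"Similarity to peer disclosure: {score:.2f}")
```

A high ratio flags near-boilerplate language that may warrant company-specific tailoring, while a low ratio against every peer may signal a disclosure gap worth reviewing with counsel.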
During the drafting process, companies can use large language models, such as ChatGPT or Microsoft Copilot, to draft disclosures. These applications shouldn’t be relied on exclusively, as they are prone to mistakes. And their use won’t protect a company from liability arising from incorrect or incomplete disclosure.
Exclusive or overly reliant use of such an application also might be viewed as a sign of insufficient disclosure controls. However, large language models can assist by producing an efficient first draft that is then reviewed by adequately trained personnel.
Data analytics technology also may be used to evaluate the internal controls of the company, including its disclosure controls.
Investors, securities litigation plaintiffs, and regulators are already using this technology. The SEC is using data analytics to identify non-compliance with disclosure requirements.
According to a recent survey, 73% of investor respondents said companies should increase their investments to deploy AI at scale. Companies may be at a disadvantage if they wait much longer to use available AI technology for themselves.
This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law and Bloomberg Tax, or its owners.
Author Information
Matthew Richardson is a partner in Brown Rudnick’s Cybersecurity & Data Privacy and Digital Commerce groups.
Joel Todd is an associate in Brown Rudnick’s Transactions Practice Group.