US Agencies Extend Carrots, Raise Sticks in Compliance Framework

March 22, 2024, 8:30 AM UTC

Recent actions and announcements by federal authorities provide a roadmap for corporate enforcement in the year ahead that prioritizes individual accountability, whistleblower incentives, national security concerns, and, of course, staying ahead of the potential perils of artificial intelligence.

Companies must navigate evolving threats and regulatory scrutiny, or they could find themselves enforcement targets.

Individual Accountability

While emerging technologies continue to present new challenges and opportunities for corporations and law enforcement, there’s one area where we won’t be seeing change anytime soon: the DOJ’s focus on individual accountability.

Both Attorney General Merrick Garland and Deputy Attorney General Lisa Monaco have in recent remarks called individual accountability the agency’s “first priority,” and Garland emphasized that the greatest possible deterrent to corporate misconduct is the fear of individual prosecution.

Expect this directive to have a continued impact on how federal prosecutors exercise their discretion when charging and resolving cases.

New Whistleblower Programs

If holding culpable corporate individuals accountable is the mantra, incentivizing disclosure of wrongdoing and encouraging whistleblowers appear to be the favored means. Monaco announced that the DOJ is upping the ante in efforts to incentivize voluntary disclosure by implementing its own whistleblower program.

The DOJ’s new program is intended to “fill in the gaps” left by other rewards programs currently used by agencies. The Securities and Exchange Commission’s whistleblower program is the most well-known. Subject to certain yet-to-be-set guardrails, whistleblowers could receive large payments out of the forfeitures that result from prosecuting corporate misconduct. While anyone who provides such information is eligible, this program appears aimed particularly at encouraging corporate insiders to come forward.

Two of the largest US Attorney’s offices are also establishing their own voluntary disclosure “whistleblower” programs. Keenly aware of its position within Silicon Valley, the Northern District of California announced a new program in March focused on preventing the types of fraud engaged in by the recently prosecuted executives of Theranos and HeadSpin.

We anticipate this program (not yet published) will be similar to a pilot program that the Southern District of New York launched last month. SDNY’s policy is particularly aimed at individuals who may have exposure to criminal prosecution.

Like other voluntary disclosure policies, the SDNY policy requires the individual be the first in the door, thereby setting off a potential self-disclosure race between individual employees and their employers.

National Security

Federal agencies continue to focus on national security in driving corporate enforcement priorities. In particular, they aim to protect US intellectual property and trade secrets, as well as prevent distribution of sensitive technologies with potential military applications. This follows a notable uptick in enforcement around these areas over the past year supported by the DOJ and the Department of Commerce’s Disruptive Technology Strike Force.

To demonstrate the point, the US Attorney’s Office for the Northern District of California announced the unsealing of an indictment in United States v. Ding, charging a Chinese national with theft of AI-related trade secrets from Google, including software design and chip architecture.

Relatedly, the government emphasized the multi-agency approach being pursued to investigate and prosecute these issues. The Departments of Commerce and Treasury and the DOJ have issued a number of tri-seal compliance notes on important topics, including one earlier this month regarding the obligations of foreign-based persons to comply with US sanctions and export controls, as well as joint guidance on voluntary disclosure last year.

Notably, Assistant Attorney General Matthew Olsen, who leads the DOJ’s National Security Division, said corporate violations of these provisions will begin to be more aggressively enforced. The NSD’s new role will pose potentially tricky strategic considerations for companies to navigate in making voluntary disclosure decisions, considering the strict liability nature of many offenses and that disclosure to one agency isn’t credited by others.

Companies should also be gearing up for a new regulatory framework that will create significant new compliance requirements to address sharing of data with certain parties (including investors, partners, and vendors) in countries of concern.

The Feb. 28 executive order directs the DOJ to regulate data transactions posing an “unacceptable risk to national security,” including transactions involving access to sensitive personal data by individuals and entities in countries controlled by “hostile foreign powers.” That category will include, among other countries, China and Russia.

AI Risks and Compliance

There has been no greater buzzword recently than AI, and that’s true among federal law enforcement as well. New areas of focus include “AI washing,” which occurs as companies race to promote to investors or customers how their products and services deploy AI. But just as deceptively exaggerated claims about environmental attributes have landed companies in hot water, the same will be true of AI claims that deceive the public.

Companies that are developing AI products or platforms also have a responsibility to build in appropriate safeguards to avoid misuse. While most companies have carefully worded terms of service for customers and AI acceptable use policies for employees, we should expect increased scrutiny, and companies should be prepared to explain the reasonableness of their risk-mitigation practices in the event of missteps.

The DOJ recently appointed the agency’s first chief science and technology adviser and chief AI officer and announced two additional enforcement-related steps. First, the agency instructed prosecutors to seek stiffer penalties and sentencing enhancements in certain AI-assisted crimes, much like enhancements that apply when a firearm is used during a crime.

Second, the DOJ announced that the Criminal Division will be updating its guidance on the Evaluation of Corporate Compliance Programs to incorporate expectations related to assessing the risks associated with disruptive technologies, including AI.

Based on the raft of recently announced enforcement priorities and compliance expectations for the year ahead, corporate decision-makers now have an opportunity to evaluate their internal priorities, adjust resources to address regulatory expectations, and fill any gaps.


This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law and Bloomberg Tax, or its owners.

Author Information

Kevin Feldis is a partner at Perkins Coie and co-founder of the firm’s Environmental, Social & Governance practice.

Jamie A. Schafer is a partner in Perkins Coie’s white collar and investigations group.

Ben Estes is an associate in Perkins Coie’s white collar and investigations group.

To contact the editors responsible for this story: Jessie Kokrda Kamens at jkamens@bloomberglaw.com; Alison Lake at alake@bloombergindustry.com