Artificial intelligence agents could be making up to 15% of day-to-day business decisions by 2028. Business success will require innovation, yet many companies deploying these tools are flying blind when it comes to compliance.
While policymakers fixate on preventing new AI-specific regulations, such as the moratorium on state AI regulation that Congress tried to pass this summer, businesses shouldn't be distracted from the fact that AI implementations might violate existing laws.
Here’s the hard reality: The “Wild West” of AI isn’t about the absence of regulation; it’s about the race to deploy powerful tools without understanding how laws apply to automated decision-making.
The Trump administration’s AI Action Plan notes that heavily regulated sectors such as health care are slow to adopt AI because of “a complex regulatory landscape and a lack of clear governance and risk mitigation standards.” But employees will increasingly use AI tools that make their work more efficient, whether approved or not.
Part of the solution is helping businesses understand their tools, identify the regulations that could lead to lawsuits or government enforcement actions, and develop strategic measures to innovate confidently.
AI Compliance Challenge
Working as lawyers in the technology space, we’ve observed how businesses deploying AI face unexpected compliance pitfalls.
AI provides tremendous opportunities, from enhanced efficiency to improved decision-making. Already, it has proved effective in supply chain optimization, customer service interactions, discovering crucial patterns across decades of scientific research, fraud detection, and predicting consumer trends.
But AI tools create unique compliance risks: Their speed and scale make persistent oversight difficult, their “black box” nature obscures where oversight is needed, they can be hard to prevent from releasing sensitive data they were trained on, and they often struggle to apply legal requirements correctly even when trained on relevant laws.
A central challenge is ensuring that the use of AI doesn’t put the business at risk of violating the laws and regulations that already apply to its work. Nearly all of these laws were written before large language models and other AI tools existed, creating challenges in interpreting how older legal concepts apply to new technologies.
New Risks
Highly regulated industries underscore the real-world challenges of rolling out AI tools.
Health Care. In health care, AI companies are developing innovative tools to support clinicians across a range of processes, from assessing scans and predicting conditions to communicating with patients via AI-assisted translation. Most businesses that engage with patient medical data must comply with the Health Insurance Portability and Accountability Act (HIPAA).
This means their AI tools must also comply with HIPAA’s stringent rules, including patient consent requirements, technical access controls, and specific contract provisions with business associates. Even companies not typically covered by HIPAA, such as makers of direct-to-consumer health devices, will need to comply with Federal Trade Commission requirements and state-level consumer health privacy laws. Health-care businesses must navigate these overlapping regulations while anticipating regulatory evolution.
Defense Technologies. For defense contractors, the use of AI in operations or research and development can present compliance risks under the Defense Federal Acquisition Regulation Supplement (DFARS) regarding data protection and supply chain requirements. Export control risks may also be implicated. Careful technology choices and training will be crucial to avoid sharing controlled technology with non-US persons or routing data through servers in restricted jurisdictions.
Financial Services. AI tools deployed across financial services have proven particularly effective for fraud detection and anti-money laundering compliance. Loan underwriting and credit scoring increasingly use AI to expedite complex calculations, but this raises particular risks under consumer protection and fair lending laws.
AI may also increase risks to the stability of a financial institution or the financial system, such as through AI-enhanced trading tools that can act faster than human-monitored risk limits and compliance controls. State agencies have targeted entities whose AI underwriting models allegedly resulted in unlawful disparate impact based on race and immigration status.
Regulators have sometimes reacted to certain financial institutions’ deployment of automated systems by mandating expensive human customer support services for the entire sector. Such mandates are difficult to roll back.
Overall, the financial services sector continues to face active enforcement actions and consumer lawsuits, making compliance monitoring critical.
Compliance challenges extend beyond these heavily regulated industries, and it remains important for all entities to understand their AI tools and how they are being used. For marketing teams, generating advertisement content with large language models requires careful review and traditional clearance procedures to avoid intellectual property infringement risks. Similarly, HR departments in every organization must take care when using AI to evaluate candidates’ resumes so they don’t violate anti-discrimination laws.
Five Questions
Now is the time for companies to be proactive about these risks. Here are key questions every company should consider:
- Where exactly is your company using AI? Don’t just think about the obvious applications—AI is likely embedded in more business functions than you realize.
- What laws apply to your AI tools, and who’s watching them? Review each automated function to identify which legal, compliance, policy, or security requirements apply. If you don’t have clear oversight for these requirements, you’re operating without a compliance safety net.
- Can you prove your AI follows the law? You need documented processes showing how your tools meet existing legal requirements.
- How are you managing compliance risk with your AI vendors? For low-risk tools, remain aware of key provisions, including oversight rights and incident response notice obligations. Treat more expansive deals, including agentic AI embedded in business operations, more like an outsourcing arrangement: implement guardrails, require auditable logs, shift service-level agreements to focus on operational safety, and allocate liability for unauthorized actions.
- What’s your backup plan? When—not if—you need to pull an AI tool offline for compliance reasons, will your business operations survive?
These questions aren’t academic exercises—they’re the foundation of responsible AI deployment. The companies that survive and thrive will be those that master the art of innovation within existing legal frameworks. The future belongs to organizations that can move fast without breaking things—including the law.
This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law, Bloomberg Tax, and Bloomberg Government, or its owners.
Author Information
Teddy Nemeroff is a former White House National Security Council member, co-founder of AI start-up VerificAI, and professor at Princeton University.
Veronica Glick is a partner at Mayer Brown and a former Council on Foreign Relations International Affairs Fellow.
Helen Wilson contributed to this article.