In-House Counsel Must Rethink AI Playbook Before Regulators Do

March 2, 2026, 9:30 AM UTC

In-house teams that already have an “acceptable use of AI” policy need to take a closer look at how it holds up against the stream of new rules arriving across jurisdictions. Doing so will reduce enforcement risk and let firms move faster than competitors who rebuild from scratch each time a new rule drops.

The EU AI Act’s general-purpose AI obligations already apply. In the US, more than 240 state bills referencing artificial intelligence have been enacted so far, and Colorado, Texas, and Illinois all have AI laws with enforcement dates in 2026.

Meanwhile, no comprehensive federal statute exists. The Trump administration’s December 2025 executive order is aimed at preempting state AI laws, but an executive order alone can’t overturn them without Congress or the courts.

If your company operates across states, sells into regulated industries, or uses AI in employment decisions, you’re already dealing with overlapping rules, with no single standard to anchor to.

Here’s how to build a governance framework that works regardless of which state enacts its law next.

Map every AI system, especially the ones you may not know about. As a first step, catalog every AI tool your company uses, including shadow AI that employees may have adopted without approval. Then classify each system by risk.

These state laws converge on the same high-risk categories: employment, housing, finance, health care, education, insurance, and legal services. Colorado is the most explicit, Texas bans certain harmful AI uses in overlapping areas, and Illinois requires disclosure when AI is used in employment decisions. Build your taxonomy around these shared categories and you’ll have a classification system that scales across jurisdictions and keeps working as new laws are enacted.
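
If it helps to make that concrete, the taxonomy can start as a simple structured record. Below is a minimal Python sketch assuming a small internal inventory script; the tool name, vendor, and field names are hypothetical, not drawn from any statute.

```python
from dataclasses import dataclass, field

# High-risk categories the state laws converge on (see above).
HIGH_RISK_DOMAINS = {
    "employment", "housing", "finance", "health_care",
    "education", "insurance", "legal_services",
}

@dataclass
class AISystem:
    name: str
    owner: str                     # accountable business owner
    vendor: str | None = None      # None for internally built tools
    approved: bool = False         # False flags shadow AI
    domains: set[str] = field(default_factory=set)

    def risk_tier(self) -> str:
        """Classify by whether the system touches a shared high-risk domain."""
        return "high" if self.domains & HIGH_RISK_DOMAINS else "standard"

# Example: a resume-screening tool surfaced during the inventory.
screener = AISystem(
    name="ResumeRanker",           # hypothetical tool
    owner="HR",
    vendor="Acme AI",              # hypothetical vendor
    approved=False,                # adopted without approval -> shadow AI
    domains={"employment"},
)
print(screener.risk_tier())        # -> "high"
```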

Anchor everything to NIST. The National Institute of Standards and Technology AI Risk Management Framework and ISO/IEC 42001 are the standards regulators most consistently reference. Colorado goes the furthest, granting organizations that comply with a recognized framework a rebuttable presumption of reasonable care. But even where there’s no safe harbor yet, NIST alignment gives you evidence of diligence, which will be important if you face an enforcement action or a vendor dispute.

To anchor to NIST, center the program around its four functions:

  1. Govern (assign accountability)
  2. Map (document the use cases and foreseeable harms)
  3. Measure (test for bias, accuracy, and drift)
  4. Manage (define escalation paths)

Each of these helps ensure that if and when a federal standard is enacted, you will already be mostly compliant, because the federal standard will almost certainly reference NIST.
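
One way to make those assignments concrete is to encode each function with an accountable owner and the artifacts that evidence it. This is a minimal sketch; the owners and artifact names are placeholders to adapt to your own org chart.

```python
# NIST AI RMF-aligned program skeleton: each function gets an accountable
# owner and the artifacts that evidence it. All names are placeholders.
NIST_PROGRAM = {
    "govern":  {"owner": "General Counsel",
                "artifacts": ["AI policy", "accountability chart"]},
    "map":     {"owner": "Product",
                "artifacts": ["use-case register", "foreseeable-harms memo"]},
    "measure": {"owner": "Engineering",
                "artifacts": ["bias test results", "accuracy and drift reports"]},
    "manage":  {"owner": "Risk & Compliance",
                "artifacts": ["escalation paths", "incident log"]},
}

def missing_artifacts(program: dict, completed: set[str]) -> list[str]:
    """List artifacts not yet produced -- the diligence gaps to close first."""
    return [a for f in program.values()
            for a in f["artifacts"] if a not in completed]
```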

Start running impact assessments before you have to. Impact assessments are how you put the framework into action.

An impact assessment documents the system’s purpose, intended population, data inputs, discrimination risks, and mitigation steps. Draft a standardized template aligned to NIST’s map and measure functions, and require your team to complete it before any high-risk deployment.
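
In practice, the template can be as lightweight as a structured form whose completion gates deployment. Here’s a minimal Python sketch; the fields mirror the elements listed above, and the names are illustrative.

```python
from dataclasses import dataclass, fields

@dataclass
class ImpactAssessment:
    """Standardized template aligned to NIST's map and measure functions."""
    system_name: str
    purpose: str                 # map: what the system is for
    intended_population: str     # map: who it affects
    data_inputs: str             # map: what goes in
    discrimination_risks: str    # measure: identified bias risks
    mitigation_steps: str        # manage: how those risks are reduced
    reviewer: str = ""           # empty until a named person signs off

def ready_to_deploy(ia: ImpactAssessment) -> bool:
    """Gate any high-risk deployment on a fully completed assessment."""
    return all(getattr(ia, f.name).strip() for f in fields(ia))
```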

The EU AI Act already mandates impact assessments for high-risk systems. Colorado will require them annually for high-risk AI deployers starting June 30, 2026. California’s CCPA regulations, which cover automated decision-making technology and capture most AI-driven processing, include similar requirements. Impact assessments are here to stay.

Draw hard lines on tools and data. Specify which AI tools are approved, which are prohibited, and under what conditions employees may use general-purpose models. Confidential client data, personally identifiable information, trade secrets, and privileged material shouldn’t enter any non-enterprise AI model.

Name the vetted tools, specify their approved data classification levels, and designate who authorizes exceptions. Under the California Consumer Privacy Act, the EU’s General Data Protection Regulation, HIPAA (the federal law restricting release of medical information), and sector-specific rules, your company is liable for what third-party AI vendors do with personal data; outsourcing the processing doesn’t outsource liability. Train employees on the acceptable use policy and make that link explicit, so they understand that AI rules flow from privacy and security frameworks they already know.
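
Those rules translate naturally into policy-as-data that IT can actually enforce. Here’s a simplified sketch; the tool names and classification levels are hypothetical.

```python
# Acceptable-use rules as data: which tools may touch which data classes,
# and who can authorize exceptions. Tool names are hypothetical.
DATA_CLASSES = ["public", "internal", "confidential", "privileged"]

APPROVED_TOOLS = {
    "EnterpriseGPT":    {"max_data_class": "confidential", "exceptions": "CISO"},
    "ContractDraftAI":  {"max_data_class": "privileged",   "exceptions": "GC"},
    "consumer-chatbot": {"max_data_class": "public",       "exceptions": None},
}

def use_permitted(tool: str, data_class: str) -> bool:
    """Check a proposed use against the policy; unknown tools are denied."""
    rule = APPROVED_TOOLS.get(tool)
    if rule is None or data_class not in DATA_CLASSES:
        return False
    return DATA_CLASSES.index(data_class) <= DATA_CLASSES.index(rule["max_data_class"])

print(use_permitted("consumer-chatbot", "privileged"))  # -> False
```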

Overhaul how you vet AI vendors. As the CEO of a legal technology company, I see the other side of procurement for some of the most secure organizations in the world, and the standard vendor risk questionnaire is dangerously outdated for AI.

Most of these questionnaires don’t ask about hallucination rates or training data provenance. They don’t account for model drift, where outputs shift as the system updates, or for agentic AI, where a vendor’s autonomous system could take actions inside your environment. Build an assessment that covers these gaps.

Require human oversight and enforce it. Every emerging regulatory scheme shares one requirement: humans must remain accountable for consequential decisions.

Define where human review is mandatory and how it’s documented. Then train your people by role, because what a product engineer needs to know differs from what a recruiter needs.
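
At its simplest, enforcement is a decision gate: no consequential action is recorded without a named human reviewer. This sketch is illustrative, not a production audit system.

```python
from datetime import datetime, timezone

REVIEW_LOG: list[dict] = []  # in practice, an auditable system of record

def record_decision(decision: str, risk_tier: str, reviewer: str | None) -> bool:
    """Block high-risk decisions that lack a named human reviewer."""
    if risk_tier == "high" and not reviewer:
        return False  # escalate: human sign-off is mandatory
    REVIEW_LOG.append({
        "decision": decision,
        "reviewer": reviewer,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    })
    return True
```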

None of this is huge overhead. You can start the process with your department heads. Once you align to NIST, that structural decision informs everything downstream. Take inventory of your AI systems and classify them by risk. Draft your impact assessment template, which can be done in a week.

Then, rewrite your vendor questionnaires to cover hallucination rates, model drift, and training data provenance. And remember, your governance framework must have teeth. A policy without consequences becomes evidence that the organization knew of the risks and didn’t act.

The in-house teams that build a governance framework now, and align it to the regulatory landscape, won’t only be ready; they’ll also be the ones their boards trust to deploy AI the furthest, a potentially game-changing advantage for any company.

This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law, Bloomberg Tax, and Bloomberg Government, or its owners.

Author Information

Dorna Moini is a former litigator at Sidley Austin and current CEO and founder of Gavel and Gavel Exec, an AI contract review and drafting platform.

To contact the editors responsible for this story: Jada Chin at jchin@bloombergindustry.com; Jessica Estepa at jestepa@bloombergindustry.com
