New York City Mayor-elect Zohran Mamdani is set to begin his term soon, and the business community is already gaming out what his administration could mean for technology, artificial intelligence, and the cost of doing business in the Big Apple. Companies can adapt by taking inventory of their AI footprints, focusing on local laws, strengthening data management and privacy controls, and clarifying roles and responsibilities around AI governance.
Like California, New York City is a big enough market to influence how online platforms across the country do business. When California moved ahead with the California Consumer Privacy Act in 2018, some large companies adjusted their products and data practices nationwide rather than build separate regimes for one state. In 2023, California’s gross domestic product was about $3.9 trillion, roughly 14% of US GDP, with Texas and New York the next largest state economies at 9% and 8%. Given that kind of scale, New York City can play a similar role.
That’s why the appointment of Lina Khan to the mayor’s advisory committee is so consequential. As Federal Trade Commission chair under former President Joe Biden, Khan challenged mergers and sued major companies such as Amazon.com Inc., which she alleged was a monopolist that used a set of interlocking anticompetitive and unfair strategies to maintain its monopoly power.
New York City has already shown it’s willing to move ahead of state and federal law. The NYC Department of Consumer and Worker Protection, or DCWP, aims to protect and enhance the daily economic lives of New Yorkers to create thriving communities. The agency was created in 1969, when the City Council passed a landmark Consumer Protection Law giving DCWP broad authority to protect the public from deceptive business practices, making it the first municipal agency of its kind in the country.
More recently, DCWP has turned its attention to AI. In July 2023, it began enforcing Local Law 144 regarding automated employment decision tools. The law prohibits employers and employment agencies from using such a tool unless it has been subject to a bias audit within one year of its use, information about that audit is publicly available, and certain notices have been provided to employees or job candidates.
Khan has repeatedly emphasized that regulators should start fully using the laws already on the books. General consumer protection laws can be used to challenge many of the practices now embedded in digital platforms.
Take dynamic pricing, a strategy that bases prices on evolving market conditions such as demand, competitor pricing, and inventory levels. Khan herself has warned that AI is accelerating fraud risks and enabling new forms of price discrimination.
If she brings even a fraction of that mindset into City Hall, local rules could become a powerful tool to shape how large companies do business in New York City. Local Law 144, with its bias-audit and notice requirements for automated employment decision tools, already offers a preview, and aggressive enforcement of it is one place the new administration could start.
For C-suites, boards, and investors, this isn’t a reason to panic. It’s a reason to get organized:
- Map your AI footprint in New York City. Where are algorithms touching hiring, pricing, eligibility, or access to essential services? Start with a data inventory of all AI tools in use at your organization, then assess risks along the full lifecycle, from data collection and model development to deployment and ongoing monitoring.
- Strengthen data management and privacy controls. Effective AI governance rests on knowing where your data comes from, how it is transformed, and who can access it. That means tracking data sources and lineage, testing for bias and quality at key stages, and validating that your privacy and de-identification practices meet applicable legal and regulatory standards.
- Clarify roles and responsibilities. Governance only works if the right people are in the room. Legal, compliance, and risk teams may focus on laws and liability, while designers, developers, product managers, and marketers actually build and deploy AI-enabled systems. Defining who owns which decisions, and how issues escalate, helps align policy goals, business goals, and technical realities.
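For teams starting the inventory step above, the exercise can be as simple as a structured register of AI tools with a compliance flag. The sketch below is illustrative only: the record fields, tool names, and the one-year audit window (mirroring Local Law 144's bias-audit requirement) are assumptions, not a substitute for legal advice.

```python
from dataclasses import dataclass, field
from datetime import date, timedelta
from typing import Optional

# Illustrative record for one AI-enabled tool in an internal inventory.
@dataclass
class AITool:
    name: str
    business_function: str               # e.g., "hiring", "pricing", "eligibility"
    data_sources: list = field(default_factory=list)
    last_bias_audit: Optional[date] = None  # None means never audited

def needs_bias_audit(tool: AITool, today: date, max_age_days: int = 365) -> bool:
    """Flag hiring tools with no bias audit in the past year,
    loosely mirroring Local Law 144's within-one-year requirement."""
    if tool.business_function != "hiring":
        return False
    if tool.last_bias_audit is None:
        return True
    return (today - tool.last_bias_audit) > timedelta(days=max_age_days)

# Example sweep over a hypothetical inventory.
inventory = [
    AITool("resume-screener", "hiring", ["ATS exports"], date(2024, 1, 15)),
    AITool("price-optimizer", "pricing", ["sales history"]),
    AITool("chat-recruiter", "hiring", ["candidate chats"]),  # never audited
]
flagged = [t.name for t in inventory if needs_bias_audit(t, today=date(2025, 6, 1))]
print(flagged)  # → ['resume-screener', 'chat-recruiter']
```

Even a lightweight register like this makes the later steps, lineage tracking and role assignment, concrete: each record gains an owner and an escalation path as the program matures.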
In this environment, robust corporate governance around AI and data will be one of the best tools you have to respond effectively to whatever comes out of City Hall.
This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law and Bloomberg Tax, or its owners.
Author Information
Jean-Marc Appolon is a New York City-based attorney specializing in AI, cybersecurity, and data privacy.