The European Union offered companies pared-down AI, cyber, and privacy rules. Instead, the regulatory relief threatens to complicate compliance.
The European Commission, the EU’s executive body, released its long-awaited digital rules proposal Nov. 19. It includes amendments to simplify several laws including the bloc’s sweeping privacy statute, the General Data Protection Regulation, and the EU AI Act, the world’s first comprehensive AI law. The watered-down proposals follow criticism from the Trump administration and giant US tech platforms, which argued that the EU had gone too far.
Among other changes, the new package proposes delaying the application of some provisions of the artificial intelligence law, centralizing cyber incident reporting, and clarifying the legal grounds for training AI models on personal information.
Companies had hoped a revamp of the digital laws, which regulate their approach to tracking technology, AI systems, and data collection, would cut their compliance costs by at least €5 billion (about $5.7 billion) by 2029, as the EU promised. But “simplification” might be a misnomer: the revised proposals introduce new exemptions and definitions that haven’t been fleshed out, disrupt long-term planning, and open the door to more uncertainty as the laws get reworked.
“So many companies invested so much time and money to comply with the existing frameworks, especially the GDPR, which is nearly 10 years old now. And here we have proposals that purport to make their lives easier, compliance easier,” said Joe Jones, director of research and insights at the IAPP. “But the question for companies will be, is it worth it, or do we just stay with what would be a gold-plated regime?”
New Challenges
The package of changes, known in Brussels-speak as the Digital Omnibus, does deliver on some of its promises, attorneys said.
For example, it suggested reducing the number of cookie consent banners on websites by limiting the situations where consent is needed. And it clarified that businesses could lean on legitimate interest—an alternative to the EU’s consent requirement—as a legal basis to train AI models on personal information.
Companies could also fulfill their obligations to report cyber incidents under multiple laws via a single entry point in a new “report once, share many” principle. And providers of AI systems could be allowed to use sensitive personal data to detect bias in models, which has so far been prohibited by the GDPR.
“At a high level, it does deliver some of the innovation requests with respect to AI development and training and the need to do certain things, like use data in order to ensure AI develops in a way that we can test for bias and unfairness within the models, where we might otherwise have had our hands tied behind our back in order to do that sort of testing,” said Brandon Kerstens, associate general counsel and chief privacy officer.
Still, the reprieve might be limited.
New definitions and requirements will force companies to reassess their policies, vendor agreements, and tech stacks. Businesses moving forward under the new exemptions will be treading riskier, uncharted ground.
“The first step of determining if you’re in scope, or if your incident is regulated by those laws, if you’re a high risk AI system, if it involves personal data—this is a challenge in itself,” said Alice Portnoy, Brussels-based privacy and cyber lawyer at Alston & Bird. “So if you change the core definitions, it’s going to be a challenge again in practice, and companies are going to have to review their policies.”
If laws like the GDPR are diluted, companies will also have to decide whether to stick with higher standards—which have been adopted in other jurisdictions—or rethink their approach to personal data.
“Companies are going to need to look at their programs holistically and say, do we want to still be meeting this highest bar? Or are there areas where we could be a little more flexible and adjust?” said Ron De Jesus, field chief privacy officer at Transcend, a compliance provider. “These are the types of conversations I’ll need to have as a CPO with my executive team around our risk appetite. And there are costs involved with that.”
In Flux
Brussels also proposed changes to laws that haven’t yet been fully implemented or enforced. The AI Act and Data Act, for example, which together regulate AI systems, smart devices, and cloud platforms, have requirements that take effect in 2026 and 2027.
Unlike formal guidance, which would typically spell out responsibilities and potential enforcement actions, the proposed changes offer little clarity to companies, attorneys said.
“They were building the plane as they were flying, and now they’re told: actually it’s not a plane you have to fly, it’s something else. You’re going to a different destination,” IAPP’s Jones said.
The proposal now faces a long road. Next stop: the European Parliament; then the 27 member states will have an opportunity to weigh in. Adoption of the package could come as late as mid-2026. Countries like France and Germany have already expressed support for some parts of the package, including the delayed application of certain EU AI Act provisions.
It’s too soon to know whether any changes will be adopted before existing compliance deadlines arrive.
“The terrain in the EU is just going to be changing yet again. In some ways, it may be for the better. In some ways not,” said Michael Vatis, partner at Benesch. “One of the things that’s a constant theme in this area is that companies want certainty at least as much as they want efficiency, because they need to be able to plan.”
He added, “What this proposal indicates for certain is that the field is going to remain in flux.”