The EU AI Act Delay That Wasn’t a Loophole

Artificial intelligence: Council and Parliament agree to simplify and streamline rules (Consilium)

Key quote:

Today’s agreement on the AI act significantly supports our companies by reducing recurring administrative costs. It ensures legal certainty and a smoother and more harmonised implementation of the rules across the Union, strengthening EU’s digital sovereignty and overall competitiveness. At the same time, we are stepping up the protection of children targeting risks linked to the AI systems.

Why it matters:

Unfortunately, much of the coverage ignores that this is a provisional agreement that won’t be final until next month, and finalization isn’t guaranteed. But critics are already screaming that the EU caved to Big Tech by pushing the high-risk AI deadline from August 2, 2026, to December 2, 2027. I’m expecting we’ll see a press release from Max Schrems, honestly. Critics have claimed the provisional agreement creates a massive loophole in which high-risk systems deployed before the new date escape oversight forever. The non-retroactivity clause in Article 111 does legally allow this, but framing the delay as pure corporate capture ignores the regulatory reality across the European Union.

The regulatory infrastructure simply wasn’t there. National competent authorities remain only partially designated, and accredited bodies capable of conducting conformity assessments are in short supply. Had the August 2026 deadline held, enforcement would likely have been a theoretical exercise, because the ecosystem companies need to demonstrate compliance doesn’t exist yet. The delay isn’t a gift; it’s a recognition that you can’t enforce a law without the tools to measure compliance with it.

This isn’t just about buying time for Siemens or Bosch to avoid red tape, though Germany certainly lobbied hard to exempt machinery from the Act. It’s about preventing a chaotic rollout where companies are fined for failing to meet standards that haven’t been published or are inconsistently applied. The provisional agreement ties the new December 2027 date to the availability of technical standards and tools from the Commission.

But there’s a catch. While the high-risk rules may be slipping back, the ban on “nudifier” apps and non-consensual intimate imagery moves up, potentially taking effect as early as December 2, 2026. That suggests the EU is prioritizing consumer protection over industrial convenience. The real policy risk isn’t that companies will dodge the law forever; it’s that they might rush risky hiring or biometric systems to market before the December 2027 clock starts, hoping the non-retroactivity shield stays in place.

Ultimately, this is an opportunity for companies to pressure-test their compliance procedures or reduce the risk designation of their systems ahead of the full rollout. That’s a runway we didn’t have when DORA dropped, and if GDPR is any guide, the usual suspects will be first in the queue for enforcement actions once the Act’s enforcement begins.
