"Simplification is one of the most difficult things to do," Jony Ive, the British-American designer behind the iPod, iPhone and iPad, once observed.
He was talking about consumer electronics. But the same truth applies to regulation, and Brussels is now testing it on the AI Act.
On 7 May, the Council presidency and the European Parliament struck a provisional deal to slim down parts of the EU's flagship AI law after negotiations that lasted into the small hours. The proposal sits within Omnibus VII, one of ten simplification packages the Commission has tabled since 2025.
The stakes are higher than procedural tidiness. In his September 2024 report on EU competitiveness, Mario Draghi delivered a stark diagnosis: the productivity gap between the EU and the US is largely explained by the tech sector, and only four of the world's top 50 tech companies are European.
The EU, he wrote:
"largely missed out on the digital revolution led by the internet and the productivity gains it brought."
AI represents a second window and a chance for Europe to close that gap in both productivity and manufacturing capacity. But seizing it requires rules that companies can comply with.
What the deal does
The core move is a delay. High-risk AI rules now won't apply until:
- 2 December 2027 for standalone high-risk AI systems; and
- 2 August 2028 for high-risk AI systems embedded in products.
Beyond the timeline shift, the co-legislators broadly preserved the Commission's original proposal but added several notable tweaks.
New prohibitions and protections
The deal creates a new banned AI practice: using AI tools to generate non-consensual sexual content (via so-called nudification tools) or child sexual abuse material (CSAM). This wasn't in the Commission's text; the European Parliament pushed it through.
The standard of 'strict necessity' for processing special categories of personal data (race, health, sexual orientation) for bias detection has been reinstated. The Commission had sought to loosen this.
Regulatory relief for business
SME exemptions, including simplified technical documentation requirements, now extend to small mid-caps (SMCs): companies with up to 500 employees.
On AI regulatory sandboxes, the deadline for member states to establish these is now 2 August 2027, which amounts to a year's breathing room for national authorities.
Transparency and registration obligations
Providers must register AI systems in the EU database even where they consider their system exempt from high-risk classification, which closes a potential loophole.
The grace period for implementing transparency measures on AI-generated content shrinks from six months to three, with a new deadline of 2 December 2026.
The AI Office's expanded reach
The deal clarifies the AI Office's supervisory competence over AI systems built on general-purpose AI models where the same provider develops both the model and the system. But national authorities keep control in specific domains, namely law enforcement, border management, judicial authorities and financial institutions.
Untangling the overlap with sectoral law
This was the thorniest drafting problem. Where sector-specific legislation (the Machinery Regulation, medical devices rules) already imposes AI-related requirements, the deal introduces a mechanism to limit the AI Act's application through implementing acts, preventing double regulation.
The Machinery Regulation gets a full exemption from direct AI Act applicability. Instead, the Commission gains power to adopt delegated acts adding health and safety requirements for high-risk AI systems under machinery law. It's a pragmatic fix: one set of rules, one compliance pathway.
The Commission must also publish guidance helping operators of high-risk AI systems in regulated sectors understand how to comply without duplicating effort.
What happens next?
The provisional agreement needs endorsement from both the Council and Parliament, then a legal-linguistic scrub before formal adoption, which is expected within weeks rather than months.
Simplification is among the hardest things to do. This deal proves the point.
The architecture of the legislation stands; what changes is the timetable: deadlines will slide, mid-sized firms will get gentler treatment, and the tangled overlap between AI rules and product safety law will receive a neater settlement. None of this amounts to a fundamental rethink; it is a tidy-up exercise.
One provision, though, breaks new ground. The prohibition on AI-generated child sexual abuse material is a genuinely new obligation, and it tells us something about the European Parliament's instincts. When legislators spot a vehicle, they'll use it. Business should expect more of the same.
For compliance teams, the extra 16 months before high-risk obligations bite (and 12 months for embedded AI) means room to build programmes that work rather than ones thrown together to meet an arbitrary deadline. This breathing space is real and welcome.
The more fundamental question is whether tidier rules will do what Mario Draghi argued they should: sharpen EU competitiveness. Simplification is not the same as substantive reform.
