July 2025: two texts, two logics, two visions of data protection, yet one shared challenge: regulating complex, high-risk systems in a Europe that seeks sovereignty in a global tech landscape.
On one side, the CNIL tightens the screws after a disastrous year for cybersecurity.
On the other, the European Commission unveils the final version of its Code of Practice for general-purpose AI (GPAI), aligned with the obligations of the AI Act.
🔐 CNIL: Enhanced security for databases (under GDPR)
The new CNIL guidelines, released in July 2025, follow alarming findings from 2024:
➡️ 80% of data breaches involved unsecured access
➡️ 50% of leaks affected databases containing more than 1 million individuals
Relying on Article 32 of the GDPR, the CNIL now requires:
- Mandatory multi-factor authentication for all remote access to large datasets (with recommended use of ANSSI-compliant hardware keys)
- Reinforced logging: retention of logs for 6 to 12 months and active monitoring of outgoing flows
- Strict contractual oversight of data processors (Art. 28 GDPR), including regular audits and immediate breach notifications
👉 A clear doctrine: risk prevention relies on technical and contractual robustness.
Stronger enforcement is planned from 2026 onward.
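The guidelines state the retention rule but do not prescribe tooling. As a minimal illustration of how the 6-to-12-month log retention window could be mechanized (the `partition_logs` helper and the 365-day upper bound are assumptions for the sketch, not CNIL-mandated names or values):

```python
from datetime import datetime, timedelta

# Illustrative retention window: the guidelines call for 6 to 12 months.
# We use the 12-month upper bound; a 6-month policy would use ~183 days.
RETENTION_DAYS = 365

def partition_logs(entries, now=None):
    """Split log entries into (retained, purgeable) by age.

    `entries` is a list of (timestamp, message) tuples; entries older
    than the retention window are flagged for purging.
    """
    now = now or datetime.now()
    cutoff = now - timedelta(days=RETENTION_DAYS)
    retained = [e for e in entries if e[0] >= cutoff]
    purgeable = [e for e in entries if e[0] < cutoff]
    return retained, purgeable
```

In practice this logic would live in the log pipeline itself (e.g., index lifecycle policies), but the compliance question it answers is the same: which records fall outside the retention window.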
⚙️ GPAI Code: A preventive regulatory approach for large-scale AI
The GPAI Code of Practice, published in July 2025, aims to guide industry players ahead of the application of the AI Act’s GPAI obligations (from August 2, 2025).
It targets developers, providers, and integrators of general-purpose AI models, especially those with systemic impact (large language models, voice assistants, embedded AI platforms…).
Though voluntary, the code outlines:
- A governance framework for risk management throughout the model’s lifecycle
- Documented assessments covering safety, transparency, bias, and environmental impact
- Risk mitigation measures before market release
- Ethical commitments related to IP rights and misuse prevention
📊 Comparison: convergence or regulatory disconnect?
Shared principles:
✔️ Ongoing risk assessment
✔️ Requirements for documentation and traceability
✔️ Accountability across the processing chain
Key differences:
⏱️ Timing: CNIL acts in response to actual breaches; the GPAI Code anticipates risk ex ante
🎯 Scope: GDPR covers all personal data; the GPAI Code focuses solely on broad-spectrum AI
⚖️ Legal force: CNIL enforces mandatory rules; the GPAI Code remains non-binding (until August 2, 2026)
Identified gaps:
❌ GDPR does not address algorithmic risks specific to generative AI
❌ The GPAI Code does not thoroughly cover personal data processing
❌ Neither framework fully tackles sovereignty issues related to underlying infrastructures (cloud, GPUs, model access)
🔍 Conclusion: two tools, one goal. But moving at different speeds.
GDPR is evolving through pressure from national authorities, with increasingly strict obligations.
The AI Act still leans on self-regulation to balance innovation and legal compliance.
But convergence is underway. And regulatory pressure will soon weigh more heavily on high-performing models.