Legitimate interest for AI: The CNIL sets limits

As you may have read in our recent posts, GAFAM and other tech giants have for some time resorted to the notion of “legitimate interest” to justify accessing our data.

On June 19, following a public consultation, the CNIL issued its recommendations. What lessons can be learned?

 

Legitimate interest in practice

📱 The Meta case perfectly illustrates this strategy. Since May 2025, the American giant has been using public data from its European Facebook and Instagram users to train its AI models (Meta AI, Llama) based on legitimate interest. This trend has raised legitimate questions about the balance between technological innovation and personal data protection.

 

What the CNIL says:

➡️ Yes, legitimate interest can be invoked to train AI systems, but only if serious safeguards are implemented.

🔐 Among the requirements:

  • Exclusion of sensitive data
  • Enhanced transparency
  • Clear information for data subjects
  • Effective right to object
  • Systematic pseudonymization
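
To make the pseudonymization requirement concrete, here is a minimal sketch of one common approach: replacing a direct identifier with a keyed hash (HMAC) before data enters a training set. This is an illustration, not a CNIL-prescribed method; the `pseudonymize` helper, the sample record, and the key handling are assumptions.

```python
import hashlib
import hmac

# Assumption: the secret key is stored and managed outside the dataset
# (e.g. in a key vault), so the mapping back to a person is not possible
# for anyone who only holds the pseudonymized data.
SECRET_KEY = b"store-me-in-a-key-vault"

def pseudonymize(user_id: str) -> str:
    """Return a stable pseudonym for user_id via HMAC-SHA256."""
    return hmac.new(SECRET_KEY, user_id.encode(), hashlib.sha256).hexdigest()

# Replace the direct identifier before the record is used for training.
record = {"user_id": "alice@example.com", "post": "public text used for training"}
record["user_id"] = pseudonymize(record["user_id"])
```

Note that pseudonymization is not anonymization: whoever holds the key can still re-link the data, which is why the GDPR continues to treat pseudonymized data as personal data.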

This position is consistent with the opinion adopted by the European Data Protection Board (EDPB) in December 2024, which had already established that AI development does not systematically require the consent of data subjects.

 

⚖️ Framing web scraping: A central issue

One of the most innovative aspects of these recommendations concerns the specific framework for online data harvesting (web scraping).

The CNIL now provides precise criteria allowing stakeholders to evaluate the conditions under which this practice can rely on legitimate interest. For example, reusing users' conversations with chatbots is permissible only if strong safeguards are in place.
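
One concrete safeguard often discussed for web scraping is honouring sites' machine-readable opt-out signals. As a hedged illustration (the robots.txt content, the bot name, and the URLs below are invented for the example), Python's standard library can check such directives before any page is collected:

```python
from urllib import robotparser

# Hypothetical robots.txt published by a site that opts its /private/
# section out of collection by an AI-training crawler.
ROBOTS_TXT = """\
User-agent: ai-training-bot
Disallow: /private/
"""

rp = robotparser.RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# A compliant scraper consults the parser before fetching each URL.
allowed = rp.can_fetch("ai-training-bot", "https://example.com/public/page")
blocked = rp.can_fetch("ai-training-bot", "https://example.com/private/page")
```

Checking robots.txt is only one element of the balancing test; it does not by itself make scraping lawful under the GDPR.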

 

🛡️ Concrete safeguards expected

The recommendations do not merely authorize the use of legitimate interest; they mandate the implementation of specific safeguards adapted to different types of AI systems. These measures include, in particular, the exclusion of certain sensitive data, increased transparency vis-à-vis data subjects, and facilitating the exercise of rights.

 

🇪🇺 A coordinated European approach

These recommendations are part of a broader European approach, conducted in coordination with the EDPB and other data protection authorities. This harmonization is crucial to ensure legal certainty for companies operating at a European level.

 

🔭 What’s next?

The CNIL has already announced upcoming publications on:

  • The legal status of AI models
  • Security measures to be applied
  • Data annotation

A strong signal: personal data law is adapting to AI, but its interplay with the forthcoming AI Act (AIA) will raise new challenges, particularly regarding co-regulation between authorities.

 

Source: https://cnil.fr/fr/recommandations-developpement-ia-interet-legitime
