🔍 On February 26, 2025, the European Parliament published a revealing report on a major legal dilemma: how to detect discriminatory biases in AI without violating the protection of sensitive data?
A Fundamental Tension Between Two European Legal Frameworks
The AI Act, which came into force in August 2024, allows the processing of sensitive data to detect and correct algorithmic biases in high-risk AI systems (Article 10(5)).
The GDPR, on the other hand, imposes strict restrictions on processing these same special categories of personal data (Article 9).
This divergence creates significant legal uncertainty for businesses.
Concrete Impact for Legal Departments
For corporate lawyers, this tension manifests in several critical areas:
- 🤖 Generative AI: While not classified as high-risk systems by default, chatbots can generate discriminatory speech. How can compliance be ensured without accessing the sensitive data needed to detect biases?
- 🚗 Autonomous Vehicles: Computer vision systems may detect light-skinned pedestrians more accurately than dark-skinned ones. Is processing ethnic data to correct this bias compliant with the GDPR?
- 👔 Recruitment: Candidate selection algorithms can perpetuate discrimination based on gender or health. How can they be audited without explicitly processing this protected data?
- 💰 Banking Scoring: Credit assessment systems may conceal biases linked to residence or ethnic origin. How can their neutrality be ensured while complying with the GDPR?
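The recruitment and scoring examples above share one mechanic: a bias audit compares outcomes across protected groups, which is impossible without knowing each person's group. A minimal sketch, with entirely hypothetical data and function names, of a selection-rate audit using the common four-fifths (disparate impact) ratio:

```python
# Illustrative only: shows why the protected attribute must be processed
# to detect bias. Group labels, data, and the four-fifths threshold are
# assumptions, not drawn from the report.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group_label, selected: bool) pairs."""
    selected, total = defaultdict(int), defaultdict(int)
    for group, picked in decisions:
        total[group] += 1
        if picked:
            selected[group] += 1
    return {g: selected[g] / total[g] for g in total}

def disparate_impact(rates):
    """Ratio of the lowest to the highest group selection rate."""
    return min(rates.values()) / max(rates.values())

# Hypothetical audit sample: 4 candidates per group.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]
rates = selection_rates(decisions)   # group_a: 0.75, group_b: 0.25
ratio = disparate_impact(rates)      # 0.25 / 0.75 ≈ 0.33, below 0.8
```

Note that every row of `decisions` pairs an outcome with a sensitive attribute: the audit itself is the processing the GDPR restricts.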
The Legal Paradox to Resolve
For a company to detect whether its AI system discriminates based on ethnic origin, it logically needs to know this information about its users—which involves processing sensitive data strictly regulated by the GDPR.
The report highlights that Article 10(5) of the AI Act could rely on the “substantial public interest” provision under Article 9(2)(g) of the GDPR. However, this interpretation remains controversial and does not provide the necessary legal certainty.
Essential Conditions According to the Belgian Supervisory Authority
The Belgian Supervisory Authority’s September 2024 report, cited in the Parliament’s analysis, underscores that for Article 10(5) of the AI Act to fully comply with the GDPR, several conditions must be met:
- Robust cybersecurity measures must be implemented
- GDPR principles (data minimization, purpose limitation, integrity, and confidentiality) must be respected
- Processing must be “strictly necessary” to prevent discrimination
- A specific legal basis from Article 9 of the GDPR must be identified
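The "strictly necessary" and data-minimization conditions above suggest a design pattern: use sensitive attributes only inside the audit step and retain nothing but the aggregate statistic. A minimal sketch under that assumption (all names are illustrative):

```python
# Sketch of data minimization in a bias audit: record-level sensitive
# data is consumed inside one function and only an aggregate gap is
# kept. Function and variable names are hypothetical.
def audited_rate_gap(records):
    """records: iterable of (sensitive_group, favorable_outcome: bool).
    Returns only the gap between the best and worst group outcome
    rates; no record-level sensitive data is retained."""
    counts = {}
    for group, outcome in records:
        ok, n = counts.get(group, (0, 0))
        counts[group] = (ok + int(outcome), n + 1)
    rates = [ok / n for ok, n in counts.values()]
    return max(rates) - min(rates)

# Hypothetical sample: group "x" rate 1.0, group "y" rate 0.5.
gap = audited_rate_gap([("x", True), ("x", True), ("y", True), ("y", False)])
```

Keeping only the aggregate does not by itself satisfy Article 9, but it illustrates how purpose limitation and minimization can be built into the audit pipeline rather than bolted on.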
An Additional Complexity: The Variable Status of Discriminatory Data
Some discriminatory factors (age, gender) are not “special categories” under the GDPR but regular personal data covered by Article 6.
For legal professionals, this means two different legal regimes depending on the nature of the bias to be corrected—an operational headache that further complicates compliance.
Regulatory Evolution Perspectives
The current uncertainty calls for either a reform of the GDPR to adapt it to the AI era or the issuance of guidelines clarifying the interaction between these two fundamental regulations.
For corporate lawyers, it is now crucial to anticipate these developments by establishing algorithm-auditing processes that respect both non-discrimination objectives and personal data protection.
🔄 What would be the most effective solution: modifying the GDPR to adapt it to AI challenges, interpreting the AI Act restrictively, or creating a new specific framework for managing algorithmic biases?
📝 Sources:
- EPRS | European Parliamentary Research Service, Analytical Note PE 769.509, February 2025
- Belgian Supervisory Authority Report, September 2024