A Key Legal Turning Point for Generative AI and Data Protection
On March 18, 2026, the Tribunal of Rome annulled a €15 million GDPR fine imposed on OpenAI by the Italian data protection authority.
The case concerns the regulatory treatment of ChatGPT and generative AI models trained on large datasets containing personal data.
For legal departments, compliance teams, and technology companies, this decision is one of the most closely watched judicial developments in the evolving relationship between AI regulation and European data protection law.
The Background: Why Italy Fined OpenAI
The dispute originates from regulatory investigations launched after the rapid global adoption of ChatGPT in late 2022 and early 2023.
In November 2024, the Italian Data Protection Authority (Garante per la protezione dei dati personali) imposed a €15 million administrative fine on OpenAI under the General Data Protection Regulation (GDPR).
The authority identified several alleged violations of EU data protection law, including:
- Lack of a valid legal basis for processing personal data used to train the AI model (Article 6 GDPR)
- Insufficient transparency toward data subjects regarding how their data could be used in the training of ChatGPT (Articles 12–14 GDPR)
- Delayed notification of a personal data breach that occurred in March 2023 (Article 33 GDPR)
- Inadequate age-verification safeguards for minors accessing the service
In addition to the financial penalty, the authority ordered OpenAI to conduct a public awareness campaign about the functioning of ChatGPT and data protection rights.
The case quickly became one of the most visible regulatory actions involving generative AI and privacy law in Europe.
The Judicial Timeline of the Case
The regulatory decision was challenged before the Tribunal of Rome, leading to a two-stage judicial process.
March 2025 – Suspension of the Fine
The Rome court temporarily suspended the enforcement of the administrative sanction while reviewing the merits of the case.
This interim measure signaled that the court considered the legal questions raised by the dispute sufficiently serious to justify a full judicial review.
March 18, 2026 – Annulment of the Sanction
After examining the case, the Tribunal of Rome annulled the €15 million fine imposed on OpenAI.
At the time of writing, the full judicial reasoning of the decision has not yet been publicly released.
However, the ruling immediately triggered significant debate among legal scholars and data protection practitioners.
Why the Decision Matters for Generative AI Regulation
The Italian sanction was widely viewed as one of the first major GDPR fines directly targeting a provider of a large-scale generative AI model.
The case therefore raises a fundamental regulatory question:
How should the GDPR apply to the training of large language models (LLMs)?
Generative AI systems rely on a technical architecture that differs significantly from traditional data-processing operations regulated under European privacy law.
Key characteristics include:
- Large-scale training on heterogeneous datasets
- Data sources that may include publicly accessible web content
- Probabilistic outputs generated by statistical modeling rather than deterministic database lookups
- Difficulty identifying specific individuals whose data may have been included in training datasets
This architecture challenges some of the core operational concepts of the GDPR, including:
- purpose limitation
- data minimization
- individual data subject rights
As a result, courts and regulators must increasingly interpret traditional data protection rules in the context of AI training pipelines.
A Broader Legal Debate: Can the GDPR Govern Foundation Models?
The OpenAI case illustrates a structural tension emerging in European technology law.
The GDPR was designed primarily for identifiable, controlled data processing operations, such as customer databases or employee records.
Foundation models, by contrast, operate through:
- large-scale statistical learning
- indirect relationships between input data and generated outputs
- training processes that may involve billions of data points
This creates legal uncertainty around several compliance questions, including:
- What constitutes a valid legal basis for AI training data?
- How should data subject rights be exercised when training data cannot easily be traced back to individuals?
- Can transparency obligations be meaningfully applied to probabilistic AI systems?
The Rome court decision does not answer all of these questions.
But it signals that judicial scrutiny of AI-related GDPR enforcement is intensifying.
Regulatory Pressure Still Produced Compliance Changes
Even though the fine was ultimately annulled, regulatory pressure had already led to significant changes in OpenAI’s practices.
Since 2023, the company has implemented several measures addressing privacy concerns raised by European regulators:
- a dedicated privacy notice explaining how ChatGPT processes personal data
- mechanisms allowing individuals to request removal or correction of personal data in model outputs
- options allowing users to opt out of having their conversations used for model training
- strengthened age-verification safeguards
This demonstrates an important regulatory dynamic:
investigations alone can reshape industry practices, even before courts confirm the legality of sanctions.
What the Case Means for Companies Deploying AI
For businesses developing or deploying AI systems, the central legal implication is clear.
The annulment of a specific administrative sanction does not weaken the underlying compliance obligations under the GDPR.
Organizations deploying AI technologies in Europe should continue to focus on:
- identifying a lawful basis for processing personal data
- providing clear transparency notices regarding AI systems
- enabling the exercise of data subject rights
- documenting risk assessments related to AI processing
These principles remain central to EU data protection law regardless of the outcome of individual enforcement cases.
A New Phase of AI-Related GDPR Litigation
The OpenAI ruling suggests that the legal debate surrounding generative AI is entering a new phase.
Initially, regulatory authorities took the lead by launching investigations and issuing administrative decisions.
Now, national courts are beginning to examine these enforcement actions more closely.
This transition marks the beginning of a judicial interpretation phase for AI-related data protection law.
Future court decisions across Europe will likely shape how the GDPR interacts with:
- foundation models
- generative AI platforms
- large-scale training datasets
Key Takeaway
The Rome court’s annulment of the €15 million fine against OpenAI highlights a central reality of AI regulation in Europe:
the legal framework is still adapting to the technical architecture of generative AI systems.
While the GDPR remains the primary legal instrument governing personal data processing, its application to foundation models continues to raise complex legal questions that courts, regulators, and companies must progressively resolve.