Over the past few months, we’ve seen headlines everywhere claiming that AI giants can legally use copyrighted works to train their models. Why? Thanks to fair use, an exception under U.S. copyright law.
📆 In June 2025, Meta and Anthropic were (partially) cleared for using copyrighted material in training their AI models.
🧩 Two rulings, one legal lever: fair use to the rescue of generative AI
On June 23, 2025, federal judge William Alsup (U.S. District Court for the Northern District of California) issued a landmark ruling in Bartz v. Anthropic. The court found that Anthropic's use of copyrighted works to train Claude was transformative and therefore qualified as fair use, since it did not replace the original works but created something new.
Two days later, on June 25, 2025, Judge Vince Chhabria delivered a similar ruling in Meta's favor in Kadrey v. Meta, concluding that using copyrighted works to train Llama also constituted fair use, even though some of those works had been downloaded from pirate sites.
However, both judges drew a critical distinction: while training itself may be transformative, acquiring pirated content to build a centralized library can still give rise to liability. Anthropic now faces a jury trial on damages over its pirated copies.
🔍 Fair Use, cornerstone of U.S. copyright law… or legal loophole?
Fair use, codified in Section 107 of the Copyright Act of 1976, is an exception to copyright that allows limited use of protected works without prior authorization. Courts assess it by weighing four factors:
- The purpose and character of the use (including whether it is commercial or nonprofit educational)
- The nature of the copyrighted work
- The amount and substantiality of the portion used
- The effect on the potential market for the work
The Anthropic and Meta rulings agree on the transformative nature of AI training, but diverge on the use of pirated content and the assessment of commercial harm—two issues likely to be central in upcoming legal proceedings.
🇪🇺 In Europe, copyright and personal data are not negotiable!
Europe has taken a radically different approach. The AI Act, which entered into force on August 1, 2024 and is being applied in phases, establishes a strict regulatory framework for AI.
The CNIL, France's data protection authority, has recently issued guidelines on reconciling AI innovation with GDPR compliance, stressing that Europe has no equivalent of U.S.-style fair use. In 2024, the CNIL issued 87 sanctions totaling €55,212,400, a clear signal of its enforcement-driven stance.
Unlike in the U.S., where transformative use can justify relying on protected works, European law requires a clear legal basis for any processing of personal data. European data protection authorities have also claimed a stronger supervisory role over high-risk AI systems.
👉 This transatlantic split reveals two fundamentally opposed legal philosophies: U.S. economic pragmatism versus Europe's heightened protection of individual rights.
While fair use offers valuable flexibility for tech players, it also introduces legal uncertainty, precisely the kind of risk the more rigid European framework is designed to avoid.



