In 2026, one of the most revealing artificial intelligence disputes is unfolding between Anthropic and several Chinese AI laboratories, including DeepSeek, Moonshot AI, and MiniMax.
Anthropic alleges large-scale industrial extraction of its Claude model through systematic API interactions designed to replicate performance via model distillation techniques.
The irony is difficult to ignore.
Anthropic recently resolved litigation reportedly totaling $1.5 billion related to the use of copyrighted works in its model training datasets. Now, it accuses competitors of extracting value from its own model outputs.
This is more than corporate rivalry.
It is a structural stress test of global AI intellectual property law.
What Happened: Model Distillation at Scale
According to public allegations:
- Approximately 16 million suspicious interactions were detected.
- Around 24,000 automated accounts were allegedly used.
- The objective was to replicate Claude’s reasoning and coding capabilities through output harvesting.
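Detecting automated extraction at this scale is typically a statistical exercise: flag accounts whose request volume deviates sharply from the population baseline. A toy sketch of one such heuristic, with invented account names, counts, and threshold (real systems combine many signals: timing, prompt patterns, IP diversity):

```python
import statistics

def flag_suspicious(request_counts, z_threshold=3.0):
    """Flag accounts whose request volume is a statistical outlier
    (z-score above threshold). A toy heuristic, not any provider's
    actual detection pipeline."""
    counts = list(request_counts.values())
    mean = statistics.fmean(counts)
    stdev = statistics.pstdev(counts)
    if stdev == 0:
        return []
    return [acct for acct, n in request_counts.items()
            if (n - mean) / stdev > z_threshold]

# 50 ordinary accounts, plus one generating extraction-scale volume.
traffic = {f"user-{i}": 120 for i in range(50)}
traffic["bot-farm"] = 50_000
print(flag_suspicious(traffic))  # ['bot-farm']
```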
The technique at issue is known as adversarial model distillation.
Distillation, in itself, is not unlawful. It is a standard machine learning optimization method where a smaller model learns from the outputs of a larger, more powerful model.
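Mechanically, distillation trains the student to minimize the divergence between its output distribution and the teacher's. A minimal sketch in plain Python of the standard KL-divergence distillation objective (the toy logits and temperature value are illustrative, not any lab's actual training setup):

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw logits into a probability distribution, softened by temperature."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between the teacher's softened distribution and the
    student's. Minimizing this over many prompts pushes the student to
    mimic the teacher's output behavior."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

teacher = [2.0, 1.0, 0.1]
print(distillation_loss(teacher, [2.0, 1.0, 0.1]))  # 0.0 (perfect mimicry)
print(distillation_loss(teacher, [0.1, 1.0, 2.0]))  # positive (divergence)
```

The legally salient point is visible in the code: the student never sees the teacher's weights or architecture, only its outputs.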
The legal tension emerges when the training source is a competitor’s proprietary system accessed through contractual interfaces.
The question is not technical.
It is legal.
The Core Legal Question: Can AI Model Outputs Be Owned?
1. The Copyright Mirage
Under U.S. copyright law, authorship requires human creative input. Courts in 2025 reinforced that purely AI-generated outputs are not protected works in themselves.
If Claude’s responses lack copyright protection, then harvesting them does not constitute copyright infringement per se.
This severely limits the intellectual property arguments available.
Anthropic cannot plausibly argue that each response generated by Claude is a protected copyrighted work.
The copyright framework was not designed for probabilistic language models.
2. Contract Law: The Fragile Shield
The strongest legal argument appears contractual.
If API terms prohibit automated scraping or geographic access from restricted jurisdictions, then systematic extraction could constitute breach of contract.
However, contract law has structural limits:
- Extraterritorial enforceability is uncertain.
- Enforcement against entities operating in sovereign jurisdictions such as China is complex.
- Judgments may lack practical execution.
Contract law protects access conditions, not the underlying knowledge extracted.
That distinction matters.
3. Trade Secrets: The Most Promising and Most Complex Avenue
Trade secret law may offer a stronger foundation.
Under the U.S. Defend Trade Secrets Act and the EU Trade Secrets Directive (2016/943), protection extends to information that:
- Derives independent economic value from secrecy.
- Is subject to reasonable protective measures.
Could the performance profile of a model constitute a protectable trade secret?
That is doctrinally unsettled.
Unlike source code or model weights, performance behavior is observable through lawful interaction unless specifically restricted.
Proving misappropriation would require demonstrating improper acquisition, not merely competitive learning.
The line between competitive intelligence and trade secret theft is thin in AI.
The Structural Problem: No Property Right in Model Capabilities
There is currently:
- No specific intellectual property right over model weights at the international level.
- No sui generis protection for AI model performance.
- No global treaty governing cross-border AI model extraction.
Copyright protects expression.
Patent law protects technical inventions.
Trade secret law protects confidential information.
Foundation models fall between these regimes.
The law was built for works and inventions, not for emergent probabilistic architectures trained on web-scale data.
The Broader Context: OpenAI, Anthropic, and Reciprocal Exposure
This dispute also highlights a structural asymmetry.
Major AI developers have themselves faced legal scrutiny for training models on copyrighted datasets.
U.S. federal courts in 2025 examined whether large-scale model training qualifies as transformative use under fair use doctrine.
The results have not been uniform.
The AI ecosystem was built on expansive data ingestion.
Now, the same ecosystem confronts competitive data extraction.
The system is recursively testing itself.
Why This Matters for General Counsel and AI Governance Leaders
For legal departments, this dispute is not about corporate rivalry.
It signals three strategic risks:
1. Model Extraction Risk
If outputs can lawfully be used to train competitors, proprietary advantage weakens.
2. Enforcement Asymmetry
Cross-border AI enforcement remains fragmented and politically sensitive.
3. Governance Gap
No harmonized international legal framework governs model-to-model extraction.
AI is global.
Law remains territorial.
Three Possible Legal Futures
Scenario 1: Reinforced Contractual and Technical Barriers
Companies tighten API controls, geofencing, rate limits, and anomaly detection systems.
Protection becomes technological rather than legal.
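As a rough illustration of what "technological protection" means in this scenario, a per-account token-bucket rate limiter is one of the simplest such controls. The class name, rates, and account IDs below are hypothetical:

```python
class RateLimiter:
    """Token-bucket limiter: each account accrues `rate` tokens per second,
    capped at `burst`; a request is served only if a whole token is available.
    A hypothetical sketch; real API gateways layer this with geofencing and
    anomaly detection."""

    def __init__(self, rate=1.0, burst=3):
        self.rate = rate
        self.burst = burst
        self.tokens = {}  # account -> available tokens
        self.last = {}    # account -> timestamp of last request

    def allow(self, account, now):
        # Refill tokens in proportion to time elapsed since the last request.
        prev = self.last.get(account, now)
        self.last[account] = now
        available = min(self.burst,
                        self.tokens.get(account, self.burst)
                        + (now - prev) * self.rate)
        if available >= 1:
            self.tokens[account] = available - 1
            return True
        self.tokens[account] = available
        return False

limiter = RateLimiter(rate=1.0, burst=3)
# A burst of 4 rapid requests: the first 3 pass, the 4th is throttled.
print([limiter.allow("acct-1", now=0.0) for _ in range(4)])  # [True, True, True, False]
# Two seconds later, tokens have refilled and requests pass again.
print(limiter.allow("acct-1", now=2.0))  # True
```

The limitation is also visible here: a patient extractor with 24,000 accounts simply spreads traffic below each per-account threshold, which is why volume controls alone are insufficient.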
Scenario 2: Sui Generis Protection for AI Models
Legislatures create a dedicated right protecting foundation model investment, analogous to the EU's sui generis database right.
This would formalize ownership of model weights or capabilities.
Scenario 3: International AI Governance Treaty
A multilateral framework could define:
- Permissible training sources
- Standards for model distillation
- Cross-border enforcement cooperation
- Mutual recognition of AI-related trade secrets
Without coordination, litigation will proliferate without resolving systemic uncertainty.
The Strategic Question: Espionage or Optimization?
Is adversarial distillation:
- Industrial espionage?
- Breach of contract?
- Lawful competitive learning?
- Or an inevitable byproduct of machine learning evolution?
The answer will shape:
- AI investment structures
- Open-source vs closed-source strategies
- National AI sovereignty policies
- Corporate risk management frameworks
A Bretton Woods Moment for Artificial Intelligence?
The global AI industry may be approaching a governance inflection point.
Just as Bretton Woods structured post-war monetary order, AI development may require:
- Standardized norms
- Shared enforcement principles
- Clear intellectual property boundaries
Absent that, AI competition will continue to operate in a doctrinal vacuum.
And vacuum environments tend to generate litigation rather than clarity.
Key Legal Themes Emerging from the Anthropic Dispute
- The legality of AI model distillation
- Copyright protection of AI model outputs
- Trade secrets and machine learning
- Cross-border enforcement of API contracts
- International AI governance
- Open-source versus proprietary AI strategy
- The 2026 wave of AI intellectual property disputes
Conclusion
The Anthropic controversy is not merely a dispute about scraping or API misuse.
It exposes a structural weakness in global AI law.
There is no settled property framework for model capabilities.
There is no harmonized enforcement regime.
There is no consensus on the legality of distillation across borders.
Until those gaps are addressed, similar disputes will multiply.
The question is no longer whether AI law must evolve.
It is whether it will evolve through coordinated governance or through fragmented, jurisdictional litigation.