Executive Summary
In 2026, Meta reportedly combined large-scale workforce reductions (≈10%) with the deployment of internal systems designed to analyze employee activity (e.g., workflows, interactions, execution patterns) to train AI agents capable of replicating human tasks.
This development raises three core legal issues:
- Employment law: permissibility of layoffs under U.S. “at-will employment” and mass layoff frameworks
- Data protection & workplace monitoring: legality of collecting granular behavioral data
- Purpose limitation & AI training: whether employee-generated data can lawfully be repurposed to train systems that may replace those employees
This article provides a jurisdictional analysis (U.S. vs EU) and identifies emerging legal risks for companies adopting similar strategies.
1. From Generative AI to Agentic AI: A Structural Shift
The current transition is not incremental.
It reflects a shift:
- from generative AI → assisting human work
- to agentic AI → executing multi-step tasks autonomously
In this model, AI systems are trained not just on static data, but on human behavioral patterns:
- decision-making sequences
- execution workflows
- productivity heuristics
👉 This raises a foundational legal question:
Can employee work behavior be treated as training data for systems designed to replicate that work?
2. U.S. Legal Framework: Permissive but Evolving
2.1 Employment Termination: At-Will Employment
In the United States, particularly in California:
- Employment is generally governed by the “at-will” doctrine
- Employers may terminate employees without cause, subject to:
- anti-discrimination laws
- contractual limitations
- federal/state notice requirements
WARN Act (Worker Adjustment and Retraining Notification Act)
Applies to mass layoffs by employers with ≥100 full-time employees (subject to statutory thresholds) and requires:
- 60 days’ advance written notice
👉 Conclusion:
Mass layoffs such as those attributed to Meta are legally permissible in principle under U.S. law.
2.2 Workplace Monitoring and Data Collection
U.S. law provides greater flexibility than EU law regarding employee monitoring.
However, constraints exist:
California Privacy Framework
- CCPA / CPRA (California Consumer Privacy Act / California Privacy Rights Act)
- Since 2023, these statutes extend certain rights to employees:
  - notice of data collection
  - purpose limitation (increasingly debated)
  - rights to access and deletion (with limitations)
Emerging Legislative Trends
Several bills aim to regulate algorithmic management:
- SB 7 (“No Robot Bosses Act”)
  → restricts fully automated employment decisions
- AB 1221 / AB 1331 (proposed)
  → transparency obligations for workplace monitoring
  → restrictions on behavioral tracking
👉 Key issue:
The reuse of behavioral data for AI training may exceed the original purpose of collection.
2.3 Algorithmic Liability: Early Case Law Signals
Cases such as Mobley v. Workday illustrate:
- growing scrutiny of algorithmic decision-making in HR
- potential liability for:
- discrimination
- opacity
- lack of human oversight
👉 This is an emerging risk area, not yet fully stabilized.
3. European Perspective: Likely Legal Friction
A similar system deployed in the EU would face significant legal constraints.
3.1 GDPR (General Data Protection Regulation)
Key applicable principles:
Article 5 GDPR
- Purpose limitation
- Data minimization
- Proportionality
Article 6 GDPR
- Legal basis required (legitimate interest, contract, etc.)
Article 35 GDPR
- DPIA (Data Protection Impact Assessment) required for high-risk processing
Article 88 GDPR + national labor laws
- specific protections for employee data
3.2 Core Legal Risks
❗ Purpose Creep
Data collected for:
- performance
- workflow optimization
👉 cannot automatically be reused for:
- AI training aimed at task substitution
❗ Disproportionate Monitoring
Continuous tracking (mouse, keystrokes, screen capture) may violate:
- proportionality requirements
- fundamental rights (privacy, dignity at work)
❗ Automated Decision-Making
If AI outputs influence employment decisions:
- Article 22 GDPR may apply
- requires:
- human oversight
- explainability
- contestability
3.3 AI Act (EU)
Depending on implementation:
- such systems may qualify as “high-risk AI systems”
- triggering obligations:
- risk management systems
- data governance
- human oversight
- documentation and auditability
4. The Core Legal Question: Ownership of Work Methods
Beyond compliance, a deeper issue emerges:
Who owns the “method” of work?
More precisely:
- Can an employer:
  - capture employee workflows
  - model them
  - redeploy them through AI
  - and thereby eliminate the need for the original worker?
This intersects with:
- labor law
- intellectual property (know-how, trade secrets)
- data protection
- contractual obligations
👉 There is currently no unified legal doctrine addressing this question.
5. Implications for the Legal Profession
The legal sector is directly concerned.
5.1 From Assistance to Substitution
AI systems are evolving from drafting assistance toward autonomous execution of multi-step legal tasks.
Examples:
- legal research
- contract review
- risk analysis
- document generation
5.2 Impact on Law Firm and In-House Models
Likely consequences:
- reduction of low-value repetitive tasks
- pressure on:
- junior roles
- billable hour models
- shift toward:
- fixed fees
- strategic advisory roles
5.3 Non-Delegable Element
One principle remains stable:
Legal responsibility cannot be automated.
The value of the lawyer or in-house counsel increasingly lies in:
- validation
- accountability
- strategic judgment
6. Key Takeaways
- Meta’s strategy illustrates a shift toward behavioral data-driven AI training
- U.S. law currently allows:
- layoffs (at-will employment)
- broad monitoring (with emerging limits)
- EU law would impose:
- strict proportionality
- purpose limitation
- DPIA obligations
- The central unresolved issue is:
- legal ownership and reuse of employee-generated work patterns
- For legal professionals:
- the shift to agentic AI redefines the boundary between execution and responsibility
FAQ
What is the difference between generative AI and agentic AI?
Generative AI produces outputs (text, analysis), while agentic AI executes multi-step tasks autonomously, often based on learned workflows.
Is it legal to monitor employees to train AI systems?
In the U.S., it may be permissible under certain conditions. In the EU, such monitoring would be heavily restricted by GDPR principles (proportionality, purpose limitation, DPIA requirements).
Can employee data be reused to train AI?
Only if the reuse is compatible with the original purpose of collection. Under GDPR, repurposing data for AI training may be unlawful without a proper legal basis.
Could AI replace legal professionals?
AI can automate tasks, but legal responsibility remains human, making full replacement unlikely under current legal frameworks.


