At the beginning of 2026, one figure deserves the close attention of executives and in-house counsel alike.
According to the latest European data, nearly one in three Europeans has already used a generative AI tool. Yet behind this average lies a striking contrast: around 25% of usage is personal, while only about 15% takes place in a professional setting.
This gap is far from trivial. It reflects a reality that can no longer be ignored: professional use of AI entails concrete legal responsibilities, which naturally slows down its institutional adoption.
⚖️ When technological enthusiasm meets legal accountability
Generative AI has spread rapidly across personal uses: information searches, drafting emails, automating simple tasks. In corporate environments, and especially within legal functions, adoption is noticeably more cautious.
This apparent restraint is neither technological backwardness nor resistance to innovation. Rather, it reflects a growing awareness of the practical constraints and legal risks associated with deploying AI in structured legal environments.
Recent feedback from in-house counsel on the use of AI for legal research highlights several recurring friction points:
- results that are sometimes too broad or insufficiently precise, requiring multiple refinements before becoming actionable;
- irregular updating of legal sources;
- limited geographical coverage, particularly in EU or cross-border law;
- confidentiality and data-hosting concerns;
- tools that lack intuitive design for operational legal teams.
These insights show that, for legal professionals, AI is not merely a question of technical performance. It is primarily a matter of practical usefulness and risk control.
📜 A regulatory framework that now shapes adoption
This caution is reinforced by the rapid evolution of the legal framework.
At EU level, Regulation (EU) 2024/1689 (the AI Act) introduces a risk-based regulatory approach and has been progressively applicable since 2024.
Rather than banning AI, the regulation clearly reasserts the central role of human responsibility in all AI-supported processes.
In this context, legal professionals are not expected to turn away from AI, but to structure its use within robust governance and verification mechanisms.
⚠️ Hallucinations, errors, and liability: a now-identified risk
Unsupervised use of generative AI may result in factual inaccuracies, fabricated references, or misleading legal analyses: phenomena that have been increasingly documented across jurisdictions.
As these limitations become widely understood within the profession, the duty of human verification is no longer optional. It is fast becoming a core component of professional compliance and liability management.
🧭 What the adoption gap means for business
The discrepancy between personal and professional use should not be underestimated. It signals a form of institutional maturity.
It calls on legal departments to:
- explicitly regulate AI usage to prevent uncontrolled shadow AI practices;
- favor specialized solutions offering reliability, EU-based hosting, and confidentiality guarantees aligned with regulatory requirements;
- treat human oversight as a non-negotiable standard in any AI-assisted legal workflow.
Conclusion
European statistics confirm that generative AI is becoming embedded in everyday practice. For legal professionals, however, the challenge is not speed of adoption, but the quality and legal security of its integration.
Caution is not the opposite of innovation. It has become a necessary condition for sustainable, responsible use.
Source: https://ec.europa.eu/eurostat/fr/web/products-eurostat-news/w/ddn-20251216-3