At a time when generative AI is revolutionizing content creation, one question arises: could an artificial intelligence be taken directly to court for infringement or plagiarism? Under current law, the answer is NO, for several reasons:
➡️ AI's fundamental lack of legal personality
An AI is not recognized as a legal person in our legal system. It holds neither rights nor obligations of its own, nor does it have separate assets.
As a result, it can neither initiate legal proceedings nor be sued, regardless of its level of sophistication. As the European AI Regulation adopted on March 13, 2024 recently reaffirmed, these systems remain tools in the eyes of the law, not legal subjects.
➡️ The human chain of responsibility: who is liable for an AI’s actions?
While the AI itself escapes direct liability, responsibility must be sought among the human actors or legal entities involved:
The AI developers/providers:
Could they be held liable for using unauthorized data in training?
Does a design defect or lack of sufficient safeguards to prevent infringing content constitute negligence?
Concrete example: In Getty Images v. Stability AI (January 2023), Getty Images sued Stability AI for using its copyrighted images to train Stable Diffusion without authorization.
The company deploying or using the AI:
This is often the party most exposed to legal risk.
Practical case: your marketing department uses a generative AI to create visuals that reproduce elements of a preexisting work without authorization. Your company will be first in line to face rights-holders' claims.
Liability may be contractual (toward clients) or tort-based (toward the harmed rights-holders).
The end user (the “prompter”):
Could they be considered the author or co-author of the infringement?
The classification will largely depend on the nature of the prompt, the demonstrated intent, and how the output is used.
Recent case law example: In The New York Times v. OpenAI (December 2023), the NYT alleges that OpenAI trained its models on its copyrighted articles and that the outputs sometimes reproduce those articles verbatim.
➡️ Major legal challenges in AI-related infringement disputes:
Several specific obstacles complicate infringement actions in this context:
Traceability and evidence issues:
How can one prove that a specific AI output derives directly from a copyrighted work in its training dataset?
Algorithmic opacity (the “black box” phenomenon) makes it particularly difficult to demonstrate causation.
Applying traditional infringement criteria:
Are classic notions of similarity, originality, or overall impression still relevant?
How do you legally characterize an AI’s imitation of an artistic “style,” which doesn’t literally reproduce a work but clearly draws inspiration from it?
International dimension and conflicts of law:
With AI infrastructures often spanning multiple jurisdictions, which national law applies?
Text and data mining exceptions vary widely between territories (more restrictive in Europe than in the United States).
Conclusion:
To borrow a familiar analogy: you don't sue the hammer used to commit a wrong; you sue the person wielding it.
While AI itself currently escapes any direct legal liability, the question of attributing liability for the infringements it can generate shifts to the human actors in the value chain.
The European AI Regulation, which will become fully applicable in 2027, will bring some clarification, notably on the transparency of model training data. It remains to be seen how the United States and China will address these issues.