Introduction
In the area of conflict between AI systems and copyright law, one key question comes to the fore: Can photographs, illustrations, or other visual content be used to train algorithms, and if so, under what legal conditions?
For the first time, a senior British court has dealt substantively with questions in this context. In the case between the image agency Getty Images and the AI developer Stability AI, the High Court in London handed down a ruling on November 4, 2025, that clarifies – at least in part – both the legal framework for training generative AI and the liability for its output. For Switzerland, where copyright regulation is the subject of heated debate due to Motion Gössi No. 24.4596, the ruling raises pressing follow-up questions, concerning both the handling of copyright-protected material in the training process and the need for further legislative development.
Facts
The case centered on Stable Diffusion, an AI system for image generation developed by Stability AI. It generates new images based on linguistic input and was trained using extensive image-text datasets.
Getty Images accused Stability AI of using millions of images from its own archive and from its subsidiary platform iStock without a valid license during training, including numerous images with visible watermarks such as “Getty Images” or “iStock.” Getty also complained that the AI system generated synthetic watermarks that were deceptively similar or even identical to its trademarks. On this basis, the plaintiff asserted trademark and copyright claims. The High Court’s ruling was nuanced: trademark infringements were recognized in only a few cases, while the copyright claims were largely dismissed.
Trademark aspects
Getty Images relied on the British Trade Marks Act and argued that images with the generated watermarks could give the public the impression that they were originals or licensed content.
The court examined the classic requirements for trademark infringement: use in the course of trade, likelihood of confusion, and the taking of unfair advantage of, or detriment to, the trademark’s repute. It recognized an infringement only in relation to older versions of the model, where the watermark “iStock” appeared in identical form on some images.
For later versions of the model, however, Getty was unable to provide sufficient evidence of identical or confusingly similar watermarks. The names often appeared fragmented or altered. The allegations of reputation exploitation or dilution also remained unsubstantiated.
Copyright aspects
Primary and secondary allegations of infringement
Far more complex and significant for the fundamental legal assessment were the copyright issues. Getty Images proceeded on several levels of argumentation. The initial focus was on the allegation that copyright-protected works had been unlawfully reproduced in the course of training the model, without the consent of the rights holders. This was classified as a primary copyright infringement.
In addition, Getty argued that the trained model itself constituted an “infringing copy” because it contained internal representations of protected works that were stored in the model and could be technically reconstructed.
The model as an abstract representation, not an “infringing copy”
With regard to the claim that the model itself was an unlawful copy, Getty argued that digital content could also fall under the statutory term “article.” The court clarified, however, that Stable Diffusion does not store specific images or parts of works, but only numerical weights that determine the structure of the model. These weights are abstract, do not permit the training images to be reconstructed, and contain no reproducible elements of protected works. The High Court emphasized in particular that a representation of statistical patterns or probability distributions is not a copy, because it contains no element that reproduces the work.
As a result, the model could not be considered an “infringing copy.” The decisive factor here was the finding that the model parameters did not contain a copy of the training images at any time.
In the court’s opinion, mere exposure to training data is not sufficient to qualify a model as a “copy.”
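The court’s reasoning rests on the idea that a statistical summary of a work is not the work itself. A minimal sketch can illustrate this (a pixel-value histogram stands in here for the far more complex probability distributions a diffusion model learns; the example is illustrative, not a description of Stable Diffusion’s internals):

```python
import numpy as np

# A synthetic 8x8 "image": the protected work in this analogy.
rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(8, 8))

# A pixel-value histogram is a simple stand-in for the kind of
# statistical representation a trained model retains.
histogram, _ = np.histogram(image, bins=16, range=(0, 256))

# The histogram records no pixel positions: a shuffled image with the
# same pixel values yields an identical histogram, so the original
# cannot be reconstructed from the statistics alone.
shuffled = image.flatten()
rng.shuffle(shuffled)
shuffled = shuffled.reshape(8, 8)
hist2, _ = np.histogram(shuffled, bins=16, range=(0, 256))

print(np.array_equal(histogram, hist2))   # True: identical statistics
print(np.array_equal(image, shuffled))    # False: different images
```

Many different images share the same statistics, which is the intuition behind the court’s finding that the parameters contain no copy of any training image.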
Conclusion: Hurdles for copyright protection in the AI age
In summary, the ruling shows that copyright claims against AI developers currently face considerable hurdles. Central to this is the court’s clarification that a copy in the copyright sense must always contain an element that reproduces the original work itself. Purely statistical or latent representations, such as those created by AI models, do not meet this criterion.
Technically, it remains unclear how the concept of a work is to be applied to model parameters. As long as an AI model does not contain any reconstructable components of a work, no infringing copy can be derived from it.
Criticism, outlook, and significance for Switzerland
In our opinion, the London High Court fails to recognize that a copy of the content must already exist before it can be vectorized: only once the works have been reproduced and converted into vectors can they be weighted, i.e., assigned probabilities, whether through machine generation or through backpropagation.
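The technical point behind this criticism can be sketched as follows (a hedged, illustrative pipeline step with hypothetical names, not Stability AI’s actual ingestion code): before any pixel data can be vectorized and weighted, the work is first reproduced byte for byte in memory.

```python
import numpy as np

def vectorize(image_bytes: bytes) -> np.ndarray:
    # Step 1: reading the file or download buffer yields an in-memory
    # copy of the protected content -- a reproduction in its own right.
    in_memory_copy = bytes(bytearray(image_bytes))
    # Step 2: only this copy can then be converted into the numeric
    # vector on which weights and probabilities are later computed.
    return np.frombuffer(in_memory_copy, dtype=np.uint8).astype(np.float32) / 255.0

work = bytes(range(16))        # stand-in for an image file's bytes
vector = vectorize(work)
print(vector.shape)            # (16,)
```

Whether such transient, intermediate copies infringe is precisely the question the ruling leaves open for the training process itself.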
For Switzerland, the ruling highlights an urgent need for legislative action. The current Copyright Act (URG) contains no specific provisions on the use of protected works for training artificial intelligence. Neither the definition of a work (Art. 2 URG) nor that of reproduction (Art. 10 URG) is tailored to numerical representations or non-reconstructive models, so it remains unclear whether and when a copy or a use exists in the digital space. The current legal situation therefore offers neither legal certainty for developers nor effective protection for rights holders.