
Artificial intelligence is transforming journalism – but who owns AI-generated content? A new political initiative and an industry declaration call for clear rules in copyright law. The federal government seems to be listening: The path to sectoral regulation is open.

Current event: Copyright in the age of AI

With motion 24.4596, Petra Gössi, member of the Council of States, has brought the debate on the copyright protection of journalistic content in the context of artificial intelligence (AI) to Switzerland. The motion calls for Switzerland to analyse the situation and legislate in order to strengthen the copyright position of media companies vis-à-vis AI-based applications. Shortly afterwards, the Swiss Media Association, together with publishers from the DACHLUX region, published the so-called “Zurich Declaration”, which emphasises a strong need to protect journalistic works against AI uses.

Legal basis: copyright meets generative AI

Copyright law protects intellectual creations with an individual character (Art. 2 CopA). Journalistic articles, research pieces and reports generally fulfil these requirements. But what happens when AI models such as ChatGPT or other large language models (LLMs) integrate millions of such works into their training data without the consent of the rights holders?

Unlike in the EU, for example (keyword: ancillary copyright of press publishers, Art. 15 DSM Directive), Swiss copyright law does not yet have a specific regulation that legally addresses the systematic access to journalistic content by AI models. There is also no obligation to label AI-generated content.

This is precisely where Council of States motion 24.4596 comes in: it calls on the Federal Council to take legislative action, for example by introducing a dedicated ancillary copyright or technical protection mechanisms against unauthorised scraping. For the use of journalistic content by search engines, such an ancillary copyright is already set to be introduced with a partial revision of the Copyright Act (HÄRTING has already reported on this proposed revision here).

The “Zurich Declaration”: industry signal with political impact

In the “Zurich Declaration” published on 11 April 2025, leading media companies from Switzerland, Germany, Austria and Luxembourg call for clear legal regulations on the use of journalistic content by AI. The declaration emphasises:

  • Right to consent and remuneration for any use of journalistic content by AI models;
  • Labelling obligations for AI-generated content;
  • Transparency obligations for AI providers with regard to the training data used;
  • Sanctions in the event of misuse or circumvention of technical safeguards.

The initiative emphasises the growing need for targeted sectoral regulation of artificial intelligence, as also specified in the Federal Council’s new report on the implementation of the Artificial Intelligence Strategy of February 2025 (which we reported on here). Instead of enacting a general AI law, the federal government is pursuing a risk-based and sector-specific regulatory approach. Existing legal frameworks in selected sectors, such as copyright law, are to be reviewed and selectively adapted in order to take appropriate account of the specific challenges posed by AI.

Analysis: challenges and opportunities

Technical opacity as a legal problem

A central problem is the lack of transparency around the training processes and data sets of AI models. Without disclosure, it is almost impossible for rights holders to check whether and to what extent their content has been used. This amounts to a threat of de facto expropriation of intellectual works, especially journalistic contributions, through the AI industry’s training pipelines.

Growing power asymmetry

Media companies face large tech platforms that have the technical, financial and legal means to systematically appropriate content. The call for an ancillary copyright or licensing mechanisms is therefore an expression of this growing imbalance of power.

European parallels

In the EU, the ancillary copyright for press publishers (Art. 15 DSM Directive) provides a legal basis for publishers’ remuneration claims. In France and Germany, the first licence agreements between media companies and platforms have already been concluded. Switzerland is in danger of falling behind here.

Practical implications and recommendations

For media companies

Media companies face the challenge of protecting their content from automated use by AI systems. To this end, they should consider technical protective measures such as the configuration of the “robots.txt” file to control web crawler access or the use of paywalls to restrict access to editorial content. Furthermore, it is advisable to bundle publishers’ interests and present a strategically united front in order to be able to influence the design of regulatory measures at a political level. In addition, media companies should review their licence models and, if necessary, adapt them to enable legally compliant and fair use of their content by AI providers.
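
A minimal illustration of such a technical protective measure is a robots.txt file that addresses the crawlers used for AI training. The user-agent tokens shown (GPTBot for OpenAI, CCBot for Common Crawl, Google-Extended as Google's AI-training opt-out token) are published by the respective providers, but they change over time and compliance is voluntary, so the snippet is a sketch rather than a guarantee of protection:

    # robots.txt – illustrative sketch; crawler names should be checked
    # against the providers' current documentation, and compliance is voluntary
    User-agent: GPTBot            # OpenAI training crawler
    Disallow: /

    User-agent: CCBot             # Common Crawl
    Disallow: /

    User-agent: Google-Extended   # opt-out token for Google AI training
    Disallow: /

    User-agent: *                 # ordinary search indexing remains possible
    Allow: /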

For tech companies

Companies that develop or operate AI systems should also engage proactively with the legal requirements. Integrating compliance considerations into training and development processes at an early stage is just as important as transparent documentation of the data sets used. In addition, these companies could prepare for regulatory requirements by developing standardised interfaces for licensing journalistic content (e.g. APIs) and at the same time create attractive cooperation models for media companies.
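
What such a standardised licensing interface could look like is sketched below in simplified form. The endpoint, field names and the domain api.example-publisher.ch are purely hypothetical assumptions for illustration; they do not describe any existing service or industry standard:

    # Hypothetical sketch of a content-licensing API between an AI provider
    # and a publisher. Endpoint, fields and domain are illustrative only.
    import json
    import urllib.request
    from dataclasses import dataclass, asdict

    @dataclass
    class LicenceRequest:
        """Terms an AI provider asks a publisher to grant."""
        model_name: str         # model the content would be used for
        purpose: str            # e.g. "training" or "retrieval"
        content_ids: list[str]  # publisher-side article identifiers
        attribution: bool       # whether outputs will credit the source

    def request_licence(req: LicenceRequest, api_token: str) -> dict:
        """Submit the request and return the publisher's offer (scope, price)."""
        http_req = urllib.request.Request(
            url="https://api.example-publisher.ch/v1/licences",  # hypothetical endpoint
            data=json.dumps(asdict(req)).encode("utf-8"),
            headers={
                "Content-Type": "application/json",
                "Authorization": f"Bearer {api_token}",
            },
            method="POST",
        )
        with urllib.request.urlopen(http_req) as response:
            return json.load(response)

Such an interface would let publishers keep control over which content is licensed, for which purposes and on what terms, while giving AI providers a documented, auditable acquisition channel.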

For legislators

This situation gives rise to several areas of action for legislators: one option is the introduction of an independent ancillary copyright for press publishers, which could guarantee special protection of journalistic content in the digital space (an ancillary copyright for search platforms is already being introduced with the partial revision of the Copyright Act). In addition, the existing provisions on text and data mining need to be further developed and, in particular, combined with an effective opt-out mechanism that enables rights holders to control the use of their works. Finally, a structured, cross-sector dialogue between the media industry, technology companies and civil society is necessary in order to develop practical, consensus-based solutions for the use of AI in an editorial context.
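
As an illustration of how such an opt-out could be signalled in machine-readable form: one convention discussed in this context is the TDM Reservation Protocol (TDMRep) drafted by a W3C community group, under which a rights reservation can be expressed via an HTTP header or HTML metadata. The snippet below follows that draft syntax with a hypothetical publisher domain and should be verified against the current specification before any reliance is placed on it:

    # Illustrative opt-out signals (TDMRep draft syntax, hypothetical domain)

    # As HTTP response headers:
    tdm-reservation: 1
    tdm-policy: https://example-publisher.ch/tdm-policy.json

    # As HTML meta tags:
    <meta name="tdm-reservation" content="1">
    <meta name="tdm-policy" content="https://example-publisher.ch/tdm-policy.json">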

Conclusion and outlook

By adopting the Council of States motion, Swiss politicians have signalled a clear willingness to pursue sector-specific AI regulation, in line with the federal strategy. The “Zurich Declaration” reinforces this impetus from within the industry and puts copyright protection for journalistic content on the agenda. Media companies, platforms and legislators now face the task of creating a fair, transparent and legally secure framework for the use of AI. In the long term, the democratic public sphere cannot afford the regulatory vacuum that has existed so far.

Sources