On 23 February 2026, the Federal Data Protection and Information Commissioner (FDPIC), together with 60 other national data protection authorities, published a joint statement on AI-generated images and the protection of privacy. In it, the data protection authorities formulate their most important expectations and principles for all organisations that develop and use AI systems to generate content. These include, in particular, robust safeguards against misuse, transparency requirements, effective and accessible erasure mechanisms and specific child protection.

Initial situation

Today, AI-based systems for generating images and videos are generally accessible, often free of charge or integrated into existing platforms. What used to require considerable technical knowledge can now be generated in high quality with just a few inputs. This means that realistic visualisations can be created with little effort. This becomes particularly relevant under data protection and copyright law when identifiable persons are depicted or modelled.

The possible consequences for those affected are manifold. In addition to violations of personality rights and reputation, there are abuse scenarios such as harassment, threats, blackmail, cyberbullying and non-consensual intimate depictions. The risks are particularly pronounced for minors. In practice, it is also important to note that the damage does not usually end with the creation of the content: dissemination, re-uploading and the difficulty of long-term removal can intensify and prolong the harm.

Much is in flux in terms of regulation. For organisations in Switzerland, however, the following already applies today: where personal data is affected, the Swiss Data Protection Act (FADP; SR 235.1) applies, as the FDPIC already stated in May 2025. This means that manufacturers, providers and users are already obliged to protect the privacy of data subjects when developing and using AI systems.

The joint declaration of the national data protection authorities sharpens this principle for the specific risk area of image generation with AI: as soon as systems target or can depict real, identifiable persons, data protection compliance becomes mandatory.

The joint declaration of the data protection authorities

The declaration of 23 February 2026 was coordinated by the International Enforcement Cooperation Working Group (IEWG) of the Global Privacy Assembly (GPA).

The declaration is expressly formulated as a response to serious concerns raised by national data protection authorities: it refers to AI systems that can generate realistic images and videos depicting identifiable persons who are unaware of this and have not consented to it, i.e. so-called deepfakes and similar fabricated content.

The signatories are not questioning generative AI as such; they even acknowledge the advantages of AI systems. They do, however, point to a very specific development: the integration of image and video generation into easily and widely available platforms has made certain forms of abuse much easier and increases the risks for children and young people.

The target group is noteworthy: the authorities remind all organisations that use or develop AI of their duty to develop and use such systems within the framework of the applicable laws.

Expectations of organisations

While specific legal regulations vary from country to country, the joint declaration of the data protection authorities sets out four general principles that should guide organisations in the development and use of AI. In Switzerland, these principles by no means fall into a legal vacuum: the principles can be easily linked to the central obligations of the FADP.

1) Robust protective measures against misuse

In the first point, organisations are expected to implement effective safeguards to prevent the misuse of personal data and the generation of non-consensual and intimate content, especially where children and young people are concerned.

In Swiss data protection law, this principle is already codified in Art. 7 FADP: the controller is obliged to design data processing, both technically and organisationally, in such a way that the data protection provisions are complied with. The controller must take this into account from the planning stage onwards. These technical and organisational measures must be appropriate to the state of the art, the nature and scope of the processing, and the risk that the processing poses to the personality or fundamental rights of data subjects.

In practice, this calls less for symbolic measures than for a clear protection concept: misuse scenarios belong in product and risk management, and clear responsibilities and robust incident handling are just as important. Where the risks are high, for example in the case of realistic depictions of identifiable persons or child-related scenarios, the question of a data protection impact assessment under Art. 22 FADP also quickly becomes relevant.

2) Transparency

There is also a need for meaningful transparency about the system’s capabilities, existing protective measures, authorised uses and the consequences of misuse. Users should be able to realistically assess what the system can do, where limits are set and what happens in the event of violations.

In Swiss data protection law, this principle follows above all from the principle of transparency and from the information obligations under Art. 19 FADP: anyone who processes personal data must disclose, in a comprehensible form, the information relevant to data subjects so that they can exercise their rights. In addition to the duty to inform, data subjects have a right of access under Art. 25 FADP, on the basis of which they can request information about the processing of their data.

3) Effective and accessible erasure mechanisms

According to the third principle, data subjects should be able to easily request the removal of harmful content. Organisations should respond quickly and efficiently to such requests.

Data subjects must be able to assert their rights effectively, for example via the right of access (Art. 25 FADP), which creates transparency about the processing and is often the first step towards being able to take targeted action at all.

Under the Swiss Data Protection Act, erasure can be obtained by asserting a violation of personality rights in accordance with Art. 30 ff. FADP.

4) Specific child protection

Finally, the declaration calls for the specific risks for children to be addressed through increased protection measures and for age-appropriate information to be provided to children, guardians and teachers.

Even though the FADP does not contain a separate special provision for children, a practical standard follows from its risk-based approach: where minors are affected, the processing will regularly entail a high risk for the data subjects. The closer a service is to children or young people (or the easier it is for minors to use), the higher the standard of protection should be.

Conclusion

The joint declaration of 23 February 2026 sets clear guidelines: in the view of the data protection authorities, AI-generated images and videos are an area with considerable potential for abuse, particularly to the detriment of children and other vulnerable groups. They therefore expect concrete precautions rather than vague declarations of intent: protective measures against abuse, comprehensible transparency, fast and accessible removal processes and explicitly strengthened child protection.

For Swiss organisations, the classification is clear: as soon as AI content concerns identifiable persons, data protection law is relevant – and the guard rails mentioned in the declaration can be directly linked to the obligations of the FADP. Anyone developing or using generative systems should understand these expectations as a minimum standard.