On 23 February 2026, the Federal Data Protection and Information Commissioner (FDPIC) together with 60 other national data protection authorities published a joint statement on AI-generated images and the protection of privacy. In it, the data protection authorities set out their key expectations and principles for all organisations that develop and use AI systems for content generation. These include, in particular, robust safeguards against misuse, transparency requirements, effective and accessible deletion mechanisms, and specific protection for children.
Current landscape
AI-based systems for generating images and videos are now generally accessible, often free of charge or integrated into existing platforms. What previously required significant technical knowledge can now be produced in high quality with just a few inputs, enabling realistic depictions to be created with little effort. This becomes relevant under data protection and copyright law in particular where identifiable persons are depicted or recreated.
The possible impacts for affected persons are manifold. In addition to violations of personality rights and reputation, misuse scenarios such as harassment, threats, blackmail, cyberbullying, and non-consensual intimate representations are at issue. For minors, the risk situation is particularly pronounced. Of practical significance is also the fact that the harm regularly does not end with the creation: dissemination, re-uploading, and the difficulty of sustainable removal can intensify and prolong the impairment.
From a regulatory perspective, much is in flux. For organisations in Switzerland, however, the following already applies today: where personal data are affected, the Swiss Federal Act on Data Protection (FADP; SR 235.1) applies, as the FDPIC already stated in May 2025. Manufacturers, providers and users are therefore already under a duty, when developing and deploying AI systems, to protect the privacy of affected persons.
The joint statement of the national data protection authorities sharpens this principle for the specific risk area of AI-based image generation: as soon as systems target real, identifiable persons or can depict them, data protection compliance becomes mandatory.
The joint statement of the data protection authorities
The statement of 23 February 2026 was coordinated by the International Enforcement Cooperation Working Group (IEWG) of the Global Privacy Assembly (GPA).
The statement is expressly formulated as a response to serious concerns raised by national data protection authorities: namely AI systems that can generate realistic images and videos showing identifiable persons although these persons are unaware of it and have not consented – so-called deepfakes.
The signatories do not call generative AI as such into question – they even acknowledge the benefits of AI systems – but they tie their concerns to a very specific development: the integration of image and video generation into easily and widely available platforms has significantly facilitated certain forms of misuse and increased the risks for children and adolescents.
The addressee group is noteworthy: the authorities remind all organisations that use or develop AI of their duty to develop and use such systems within the framework of the applicable laws.
Expectations of organisations
While specific statutory regulations vary from country to country, the joint statement of the data protection authorities sets out four general principles intended to guide organisations in the development and use of AI. In Switzerland, these principles by no means fall into a legal vacuum: they can be linked well to the core obligations of the FADP.
1) Robust safeguards against misuse
Under the first point, organisations are expected to implement effective safeguards to prevent the misuse of personal data and the creation of non-consensual and intimate content, particularly where children and adolescents are affected.
In the Swiss FADP, this principle is already codified in Art. 7 FADP: the controller is obliged to design data processing in technical and organisational terms in such a way that the data protection provisions are complied with, and must take this into account from the planning stage onwards. These technical and organisational measures must be appropriate to the state of the art, the nature and scope of the data processing, and the risk that the processing poses to the personality or the fundamental rights of the data subjects.
In practice, this means fewer symbolic measures and instead a clear protection concept: misuse scenarios belong in product and risk management. Clear responsibilities and robust incident handling are equally central. Where the risks are high, for example in the case of realistic depictions of identifiable persons or child-related scenarios, the question of a data protection impact assessment pursuant to Art. 22 FADP also quickly becomes relevant.
2) Transparency
Meaningful transparency is also required regarding the capabilities of the system, existing safeguards, permitted uses and the consequences of misuse. Users should be able to realistically assess what the system can do, where limits are drawn, and what happens in the event of violations.
Under the Swiss FADP, this maps primarily onto the principle of transparency and the information duties under Art. 19 FADP: anyone who processes personal data must disclose, in an understandable form, what is relevant for the data subjects so that they can exercise their rights. In addition, data subjects have a right of access under Art. 25 FADP, under which various pieces of information must be disclosed.
3) Effective and accessible deletion mechanisms
Under the third principle, data subjects should be able to request the removal of harmful content easily, and organisations should respond to such requests quickly and efficiently. Data subjects must be able to enforce their rights effectively, for example via the right of access (Art. 25 FADP), which creates transparency about the processing and is often the first step in being able to proceed in a targeted manner at all.
Under the Swiss FADP, deletion can be obtained following the assertion of an infringement of personality rights pursuant to Art. 30 FADP.
4) Specific protection for children
Finally, the statement calls for specific risks to children to be addressed through enhanced safeguards and for age-appropriate information to be provided for children, legal guardians and teachers.
Even though the FADP does not contain a specific "special provision for children", its risk-based approach results in a practical standard: where minors are affected, the processing will regularly entail a high risk for the data subjects. The more child- or youth-oriented a service is (or the more easily it can be used by minors), the higher the protection standards should be set by default.
Conclusion
The joint statement of 23 February 2026 sets clear guardrails: from the perspective of the data protection authorities, AI-generated images and videos are an area with considerable potential for misuse, especially to the detriment of children and other vulnerable groups. What is expected, therefore, are not vague declarations of intent but concrete measures: safeguards against misuse, comprehensible transparency, fast and accessible removal processes, and expressly strengthened protection for children.
For Swiss organisations, the message is clear: as soon as AI content concerns identifiable persons, data protection law applies – and the guardrails set out in the statement can be directly linked to the obligations under the FADP. Anyone who develops or deploys generative systems should understand these expectations as a minimum standard.
Sources
- FDPIC communication of 23 February 2026: Joint statement on AI-generated content
- FDPIC communication of 8 May 2025: Update – the current Data Protection Act is directly applicable to AI
- Deepfakes – new legal challenges arising from technological progress
- Social bots, "fake news" and "hate speech" – a danger to the opinion-forming process on social networks