
Switzerland: Friendly reminder from the FDPIC: According to a press release from the FDPIC dated 9 November 2023, the Federal Act on Data Protection (FADP), which has been in force since 1 September 2023, is also directly applicable to data processing by AI. Read here to find out why this is the case and how it affects practice.

When it comes to regulating the use of AI, legal systems around the world are at different stages of development. While important steps towards regulation have been taken in the USA, with the signing of an executive order on 30 October 2023, and in the EU, with the European Parliament's approval of the AI Act, Switzerland remains in an evaluation phase until the end of 2024. The Federal Administration has until then to decide on an approach and, if necessary, issue a regulatory mandate. To date, Switzerland has enacted technology-specific legislation only on a sector- or topic-specific basis (the so-called sectoral approach). It remains to be seen whether this approach will be maintained or whether a general AI regulation modelled on the European AI Act will be pursued instead.

The current increase in data processing by AI prompted the Federal Data Protection and Information Commissioner (FDPIC) to point out that, despite the lack of AI regulation, Switzerland is by no means in a legal vacuum and that the new FADP in particular applies. The reason for this is the technology-neutral wording of the FADP.

This means that manufacturers, providers and users of AI systems must disclose the purpose, functionality and data sources of processing carried out by AI. Related to this is the data subject's right to have automated individual decisions reviewed by a human and to object to automated data processing. Where intelligent language models communicate directly with users, those users have the right to know whether they are speaking or corresponding with a machine, and whether their data is being used further, whether to improve the self-learning programme or for other purposes. Special care is required with programmes that can alter a person's face or voice: these must be clearly labelled as such for everyone, unless they already prove to be unlawful under criminal law.

Finally, it is worth noting that even high-risk AI-supported data processing is permissible in principle, provided a data protection impact assessment establishes that appropriate measures are in place to protect the data subjects. Only applications that completely disregard privacy as protected by data protection law are prohibited per se. These include AI-based data processing systems used primarily in authoritarian states, such as "social scoring" (comprehensive surveillance to assess lifestyle) or blanket real-time facial recognition.
