Deepfakes undermine the very foundation of digital communication: trust. What looks and sounds genuine is, in fact, anything but. For businesses, this creates a complex web of liability risks, data protection issues and new fraud scenarios. Those who do not understand the legal implications underestimate the scale of the problem. Deepfakes are no longer a marginal phenomenon on the internet, but are fundamentally changing how authenticity is perceived in the digital space, with consequences for reputation, data protection, platform regulation and law enforcement.
Deepfakes as a structural challenge for the law
The development of generative artificial intelligence has given rise to a new and previously unimagined level of manipulation. Deepfakes – understood as synthetically generated or manipulated image, audio or video content that imitates real people or events with deceptive realism – do not merely represent yet another technical risk, but deeply undermine the fundamental assumptions of legal systems.
Examples of the practical relevance of deepfakes can be found in both political and economic contexts: these include manipulated videos of Volodymyr Zelenskyy in which he appears to declare surrender, fabricated images of the White House in flames, and audio deepfakes used in CEO fraud, where executives' voices are imitated to obtain payment authorisations or confidential information.
Legal systems assume, in key areas, that perception and representation fundamentally correspond with one another. This implicit premise forms the basis for evidence, communication and trust in legal transactions. Deepfakes undermine precisely this assumption by systematically blurring the line between authentic and manipulated representation. What is particularly legally sensitive here is not only their technical quality, but their effect: they decouple the person from the representation. A face, a voice or a scene can appear genuine without actually being so. The real damage therefore often lies not only in the content itself, but in the destruction of trust in perception, communication and the evidential value of digital media.
In a corporate context, this gives rise to particularly serious risks. Business processes – such as payment approvals, contract signings or internal directives – regularly rely on the authenticity of communication. If this authenticity becomes technically vulnerable, it not only creates new fraud scenarios but also significant legal uncertainties. Deepfakes thus act as a multiplicative risk factor, exposing and exacerbating existing vulnerabilities in organisation, technology and law.
For senior executives, this represents a fundamental shift in the risk landscape: it is no longer just systems, but the very basis for decision-making that becomes vulnerable. Consequently, the risk shifts from the technical level to the level of corporate decision-making – with immediate consequences for governance, liability and organisational duties.
Objective and methodological approach
The aim of this paper is to systematically analyse the legal implications of deepfakes and to assess their practical consequences for businesses. The central question is whether and to what extent existing legal instruments are suitable for identifying and managing the risks arising from deepfakes. In contrast to purely doctrinal considerations, the analysis is deliberately conducted from the perspective of decision-makers: which legal risks actually guide action, and what consequences do these have for the organisation and management of companies?
The analysis follows the traditional legal categories – civil law, data protection law and criminal law – and is supplemented by an examination of regulatory developments in the European Union. What is crucial here is not merely the abstract presentation of the legal norms, but their application through concrete subsumption based on typical deepfake scenarios, particularly in a corporate context.
Protection of personality as a central defence mechanism
Civil law protection of personality forms the primary line of defence against deepfakes. According to Art. 28 et seq. of the Swiss Civil Code (ZGB), personality is protected in its various forms, including in particular the right to one's own image, the voice as an expression of individual identity, and one's social and professional reputation.
Deepfakes typically infringe upon all of these protected interests simultaneously. If, for example, a person is depicted in a specific situation or associated with specific statements by means of synthetically generated audio or video content, this constitutes an infringement of their personality rights. In the legal classification, it is not the technical production of the content that is decisive, but its effect: whether an average third party would attribute the depiction to the person concerned. If so, the elements of a violation of personality rights are generally fulfilled.
When determining liability, the first step is to assess whether a violation of personality rights has occurred, which requires that the person concerned is recognisable and that the depiction is capable of influencing their social perception. Deepfakes meet both conditions by design: they are created precisely to be recognisable and to appear authentic.
Unlawfulness is presumed unless a ground of justification applies. Such grounds include, in particular, the consent of the person concerned, an overriding private or public interest, or a statutory basis.
In a corporate context, these justifications are generally ruled out. Consent is typically not present, as deepfakes are created precisely without the knowledge of the person concerned. An overriding public interest may exist in the case of satirical or journalistic content, but this presupposes that the fictional nature remains recognisable to the average recipient. It is precisely this recognisability that deepfakes are specifically designed to undermine.
The consequence is that deepfakes are generally to be classified as unlawful infringements of personality rights. This gives rise to claims for injunctions, removal, damages and, where applicable, compensation for non-pecuniary damage. For businesses, this means that both the creation and the dissemination of such content give rise to significant civil law risks.
Data protection assessment
In addition to the protection of personality rights, data protection law is central. Deepfakes regularly involve the processing of personal data, in particular biometric data such as facial images or voice patterns.
Under Article 6 of the Data Protection Act (DSG), data processing is only permissible if it is lawful and complies with the principles of good faith, proportionality, purpose limitation and data accuracy. Upon analysis, it becomes apparent that deepfakes typically violate these principles. The principle of purpose limitation is disregarded, as data is used for a completely different purpose than that for which it was originally collected. The principle of accuracy is undermined, as objectively false content is generated that appears authentic. Transparency is also regularly entirely lacking.
Under Art. 30 of the Data Protection Act (DSG), a violation of personal rights occurs in particular where data is processed without justification or where incorrect data is disseminated. Even if the source data was originally collected lawfully, this does not legitimise its subsequent use for the creation and dissemination of manipulative content.
Of particular legal significance is the breach of the principle of data accuracy: Deepfakes deliberately generate false personal data that appears authentic. This constitutes not only unlawful processing but also a serious form of infringement of personal rights.
For businesses, this intensifies compliance requirements: it is not enough to collect data correctly – rather, it must be ensured that it is not transferred into manipulative contexts. At the same time, they must implement organisational and technical measures to minimise the associated risks.
Criminal law classification
Deepfakes are also relevant under criminal law in several respects. The primary focus is on the offence of misuse of identity under Art. 179decies of the Swiss Criminal Code (SCC).
When determining whether the offence applies, it must be assessed whether another person’s identity is being used to cause harm or to obtain an unlawful advantage. In deepfake-based fraud scenarios – such as CEO fraud – these conditions are regularly met. The synthetic replication of a voice or appearance is specifically intended to generate trust and trigger financial transactions.
In addition, offences against honour may be considered where deepfakes are used to disseminate false information capable of damaging a person’s reputation. Cases involving sexualised deepfakes, which may fall under Article 197 of the Swiss Criminal Code, are particularly serious.
The criminal law implications underscore that deepfakes are not merely a civil or data protection issue, but also have a clear dimension of criminal offences.
Regulatory divergence: Switzerland and the European Union
A look at regulatory developments reveals clear differences between Switzerland and the European Union. Whilst Switzerland continues to rely on technology-neutral standards, the EU is pursuing a specific regulatory approach with the AI Act. Among other things, this provides for labelling and transparency obligations for AI-generated content. The aim is to ensure that deepfakes are recognisable and thus reduce their deceptive effect.
Added to this is a second regulatory axis: the Digital Services Act. Whilst this is not a general ‘deepfake law’, it shifts responsibility onto the platform infrastructure. Where synthetic or manipulated content is disseminated on a massive scale and systemic risks arise for the public, security or democratic processes, labelling, moderation and risk management become subject to regulation. For Swiss companies, both pieces of legislation are only relevant if they fall within their scope of application.
In this context, the preliminary draft of the Federal Act on Communication Platforms and Search Engines (KomPG) is gaining significance for Switzerland. Unlike the AI Act, it does not create a general obligation to label deepfakes, but regulates them indirectly where they appear on major platforms as allegedly unlawful content or as a systemic risk. Of particular practical relevance are reporting and complaint procedures for allegedly unlawful content, transparency obligations, and the duty to conduct an annual assessment of systemic risks. This is especially significant for defamatory, disparaging, threatening or otherwise unlawful deepfakes. It remains to be seen whether the legislator will tighten regulation in this area further in view of the rapid spread of deepfakes.
Corporate risks and organisational obligations
Deepfakes are not merely an external risk; they also expose internal weaknesses. Companies are obliged to implement appropriate organisational and technical measures to prevent misuse. Whilst this primarily concerns IT infrastructure, the 'human' factor remains critical: even the most effective security barriers in IT applications fail if an employee does not recognise that a deepfake is in play. Deepfakes are therefore not an isolated IT risk. Legally, the decisive point is that they trigger organisational duties: companies must be able to demonstrate that they have taken appropriate measures to prevent such risks. This is where Art. 716a of the Swiss Code of Obligations (CO) applies: the board of directors bears non-transferable responsibility for the ultimate supervision of the company and thus for appropriate risk management, and deepfakes now constitute such a risk. A failure to address it may establish organisational negligence and a breach of the duties of care incumbent on senior management.
Classification in the context of AI content
As already demonstrated in our HÄRTING article 'Artificial, but not without consequences: The underestimated risk of AI content', AI content is by no means legally neutral. Deepfakes represent a particularly high-risk manifestation of this phenomenon. The key insight must therefore be: AI risks should not be viewed in isolation, but as part of a systemic risk landscape, of which deepfakes are merely the most visible manifestation.
Recommendations for action
Against this backdrop, prevention is of central importance. Companies should review their authentication processes and, in particular, rely on multi-factor verification mechanisms. Furthermore, raising staff awareness is crucial for recognising typical attack patterns. Equally important is the establishment of clear incident response structures that enable a rapid and coordinated reaction in the event of an emergency.
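The multi-factor verification recommended above can be illustrated as a simple policy rule: a payment request arriving over a channel that can be spoofed by a deepfake (voice, video, email) must never be executed on the strength of that channel alone. The following Python sketch is purely illustrative; all names (`PaymentRequest`, `requires_second_channel`, the threshold value) are assumptions for the example and do not reflect any specific standard or product.

```python
# Illustrative sketch of an out-of-band verification rule for payment requests.
# All names and the threshold are hypothetical, chosen for this example only.

from dataclasses import dataclass

APPROVAL_THRESHOLD = 10_000  # assumed amount above which a second channel is mandatory


@dataclass
class PaymentRequest:
    requester: str
    amount: float
    channel: str  # channel the request arrived on, e.g. "video_call"


def requires_second_channel(req: PaymentRequest) -> bool:
    """Flag requests that exceed the threshold or arrive via a channel
    that a deepfake could plausibly spoof (voice, video, email)."""
    spoofable = {"voice_call", "video_call", "email"}
    return req.amount >= APPROVAL_THRESHOLD or req.channel in spoofable


def approve(req: PaymentRequest, confirmed_out_of_band: bool) -> bool:
    """Never act on the original (possibly deepfaked) channel alone:
    flagged requests require confirmation via an independent channel,
    e.g. a call-back to a known number or an internal ticketing system."""
    if requires_second_channel(req):
        return confirmed_out_of_band
    return True
```

The design point is that the check keys on the channel, not on how convincing the request seems: a video call from the 'CEO' is treated as unverified by definition, which is exactly the property that defeats deepfake-based CEO fraud.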
Conclusion
Deepfakes pose a fundamental challenge to the law and corporate practice. They undermine the foundations of digital communication and create new risks that existing tools can only partially address. The key challenge lies not in the technology itself, but in the ability of companies to manage its implications legally and organisationally. For senior management, this means that those who do not actively manage deepfake risks may already be breaching their organisational duties today.
Sources
- HÄRTING article: 'Artificial, but not without consequences: The underestimated risk of AI content'
- Federal Act on the Amendment of the Swiss Civil Code (Part Five: The Code of Obligations, CO)
- Federal Act on Data Protection (DSG)
- Preliminary draft of the Federal Act on Communication Platforms and Search Engines (KomPG)