
Deepfakes are forgeries produced by computers. The term is a portmanteau of “deep learning” and “fake”. As the quality of deepfakes steadily improves, people today can hardly distinguish a fake from a real video. Deepfakes are thus likely to accelerate the spread of fake news by lending it credibility; they also enable fraud and identity theft and violate privacy rights. In this article, we analyse problems that can arise from deepfakes.

1. Introduction

Deepfakes are “credible media generated by a deep neural network”. The production of deepfakes requires “deep learning” and artificial neural networks, i.e. artificial intelligence. Deep learning is a subfield of machine learning that attempts to mimic the way the human brain works. With the help of this technique, data can be pooled and predictions made. For this purpose, deep learning uses multilayer artificial neural networks, e.g. deep recurrent neural networks, i.e. neural networks with feedback connections. Unlike traditional machine learning, deep learning can also process unstructured data.

Deepfakes can be created with the help of software that is freely available on the internet, such as DeepFaceLab or FakeApp. Sufficient image material in the form of photos or videos is needed to train the artificial neural network. For ideal training, current software needs about 5,000 images of the target person, but even several hundred images can be enough to create a realistic deepfake. During training, the algorithm identifies the most important parameters of the faces and learns to reconstruct them. Finally, the software delivers manipulated image files according to the producer's specifications, which can then be assembled into a video with the help of another program. The same technology can also be used to create faces of people who do not exist.
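To make the mechanism more concrete: face-swap tools of this kind typically rely on an autoencoder with one encoder shared between two identities and a separate decoder per identity. The following is a minimal, illustrative PyTorch sketch of this idea; the layer sizes, the 64x64 crop size, the training length and the random stand-in training data are our own assumptions, not the actual architecture of DeepFaceLab or FakeApp.

# Minimal sketch of the shared-encoder / twin-decoder autoencoder idea behind
# face-swap deepfakes. All sizes and the training data are illustrative
# assumptions, not the architecture of any specific tool.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compresses a 64x64 RGB face crop into a latent code shared by both identities."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs a face from the latent code; one decoder is trained per identity."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 8 -> 16
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 8, 8))

encoder = Encoder()
decoder_a = Decoder()  # learns to render identity A
decoder_b = Decoder()  # learns to render identity B
opt = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder_a.parameters()) + list(decoder_b.parameters()),
    lr=1e-4,
)
loss_fn = nn.L1Loss()

# Stand-in for the hundreds to thousands of aligned face crops per person
# mentioned above (random tensors here, so the sketch runs as-is).
faces_a = torch.rand(16, 3, 64, 64)
faces_b = torch.rand(16, 3, 64, 64)

for step in range(100):  # real training runs for many thousands of steps
    loss = loss_fn(decoder_a(encoder(faces_a)), faces_a) \
         + loss_fn(decoder_b(encoder(faces_b)), faces_b)
    opt.zero_grad()
    loss.backward()
    opt.step()

# The swap: encode a face of A, decode it with B's decoder. The shared encoder
# carries over pose and expression; decoder B renders them with B's appearance.
with torch.no_grad():
    swapped = decoder_b(encoder(faces_a[:1]))  # shape (1, 3, 64, 64)

The decisive step is the last line: because the encoder is shared, a face of person A can be decoded with person B's decoder, so that pose and expression are preserved while the identity is exchanged.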

2. Violations of personality rights

The majority of deepfakes available on the internet today are of a pornographic nature. The publication of such a manipulated video can seriously violate the personality rights of the person whose appearance is imitated. Due to technological progress, even private individuals who are not in the public eye can be affected by deepfakes today, as fewer and fewer images are needed to produce a convincing imitation. This problem will become more acute as the technology advances: a method already exists that requires only a single image to produce a deepfake. If an unlawful violation of personality rights occurs, the person concerned can take action against all persons involved in the violation in accordance with Art. 28 of the Swiss Civil Code (Schweizerisches Zivilgesetzbuch; SR 210).

3. Damage to reputation

Apart from violating the right to one's own image, whether static or moving, and the right to one's own voice, deepfakes can also constitute an offence against honour under Art. 173 ff. of the Swiss Criminal Code (StGB; SR 311.0). A violation of honour can arise where the manipulated statements or depictions damage the reputation of the person concerned, or where that person is falsely accused of reprehensible behaviour. As a consequence, videos and images can no longer be used as evidence in court or as journalistic sources without close scrutiny. It is problematic, however, that available detection software is far from able to identify all deepfakes, as the sketch below illustrates.
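One reason for this limitation is that a typical detector is, at its core, a binary classifier trained on fakes produced by known generation methods, so it tends to miss manipulations it has not seen before. The following deliberately simplified PyTorch sketch is hypothetical; the architecture, sizes and random stand-in data are our own assumptions and do not correspond to any specific detection product.

# Illustrative sketch of a frame-level deepfake detector: a binary classifier
# over face crops. All names and sizes are assumptions for illustration only.
import torch
import torch.nn as nn

detector = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
    nn.Flatten(),
    nn.Linear(64 * 16 * 16, 1),  # single logit: real (0) vs. fake (1)
)
opt = torch.optim.Adam(detector.parameters(), lr=1e-4)
loss_fn = nn.BCEWithLogitsLoss()

# Stand-in training data: face crops labelled real/fake. In practice the fakes
# come from *known* generators, so the detector learns their specific artefacts.
frames = torch.rand(32, 3, 64, 64)
labels = torch.randint(0, 2, (32, 1)).float()

for step in range(100):
    loss = loss_fn(detector(frames), labels)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Scoring an unseen frame: a deepfake produced with a technique absent from
# the training data may well receive a low "fake" probability.
with torch.no_grad():
    p_fake = torch.sigmoid(detector(torch.rand(1, 3, 64, 64))).item()

A detector of this kind scores each frame against the artefacts it was trained on; a deepfake produced with an unfamiliar technique can therefore be scored as genuine, which is why detection results should not be treated as conclusive evidence.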

4. Liability

It is also conceivable that liability issues may arise. It is questionable who is liable for damage caused by incorrect information, e.g. if a deepfake circulates containing false information about a supposedly worthwhile investment. An investor who relies on this information is deceived, because the deepfake makes the investment appear safe, for example by showing a well-known person apparently promoting the product. The investor could claim damages under Art. 41 of the Swiss Code of Obligations (OR; SR 220) for the loss suffered. To do so, he would have to prove the unlawfulness of the act and the financial loss. The injured party must also identify the producer of the deepfake in order to bring a claim against him, which can prove difficult.

In our opinion, an action for damages pursuant to Art. 41 ff. OR cannot be brought against the person depicted in the deepfake if that person demonstrably has nothing to do with the statements made and the video was produced by a third party. In our opinion, the same conclusion must be drawn if a well-known personality calls for acts of violence in a deepfake: this person cannot be held liable for the statements made in a fake video, as long as he or she was not involved in its production or informed about it. We recommend that people affected by deepfakes file a criminal complaint. This helps to ensure that they are not later held liable for statements made in the deepfake, and it makes clear that the statement was not made by the person depicted, but that their image was manipulated by third parties or by artificial intelligence. If necessary, we will be happy to assist you in drawing up a criminal complaint.

5. Data protection

In deepfakes, people's faces are recognisable, meaning that those persons are identified or identifiable. Deepfakes therefore contain personal data within the meaning of Art. 3 lit. a of the Federal Act on Data Protection (FADP; German: DSG; SR 235.1), and the FADP applies. In the case of pornographic deepfakes, the privacy of the persons concerned is affected, which means that personal data requiring special protection is being processed (Art. 3 lit. c No. 2 FADP). The same applies to deepfakes of politicians, who can be made to utter whatever statements the producer wishes (Art. 3 lit. c No. 1 FADP). The general processing principles of Art. 4 FADP apply, and only accurate personal data may be processed (Art. 5 FADP). In our opinion, both requirements are disregarded in the case of deepfakes, which is why the production, use and dissemination of deepfakes is impermissible from a data protection perspective.

6. Compliance obligations

The EU has already addressed how to deal with artificial intelligence and deepfakes and regulates their use in its proposal for an EU regulation on artificial intelligence. The proposal divides artificial intelligence systems into the following categories: prohibited systems, high-risk systems, systems with limited risk, and systems with minimal or no risk (see also our Insight of 23 December 2021). According to the proposal, transparency obligations explicitly apply to deepfakes (cf. Art. 52 para. 3 of the proposal), which is why they are to be classified as systems with limited risk. On the basis of this provision, producers of deepfakes must disclose that the images have been manipulated. There is no disclosure obligation where the deepfake technology is used in the context of criminal prosecution or investigation, where the obligation would restrict fundamental rights, or where protective measures already exist.

7. Aspects of intellectual property rights

In connection with intellectual property rights, the question also arises as to whether deepfakes can be protected by copyright. The Swiss Copyright Act (URG; SR 231.1) presupposes that a work as defined in Art. 2 URG exists. For this to be the case, the following elements must be fulfilled: the object constitutes an intellectual creation, can be classified in the field of literature or art, and exhibits individuality. In addition, the creator principle enshrined in Art. 6 URG must be satisfied, according to which only natural persons can qualify as authors. Deepfakes do not satisfy the creator principle: they are computer-generated videos and images that were not produced by a natural person and that, moreover, merely reproduce a copy of a real person, i.e. of the original. Furthermore, it is questionable whether human involvement limited to triggering the training process is sufficient for an intellectual creation within the meaning of the URG. In our opinion, this minimal activity of a natural person is not sufficient; a creative activity of the producer would be necessary. The individuality of a deepfake can also be denied, because the intended imitation or deception does not constitute an individual design, and the producer can neither control nor influence the output of the software used. It must be noted, however, that these questions must be analysed on a case-by-case basis, as different production methods exist and there is therefore a certain room for interpretation.
