The use of artificial intelligence (AI) is revolutionising e-commerce in Switzerland. However, without careful consideration of data protection, significant risks can arise. Find out why transparent information about the use of AI in general terms and conditions (GTC) and privacy policies is essential to avoid legal pitfalls.
The integration of artificial intelligence (AI) into e-commerce offers companies innovative opportunities for process optimisation and customer interaction. At the same time, the handling of personal data in the context of AI applications places high data protection requirements on the companies concerned. Anyone using AI in e-commerce must ensure that their data processing procedures comply with legal requirements. This article highlights the risks associated with using AI without adequate data protection measures and emphasises the importance of clear and comprehensible provisions in terms and conditions and privacy policies.
Risks of non-compliance with data protection when using AI
AI is used in e-commerce in a variety of ways today: companies use AI-supported tools such as product recommendation systems, chatbots, dynamic pricing systems and customer analysis platforms that use data mining and machine learning to evaluate purchasing behaviour and make forecasts. The aim is to understand user behaviour, anticipate preferences and make tailored offers.
Such systems usually work on the basis of extensive personal data – such as click behaviour, previous purchases, time spent on the site or even location data. This data is used to create individual user profiles in order to generate personalised advertising or product recommendations. Problems arise when this data processing goes beyond what is necessary for the respective purpose and far-reaching automated decisions are made – such as price variations depending on user profiles or exclusion from certain offers. This quickly constitutes profiling, and in some circumstances even high-risk profiling.

High-risk profiling is defined in Art. 5 lit. g in conjunction with Art. 21 of the Data Protection Act (DSG) as automated processing that has a significant impact on the data subject, e.g. when a system decides what price a user sees or whether certain products are displayed to them at all. Even where such systems discriminate only implicitly (e.g. by applying price surcharges for certain behaviour patterns), there is a high risk of fundamental rights violations. In such cases, consent to the data processing must be given expressly, the processing must be clearly justified and documented, and transparent information about how the systems used work must be provided. In short, the more targeted and ‘intelligent’ a system is in its personalisation, the greater the risk that it will result in legally problematic profiling. Companies must recognise this threshold at an early stage and design their processes accordingly.
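To help recognise that threshold in practice, the criteria described above can be turned into an internal self-check that product and legal teams run for every new personalisation feature. The following TypeScript sketch is purely illustrative: the interface, field names and decision rule are assumptions made for this example and do not replace a case-by-case legal assessment.

```typescript
// Illustrative self-check: could a planned personalisation feature amount to
// high-risk profiling? All fields and the decision rule are assumptions.

interface ProcessingFeature {
  name: string;
  fullyAutomated: boolean;         // decision is made without human review
  affectsPriceOrAccess: boolean;   // e.g. individual prices, exclusion from offers
  usesBehaviouralProfile: boolean; // click behaviour, purchase history, location
}

function likelyHighRiskProfiling(f: ProcessingFeature): boolean {
  // Heuristic reading of the threshold described above: automated evaluation
  // of a behavioural profile with a significant effect on the data subject.
  return f.fullyAutomated && f.usesBehaviouralProfile && f.affectsPriceOrAccess;
}

const dynamicPricing: ProcessingFeature = {
  name: "dynamic pricing based on user profiles",
  fullyAutomated: true,
  affectsPriceOrAccess: true,
  usesBehaviouralProfile: true,
};

if (likelyHighRiskProfiling(dynamicPricing)) {
  console.log(`${dynamicPricing.name}: obtain express consent, document and explain it`);
}
```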
In principle, the DSG permits the processing of personal data, provided that it is not carried out unlawfully (Art. 6 para. 1 DSG). However, the express consent of the data subject is required if:
- particularly sensitive personal data (e.g. health data, political opinions, religious beliefs) is processed (Art. 5 lit. c DSG),
- high-risk profiling is involved, e.g. through automated decision-making processes that have legal or similarly significant effects on the data subject (Art. 5 lit. g in conjunction with Art. 21 DSG),
- the processing otherwise involves such a far-reaching intrusion into the privacy of the data subject that it could not be justified without consent.
In e-commerce, for example, a recommendation system based on extensive behavioural analysis and leading to personalised price displays or product suggestions may already constitute high-risk profiling – especially if this is done automatically and there is no possibility for the user to influence the process.
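In the shop logic itself, one conceivable safeguard is to compute personalised prices or recommendations only where a documented, express consent to profiling exists, and to fall back to a neutral result otherwise. The sketch below assumes a hypothetical consent store and function names; it illustrates the gating idea and is not a complete implementation.

```typescript
// Hypothetical consent gate: a personalised price is only computed when a
// documented, express profiling consent exists; otherwise the standard price
// is returned. Names, types and storage are assumptions for this sketch.

interface ConsentRecord {
  userId: string;
  purpose: "profiling_personalised_pricing";
  givenAt: Date;
  policyVersion: string; // which privacy policy version the user was shown
  withdrawnAt?: Date;
}

// In a real shop this would be a persistent store, not an in-memory map.
const consentStore = new Map<string, ConsentRecord>();

function hasValidProfilingConsent(userId: string): boolean {
  const record = consentStore.get(userId);
  return record !== undefined && record.withdrawnAt === undefined;
}

function priceFor(userId: string, standardPrice: number): number {
  if (!hasValidProfilingConsent(userId)) {
    return standardPrice; // neutral price: no profiling without consent
  }
  return personalisedPrice(userId, standardPrice);
}

function personalisedPrice(userId: string, standardPrice: number): number {
  // Placeholder for an AI-based pricing model.
  return standardPrice;
}

console.log(priceFor("user-42", 49.9)); // 49.9 – no consent recorded for this user
```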
The privacy policy serves as the central means of implementing the information obligations under Art. 19 and 21 DSG. It informs data subjects about what data is collected, for what purposes, how and by whom it is processed, and what rights they have. In the e-commerce context, where the use of AI, as explained above, often leads to profiling, tracking or automated decisions, the privacy policy becomes a crucial tool for meeting the legal transparency requirements.
A breach of the information obligations is punishable and can result in fines of up to CHF 250,000 (Art. 60 DSG). In addition to penalties, claims for damages and the loss of customers can lead to considerable financial losses for the company concerned. Added to this is the reputational damage caused by data protection violations, which can permanently undermine customer trust and seriously harm the company’s image.
Against this background, the following basic recommendations for action should be observed:
- Create transparency: Users must be informed clearly and comprehensively about the use of AI and the associated data processing.
- Obtain consent: Where necessary, the express consent of users must be obtained and carefully documented.
Role of the general terms and conditions (GTC)
The general terms and conditions (GTC) define the contractual framework between the company and its customers. They are not only a means of limiting risk, but also an instrument for legal certainty and trust-building – especially in data-based business models where contracts are no longer concluded on paper. The purpose of the GTC is to clearly define in advance what applies between the parties, for example with regard to contract content, liability, payment terms and certain aspects relevant to data protection law.
When using AI in particular, terms and conditions should specify the extent to which personalised systems are used, which decisions are made on an algorithmic basis and how these influence the subject matter of the contract. For example, the terms and conditions may state that product recommendations are generated by AI or that price variations are based on automated analysis of user behaviour. This is not only crucial for legal classification, but also offers the opportunity to proactively manage customer expectations.
In addition, the terms and conditions play an informative role that supplements the privacy policy: if, for example, high-risk profiling is involved, express consent is required – but this consent does not necessarily have to be obtained in isolation. The terms and conditions can be used to draw attention to the necessary consents, e.g. by referring to separate declarations of consent or by combining them with active opt-in mechanisms within the ordering process. Designed this way, they create legal clarity and systematically draw users’ attention to critical points.
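One way to picture this in an ordering process is to keep acceptance of the GTC and the consent to AI-based profiling as two separate, initially unticked inputs and to validate them independently on the server. The following sketch only illustrates that separation; the field names and validation rules are assumptions made for this example.

```typescript
// Illustrative server-side validation of a checkout submission: acceptance of
// the GTC is required to conclude the contract, while the profiling consent is
// a separate, optional opt-in that must never be pre-ticked or implied.

interface CheckoutSubmission {
  orderId: string;
  gtcAccepted: boolean;         // required for the contract
  gtcVersion: string;           // which GTC version was displayed
  profilingOptIn: boolean;      // separate, active consent to AI profiling
  privacyPolicyVersion: string; // which privacy policy was linked
}

function validateCheckout(s: CheckoutSubmission): { ok: boolean; reason?: string } {
  if (!s.gtcAccepted) {
    return { ok: false, reason: "GTC must be accepted to place the order" };
  }
  // profilingOptIn may be false: the order still goes through, but the shop
  // must then refrain from high-risk profiling for this customer.
  if (s.profilingOptIn) {
    console.log(
      `Documenting consent for order ${s.orderId} ` +
        `(privacy policy ${s.privacyPolicyVersion}, given at ${new Date().toISOString()})`
    );
  }
  return { ok: true };
}
```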
Last but not least, terms and conditions can regulate liability issues, dispute resolution procedures and areas of application – all aspects that are important when using complex, data-driven systems. Precise wording of the terms and conditions is also an important protective mechanism with regard to AI-generated incorrect decisions (e.g. incorrect product recommendations or automated customer classifications).
Therefore, it is crucial to:
- Review and adapt your terms and conditions: Ensure that your terms and conditions take into account current data protection regulations and contain specific clauses on data processing, especially when AI is used.
- Consistency with the privacy policy: Ensure consistency between the terms and conditions and the privacy policy to avoid contradictions.
Relevance of the federal government’s assessment for Swiss e-commerce
As already discussed in our article on the assessment of AI regulation, ‘Federal Council wants to ratify the Council of Europe’s AI Convention’, the regulation of AI in Switzerland will lead to sector-specific changes in the legal landscape. For e-commerce companies that increasingly use AI for personalised offers, automated customer service, chatbots, dynamic pricing or fraud detection, the roadmap has the following implications:
- Greater transparency requirements: AI systems that make decisions with a significant impact on consumers (e.g. creditworthiness assessments or algorithmically determined prices) could fall into a higher risk category. This would entail stricter transparency requirements.
- Explainability of decisions: Companies may in future have to disclose how AI-supported decisions are reached. This is particularly relevant for automated customer interaction and product recommendations, for example when requests for information are made (see the sketch after this list).
- Liability issues in the event of wrong decisions: The new regulation could require clear liability relationships for AI-generated decisions. For retailers, this means that processes must be reviewed and, if necessary, revised to remain legally compliant.
- Data protection remains a key element: Even though the review aims to create a new regulatory framework, data protection law – in particular the revised DSG – remains a key point of reference. AI regulation will therefore not stand alone, but will have a complementary effect.
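With a view to the explainability point above, one pragmatic preparation is to record, at the moment an AI-supported decision is made, which inputs and model version it relied on, so that the outcome can be reconstructed if a customer later requests information. The log structure below is an assumption for illustration, not a prescribed format.

```typescript
// Hypothetical decision log: records which inputs and model version an
// AI-supported decision relied on, so it can be explained on request later.

interface DecisionLogEntry {
  decisionId: string;
  userId: string;
  decisionType: "product_recommendation" | "dynamic_price" | "fraud_check";
  modelVersion: string;
  inputsUsed: string[]; // e.g. ["purchase history", "click behaviour"]
  outcome: string;      // e.g. "recommended article 4711", "price CHF 49.90"
  decidedAt: Date;
}

const decisionLog: DecisionLogEntry[] = [];

function logDecision(entry: DecisionLogEntry): void {
  decisionLog.push(entry);
}

// Answering an information request: return every logged decision for a user.
function decisionsFor(userId: string): DecisionLogEntry[] {
  return decisionLog.filter((entry) => entry.userId === userId);
}
```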
Conclusion: Proactive data protection strategies are key to sustainable e-commerce with AI
The use of artificial intelligence in e-commerce offers enormous potential – from efficient customer interaction to data-based product optimisation. At the same time, this technology poses significant data protection challenges. The DSG requires informed, express consent, especially for high-risk profiling and the processing of sensitive data. Those who fail to obtain it risk not only fines, but also the trust of their customers.
In addition, privacy policies and general terms and conditions are becoming more important than ever. They are not merely mandatory documents, but an integral part of a legally compliant, transparent and trust-building customer relationship. Keeping them consistent and up to date is crucial for the legal protection of data-driven business models.
Last but not least, the federal government’s roadmap for AI regulation sets new priorities. It announces a regulatory regime that requires transparency, explainability and accountability – including in the private sector. Companies that already rely on responsible, data protection-compliant AI management not only gain a compliance advantage, but also position themselves strategically and sustainably in the competitive environment.
In short, anyone who uses AI must take data protection seriously – not only for legal reasons, but also for economic ones. A holistic, forward-looking approach to consent, privacy policies, terms and conditions and regulatory developments is essential for Swiss e-commerce providers.