Platform X (formerly Twitter) uses public user posts to train its AI Grok – and the Swiss Federal Data Protection and Information Commissioner (FDPIC) has now investigated this practice. Find out what rights users have and what companies should bear in mind.
Background to the preliminary investigation
In early summer 2024, it became known that the operator of platform X, Twitter International Unlimited Company (TIUC), was using users' public posts to train its AI Grok. The FDPIC then conducted an informal preliminary investigation, completed on 20 March 2025, to determine whether this practice complies with the requirements of the Swiss Data Protection Act (DSG).
Legal basis and analysis
Grok is X’s proprietary AI, developed to analyse content on the platform and provide users with personalised information and responses. To train and continuously improve Grok, X uses publicly available user data, including posts, interactions and similar content.
The processing of this data touches on key provisions of the Swiss Data Protection Act: personal data may only be processed with the data subject's consent or on another legal basis (Art. 6 para. 1 DSG). In addition, all processing must be transparent (Art. 8 DSG) and comply with the principle of purpose limitation (Art. 6 para. 3 DSG). This means that data may not be used for purposes other than those for which it was originally collected unless a new legal basis or consent has been obtained.
The use of public posts for training AI systems constitutes significant processing of personal data, even though these posts are publicly accessible. Particularly critical is that, in the context of AI analysis, even seemingly harmless public information can yield new, sensitive insights about individuals.
As part of the informal preliminary investigation, the FDPIC therefore examined in particular:
- The transparency of data processing (Art. 8 DSG)
- Compliance with the rights of data subjects, in particular the right to object (Art. 30 DSG)
TIUC informed the FDPIC that, since 16 July 2024, users have had an explicit option to object via the platform’s privacy settings. By opting out, users can prevent their posts from being used to train Grok.
In addition, TIUC appointed a representative in Switzerland in accordance with Art. 14 DSG, a key requirement for foreign companies processing the data of persons in Switzerland.
The FDPIC concluded that the objection option introduced by TIUC and its explanations of the data processing comply with the requirements of the DSG. Nevertheless, users and companies must still proactively exercise control over their data.
Practical implications and recommendations
The FDPIC’s preliminary investigation highlights the need for both private users and companies to actively control the use of their data. Users of X should regularly review their privacy settings and, if necessary, object to the default use of their public posts for training Grok. Only by consciously managing these settings can users prevent their content from being used for AI purposes without their consent. Private users should also be aware that even publicly shared content may be subject to unwanted secondary use and should therefore exercise caution when publishing sensitive information.
Companies that use social media platforms such as X for professional purposes also bear responsibility: they must ensure that published content does not inadvertently flow into the training of AI systems. This includes revising social media usage guidelines, offering internal training, raising employee awareness of the risks of using the platform, and considering whether an explicit opt-out is necessary. Compliance departments are called upon to closely monitor technological developments, proactively carry out risk analyses and continuously adapt internal guidelines to new regulatory and technological developments. Only a comprehensive, forward-looking approach to data protection can ensure that company content does not inadvertently find its way into AI systems.
Further developments: Meta also uses public content for AI training
The FDPIC’s preliminary investigation into the use of public posts on X comes at a time when more and more companies are pursuing similar approaches. Particularly noteworthy is Meta Platforms, which will begin using public content from European users of Facebook and Instagram to further develop its AI assistant ‘Meta AI’ from 27 May 2025.
Meta also gives users the option to object to the use of their public content for AI training. To do so, Meta provides an online form where users simply need to enter the email address of their account. The deadline for objections is 26 May 2025.
It is becoming increasingly important for private users to be aware of their own data trail and to actively exercise the objection options available to them. Even if platform operators such as TIUC or Meta implement certain protective mechanisms, the fundamental risk remains: once something has been shared publicly, it can be reused and analysed as part of AI training processes.
Conclusion and recommendations
The FDPIC’s findings from the preliminary investigation into AI training at X once again highlight how essential transparency and user autonomy are in data protection law. The right to object that has been introduced is a minimum requirement, but it does not eliminate all risks. Users and companies alike are required to actively manage their settings and take the impact of AI technologies on their data seriously. Compliance teams must closely monitor new developments and ensure that employees are informed about the consequences of using platforms such as X, Facebook or Instagram for AI training.
The FDPIC’s review is of fundamental importance for the future handling of AI projects in Switzerland. It shows that the FDPIC is prepared to respond proactively to developments in the field of artificial intelligence and to consistently demand transparency and user rights. This gives companies that use or develop AI in Switzerland a clearer picture of regulatory expectations: data protection and the protection of user rights are central prerequisites for the admissibility of AI applications. These principles will have to be incorporated even more strongly into practice and the development of new technologies in the future.