Artificial intelligence has arrived in public communication. A new study shows how ambivalently the Swiss population reacts to the use of AI by public authorities. Why this is forcing not only the administration but also companies to act, and which legal issues need to be clarified now.
Ambivalence meets reality: the public’s attitude towards AI in communication
The latest study conducted by gfs.bern on behalf of the Federal Chancellery provides the first comprehensive picture of how the Swiss population perceives the use of AI in communication with the authorities. The basic tenor: there is a general openness – under clear conditions. Citizens expect transparency, human control, data protection and data sovereignty. At the same time, the use of AI is viewed positively, particularly for translations, summaries and text simplification.
This selective acceptance is a signal to the authorities, but also to companies: technological change is being observed and evaluated. Trust is the currency in which innovation must be paid for.
Communication change as a cultural challenge
Public authorities are under increasing pressure to become more digital, more efficient and more accessible. AI applications can play a supporting role in this – as a “co-pilot”, not an autopilot, as the interviews in the study aptly put it. However, the projects to date also show that success depends not only on technology, but above all on cultural change. This includes, for example, involving the population at an early stage, avoiding unnecessary promises of automation and communicating realistic expectations.
A similar need for cultural change also applies to private-sector organisations. The introduction of AI-based solutions must always be actively managed within the organisation, prepared through clear communication and properly anchored in regulatory terms.
Legal categorisation and challenges
For companies – especially those in the fields of communication, marketing, customer support or content production – the question arises as to how AI can be used in a legally compliant manner. Various areas of law are affected:
- Data protection law: If personal data is processed or used by AI (e.g. in chatbots or content personalisation), the transparency, information and, where applicable, consent obligations under data protection law must be examined.
- Liability: Who is liable for incorrect or discriminatory content generated by AI? What role does human control play? This also concerns questions of product liability in the digital context.
- Contract law and labour law: Contracts with service providers that use AI should clearly regulate who is responsible for errors or breaches. Labour law implications such as co-determination rights or training obligations are also playing an increasingly important role.
- Transparency obligations and regulatory requirements: The EU AI Act (and potentially similar future regulation in Switzerland) requires clear documentation and risk assessments when AI systems are used.
Recommended actions for companies and authorities
In view of these legal challenges and social expectations, how can companies and authorities implement AI-based solutions responsibly and in compliance with the law?
- Establish clear internal governance that defines responsibilities, processes and quality standards for AI projects.
- Involve legal expertise at an early stage to ensure that new tools are legally sound.
- Carry out a thorough data protection impact assessment in order to identify data protection risks early and meet legal requirements.
- Document the systems used precisely, in particular their training data, decision-making bases and sources; a simple sketch of what such documentation might record follows below.
- Communicate the use of AI openly and transparently to employees, users and the public in order to create trust and promote social acceptance.
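To make the documentation point more tangible, the following is a minimal, purely illustrative Python sketch of what an entry in an internal AI system register could record. The AISystemRecord class, its field names and all values are assumptions made for this example; they are not a prescribed standard and do not reproduce any specific regulatory template.

```python
# Purely illustrative sketch: a minimal entry in a hypothetical internal AI system
# register. The class, field names and values are assumptions for this example,
# not a legal template or a requirement derived from the AI Act.
from dataclasses import dataclass
from datetime import date
from typing import List, Optional

@dataclass
class AISystemRecord:
    """One entry in a hypothetical internal register of AI systems."""
    name: str                          # internal name of the tool or use case
    purpose: str                       # what the system is used for
    provider_model: str                # underlying service or model
    personal_data: List[str]           # categories of personal data processed, if any
    training_data_sources: List[str]   # known provenance of training/reference data
    human_oversight_owner: str         # person or role responsible for human review
    dpia_completed_on: Optional[date]  # date of the data protection impact assessment
    risk_notes: str = ""               # known limitations, error sources, escalation path

# Fictitious example: a citizen-facing FAQ chatbot
chatbot_record = AISystemRecord(
    name="FAQ chatbot",
    purpose="Answering routine questions about opening hours and forms",
    provider_model="Third-party large language model accessed via API",
    personal_data=["name", "email address", "free-text enquiries"],
    training_data_sources=["published FAQ pages", "anonymised support tickets"],
    human_oversight_owner="Communications team lead",
    dpia_completed_on=date(2024, 11, 1),
    risk_notes="Answers are drafts; outgoing replies are reviewed by a human.",
)

print(chatbot_record.name, "- DPIA completed on:", chatbot_record.dpia_completed_on)
```

However such a register is structured in practice, it can support both the documentation duties outlined above and transparent communication towards employees and users.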
Conclusion and outlook: what counts now
The gfs.bern study provides a differentiated, empirically based insight into society’s attitude towards AI in communication with public authorities – and thus indirectly also towards AI communication in general. The population accepts technological innovation under clear conditions – and expects responsibility. Anyone integrating AI today must do so with care, legal clarity and social sensitivity.
The legal landscape will develop dynamically over the next few years – both at national and European level. Companies and authorities would be well advised to prepare for regulatory requirements now and establish appropriate processes. After all, those who act today will secure trust and competitive advantages tomorrow.
Sources