FINMA has formulated clear expectations for financial companies in its Supervisory Communication 08/2024 on governance and risk management in the use of artificial intelligence. The focus is on governance, risk management and compliance requirements, which have often been insufficiently taken into account to date. In this article, we explain which specific requirements now apply and which measures companies should take.
The use of artificial intelligence in the financial sector has increased rapidly in recent years. While AI systems enable new business models and significantly increase the efficiency of existing processes, they also carry considerable operational, legal and reputational risks. In its Supervisory Communication 08/2024, FINMA has responded to this development and formulated clear requirements for supervised institutions that use AI-supported systems.
- No specific AI regulation – but clear expectations
There is currently no specific legislation for artificial intelligence in Switzerland. Nevertheless, AI applications are subject to the general supervisory requirements of governance and risk management. FINMA therefore expects supervised institutions to actively address the risks of the AI systems they use and adapt their control mechanisms accordingly.
In its Supervisory Communication 08/2024, FINMA clarifies that supervised institutions must systematically identify, assess and manage the specific risks associated with the use of AI. Below, we comment on the risks FINMA identifies in connection with the use of AI and set out FINMA's requirements for financial service providers.
- Key risk aspects according to FINMA
According to FINMA, there is a fundamental danger of model risks arising from incorrect or biased models, which can lead to flawed decisions. Particularly problematic are bias, a lack of robustness and insufficient explainability of the models.
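To make the bias risk concrete, the following minimal sketch (our illustration, not part of the FINMA communication) computes a common fairness metric, the demographic parity difference, over a model's decisions. The column names and data are hypothetical.

```python
import pandas as pd

def demographic_parity_difference(df: pd.DataFrame,
                                  group_col: str,
                                  decision_col: str) -> float:
    """Difference between the highest and lowest approval rate
    across groups; 0.0 means all groups are treated identically."""
    rates = df.groupby(group_col)[decision_col].mean()
    return float(rates.max() - rates.min())

# Hypothetical credit decisions: 1 = approved, 0 = rejected.
decisions = pd.DataFrame({
    "age_band": ["18-30", "18-30", "31-50", "31-50", "51+", "51+"],
    "approved": [1, 0, 1, 1, 0, 0],
})
print(demographic_parity_difference(decisions, "age_band", "approved"))
# A large gap is a signal to investigate the model for bias.
```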
FINMA also points out that supervised institutions are exposed to significant IT and cybersecurity risks. The increasing use of cloud services and third-party providers can introduce security vulnerabilities and increase dependence on external providers.
Another significant risk relates to the lack of transparency and explainability of AI-supported decisions, which can raise regulatory and liability issues.
- Governance and internal control systems
From a governance perspective, FINMA emphasises that companies must develop clear structures for the use of AI. This includes, first, maintaining an inventory of all AI applications and systematically categorising them by risk level, and second, documenting the decision-making processes of the AI models in a comprehensible manner.
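How such an inventory might look in practice is illustrated by the following minimal sketch. The risk tiers and record fields are our own assumptions, since FINMA does not prescribe a specific format.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):    # assumed tiers; FINMA prescribes no fixed scale
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"

@dataclass
class AIApplication:
    name: str
    owner: str                              # accountable business unit
    purpose: str
    risk_tier: RiskTier
    external_provider: str | None = None    # relevant for outsourcing risks
    decision_logic_doc: str = ""            # link to decision-process documentation

inventory: list[AIApplication] = [
    AIApplication(
        name="credit-scoring-v2",
        owner="Retail Lending",
        purpose="Pre-screening of consumer loan applications",
        risk_tier=RiskTier.HIGH,
        external_provider="Example Cloud AI Ltd.",  # hypothetical vendor
        decision_logic_doc="https://wiki.internal/credit-scoring-v2",
    ),
]

# Report all high-risk applications, e.g. for the risk committee.
for app in inventory:
    if app.risk_tier is RiskTier.HIGH:
        print(app.name, "->", app.owner)
```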
FINMA also states that clear contractual provisions on liability and transparency are essential when using external AI services. If you need help in overcoming these challenges, we will be happy to assist you.
- Data quality, documentation and explainability
FINMA emphasises that the quality of the data used is crucial for the safety and reliability of AI models. Insufficient control over training data can lead to incorrect or biased results. The accuracy, consistency, completeness and timeliness of the data must be ensured on an ongoing basis. Regular testing, backtesting and continuous validation are therefore essential.
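As an illustration of what such ongoing controls can look like, the following sketch runs a few basic quality checks on a training dataset with pandas. The thresholds, column names and data are hypothetical.

```python
import pandas as pd

def data_quality_report(df: pd.DataFrame, timestamp_col: str,
                        max_staleness_days: int = 30) -> dict:
    """Basic completeness, consistency and timeliness checks."""
    staleness = (pd.Timestamp.now() - df[timestamp_col].max()).days
    return {
        "missing_share": df.isna().mean().max(),  # worst column's missing share
        "duplicate_rows": int(df.duplicated().sum()),
        "is_stale": staleness > max_staleness_days,
    }

training_data = pd.DataFrame({
    "income": [52000, None, 47000],
    "default": [0, 1, 0],
    "loaded_at": pd.to_datetime(["2024-11-01", "2024-11-01", "2024-11-02"]),
})
print(data_quality_report(training_data, "loaded_at"))
```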
In order to use AI models sustainably and reliably, FINMA emphasises the need for thorough documentation. Among other things, the purpose of the application, the selection and preparation of the data, the choice of model and its limitations must be recorded.
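One lightweight way to capture these documentation points is a structured record per model, as in the sketch below. The fields mirror the items FINMA names; the format itself and all example content are our assumptions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ModelDocumentation:
    application_purpose: str
    data_selection: str        # which sources were used and why
    data_preparation: str      # cleaning, filtering, feature engineering
    model_choice: str          # model family and rationale
    known_limitations: str     # where the model must not be relied upon

doc = ModelDocumentation(
    application_purpose="Pre-screening of consumer loan applications",
    data_selection="Internal loan book 2018-2023, excluding corporate clients",
    data_preparation="Removed records with missing income; capped outliers at p99",
    model_choice="Gradient-boosted trees, chosen for tabular performance",
    known_limitations="Not validated for self-employed applicants",
)
```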
AI models whose outputs cannot be explained can lead to considerable legal and supervisory problems. FINMA therefore calls for increased measures to improve explainability. Explainability goes hand in hand with the reproducibility of AI-generated results, which further strengthens confidence in AI systems. In addition, key AI systems should be regularly reviewed by independent experts to ensure their reliability and transparency.
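To hint at what explainability and reproducibility measures can look like in code, the sketch below fixes random seeds for reproducible results and computes permutation feature importance with scikit-learn. The dataset and model are placeholders, not a recommended method.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(seed=42)       # fixed seed -> reproducible results
X = rng.normal(size=(500, 3))              # placeholder features
y = (X[:, 0] + 0.1 * rng.normal(size=500) > 0).astype(int)

model = RandomForestClassifier(random_state=42).fit(X, y)

# Permutation importance: how much does accuracy drop when a feature is shuffled?
result = permutation_importance(model, X, y, n_repeats=10, random_state=42)
for name, imp in zip(["feature_0", "feature_1", "feature_2"],
                     result.importances_mean):
    print(f"{name}: {imp:.3f}")            # feature_0 should dominate
```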
FINMA remains committed to its technology-neutral approach, but is tightening expectations of companies that use AI.
We recommend that all financial service providers immediately review their governance and risk management processes to ensure that they meet the new requirements. Particularly important are the traceability of AI-supported decisions, ensuring data quality, and precise documentation of the data and processes used. Those who act proactively here minimise regulatory risks and at the same time strengthen the trust of customers and supervisory authorities.