Artificial intelligence is fundamentally transforming drug development – making it faster, more efficient and data-driven. However, the more powerful the systems become, the greater the regulatory challenges. New guidelines from Swissmedic, the WHO and international organisations show that without clear governance, there is a risk to patient safety and market integrity. Companies are faced with the question of how to ensure both innovation and compliance at the same time.
The use of artificial intelligence (AI) and machine learning (ML) in drug development has become significantly more important in recent years. Applications range from the identification of new active substances and the optimisation of clinical trials to quality control in production. At the same time, regulatory authorities worldwide – including Swissmedic, the WHO and international bodies such as ICH and IMDRF – are stepping up their efforts to establish a legal and ethical framework for the use of AI.
In an earlier article, HÄRTING Rechtsanwälte highlighted the transformative role of AI in medical research and, using the example of antibiotic development, demonstrated the disruptive potential of data-driven models. In particular, the article illustrates how AI not only enables efficiency gains but also raises fundamental questions regarding the role of human researchers, responsibility and regulatory classification. These considerations are directly linked to the regulatory challenges discussed here and underscore the need for a coherent legal framework for the use of AI in the life sciences sector.
AI and generative models in healthcare
The term ‘AI system’ refers to software that learns from data and can perform tasks autonomously. Generative AI models currently play a special role, particularly so-called Large Multi-Modal Models (LMMs). These can process various types of data and generate new content, such as clinical hypotheses, study protocols or molecular structures.
The WHO emphasises that such models have considerable transformative potential for healthcare, research and drug development. At the same time, they harbour risks, such as those arising from non-transparent decision-making processes or systematic biases in training data.
Regulatory framework: Swissmedic and international harmonisation
Swissmedic requires that marketing authorisation applications reflect the current state of science and technology, which explicitly includes AI-based elements. A prerequisite for a successful regulatory assessment is complete, transparent and traceable documentation. In particular, this must disclose the origin and quality of the data used, describe the model architecture and the training methods applied, set out the validation and testing procedures carried out, and contain a well-founded risk assessment including potential biases.
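To make these requirements tangible, the following minimal sketch shows how such documentation could be captured in machine-readable form and versioned alongside the model itself. The schema and field names are illustrative assumptions of our own, not an official Swissmedic template.

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class ModelDossierSection:
    """Illustrative structure for the AI-related part of a submission dossier.

    The field names are hypothetical; Swissmedic prescribes the content
    (provenance, architecture, validation, risk assessment), not this schema.
    """
    data_sources: list[str]          # origin of training and validation data (Python 3.9+)
    data_quality_checks: list[str]   # e.g. completeness and duplicate audits
    model_architecture: str          # description of the model family
    training_procedure: str          # optimiser, hyperparameters, data splits
    validation_methods: list[str]    # e.g. hold-out test, external cohort
    known_biases: list[str] = field(default_factory=list)
    mitigation_measures: list[str] = field(default_factory=list)

dossier = ModelDossierSection(
    data_sources=["internal clinical registry", "public compound library"],
    data_quality_checks=["missing-value audit", "duplicate-record screen"],
    model_architecture="gradient-boosted decision trees",
    training_procedure="5-fold cross-validation, fixed random seed",
    validation_methods=["held-out test set", "prospective pilot cohort"],
    known_biases=["under-representation of patients over 75"],
    mitigation_measures=["stratified sampling", "subgroup performance report"],
)

print(json.dumps(asdict(dossier), indent=2))  # machine-readable audit artefact
```

Keeping such a record under version control next to the model code means that every retraining run leaves a documented trace, which is precisely what traceable documentation requires.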
When assessing such AI components, Swissmedic does not operate in isolation but explicitly follows international standards and guidelines. These include, in particular, the ethical and governance-related requirements of the World Health Organisation (WHO), the guidelines of the International Council for Harmonisation (ICH) on data integrity and quality, the requirements of the International Medical Device Regulators Forum (IMDRF) for software as a medical device, and relevant guidelines from the European Medicines Agency (EMA) and the US Food and Drug Administration (FDA) on AI and machine learning applications. Against the backdrop of global supply chains and increasingly cross-border regulatory approval processes, this international harmonisation is of central importance in ensuring both the comparability of regulatory decisions and the safety and efficacy of AI-supported medicines worldwide.
WHO principles as an ethical compass
The WHO has formulated six key principles for the use of AI in healthcare:
- Protection of autonomy
- Promotion of safety and the public interest
- Transparency and explainability
- Responsibility and accountability
- Inclusion and fairness
- Sustainability and adaptability
These principles have a direct impact on regulatory requirements. In particular, the demand for ‘explainability’ presents companies with significant technical and legal challenges.
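One widely used, model-agnostic answer to the explainability demand is permutation importance: shuffle one input feature at a time and measure how much the model's performance drops. The following sketch uses plain NumPy and a toy model of our own invention; it illustrates the idea only and is no substitute for a full explainability assessment.

```python
import numpy as np

def permutation_importance(model, X, y, metric, n_repeats=10, seed=0):
    """Score drop when one feature's values are shuffled; a larger drop
    means the model relies more heavily on that feature."""
    rng = np.random.default_rng(seed)
    baseline = metric(y, model(X))
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            X_perm[:, j] = rng.permutation(X_perm[:, j])  # break feature-target link
            drops.append(baseline - metric(y, model(X_perm)))
        importances[j] = np.mean(drops)
    return importances

# Toy demonstration: a linear "model" that only uses feature 0.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
y = 2.0 * X[:, 0]
model = lambda X: 2.0 * X[:, 0]
r2 = lambda yt, yp: 1.0 - np.mean((yt - yp) ** 2) / np.var(yt)
print(permutation_importance(model, X, y, r2))  # feature 0 dominates, others ~0
```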
Key challenges in practice
- Data quality and bias
AI systems are only as good as the data on which they are based. Biased or incomplete datasets can lead to erroneous results, with potentially serious implications for patient safety. A simple comparison of error rates across subgroups, as sketched after this list, is one way to surface such gaps.
- Lack of transparency (‘black box’ problem)
How many machine-learning models arrive at their outputs is difficult to understand, a phenomenon often referred to as the ‘black box’ problem. This opacity makes regulatory scrutiny considerably more difficult, as decision-making processes within the models cannot be readily disclosed or verified; model-agnostic techniques such as the permutation-importance sketch in the previous section mitigate, but do not resolve, the problem. At the same time, the question of liability arises when decisions are made on the basis of such systems and undesirable outcomes or harm occur. Finally, the lack of traceability undermines the confidence of authorities, experts and users in the technologies used, which can further hinder their acceptance and implementation.
- Dynamic systems and lifecycle assessment
AI models can evolve continuously (‘adaptive learning’). This calls into question traditional approval procedures, which are designed for static products; continuous drift monitoring, as sketched after this list, is one building block of a lifecycle-oriented alternative.
- Quality control and validation
The validation of AI systems presents new challenges for companies and authorities, as traditional testing methods often fail to capture the complexity and dynamics of such systems. An expanded methodological approach is therefore required: continuous monitoring of the systems during operation, so that changes in model behaviour are detected early; regular retraining to adapt the models to new data and changing conditions and to maintain their performance; and auditability of the algorithms, so that their functioning, decision-making logic and development steps remain traceable and verifiable, particularly with regard to regulatory requirements and potential liability issues. A minimal hash-chained audit log is sketched at the end of this section.
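As a minimal illustration of the bias checks called for in the first item above, the following sketch compares error rates across patient subgroups; the data and the 30% error rate injected for group ‘B’ are synthetic and purely illustrative.

```python
import numpy as np

def subgroup_error_rates(y_true, y_pred, groups):
    """Error rate per subgroup; a pronounced gap between groups is a bias
    signal that would need to be investigated and documented."""
    return {str(g): float(np.mean(y_true[groups == g] != y_pred[groups == g]))
            for g in np.unique(groups)}

# Synthetic example: the hypothetical model performs worse for group "B".
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)
groups = rng.choice(["A", "B"], size=1000, p=[0.8, 0.2])
y_pred = y_true.copy()
flip = (groups == "B") & (rng.random(1000) < 0.3)  # inject 30% errors in "B"
y_pred[flip] = 1 - y_pred[flip]
print(subgroup_error_rates(y_true, y_pred, groups))  # {'A': ~0.0, 'B': ~0.3}
```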
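For the lifecycle problem of adaptive systems, a common early-warning signal is distribution drift between the data a model was approved on and the data it sees in operation. One standard measure is the population stability index (PSI); the threshold of 0.2 used below is a widespread industry rule of thumb, not a regulatory requirement.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """PSI between a feature's distribution at approval time ('expected')
    and its live distribution ('actual'); values above ~0.2 are commonly
    treated as significant drift worth investigating."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e = np.histogram(expected, bins=edges)[0] / len(expected)
    a = np.histogram(actual, bins=edges)[0] / len(actual)  # out-of-range values drop out
    e, a = np.clip(e, 1e-6, None), np.clip(a, 1e-6, None)  # avoid log(0)
    return float(np.sum((a - e) * np.log(a / e)))

rng = np.random.default_rng(0)
train_feature = rng.normal(0.0, 1.0, size=5000)  # distribution at approval
live_feature = rng.normal(0.5, 1.2, size=5000)   # shifted live population
psi = population_stability_index(train_feature, live_feature)
print(f"PSI = {psi:.3f}")  # well above 0.2 here, so the shift should trigger review
```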
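Finally, the auditability requirement can be supported technically by append-only prediction logs in which each record is hash-chained to its predecessor, so that any subsequent manipulation of recorded decisions is detectable. The following is a minimal sketch, not a certified audit system; all identifiers and values are invented for illustration.

```python
import datetime
import hashlib
import json

class AuditLog:
    """Append-only log; each entry stores the hash of the previous one,
    so tampering with any historical record breaks the chain."""

    def __init__(self):
        self.entries = []
        self._last_hash = "0" * 64  # genesis value

    def append(self, model_version: str, inputs: dict, output, basis: str):
        entry = {
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "model_version": model_version,
            "inputs": inputs,
            "output": output,
            "decision_basis": basis,
            "prev_hash": self._last_hash,
        }
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        self._last_hash = entry["hash"]
        self.entries.append(entry)
        return entry

log = AuditLog()
rec = log.append("model-v2.1.0", {"age": 63, "egfr": 41},
                 output=0.87, basis="risk score above threshold 0.8")
print(rec["hash"][:16])  # chained fingerprint of the recorded decision
```

In a production setting the same idea would typically be implemented on an immutable data store rather than an in-memory list; the sketch only shows the chaining principle.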
Practical implications and recommendations for action
For businesses
- Early integration of regulatory requirements: AI projects should be designed with compliance in mind from the outset.
- Documentation and transparency: Comprehensive documentation is essential for approval procedures.
- Bias management: Implementation of systematic procedures to identify and reduce bias.
- Interdisciplinary teams: Combining IT, legal and subject-matter expertise.
For legal departments and legal counsel
- Monitoring regulatory developments: In particular at the level of the WHO, EMA, FDA, etc.
- Contract drafting: Taking liability issues into account in relation to AI systems.
- Data governance: Ensuring data processing complies with the law.
For DPOs and AI Officers
- Risk assessment in accordance with applicable data protection, Data Act and AI Act requirements.
For public authorities
- Flexible regulatory approaches: Adapting traditional procedures to dynamic AI systems.
- International cooperation: Avoiding regulatory fragmentation.
Conclusion and outlook
The integration of AI into drug development offers significant opportunities for innovation and efficiency gains. At the same time, the demands placed on regulation, transparency and ethical responsibility are rising.
Swissmedic and international organisations such as the WHO are working intensively on a coherent framework that enables innovation without compromising patient safety. For companies, this means that those wishing to use AI successfully must treat regulatory requirements not as an obstacle but as an integral part of their development strategy.
In future, further clarification of regulatory guidelines is to be expected – particularly with regard to generative AI and adaptive systems. Companies that focus on compliance, transparency and international standards at an early stage will gain long-term competitive advantages.