Artificial intelligence is fundamentally changing drug development – making it faster, more efficient and more data-driven. But the more powerful the systems become, the greater the regulatory challenges. Recent guidelines from Swissmedic, the WHO and international organisations make this clear: without robust governance, patient safety and market integrity are at risk. Companies therefore face the question of how to ensure innovation and compliance at the same time.

The use of artificial intelligence (AI) and machine learning (ML) in drug development has become significantly more important in recent years. Applications range from the identification of new active ingredients to the optimisation of clinical trials and quality control in production. At the same time, regulatory authorities worldwide – including Swissmedic, the WHO and international bodies such as ICH and IMDRF – are intensifying their efforts to structure the use of AI in legal and ethical terms.

In an earlier article, HÄRTING Attorneys at Law highlighted the transformative role of AI in medical research and used the example of antibiotic development to show the disruptive potential of data-driven models. In particular, the article illustrates how AI not only enables efficiency gains, but also raises fundamental questions about the role of human research, responsibility and regulatory categorisation. These considerations are directly linked to the regulatory challenges discussed here and emphasise the need for a coherent legal framework for the use of AI in the life sciences sector.

AI and generative models in healthcare

AI systems learn from data and can perform tasks autonomously. Generative AI models, in particular so-called large multi-modal models (LMMs), currently play a special role: they can process different types of data and generate new content – such as clinical hypotheses, study protocols or molecular structures.

The WHO emphasises that such models have considerable transformation potential for healthcare, research and drug development. At the same time, they harbour risks, such as non-transparent decision-making processes or systematic distortions in training data.

Regulatory basis: Swissmedic and international harmonisation

Swissmedic requires that authorisation applications reflect the current state of the art in science and technology, which explicitly includes AI-based elements. A successful regulatory assessment presupposes complete, transparent and comprehensible documentation that, in particular:

  • discloses the origin and quality of the data used,
  • describes the model architecture and the training methods applied,
  • describes the validation and testing procedures carried out, and
  • contains a well-founded risk assessment, including possible biases.
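To illustrate, these documentation items could be captured in a machine-readable record alongside the formal dossier. The following Python sketch is purely illustrative – the field names and example values are our assumptions, not a schema mandated by Swissmedic:

```python
from dataclasses import dataclass, field

@dataclass
class ModelDocumentation:
    """Illustrative record mirroring the documentation items above.

    Field names are assumptions for this sketch, not a
    Swissmedic-mandated schema.
    """
    model_name: str
    data_sources: list[str]           # origin of the data used
    data_quality_notes: str           # quality assessment, cleaning steps
    architecture: str                 # model architecture
    training_method: str              # training procedure, hyperparameters
    validation_procedures: list[str]  # validation and testing carried out
    risk_assessment: str              # well-founded risk assessment
    known_biases: list[str] = field(default_factory=list)

# Hypothetical example values for illustration only.
doc = ModelDocumentation(
    model_name="candidate-screening-model",
    data_sources=["internal assay database", "public compound library"],
    data_quality_notes="Duplicates removed; missing values documented.",
    architecture="Gradient-boosted decision trees",
    training_method="5-fold cross-validation with a held-out test set",
    validation_procedures=["external test set", "prospective pilot"],
    risk_assessment="Moderate risk; see bias list.",
    known_biases=["Underrepresentation of rare compound classes"],
)
```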

When assessing such AI components, Swissmedic does not judge in isolation but relies explicitly on international standards and guidelines. These include, in particular:

  • the ethical and governance requirements of the World Health Organisation (WHO),
  • the guidelines of the International Council for Harmonisation (ICH), for example on data integrity and quality,
  • the requirements of the International Medical Device Regulators Forum (IMDRF) for software as a medical device, and
  • relevant guidance from the European Medicines Agency (EMA) and the US Food and Drug Administration (FDA) on AI and machine learning applications.

Against the backdrop of global supply chains and increasingly cross-border approval procedures, this international harmonisation is central to ensuring both the comparability of regulatory decisions and the safety and efficacy of AI-supported medicinal products worldwide.

WHO principles as an ethical compass

The WHO has formulated six key principles for the use of AI in healthcare:

  1. Protection of autonomy
  2. Promotion of safety and public interest
  3. Transparency and explainability
  4. Responsibility and accountability
  5. Inclusion and fairness
  6. Sustainability and adaptability

These principles have a direct impact on regulatory requirements. The requirement for explainability in particular presents companies with considerable technical and legal challenges.

Key challenges in practice

  1. Data quality and bias

AI systems are only as good as the data on which they are based. Distorted or incomplete data sets can lead to erroneous results – with potentially serious consequences for patient safety.
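One way to make this concrete: before relying on a model, teams can compare its performance across patient subgroups. The following Python sketch assumes a pandas DataFrame with prediction, label and subgroup columns – all column names are illustrative assumptions:

```python
import pandas as pd

def subgroup_accuracy(df: pd.DataFrame, group_col: str) -> pd.Series:
    """Accuracy per subgroup; large gaps hint at biased training data."""
    return (df["prediction"] == df["label"]).groupby(df[group_col]).mean()

def flag_disparities(df: pd.DataFrame, group_col: str,
                     tolerance: float = 0.05) -> pd.Series:
    """Return subgroups whose accuracy falls well below the overall level.

    The tolerance is a placeholder, not a regulatory threshold.
    """
    overall = (df["prediction"] == df["label"]).mean()
    per_group = subgroup_accuracy(df, group_col)
    return per_group[per_group < overall - tolerance]
```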

  2. Lack of transparency (“black box” problem)

How many machine learning models arrive at their outputs is difficult to understand – often referred to as the “black box” problem. This opacity makes regulatory audits considerably harder, because the decision-making processes within the models cannot easily be disclosed or reviewed. It also raises the question of how liability is allocated when decisions based on such systems lead to undesirable developments or damage. Finally, the lack of traceability undermines the trust of authorities, experts and users in the technology, which can further inhibit wider acceptance and adoption.
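Model-agnostic explanation techniques can mitigate the problem to a degree. As a minimal sketch – using scikit-learn's permutation importance on toy data rather than any specific pharmaceutical model – one might inspect which inputs drive a model's predictions:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Toy data stands in for real assay features; this is illustrative only.
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Permutation importance: shuffle each feature and measure the score drop.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1]:
    print(f"feature {i}: importance {result.importances_mean[i]:.3f}")
```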

  3. Dynamic systems and life cycle assessment

AI models can evolve continuously (“adaptive learning”). This calls into question traditional authorisation procedures, which are designed for static products.
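In practice, this shifts attention from one-off approval to ongoing surveillance. A common, simple drift check is the Population Stability Index (PSI); the sketch below is a generic implementation, with conventional rather than regulatory thresholds:

```python
import numpy as np

def population_stability_index(expected: np.ndarray,
                               actual: np.ndarray,
                               bins: int = 10) -> float:
    """PSI between a reference (training) sample and live input data.

    Generic sketch; bin count and thresholds are conventions, not
    regulatory requirements.
    """
    edges = np.quantile(expected, np.linspace(0.0, 1.0, bins + 1))
    # Clip both samples into the reference range so every value lands in a bin.
    e_counts = np.histogram(np.clip(expected, edges[0], edges[-1]), edges)[0]
    a_counts = np.histogram(np.clip(actual, edges[0], edges[-1]), edges)[0]
    e_pct = np.clip(e_counts / len(expected), 1e-6, None)
    a_pct = np.clip(a_counts / len(actual), 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

# Rule of thumb: PSI above ~0.2 suggests material drift worth investigating.
```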

  4. Quality control and validation

The validation of AI systems presents companies and authorities with new challenges, as traditional testing methods are often insufficient to capture the complexity and dynamics of such systems. An extended methodological approach is required instead, which in particular includes:

  • continuous monitoring of the systems during operation, in order to detect changes in model behaviour at an early stage;
  • regular re-training, in order to adapt the models to new data and changed framework conditions and to maintain their performance; and
  • ensuring the auditability of algorithms, so that their functionality, decision-making logic and development steps remain comprehensible and verifiable, particularly with regard to regulatory requirements and potential liability issues.
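Auditability in particular can be supported by technical means. As a minimal sketch – assuming model version, inputs and outputs are available at prediction time, and leaving storage, retention and signatures out of scope – each decision could be logged in a tamper-evident record:

```python
import datetime
import hashlib
import json

def audit_record(model_version: str, inputs: dict, output: object) -> dict:
    """Sketch of a per-prediction audit entry.

    Hashing the canonicalised input makes later tampering detectable
    without storing potentially sensitive raw data in the log itself.
    """
    canonical = json.dumps(inputs, sort_keys=True).encode("utf-8")
    return {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_version": model_version,
        "input_sha256": hashlib.sha256(canonical).hexdigest(),
        "output": output,
    }

# Usage (hypothetical values):
entry = audit_record("tox-model-1.3.0", {"compound_id": "C-0042"}, "low risk")
```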

Practical implications and recommendations for action

For companies

  • Early integration of regulatory requirements: AI projects should be designed with compliance aspects in mind from the outset.
  • Documentation and transparency: Complete documentation is key for authorisation procedures.
  • Bias management: Implementation of systematic procedures to identify and reduce bias.
  • Interdisciplinary teams: Combination of IT, legal and technical expertise.

For legal departments and legal counsel

  • Monitoring regulatory developments: In particular at the level of WHO, EMA, FDA etc.
  • Contract drafting: Consideration of liability issues with AI systems.
  • Data governance: Ensuring legally compliant data processing.

For DPOs and AI officers

  • Risk assessment in accordance with applicable data protection, Data Act and AI Act requirements.

For public authorities

  • Flexible regulatory approaches: Adaptation of classic procedures to dynamic AI systems.
  • International cooperation: Avoiding regulatory fragmentation.

Conclusion and outlook

The integration of AI into drug development offers considerable opportunities for innovation and increased efficiency. At the same time, the requirements for regulation, transparency and ethical responsibility are increasing.

Swissmedic and international organisations such as the WHO are working intensively on a coherent framework that enables innovation without jeopardising patient safety. For companies, this means that anyone who wants to use AI successfully must treat regulatory requirements not as an obstacle, but as an integral part of their development strategy.

In the future, regulatory guidelines are expected to become more specific – particularly with regard to generative AI and adaptive systems. Companies that focus on compliance, transparency and international standards at an early stage will achieve competitive advantages in the long term.