
What previously required extensive studies and years (if not decades) of research has now become reality within a very short time: artificial intelligence (AI) has largely independently identified new antibiotic candidates against multi-resistant germs, without extensive human assistance or lengthy clinical trials. The response in the media and in medical research was correspondingly huge. Is this a turning point in research?

Medical breakthrough with legal catching up to do

In its news blog on 14 August 2025, the Massachusetts Institute of Technology (MIT) reported on a medical breakthrough that caused a worldwide sensation: researchers used generative AI to identify new active substances against antibiotic-resistant pathogens such as MRSA (methicillin-resistant Staphylococcus aureus) and gonorrhoea. Initial tests in mouse models show remarkable effectiveness – a glimmer of hope in a field of research that has seen only slow progress in recent decades and has repeatedly suffered severe setbacks.

As welcome as this technological breakthrough is, it also raises fundamental legal questions. What does this mean for drug approval, patent protection, data protection and bioethics? This article takes a closer look at these issues.

Approval of new drugs – AI as a ‘black box’ in the regulatory process?

The speed with which new technologies are put into practice stands in stark contrast to the sluggishness of existing procedures and legal structures. Both European drug law (EU Regulation 536/2014) and Swiss drug law provide for a multi-stage authorisation procedure; for Switzerland, the key instruments are the Federal Act on Medicinal Products and Medical Devices (Therapeutic Products Act, TPA, SR 812.21) and the associated ordinances, in particular the Ordinance of the Swiss Agency for Therapeutic Products on the Simplified Authorisation of Medicinal Products and the Authorisation of Medicinal Products under the Notification Procedure (VAZV, SR 812.212.23). Novel active substances must undergo extensive preclinical and clinical studies, which allow for a final assessment by the competent authorities such as the European Medicines Agency (EMA) or the Swiss Agency for Therapeutic Products (Swissmedic) before marketing authorisation is granted.

When new active substances are developed not through traditional laboratory research but through AI systems such as deep learning algorithms or generative models, this raises a number of questions, as many of these systems operate as so-called ‘black boxes’:

  • Validation of AI models: How is the traceability of the results ensured?
    AI systems often deliver results without the path to those results being traceable in detail. This poses a considerable challenge for the authorities, because transparency, reproducibility and traceability are key criteria in an approval process, and precisely these characteristics are not guaranteed in many AI systems.
  • Responsibility: Who is liable for incorrect AI results? Developers, operators or users?
    When developing drugs as critical as antibiotics, responsibility must be clearly allocated. However, there are currently no specific regulations in this regard.
  • Regulatory gaps: The EMA and Swissmedic have not yet formulated any specific guidelines for AI-generated molecules.
    Without explainable AI, there is a risk of a regulatory impasse: a lack of basis for decision-making could lead to approval procedures being delayed, suspended or legally challenged. To prevent this, regulatory guidelines must be adapted and minimum technical standards for explainable AI models must be introduced; a minimal technical sketch follows this list. Understanding how AI works is an essential prerequisite for this.
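
To make the transparency requirement more tangible, the following minimal Python sketch shows one generic way of probing which inputs drive a model's predictions: permutation importance. The toy "fingerprint" data and the model are purely illustrative assumptions, not the method used in the MIT study, and a real submission dossier would rely on far more rigorous, domain-specific validation.

```python
# Minimal sketch: attaching a basic explainability check to a molecule-activity
# model. The dataset and features are hypothetical toy data for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(500, 64))    # toy binary "fingerprint" features
y = (X[:, 3] & X[:, 17]).astype(int)      # toy activity label for illustration

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Permutation importance: how much does shuffling each feature hurt accuracy?
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"feature {idx}: importance {result.importances_mean[idx]:.3f}")
```

An authority could at least demand such attribution reports as part of a dossier; they do not open the black box entirely, but they document which inputs a model's output depends on.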

Patent law issues – who is the inventor?

Another key area of tension arises in patent law. Under the European Patent Convention (EPC) and the Federal Act on Patents for Inventions (Patent Act, PatA, SR 232.14), only a natural person can be named as an inventor. The European Patent Office expressly confirmed this position in its decision J 8/20: an AI system cannot be an inventor, even if it carries out the creative process independently. Inventorship is then necessarily attributed to the person using the AI.

However, this position raises questions. What happens if a person’s creative contribution is limited to merely operating the AI, while the actual creative work – such as the recombination of molecular structures – is performed by the machine? In such cases, is the user of the AI actually the rightful inventor? And does the result of machine recombination even meet the requirements for an ‘inventive step’ in terms of patentability? According to the current legal situation, this is likely to be denied.

This question becomes particularly controversial when AI models are trained on extensive databases containing copyright-protected scientific studies or proprietary chemical data. This can lead to conflicts between patent law and copyright law: is an active ingredient based on such data even patentable if the underlying information was not freely available? Companies must expect third parties to assert their (legally protected) property rights if AI-generated results are based on their proprietary data sets.

All these uncertainties make it clear that companies working with AI-generated active ingredients should define clear responsibilities at an early stage and adapt their patent strategies. This also includes accurate documentation of the roles of all those involved in the innovation process. After all, those who are considered inventors not only have rights, but also obligations and liability risks.

Data protection challenges – when data is the fuel for AI

AI-supported research thrives on huge amounts of data. In a medical context, this regularly involves particularly sensitive personal data within the meaning of Art. 5 lit. c of the Federal Act on Data Protection (Data Protection Act, DPA, SR 235.1). Processing such data is only permitted under strict conditions and poses considerable challenges for research companies and supervisory authorities alike. As a rule, such processing is lawful only if the data subjects have given their express consent, if there is an overriding private or public interest, or if the processing is provided for by law. Art. 31 para. 2 lit. e DPA stipulates that an overriding interest may be assumed in particular in cases of research, planning or statistics.

However, the law also imposes concrete protective requirements on how such processing must be implemented: anonymisation of personal data, non-identifiability of the persons concerned, no processing for personal purposes, and ensuring that non-identifiability is also maintained upon publication. Given the volume of data required, it is questionable whether these principles are always complied with in practice, even though the law demands it. Particular attention should be paid to the cloud-based data pools often shared in research, which are used for a range of clinical research series and raise both technical risks and questions of responsibility and liability.
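
To illustrate the de-identification requirements just described, the following minimal Python sketch shows a keyed pseudonymisation step for a hypothetical patient record. The field names and record layout are assumptions for illustration, and keyed pseudonymisation alone does not amount to anonymisation under the DPA, which additionally demands the removal of quasi-identifiers, aggregation and strict access controls.

```python
# Minimal sketch: keyed pseudonymisation of direct identifiers before research
# use. Field names are hypothetical; genuine anonymisation under the DPA
# requires more (removing quasi-identifiers, aggregation, access controls).
import hashlib
import hmac

SECRET_KEY = b"replace-with-key-from-a-secrets-manager"  # kept separate from the data

def pseudonymise(record: dict) -> dict:
    """Replace the direct identifier with a keyed hash and coarsen quasi-identifiers."""
    token = hmac.new(SECRET_KEY, record["patient_id"].encode(), hashlib.sha256).hexdigest()
    return {
        "subject_token": token,                        # stable link across records, no identity
        "age_band": f"{(record['age'] // 10) * 10}s",  # coarsen age to a decade band
        "lab_result": record["lab_result"],
    }

print(pseudonymise({"patient_id": "CH-000123", "age": 47, "lab_result": 0.82}))
```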

In addition, data protection law requires transparency: data subjects must be informed clearly and comprehensibly about who is processing which data, for what purpose and by what means, a requirement that is anything but trivial in the context of complex AI systems. A practical caveat compounds this: once data has been fed into an AI system, it is very difficult, if not impossible, to delete it again.

Data quality – training data is not always correct

Another problem arises from the quality of the training data. In order to identify effective new antibiotics, AI models require huge amounts of high-quality chemical, biological and clinical data. If the data sets are incomplete, outdated or unbalanced, there is a risk that the models will deliver incorrect or discriminatory results. This can not only impair the effectiveness of the active ingredients discovered or yet to be developed, but also create significant liability risks. It raises the question of whether companies can be held liable if incorrect or incomplete training data, once utilised by the AI, leads to dangerous medical misjudgements. Under the current legal situation, liability on the part of the user, i.e. the company or individual deploying the AI, must clearly be affirmed.

Last but not least, there is the risk of bias: if the training data is skewed, the models can produce discriminatory results with potentially serious medical and ethical consequences.
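
As a simple illustration of what a pre-training data audit could look like, the following Python sketch counts label and subgroup frequencies in a hypothetical training set and flags under-represented groups. The column names, rows and threshold are assumptions; real audits would also examine provenance, recency and missingness.

```python
# Minimal sketch: pre-training audit of label and subgroup balance.
# Column names and rows are hypothetical placeholders.
from collections import Counter

training_rows = [
    {"pathogen": "MRSA", "region": "EU", "active": 1},
    {"pathogen": "MRSA", "region": "EU", "active": 0},
    {"pathogen": "N. gonorrhoeae", "region": "US", "active": 1},
    # ... thousands more rows in practice
]

label_counts = Counter(row["active"] for row in training_rows)
group_counts = Counter((row["pathogen"], row["region"]) for row in training_rows)

print("label balance:", label_counts)
print("subgroup coverage:", group_counts)

# A simple guardrail: flag the run if any subgroup is badly under-represented.
threshold = 0.05 * len(training_rows)
sparse = [group for group, n in group_counts.items() if n < threshold]
if sparse:
    print("warning: under-represented subgroups", sparse)
```

Such an audit does not remove bias, but documenting it creates exactly the kind of record that liability and approval questions will later turn on.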

Dual-use problem – between medicine and biological weapons

A hitherto little-noticed but highly sensitive aspect is the so-called dual-use problem: technologies originally developed for medical purposes can, under certain circumstances, also be misused for unethical or criminal ends. AI systems in particular could be used to identify not only healing but also harmful substances, for example potential biological weapons. Dual-use goods are goods, technologies and software with a dual purpose, civil and military; they are subject to export controls to prevent their misuse for military purposes or for the production of weapons of mass destruction.

Legally, this issue is regulated by, among other things, EU Regulation 2021/821 on the control of dual-use goods. In Switzerland, the further details are governed by the Federal Act on the Control of Goods for Civil and Military Use, Special Military Goods and Strategic Goods (Goods Control Act, GKG, SR 946.202) and the associated Ordinance on the Control of Goods for Civil and Military Use, Special Military Goods and Strategic Goods (Goods Control Ordinance, GKV, SR 946.202.1). Companies involved in cross-border research collaborations or using AI systems for molecule development are well advised to review their projects at an early stage for potential export restrictions and control obligations.

In addition, the AI models themselves, especially if they can generate potentially dangerous molecules, may also qualify as dual-use goods and thus be subject to strict export controls. International regulations such as the EU AI Act (Regulation (EU) 2024/1689), but also agreements such as the Biological Weapons Convention (BWC) and the Australia Group guidelines, are playing an increasingly important role in this regard. This affects not only manufacturers, but also universities, research institutes and software providers. Close cooperation with authorities, internal control systems and careful risk analysis are essential to avoid violations of export control law or international obligations.

Recommendations for practical action

Switzerland, as an attractive location for numerous renowned pharmaceutical and research companies, is particularly affected here: companies that are active in the field of research and also use AI should develop a comprehensive AI compliance framework at an early stage that integrates not only regulatory approval and data protection, but also ethical aspects and security risks. Patent specialists should be involved as early as the development phase in order to examine the protectability of new active ingredients and strategically secure property rights. Development teams must also be made aware of regulatory requirements, particularly with regard to the dual-use issue, which has often been underestimated to date.

Authorities also face new tasks: they should develop specific guidelines for the use of AI in drug development, establish interdisciplinary committees to evaluate new technologies, and actively promote international exchange to harmonise standards and export control requirements.

Legislators will also have to take this into account when adapting existing laws governing AI to specific sectors. A timid approach is not appropriate here.

Conclusion and outlook

AI-based antibiotic research shows once again that there is a paradigm shift, both from a medical and a legal perspective. While the technology has the potential to address global health problems in a sustainable manner, the legal system faces the challenge of creating an adequate framework. What is needed is forward-looking, technology-neutral regulation that does not stifle innovation, but at the same time upholds fundamental ethical, data protection and security standards. Only when law, technology and ethics are considered together can we prevent the hope for cures and medical innovation from becoming a legal risk or even a threat to society.
