Artificial intelligence promises efficiency and innovation, but it can also distort competition. The Competition Commission has made it clear that algorithmic pricing and data-driven market power will be scrutinised more closely in future. This creates a new regulatory risk for businesses, one that is often underestimated.
In its press release of 31 March 2026, the Swiss Competition Commission (COMCO) sets out a clear priority for the coming years: developments in the field of artificial intelligence will be monitored more closely and, where necessary, addressed under competition law.
In its statement, the authority makes it clear that it will intervene as soon as there are signs of anti-competitive behaviour. This applies in particular to situations where dominant firms use AI to secure or expand their position, as well as cases of potential algorithmic coordination between market participants.
COMCO is thus responding to a trend that has been clearly emerging internationally for some time. Artificial intelligence is no longer merely a driver of innovation; it increasingly shapes how entire markets function. In data-driven sectors in particular, algorithmic systems influence prices, market behaviour and competitive dynamics in ways that challenge traditional antitrust categories. This development is also the subject of intense debate at the international level: competition authorities and organisations such as the OECD and the European Commission are increasingly examining the extent to which existing competition law instruments are suited to addressing algorithmic market mechanisms. COMCO is therefore acting in line with a global trend of treating AI not as an isolated technological issue, but in an integrated manner within the framework of competition law.
Between efficiency gains and concentration risks
At the same time, COMCO emphasises the ambivalent role of AI. On the one hand, its use can enable significant efficiency gains, accelerate innovation and even lower barriers to market entry. On the other hand, these positive effects depend crucially on specific market structures and access to key resources – particularly data.
Particularly in markets with a high degree of data dependency, there is a risk that existing positions of power will become further entrenched. Companies with privileged access to large volumes of data can train AI systems more effectively and thereby gain competitive advantages that smaller market participants can scarcely match. In this context, COMCO expressly draws attention to existing concentration risks. This development is particularly relevant from an antitrust perspective, as it may create the conditions for an abuse of a dominant market position within the meaning of Art. 7 of the Cartel Act (KG).
In the context of Art. 7 KG, the question arises in particular as to the extent to which the use of AI by dominant firms may constitute an abuse. Scenarios are conceivable in which data-driven AI systems are deliberately deployed to exclude competitors, for example through discriminatory ranking or pricing strategies, by refusing or restricting access to essential data resources, or by reinforcing lock-in effects on digital platforms. Self-learning systems can also produce abusive effects, even where these are not directly intended. The decisive factor remains whether the conduct is objectively capable of restricting competition and can be attributed to the dominant firm.
Algorithmic pricing and the risk of implicit coordination
COMCO places particular emphasis on algorithmic pricing. AI systems are capable of adjusting prices in real time whilst processing large volumes of market data. Whilst this can lead to more efficient markets, it also carries the risk that prices will converge and competition will be weakened.
Particularly challenging from a legal perspective are scenarios in which there is no explicit agreement between companies, yet the algorithms used lead to coordinated market behaviour. The traditional concept of a competition agreement within the meaning of Art. 5 KG presupposes deliberate coordination. In the case of self-learning systems, however, such alignment can arise even without direct human intervention.
From an antitrust perspective, the key question is whether, and under what conditions, such algorithmically generated synchronisation should be classified as concerted behaviour within the meaning of Art. 5 KG. According to prevailing doctrine and practice, a competition agreement does not necessarily require an explicit agreement; conscious, coordinated behaviour that aims at or results in a restriction of competition may be sufficient. In the context of AI, the focus therefore shifts to the upstream design and parameterisation of the systems used: if an algorithm is deliberately designed or deployed in such a way that it reacts to competitors and thereby promotes parallel market behaviour, this may be classified as indirect coordination.

The legal assessment will increasingly concentrate on questions of attribution and the foreseeability of algorithmic outcomes. In particular, it will need to be clarified to what extent companies are liable for the behaviour of autonomous or semi-autonomous systems. According to the prevailing view, attribution is likely to apply in any case where the company initiated the use of the system and its competitive effects were at least foreseeable. By contrast, seeking to exonerate oneself merely by pointing to the ‘autonomy’ of a system appears hardly convincing.
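The mechanism of implicit coordination can be made concrete with a deliberately simplified sketch. The following toy simulation is not drawn from the COMCO release; the reaction rule and all numbers are illustrative assumptions. It shows two independent pricing algorithms that merely observe and react to each other's last published price and nevertheless end up charging nearly identical amounts:

```python
# Toy illustration (assumptions, not a real pricing system): two sellers whose
# algorithms each react only to the rival's last published price. No agreement
# or communication exists, yet prices end up aligned.

FLOOR = 10.0  # assumed marginal cost: never price below this


def react(own_price: float, rival_price: float) -> float:
    """Move 30% of the way towards the rival's price each period."""
    return max(FLOOR, own_price + 0.3 * (rival_price - own_price))


def simulate(price_a: float, price_b: float, periods: int = 30) -> tuple[float, float]:
    """Let both algorithms respond simultaneously for a number of periods."""
    for _ in range(periods):
        price_a, price_b = react(price_a, price_b), react(price_b, price_a)
    return price_a, price_b


# Starting far apart, both systems converge to (almost) the same price --
# parallel market behaviour without any human coordination.
print(simulate(100.0, 40.0))
```

Neither algorithm "knows" anything about the other beyond its observable price, which is precisely the scenario in which the traditional notion of deliberate coordination becomes difficult to apply.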
COMCO has indicated that it is closely monitoring these developments and will assess on a case-by-case basis whether algorithmically generated market outcomes are to be classified as anti-competitive. This also entails significant evidentiary challenges: elucidating algorithmic decision-making processes requires technical understanding as well as access to data and models, which are often held internally within companies. Companies should therefore prepare early on for potential obligations to provide information and cooperate, and create the necessary technical and organisational conditions.
Regulatory dilemma: intervene or wait and see?
COMCO’s nuanced stance on the regulation of AI is noteworthy. The authority expressly points out that both premature and delayed intervention can be problematic. Whilst hasty regulatory measures could hamper innovation potential, a wait-and-see approach carries the risk that anti-competitive structures will become entrenched.
This consideration highlights that the application of competition law in the context of AI requires a particularly careful balancing of interests. The constitutional principle of proportionality takes on additional significance here.
COMCO’s activities in 2025 as an indicator of future priorities
The press release also provides an insight into the Competition Commission’s operational activities in 2025. With 18 investigations, 8 preliminary inquiries and 43 market observations, the authority’s level of activity remains high. It also dealt with 34 mergers and made numerous submissions in consultation procedures.
The breadth of the cases handled – ranging from ticketing and broadband networks to the healthcare sector – illustrates that COMCO is already heavily involved in digital and data-driven markets today. Against this backdrop, it seems only logical that AI will move even further into focus in the future.
Practical implications for businesses
COMCO’s stance creates a clear need for action on the part of businesses. The use of AI systems, particularly in sensitive areas such as pricing or platform management, must in future also be carefully assessed from a competition law perspective. Central to this is the recognition that responsibility for the behaviour of AI systems remains with the company: even where decisions are made automatically, the company is not relieved of its duty to avoid anti-competitive effects.
Furthermore, it cannot be ruled out that, as part of its investigative activities, COMCO will increasingly resort to traditional tools such as dawn raids to gain access to algorithmic systems, training data and internal documentation. Companies should therefore review their internal processes and responsibilities for dealing with such situations and adapt them where necessary.
In addition, the lack of transparency of many AI systems – often described as the “black box” problem – can pose significant challenges in the context of investigations. Companies should therefore ensure at an early stage that they understand how their systems work and can explain them if necessary.
There are also close links to other areas of law, in particular data protection law and regulatory developments in the field of AI (e.g. the EU AI Act). Competition law issues can therefore often not be assessed in isolation, but require a holistic compliance approach.
Recommendations for action and outlook
Against this background, it is advisable to integrate AI applications into existing compliance structures at an early stage and to supplement them with specific review mechanisms. Regular risk analyses should be carried out, particularly in the case of data-driven business models and algorithmic pricing.
Furthermore, companies should implement specific organisational and technical measures. This includes, in particular, the establishment of an interdisciplinary ‘AI governance’ approach in which the legal department, IT and specialist departments work closely together. AI systems should be assessed for competition law risks as early as the development phase (‘compliance by design’). It is also advisable to introduce monitoring mechanisms that allow algorithmic decisions to be continuously reviewed and suspicious patterns to be identified at an early stage.
Documentation is also of particular importance. Companies should record in a transparent manner the objectives pursued by an algorithm, the parameters used, and the control mechanisms in place. Such documentation can be crucial in the event of an investigation by COMCO to demonstrate due diligence and reduce liability risks.
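Such documentation can be kept in a simple, structured, machine-readable form. The following sketch is purely illustrative: the field names and example values are assumptions, not a regulatory template, but they capture the three elements mentioned above (objectives, parameters, controls):

```python
# Illustrative sketch only: a minimal, machine-readable record of an
# algorithm's objective, parameters and control mechanisms, of the kind that
# could support due-diligence evidence in an investigation. Field names and
# values are assumptions, not a regulatory standard.
import json
from dataclasses import dataclass, asdict


@dataclass
class AlgorithmRecord:
    system_name: str
    business_objective: str        # what the algorithm is meant to achieve
    inputs: list[str]              # data sources the system reacts to
    parameters: dict[str, float]   # tunable values and their current settings
    control_mechanisms: list[str]  # human oversight, price floors, audits
    last_reviewed: str             # ISO date of the last compliance review


record = AlgorithmRecord(
    system_name="dynamic-pricing-v2",
    business_objective="Set prices from own cost and demand data",
    inputs=["own sales history", "inventory levels"],
    parameters={"max_daily_change_pct": 5.0},
    control_mechanisms=["price floor at cost", "weekly human review"],
    last_reviewed="2026-01-15",
)

print(json.dumps(asdict(record), indent=2))
```

Keeping such records versioned alongside each deployed system makes it far easier to explain, after the fact, what an algorithm was designed to do and how it was controlled.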
Finally, awareness-raising at management level is also necessary. The strategic importance of AI in competition requires that the associated risks be addressed not only at an operational level but also at management level.
From a strategic perspective, it is also advisable to embed AI not merely as a technical tool but as a competition-relevant factor within corporate strategy. Companies should actively assess whether their use of AI is potentially competition-sensitive and establish appropriate governance structures. This applies in particular to companies with a strong market position or platform function.
The COMCO communication makes it clear that competition law is undergoing a phase of adaptation. Artificial intelligence is becoming a key test case for the further development of antitrust concepts. Companies that anticipate this development at an early stage and align their internal processes accordingly can not only minimise regulatory risks but also strengthen the trust of authorities and market participants.
COMCO’s stance suggests that the phase of mere observation is likely to give way increasingly to a phase of concrete enforcement. Companies using AI should therefore not wait to react to regulatory intervention, but proactively review their systems from a legal perspective. We would be happy to assist you with the legal classification and implementation of AI projects, as well as with the assessment of competition law risks. Please feel free to contact us if you have any questions.