The responsible use of AI is essential, especially in the public sector, where decisions have far-reaching consequences for citizens. To meet these requirements, the European Commission has published updated model contract clauses for the procurement of high-risk AI systems. These contract clauses, which are aligned with the EU AI Act, are intended to help public procurers comply with legal and ethical standards when purchasing such technologies. They ensure that AI systems are developed, delivered and operated in a transparent, secure and legally compliant manner.
Purpose of the model contract clauses
The model contract clauses are intended to support public institutions in the secure and legally compliant integration of high-risk AI systems. They are based on the requirements of the EU AI Act and are designed to ensure that providers of high-risk AI systems comply with certain standards. These include effective risk management, ensuring transparency and explainability, and robust security measures.
Key content of the model contract clauses
The model contract clauses contain detailed specifications for various aspects of AI procurement. The focus is on the following topics in particular:
- Risk management and compliance: AI providers must carry out a risk analysis of their systems to identify potential threats to fundamental rights, security and data protection. They must also demonstrate that appropriate measures have been taken to minimise these risks.
- Data quality and fairness: The model contract clauses require that training and test data sets meet high quality standards, do not contain any discriminatory biases and comply with data protection regulations. Public contracting authorities have the right to inspect the data sources used and to verify their origin and processing methods.
- Transparency and explainability: The lack of traceability of decisions made by AI systems is often criticised. The clauses stipulate that providers of high-risk AI must provide detailed technical documentation and explain how their systems make decisions. In addition, the technical and organisational measures taken by the AI provider must enable users to critically question the results.
- Human oversight and intervention options: The new clauses require providers to enable human intervention in the decision-making processes of AI systems at all times. This includes mechanisms for real-time monitoring, training for users and measures to prevent ‘automation bias’, i.e. the blind acceptance of AI-generated results.
- Cybersecurity: As AI systems are increasingly becoming targets for cybercrime, the contract clauses contain strict requirements for security measures: providers must carry out regular security tests, implement protection mechanisms against data manipulation and provide a robust IT infrastructure.
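For contracting authorities tracking these obligations during tendering, the five requirement areas above could be mapped to a simple verification checklist. The sketch below is purely illustrative and assumes nothing about the Commission's actual clause texts; all class and field names are this example's own invention:

```python
from dataclasses import dataclass

# Hypothetical checklist structure: the topic names mirror the clause
# areas described above, but the fields are an assumption of this
# sketch, not part of the Commission's model contract clauses.

@dataclass
class ClauseChecklistItem:
    topic: str
    evidence_required: str   # what the provider must supply
    verified: bool = False   # set by the contracting authority

CHECKLIST = [
    ClauseChecklistItem("Risk management and compliance",
                        "risk analysis covering fundamental rights, security, data protection"),
    ClauseChecklistItem("Data quality and fairness",
                        "documentation of data sources, origin and processing methods"),
    ClauseChecklistItem("Transparency and explainability",
                        "technical documentation explaining how the system makes decisions"),
    ClauseChecklistItem("Human oversight",
                        "real-time monitoring, user training, automation-bias safeguards"),
    ClauseChecklistItem("Cybersecurity",
                        "regular security tests, protection against data manipulation"),
]

def open_items(checklist):
    """Return the topics that still lack verified provider evidence."""
    return [item.topic for item in checklist if not item.verified]

print(open_items(CHECKLIST))  # initially, all five topics are still open
```

A procurer could mark items as `verified` once the provider has submitted the corresponding evidence, leaving `open_items` as the list of outstanding gaps before contract award.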
Practical implications and recommendations
For public contracting authorities, these model contract clauses mean greater legal certainty when procuring AI systems. By implementing these standards, they can ensure that the technologies used comply with legal requirements and adhere to ethical standards. It is recommended that these clauses be taken into account as early as the tendering phase and that potential providers be required to comply with them.
For companies offering AI systems, the clauses set out clear requirements that must be met. This may require adjustments in the development and implementation of their systems and the establishment of internal compliance structures. In the long term, these measures could strengthen trust in AI products and offer competitive advantages.
Conclusion and outlook
The introduction of the model contract clauses by the European Commission is a significant step towards the responsible and safe integration of AI in the public sector. They provide a clear framework for the procurement and use of high-risk AI systems and help to uphold ethical and legal standards. It remains to be seen whether these clauses will be widely accepted in practice or whether they will need to be adapted in line with technical and regulatory developments.