
With this white paper, the European Commission wants to set guidelines for the protection of individuals and, at the same time, to promote the development of artificial intelligence through targeted funding. In its own words, the Commission calls this the creation of an “ecosystem for trust” and an “ecosystem for excellence”.

According to the white paper, the EU wants to invest a total of more than 20 billion euros per year in artificial intelligence over the next ten years. In 2016, according to its own figures, EU investment still amounted to only 3.2 billion euros. In addition, the research and innovation communities of the member states are to be better networked and to focus on certain core topics.

Where products and solutions in the field of artificial intelligence emerge, SMEs and the public sector should ultimately also be able to benefit from them, thereby strengthening the economy.

The protection of the individual against decisions made by artificial intelligence is to be ensured by means of targeted regulation based on existing European legislation (e.g. data protection law or the prohibition of discrimination). Accordingly, there will be no dedicated law on artificial intelligence; rather, the fundamental rights enshrined in the various existing laws will be protected, in each case also with regard to artificial intelligence.

For the purpose of proportionate regulation, a distinction will be made between high-risk artificial intelligence (“high-risk AI”) and artificial intelligence that is not high-risk. An AI application should be classified as high-risk if the following two criteria are cumulatively fulfilled:

  • The AI application is used in a sector where, given the nature of the typical activities, significant risks are to be expected; and
  • The AI application is used in that sector in such a manner that significant risks are likely to arise.

According to the Commission, the additional requirements for high-risk AI applications should then relate to the following key features, which are discussed in more detail in the white paper:

  • Training data
  • Keeping of data and records
  • Information to be provided
  • Robustness and accuracy
  • Human oversight
  • Specific requirements for certain AI applications, e.g. those used for remote biometric identification

In order to ensure the enforcement of these new requirements, the Commission considers that an objective ex-ante conformity assessment is needed. These assessments should be integrated into the conformity assessment mechanisms that already exist for certain products in the EU internal market.

Economic operators offering AI applications that are not high-risk, and are therefore not subject to the mandatory procedure, should have the option of voluntarily complying with the requirements and thereby obtaining a kind of quality label.

In its white paper, the European Commission also refers to the previously published report on liability for artificial intelligence of 21 November 2019 and the ethics guidelines of 9 April 2019.

It is now possible to submit comments on the white paper until 19 May 2020.


Sources