

The Artificial Intelligence Act – system and regulations

by Stefan Nörtemann / 22 February 2022

As part of the European Union’s digital strategy, the EU Commission presented a proposal for the Artificial Intelligence Act on 21 April 2021. My previous post provided an initial rough overview of the Artificial Intelligence Act. Below, we examine the regulations in more detail.

Regulatory system

The proposed regulations relate to AI systems, as defined via the techniques listed in Annex I: machine learning approaches, logic- and knowledge-based approaches, and statistical approaches. As with the General Data Protection Regulation, the marketplace principle will also apply. This means the regulations apply if providers or users are located in the EU, or if the output produced by the system is used in the EU.

AI systems are categorised for the specific regulations (risk-based approach):

  • Prohibited AI systems (unacceptable risk)
  • High-risk systems (high risk)
  • AI systems with special transparency obligations (limited risk)
  • All other AI systems (minimal risk)
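The four-tier, risk-based structure above can be sketched as a simple mapping. This is purely an illustrative sketch of the taxonomy, not part of the Act; the example systems are my own assumptions, and actual classification follows Annexes I–III of the proposal.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers of the proposed AI Act (illustrative labels)."""
    UNACCEPTABLE = "prohibited"
    HIGH = "high-risk"
    LIMITED = "special transparency obligations"
    MINIMAL = "no specific obligations"

# Hypothetical example systems mapped to tiers (assumptions for illustration only).
EXAMPLES = {
    "social scoring by public authorities": RiskTier.UNACCEPTABLE,
    "CV-screening software for recruitment": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

for system, tier in EXAMPLES.items():
    print(f"{system}: {tier.value}")
```

The point of the sketch is simply that each system falls into exactly one tier, and the tier determines which set of obligations applies.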

Prohibited practices

Prohibited practices generally include the use of manipulative subliminal techniques, or the exploitation of a person’s vulnerability due to their age or disability, to influence a person’s behaviour in a way that causes or is likely to cause physical or psychological harm to that person or another person.

Public authorities are also prohibited from using AI systems to evaluate the trustworthiness of individuals based on their social behaviour or personal characteristics (social scoring), where this leads to these individuals being treated detrimentally.

High-risk systems

By far the most comprehensive set of regulations applies to high-risk systems. Criteria for classifying systems as high-risk are defined in Annex III, which identifies eight specific areas:

  • biometric identification and categorisation of natural persons
  • management and operation of critical infrastructure
  • education and vocational training
  • employment, worker management and access to self-employment
  • access to and enjoyment of essential private services and public services and benefits
  • law enforcement
  • migration, asylum and border control management
  • administration of justice and democratic processes

The regulations for high-risk systems include, but are not limited to, specific risk management, data governance, detailed technical documentation and record-keeping requirements, transparency requirements, human oversight, and specific requirements for accuracy, robustness and cybersecurity.

In addition, there are far-reaching obligations for providers of high-risk systems. These include the establishment of a quality management system, the obligation to carry out regular conformity assessments, as well as extensive information obligations towards the relevant authorities.

Systems with special transparency obligations

People who interact with AI systems must be informed that they are dealing with an AI. The same applies to the use of emotion recognition systems or biometric categorisation systems.

AI systems that generate or manipulate image, audio or video content which may falsely appear to an individual to be genuine or true (‘deepfake’) must disclose that the content has been artificially created or manipulated.

Codes of conduct

Codes of conduct are recommended for all other AI systems. Providers of non-high-risk AI systems are encouraged to voluntarily apply appropriate risk requirements that are analogous to those for high-risk systems, as well as their own requirements, such as ecological sustainability or accessibility.


Sanctions

Violations of the prohibitions or of the requirements for high-risk systems carry fines of up to €30 million or 6% of total worldwide annual turnover, whichever is higher (for other violations: up to €20 million or 4%).
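The interplay between the fixed amount and the turnover share can be made concrete with a small calculation. This is my own illustrative sketch of the proposal’s fine ceilings (€30 million or 6% of worldwide annual turnover for the most serious violations, whichever is higher; €20 million or 4% otherwise), not legal advice:

```python
def max_fine(turnover_eur: float, serious: bool = True) -> float:
    """Return the maximum possible fine in EUR under the proposed ceilings.

    serious=True models violations of prohibitions or high-risk requirements
    (EUR 30m / 6%); serious=False models other violations (EUR 20m / 4%).
    """
    fixed, share = (30_000_000, 0.06) if serious else (20_000_000, 0.04)
    return max(fixed, share * turnover_eur)

# For a company with EUR 1 billion turnover, 6% (EUR 60m) exceeds the fixed ceiling:
print(max_fine(1_000_000_000))   # 60000000.0
# For EUR 100 million turnover, the fixed EUR 30m ceiling applies:
print(max_fine(100_000_000))     # 30000000.0
```

As the example shows, for large companies the turnover-based ceiling dominates, which is what gives the sanction regime its teeth.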

AI systems in the insurance industry

There is a need for further detailed analysis to assess the impact of regulations on the insurance industry. When using AI systems for risk assessment, for example, it is necessary to check whether they are high-risk systems. Special transparency obligations may arise when using chatbots in communication with customers. This also raises the question of whether codes of conduct are necessary or useful in the insurance industry. Associations such as the GDV or DAV will certainly attend to this issue in due course.


