The Artificial Intelligence Act – an overview

by Stefan Nörtemann / 3 February 2022

The rapid spread of applications based on artificial intelligence (AI) has generated increased discussion about trustworthy artificial intelligence in recent years. As part of the European Union’s digital strategy, the EU Commission is planning a series of directives and regulations and, on 21 April 2021, presented a proposal for the Artificial Intelligence Act.*

High-level expert group on artificial intelligence

The topic of trustworthy AI had already been on the agenda of the previous EU Commission. A taskforce known as the high-level expert group on artificial intelligence (AI HLEG) took the first steps towards transforming the concept of trustworthy AI into a regulation that is valid and accepted throughout the EU. On 8 April 2019, the AI HLEG published a document entitled ‘ETHICS GUIDELINES FOR TRUSTWORTHY AI’.**

I already outlined the key ideas in detail in my blog post from 9 August 2019, so here is just a brief overview of the main takeaways.

Trustworthy AI

When defining trustworthy AI, the AI HLEG takes a holistic approach. Not only must a single AI system be trustworthy, but also AI itself and all the stakeholders involved. The AI HLEG defines three key requirements: trustworthy AI must be lawful, ethical and robust. Focusing on the aspect of ethical AI, the AI HLEG sets out four fundamental principles: respect for human autonomy, prevention of harm, fairness and explicability.

From these principles, seven core requirements for AI systems are derived, along with a checklist that can be used to verify that a specific AI application meets the requirements and thus satisfies the fundamental principles.

Industry-, division- and company-wide implementation

The publication of the AI HLEG’s guidelines has generated extensive discussion among the public, universities, associations and companies that research, develop or use AI systems. The discussion focused on how compliance with the ethical principles and the core requirements can be achieved, ensured and monitored in a specific case or within a particular industry. My blog post from 26 August 2019 compiles some initial answers for the insurance industry.

Against this backdrop, observers watched with interest to see which specific provisions would be formulated in the proposed Artificial Intelligence Act.

Classification of AI systems

A first look through the proposal held a big surprise. While the AI HLEG’s recommendations were formulated for all (!) AI systems, the proposed Artificial Intelligence Act introduces a classification of AI systems. A key element of the proposal is the concept of high-risk systems, for which very extensive requirements are defined. A follow-up article will explain in more detail what these requirements are and how high-risk systems are defined in the proposal.
However, it is important to note that the central focus of the regulation is limited to high-risk systems. Anyone looking for guidelines, requirements and rules that apply to all AI systems will be left searching in vain!

Regulatory system

Compared to the AI HLEG’s recommendations, the prohibited practices (pursuant to Article 5) are a new addition. These provisions strictly prohibit certain uses of AI, such as manipulative subliminal techniques or the systematic exploitation of a weakness or vulnerability.

In addition, the proposal defines AI systems for which special transparency obligations apply, such as emotion recognition systems and biometric categorisation systems.

For all other AI systems, there are recommended codes of conduct that set out specific requirements to be applied voluntarily, such as environmental sustainability or accessibility.

Summary and outlook

The proposed Artificial Intelligence Act details requirements for three categories of AI systems: prohibited AI systems, high-risk systems, and systems with special transparency obligations.
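For readers on the technical side, the following is a minimal, purely illustrative sketch (in Python) of this tiered structure. The tier names, the function and the example criteria are my own simplification for illustration only and are not taken from the text of the regulation.

# Purely illustrative sketch: a simplified model of the proposal's risk tiers,
# not a reproduction of the legal text or its actual criteria.
from enum import Enum, auto

class RiskTier(Enum):
    PROHIBITED = auto()    # banned practices (Article 5)
    HIGH_RISK = auto()     # extensive requirements apply
    TRANSPARENCY = auto()  # special transparency obligations
    MINIMAL = auto()       # voluntary codes of conduct only

def classify(uses_subliminal_manipulation: bool,
             is_listed_high_risk_use: bool,
             has_transparency_trigger: bool) -> RiskTier:
    """Toy decision order mirroring the summary above (hypothetical flags)."""
    if uses_subliminal_manipulation:
        return RiskTier.PROHIBITED
    if is_listed_high_risk_use:
        return RiskTier.HIGH_RISK
    if has_transparency_trigger:
        return RiskTier.TRANSPARENCY
    return RiskTier.MINIMAL

# Example: an emotion recognition system that is neither prohibited nor high-risk
print(classify(False, False, True))  # RiskTier.TRANSPARENCY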

A follow-up article will explain in further detail how the systems are defined and which specific requirements have been formulated.

For now, this is only a proposal. Once in force, the regulation will apply directly in all EU member states after a lead time of 24 months.

*) Proposal for a REGULATION OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL LAYING DOWN HARMONISED RULES ON ARTIFICIAL INTELLIGENCE (ARTIFICIAL INTELLIGENCE ACT) AND AMENDING CERTAIN UNION LEGISLATIVE ACTS (https://eur-lex.europa.eu/legal-content/EN/TXT/?qid=1623335154975&uri=CELEX%3A52021PC0206)
**) ETHICS GUIDELINES FOR TRUSTWORTHY AI, High-Level Expert Group on Artificial Intelligence (HLEG), European Commission, Brussels, 8 April 2019
