
The Artificial Intelligence Act – will it also affect the insurance industry?

by Stefan Nörtemann / 15. January 2023

Trustworthy artificial intelligence

For a number of years now, various European bodies have been dealing with ethical issues surrounding the use of AI in many areas of life. First and foremost, their goal is trustworthy artificial intelligence that is ethical, robust and compliant with the law. This is to be achieved not only for each individual AI system, but for AI in general. Consequently, requirements are imposed on the players involved as users, providers and developers of AI systems.


The state of play at the start of 2023

The release of the European Commission's draft regulation* on 21 April 2021 represented a step towards the standardised, EU-wide regulation of artificial intelligence. Since then, there have been consultations and extensive discussions between the EU bodies, industry representatives and authorities. The original goal of passing the so-called Artificial Intelligence Act in 2022 was not achieved. However, on 6 December 2022 the European Council agreed on a common approach, which is set out in the general approach** dated 25 November 2022. This is not a decision on the regulation; it describes the position with which the European Council is entering the trilogue process, which is expected to lead to the act being passed in 2023.


How the regulation is structured

As described in my articles dated 3 February 2022 and 22 February 2022, the draft Artificial Intelligence Act pursues the principle of risk-based regulation. This principle requires every AI application to be assigned to one of four categories: prohibited practices, high-risk systems, systems with special transparency obligations, and all other AI systems (those that do not fall into any of the other categories). Extensive regulation is (only) intended for high-risk systems.


What does this mean for the insurance industry?

Since the publication of the original draft at the latest, it has been discussed whether AI applications in the insurance sector qualify as high-risk systems and are thus subject to extensive regulation, and if so, which ones. The original draft* did not provide for this at all. An earlier compromise text from the European Council (compromise text of 29 November 2021)*** reads as follows: ‘… AI systems intended to be used for insurance premium setting, underwritings and claims assessments.’ In the general approach,** this has been changed to ‘AI systems intended to be used for risk assessment and pricing in relation to natural persons in the case of life and health insurance …’ (general approach, Annex III, 5. (d)).


Requirements for high-risk systems

Put simply, this means that high-risk systems in insurance are limited to AI applications used for risk assessment and pricing in life and health insurance; no other line of insurance is mentioned there. This gives rise to a wide range of requirements for AI applications intended to be used for risk assessment and pricing in life and health insurance (general approach, Chapter 2, Articles 8–15).

These include specific requirements for risk management, data governance, detailed technical documentation and record-keeping, transparency, human oversight, and accuracy, robustness and cybersecurity.


The challenges of using high-risk systems

Aside from the fairly ‘bureaucratic’ and organisational regulations, which chiefly mean a great deal of hard work, the very design of AI systems has to take challenging technical requirements into account.


One example among many is the transparency requirements for AI systems (Article 13) in combination with human oversight (Article 14), the purpose of which is to ensure that users are able to interpret the AI system’s output correctly and oversee its operation effectively. This is a fundamental obstacle for black-box systems in particular, such as those used in deep learning (the key word here being ‘explainability’).


Interim summary

Even though the European Council has agreed on a general approach,** nothing is final. Nevertheless, the general approach is a clear indication of the direction in which things are moving, and we can expect at least selected AI applications in the fields of life and health insurance to be categorised as high-risk systems. We will keep you posted.


*) Proposal for a REGULATION OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL LAYING DOWN HARMONISED RULES ON ARTIFICIAL INTELLIGENCE (ARTIFICIAL INTELLIGENCE ACT) AND AMENDING CERTAIN UNION LEGISLATIVE ACTS

**) Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts – General approach

***) Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts – Presidency compromise text
