
The AI Act passes: how will the act on AI regulation affect the insurance industry?

by Stefan Nörtemann / 31. July 2024

After several years of preparation, the Artificial Intelligence Act (AI Act) was published in the Official Journal of the European Union (EU) on 12 July 2024. This means that the world’s first comprehensive regulation of artificial intelligence enters into force on 1 August 2024 and, as is customary for EU regulations, becomes directly applicable law in all member states, without any further national implementing acts.


As outlined in my articles The Artificial Intelligence Act – an overview (3 February 2022) and The Artificial Intelligence Act – system and regulations (22 February 2022), the new regulation takes a risk-based approach and divides AI applications into four categories:

  • Prohibited practices
  • High-risk systems
  • AI systems with special transparency requirements
  • All other AI systems (minimal risk).

High-risk systems in the insurance sector

The insurance industry waited with bated breath to see which applications would ultimately fall into the category of high-risk systems listed in Annex III of the Act, and thus be subject to regulation – a point of controversy throughout the legislative process. The final text now states:

‘AI systems intended to be used for risk assessment and pricing in relation to natural persons in the case of health and life insurance;’ (Annex III, point 5c).

The categorisation of AI applications in the insurance sector as high-risk systems is therefore limited to risk assessment and pricing in life and health insurance. The regulations for high-risk systems include, but are not limited to, specific risk management, data governance, detailed technical documentation and record-keeping requirements, transparency requirements, human oversight, and specific requirements for accuracy, robustness and cybersecurity. In addition, there are far-reaching obligations for providers of high-risk systems. These include the establishment of a quality management system, the obligation to carry out regular conformity assessments, as well as extensive information obligations towards the relevant authorities.
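To make the record-keeping requirement a little more tangible: a deployer of a high-risk underwriting model will need automatic, auditable logs of its decisions. The following minimal Python sketch shows one conceivable way to do this; the schema, field names and the log_risk_assessment function are purely illustrative assumptions on our part – the AI Act prescribes the duty to keep records, not a concrete implementation.

```python
import json
import logging
from datetime import datetime, timezone

# Illustrative audit logger: the AI Act requires high-risk systems to
# record events automatically. The concrete schema below is our own
# assumption, not anything prescribed by the regulation.
audit_log = logging.getLogger("underwriting_audit")
audit_log.setLevel(logging.INFO)
audit_log.addHandler(logging.FileHandler("risk_assessment_audit.jsonl"))

def log_risk_assessment(applicant_id: str, model_version: str,
                        inputs: dict, risk_score: float,
                        human_reviewer: str | None) -> None:
    """Append one auditable record per automated pricing decision."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "applicant_id": applicant_id,      # pseudonymised reference
        "model_version": model_version,    # traceability of the model used
        "inputs": inputs,                  # features the score was based on
        "risk_score": risk_score,
        "human_reviewer": human_reviewer,  # human oversight trail
    }
    audit_log.info(json.dumps(record))

# Hypothetical example call with made-up values:
log_risk_assessment("APP-4711", "tariff-model-2.3",
                    {"age": 42, "smoker": False}, 0.37, "underwriter_17")
```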

For the sake of completeness, it should be mentioned that, according to point 4 of Annex III, the use of AI in worker management within companies – which of course includes insurance companies as employers – is also classed as a high-risk application.

Transparency requirements in insurance

Among the transparency obligations for providers and users of AI systems, the one most relevant to insurance companies concerns interaction with natural persons: when a chatbot is used in customer communication, the user must be informed that they are interacting with an AI.
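Purely as an illustration – the Act prescribes the disclosure itself, not its wording or technical form – a chatbot front end might issue a one-off notice like the following. The notice text and the generate_reply() helper are hypothetical.

```python
# Minimal sketch of the disclosure duty: before the first AI-generated
# reply, tell the user they are talking to an AI. Wording and the
# generate_reply() placeholder are our own assumptions.

AI_NOTICE = ("Please note: you are chatting with an AI assistant, "
             "not a human agent.")

def generate_reply(user_message: str) -> str:
    # Placeholder for the actual language-model call.
    return f"Thanks for your message: {user_message!r}"

def chat_session() -> None:
    disclosed = False
    while (user_message := input("You: ")):   # empty input ends the chat
        if not disclosed:
            print(f"Bot: {AI_NOTICE}")        # one-time AI disclosure
            disclosed = True
        print(f"Bot: {generate_reply(user_message)}")

if __name__ == "__main__":
    chat_session()
```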

In addition, further transparency requirements are set out for ‘general-purpose AI models’ – a category that, in essence, covers foundation models.

Large language models

During the lengthy legislative process, the bureaucracy was overtaken by reality in the form of large language models. At the latest with the triumph of ChatGPT in 2023, it became clear that generative AI would also have to be taken into account in the regulation. Indeed, the AI Act almost failed at the last hurdle over the controversy about how far regulation should go here.

Between high risk and no regulation at all, the legislators agreed to meet ‘in the middle’ and formulated new transparency requirements. Foundation models are divided into two categories: general-purpose AI models and general-purpose AI models with systemic risk. The classification criteria are set out in the new Annex XIII. Of key importance is the cumulative compute used to train the model, measured in FLOPs (floating-point operations): a model trained with more than 10^25 FLOPs is presumed to pose systemic risk. Correspondingly graduated transparency requirements apply to providers of foundation models.
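As a back-of-the-envelope illustration of this threshold: training compute is often approximated by the rule of thumb of roughly six floating-point operations per parameter and training token – an approximation that is common practice, not part of the Act. A hypothetical check might then look like this:

```python
# Illustrative check of the systemic-risk presumption: a general-purpose
# model is presumed to pose systemic risk when its cumulative training
# compute exceeds 1e25 FLOPs (floating-point operations).
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimate_training_flops(n_parameters: float, n_tokens: float) -> float:
    """Rule-of-thumb estimate: ~6 FLOPs per parameter per training token."""
    return 6.0 * n_parameters * n_tokens

def presumed_systemic_risk(training_flops: float) -> bool:
    return training_flops >= SYSTEMIC_RISK_THRESHOLD_FLOPS

# Hypothetical example: a 70-billion-parameter model, 2 trillion tokens.
flops = estimate_training_flops(70e9, 2e12)   # ≈ 8.4e23 FLOPs
print(f"{flops:.2e} FLOPs -> systemic risk presumed: "
      f"{presumed_systemic_risk(flops)}")
```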

Large language models in insurance

In the insurance sector, large language models are currently being used to develop a wide range of applications, for example in knowledge management. In practice, no proprietary foundation models are trained for this purpose; companies rely instead on existing, readily available foundation models. Insurance companies are therefore generally not themselves subject to these transparency obligations – on the contrary, as addressees of the disclosure obligations of the providers of foundation models, they benefit from them.

Not so bad after all?!

In summary, insurance companies are affected by the AI Act only to a manageable (moderate) extent.

  • In the case of health and life insurance, AI applications for risk assessment and pricing in relation to natural persons are high-risk applications and are therefore extensively regulated.
  • AI-based communication with natural persons (chatbots) is subject to the obligation to inform users that they are interacting with an AI.

The many other applications that we deal with on a daily basis, on the other hand, fall into the fourth category, for which no regulation is envisaged. Voluntary codes of conduct are possible here, but given the extensive regulation already in place in our industry, they are not currently to be expected.

In good hands: msg insur:it offers the highest level of security for AI applications

The AI applications developed by msg insur:it to date (AI-supported migration, msg.ask:it, msg.claim:it) fall into the fourth category, for which the AI Act provides no specific regulation. Irrespective of this, the applications offer a high degree of security and compliance with the existing regulatory requirements of the insurance industry. Should we offer AI applications in future that are subject to the provisions of the AI Act, we guarantee that they will comply with the rules – both for high-risk applications and for specific transparency requirements.
