Interpretable machine learning in the insurance sector – An introduction

by Thomas Hofmann / 11 August 2023

Hardly a day goes by now without artificial intelligence (AI) being mentioned in the media. In the form of ChatGPT, the topic has also finally reached the heart of society. This language-based AI model is a chatbot that has been trained to take natural language as model input and to generate contextual, human-readable responses as model predictions. Against this backdrop, discussions are currently under way on how to preserve the authenticity of essays and homework in schools, and how to guard against false information in everyday social media use.

If you ask ChatGPT about potential areas of application and what it is useful for, the answer is:

‘ChatGPT is a powerful language processing tool with a wide range of potential areas of application across different domains and industries.’ [1]

Sectors such as customer support, e-commerce, education, healthcare, legal and financial services are mentioned. It goes on to say:

‘ChatGPT is useful because it provides a quick and effective way to interact with and help users in natural language without the need for human intervention …’ [1]

Since users are expected to rely on the answers given, i.e. on the model predictions, without a human having verified them, the question arises as to how reliable and factually correct those answers actually are.

Comprehensible model predictions

How comprehensible and trustworthy a model prediction needs to be generally depends on the use case. For example, if you are interested in movie suggestions based on your three favourite films, how the recommendations come about is of little importance; in the worst case, your next movie night will simply not be as successful as you hoped. The situation is different in the highly regulated financial services sector. If clients want to know why the loan they applied for has not been approved or why the disability cover they applied for has not been granted, they have a legitimate interest in finding out the reasons. It must be possible to explain these reasons to them in a comprehensible and transparent manner, even if no human was involved in the decision-making process. Transparency and trust are top priorities, especially in the banking and insurance sector!

Guidelines and regulatory requirements

In financial matters, wrong decisions can have far-reaching and often unforeseeable consequences. This is why the use of trustworthy and transparent AI in the insurance environment is crucial. The previous blog post, Trustworthy Artificial Intelligence in the Insurance Environment, briefly explained which guidelines a trustworthy AI must follow and which requirements can be derived from them for the insurance industry. These requirements apply not only to the German-speaking insurance market but to the whole of the EU. They are set out in more detail in the 2021 report ‘Artificial Intelligence Governance Principles: Towards Ethical and Trustworthy Artificial Intelligence in the European Insurance Sector’ [2] published by the European Insurance and Occupational Pensions Authority (EIOPA). The key requirements described and explained there are: fairness and non-discrimination, human oversight, data governance and record keeping, robustness and performance, and transparency and explainability.

In this context, the European Union goes one step further with the Artificial Intelligence Act (AI Act). The AI Act is a planned EU regulation for which the European Commission submitted an initial proposal in April 2021; it has since been discussed by the relevant EU bodies. The central idea of the draft is to regulate AI systems across the EU on the basis of a risk classification. The aim of the proposal is to turn the aforementioned requirements into binding legislation. This means that the requirements are no longer limited to the European insurance sector but apply across sectors, and violations can be heavily sanctioned. The planned regulation is currently going through the trilogue procedure; once adopted and after a transitional period of two years, it will apply automatically in all EU member states.

What exactly does explainability mean?

Defining the terms ‘explainability’ and ‘interpretability’ with respect to AI model predictions is difficult. There is currently no generally accepted definition of either term. They are often used synonymously in the literature, since they generally refer to the same concept. Both model explainability and model interpretability aim to make complex models more understandable and comprehensible in order to strengthen confidence in their predictions and to identify possible errors. Two descriptions of interpretability are, for example:

‘Interpretability is the degree to which a human can understand the cause of a decision.’ [3]

or

‘Interpretability is the degree to which a human can consistently predict the model’s result.’ [4]

In other words, the more interpretable a model is, the easier it is for someone to understand why certain model decisions or predictions were made. One model is therefore more explainable than another if its decisions are easier for a person to understand [5]. This also makes it clear that assessing how explainable a model is depends largely on the use case and the user.
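
To make this a little more tangible, the following minimal sketch shows what an intrinsically interpretable model can look like. It uses Python with scikit-learn; the data and the features (age, BMI, smoker status) are purely hypothetical and only serve to illustrate the idea that the coefficients of a logistic regression directly reveal why an application is classified as declined.

```python
# Minimal, purely illustrative sketch of an intrinsically interpretable model.
# The underwriting data and the decision rule below are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(seed=0)

# Hypothetical features: age, BMI, smoker flag (0/1)
X = np.column_stack([
    rng.integers(18, 70, size=500),
    rng.normal(26, 4, size=500),
    rng.integers(0, 2, size=500),
])
# Toy target: application declined (1) if the applicant is older than 55 and smokes
y = ((X[:, 0] > 55) & (X[:, 2] == 1)).astype(int)

model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X, y)

# The standardised coefficients show how strongly and in which direction each
# feature pushes the prediction towards "declined"; a human reviewer can read
# the reasons for a decision directly from them.
for name, coef in zip(["age", "bmi", "smoker"], model[-1].coef_[0]):
    print(f"{name:>6}: {coef:+.2f}")
```

With a deep neural network trained on the same data, the decision logic could no longer be read off directly in this way; closing that gap is exactly what the interpretability methods discussed in the upcoming posts aim to do.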


Why is explainability necessary?

As already seen, the need for model explainability varies depending on the application. If there are no unacceptable consequences for the user, or if the problem has been extensively researched and validated in real applications, an explicit explanation of the model predictions is not mandatory. In such cases, the behaviour of the system can be trusted even if it is not perfect [6]. For many areas of application in the insurance sector, however, this is not the case. If it can be ensured that a model’s predictions are explainable and comprehensible for the user, other requirements such as non-discrimination and fairness, as well as technical robustness and security, can also be examined in a targeted way [6]. Model explainability is therefore essential for the use of ethical and trustworthy artificial intelligence, particularly in the insurance industry.

Outlook

In the following blog posts, I will look at explainable and interpretable machine learning in more detail. The specific taxonomy, the general workings of different methods and potential applications will be discussed.

[1] https://chat.openai.com/

[2] EIOPA (2021). Artificial Intelligence Governance Principles: Towards Ethical and Trustworthy Artificial Intelligence in the European Insurance Sector. A report from EIOPA’s Consultative Expert Group on Digital Ethics in Insurance. https://www.eiopa.europa.eu/sites/default/files/publications/reports/eiopa-ai-governance-principles-june-2021.pdf

[3] Miller, T. (2017). Explanation in artificial intelligence: Insights from the social sciences. arXiv Preprint arXiv:1706.07269.

[4] Kim, B., Khanna, R., & Koyejo, O. (2016). Examples are not enough, learn to criticize! Criticism for interpretability. Advances in Neural Information Processing Systems. https://dl.acm.org/doi/pdf/10.5555/3157096.3157352

[5] Molnar, C. (2022). Interpretable Machine Learning: A Guide for Making Black Box Models Explainable (2nd ed.). christophm.github.io/interpretable-ml-book/

[6] Doshi-Velez, F., & Kim, B. (2017). Towards a rigorous science of interpretable machine learning. http://arxiv.org/abs/1702.08608
