

Interpretable machine learning in an insurance environment – clarification of terminology

by Thomas Hofmann / 23 August 2024

The blog article “Interpretable machine learning in the insurance sector – An introduction” has already offered a brief introduction to two terms in connection with AI model predictions: explainability and interpretability. It explained why explainability and interpretability are crucial in the strictly regulated insurance environment. This article takes a closer look at what these two terms really mean and sets the scene for further discussion in future blog articles.

Practical examples: machine learning in the insurance industry

The use of machine learning (ML), a branch of artificial intelligence (AI), has a long tradition in the insurance industry. For example, the combination of predictive power and explainability has made generalised linear models (GLMs) a popular choice for a wide range of insurance applications. In vehicle insurance, the expected number of claims is often modelled on the basis of a Poisson distribution, while the size (amount) of these claims is modelled using a gamma distribution. Both the number and the size of claims can be expressed as a GLM of various rating factors such as vehicle type, engine power, mileage or the age of the driver. If the model assumptions are approximately met, the GLM produces reliable predictions and is comparatively robust to outliers.
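To make this concrete, here is a minimal sketch of such a claim-frequency model, fitted as a Poisson GLM with a log link using the statsmodels library. The rating factors and the synthetic portfolio data are illustrative assumptions for this sketch, not taken from the article.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Illustrative, synthetic portfolio: rating factors and observed claim counts.
rng = np.random.default_rng(0)
n = 5_000
data = pd.DataFrame({
    "vehicle_type": rng.choice(["compact", "suv", "sports"], size=n),
    "engine_power": rng.integers(60, 300, size=n),   # kW
    "driver_age":   rng.integers(18, 80, size=n),
    "exposure":     rng.uniform(0.5, 1.0, size=n),   # years insured
})
# Synthetic "true" frequencies, used only to generate example claim counts.
lam = 0.1 * data["exposure"] * (1 + (data["vehicle_type"] == "sports") * 0.5)
data["claim_count"] = rng.poisson(lam)

# Claim frequency as a Poisson GLM with log link; exposure enters as an offset.
freq_model = smf.glm(
    "claim_count ~ C(vehicle_type) + engine_power + driver_age",
    data=data,
    family=sm.families.Poisson(),
    offset=np.log(data["exposure"]),
).fit()

# The fitted coefficients are directly interpretable: after exponentiation they
# act as multiplicative effects on the expected claim frequency.
print(freq_model.summary())
print(np.exp(freq_model.params))
```

A severity model for the claim amounts could be fitted analogously with a gamma family; the key point is that each coefficient has a direct, readable meaning.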

 

However, the analysis of innovative data sources requires more complex machine learning methods, as the patterns in these data often cannot be captured adequately by simple models. The blog article “Artificial intelligence: use cases in the insurance industry” and subsequent articles highlight a few practical examples. Thanks to their high adaptability and predictive accuracy, ML models such as (deep) neural networks ((D)NNs) are used for applications including image and speech recognition. These models follow a data-driven approach in which their complexity is typically controlled by so-called hyperparameters. This flexibility often goes hand in hand with excellent predictive capability, but at the expense of the model’s interpretability.
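To illustrate how hyperparameters control model complexity, the following sketch trains the same type of small neural network under two different hyperparameter settings. The use of scikit-learn, the synthetic data and the specific values are assumptions made for this example, not part of the article.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Illustrative binary classification data standing in for a complex data source.
X, y = make_classification(n_samples=2_000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Two hyperparameter settings: a small, heavily regularised network versus a
# larger, more flexible one. Network size (hidden_layer_sizes) and regularisation
# strength (alpha) are the hyperparameters controlling model complexity here.
for hidden, alpha in [((10,), 1e-1), ((100, 100), 1e-5)]:
    mlp = MLPClassifier(hidden_layer_sizes=hidden, alpha=alpha,
                        max_iter=1_000, random_state=42)
    mlp.fit(X_train, y_train)
    print(hidden, alpha, round(mlp.score(X_test, y_test), 3))
```

The larger network can fit more complex patterns, but its thousands of weights no longer offer the direct readability of the GLM coefficients above.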

What do interpretability and explainability really mean?

As the examples above show, interpretability and explainability are often used synonymously with reference to AI model predictions. There is currently no generally accepted definition of these two terms, and it is unclear whether these model characteristics can or should be measured. Fundamentally, model explainability and model interpretability both aim to make the generation of a model prediction more understandable and comprehensible, and thereby to increase confidence in the prediction. In this context, interpretability can be understood as follows, for example:

 

‘Interpretability is the degree to which a human can understand the cause of a decision.’ [1]

 

In other words, the more interpretable a model is, the easier it is for someone to understand why certain model decisions were made. One model is therefore more interpretable than another if its decisions are easier for a person to understand.

 

‘To explain an event is to provide some information about its causal history. In an act of explaining, someone who is in possession of some information about the causal history of some event — explanatory information, I shall call it — tries to convey it to someone else.’ [2]

 

Or, put more succinctly:

 

‘An explanation is an assignment of causal responsibility.’ [2]

 

This means that an explanation assigns causal responsibility, enabling the recipient of the explanation (human or machine) to gain an understanding of the model’s decision-making process. An explanation links the feature values of the model input with the model prediction, ideally in a way that is understandable for human beings. However, what counts as understandable depends heavily on the application in question and on the prior knowledge of the individual recipient. Moreover, explanations are not limited to the presentation of causal relationships; they are highly contextual. Although an event can have a whole range of causes, a recipient is often only interested in the selection of causes that are relevant in the given context.
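As a concrete illustration of linking the feature values of the model input with the model prediction, the following sketch decomposes the prediction of a fitted linear model into one additive contribution per feature. The model, the features and the data are hypothetical and serve only to illustrate the idea.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical tariff features for a handful of policies (numerically encoded).
feature_names = ["engine_power", "mileage", "driver_age"]
X_train = np.array([[90, 12_000, 45], [150, 25_000, 23],
                    [70, 8_000, 60], [200, 30_000, 35]])
y_train = np.array([300.0, 850.0, 220.0, 990.0])   # e.g. expected claim amounts

model = LinearRegression().fit(X_train, y_train)

# For a linear model, the prediction splits exactly into one additive
# contribution per feature plus the intercept - a simple form of explanation.
x_new = np.array([120, 15_000, 30])
contributions = model.coef_ * x_new
prediction = model.intercept_ + contributions.sum()

for name, contrib in zip(feature_names, contributions):
    print(f"{name}: {contrib:+.2f}")
print(f"intercept: {model.intercept_:+.2f}")
print(f"prediction: {prediction:.2f}")
```

Which of these contributions a recipient actually cares about will, as described above, depend on the context of the question being asked.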

Conceptual distinction: Algorithm transparency

Note that the terms ‘cause’ and ‘effect’ here relate to an AI model that has already been trained. This perspective does not consider how the model learns the relationship between the target variable and the explanatory variables from the data, i.e. which learning algorithm is used. The comprehensibility of this learning process is referred to as algorithm transparency and is distinct from interpretability and explainability as described above. Algorithm transparency merely requires knowledge of the learning algorithm, not of the data or of the resulting model.

What does this mean for the practical examples above?

In general, it is not absolutely necessary to understand the entire chain of causation, i.e. the complete sequence of cause and effect, in order to provide a sound explanation of how a model prediction came about. In the car insurance example above, however, the inherent model structure of the GLM does make this possible. The GLM uses a (non-linear) link function to connect the weighted sum of the tariff features with the expected number or amount of claims. The model prediction, i.e. the number or amount of claims, is therefore given by a compact analytical formula based on the tariff features, which an expert user can readily understand and follow.
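Written out, and assuming the log link typically used for the Poisson claim-frequency model (an assumption for this sketch, not stated above), the compact formula takes the following form for tariff features x_1, …, x_p with weights β_0, β_1, …, β_p:

\[
g(\mu) = \beta_0 + \beta_1 x_1 + \dots + \beta_p x_p,
\qquad
\mu = g^{-1}\!\Bigl(\beta_0 + \sum_{i=1}^{p} \beta_i x_i\Bigr) = \exp\!\Bigl(\beta_0 + \sum_{i=1}^{p} \beta_i x_i\Bigr),
\]

where \(\mu\) denotes the expected number of claims and \(g\) is the link function. Each weight \(\beta_i\) can be read directly as the effect of one tariff feature on the (log of the) expected claim count.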

 

However, models with a larger number of features or parameters quickly exceed the limits of human imagination: a feature or parameter space with more than three dimensions cannot be visualised by a person. When people attempt to understand such a model, they therefore normally focus on sub-aspects of it, such as the weight parameters in a (generalised) linear model. Future blog articles will take a closer look at which sub-aspects are significant in this context and how they can be presented in an understandable fashion, even for the image- and speech-processing models mentioned above.

 

[1] Molnar, C. (2022). Interpretable Machine Learning: A Guide for Making Black Box Models Explainable (2nd ed.). christophm.github.io/interpretable-ml-book/

 

[2] Miller, T. (2017). Explanation in artificial intelligence: Insights from the social sciences. arXiv preprint arXiv:1706.07269.
