Explainable AI using Shapley Values

4 January 2023

Shapley values for feature-importance analysis

Any machine-learning task revolves around gaining insights from data. A model receives data of reasonable quality as input and is tuned on features derived from that data to produce output of the desired quality. But not all input features are equally important to the model: some matter a great deal, while others may only add noise and computational complexity without any gain. There is also business value in learning which features are the key drivers of an output KPI. The question is: how can this be done technically?

The only machine-learning models that provide this kind of evaluation for free are those based on Decision Trees, and even there it remains unresolved whether an increase in a feature's value leads to an increase or a decrease in the target variable. The concept of Shapley values not only covers this shortcoming, it also makes it possible to evaluate feature importance for general types of machine-learning models beyond Decision Trees. In this approach, each feature is systematically integrated out in turn and replaced with suitable background noise in order to study its impact on the target.
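For intuition, here is a minimal sketch of such a Monte-Carlo estimate for a single feature and a single prediction. This is not the SHAP package's API: `model` is assumed to be any fitted estimator with a scikit-learn-style predict method, and all names are placeholders chosen for illustration.

import numpy as np

def shapley_estimate(model, x, X_background, feature, n_samples=200, seed=0):
    """Monte-Carlo estimate of the Shapley value of one feature for a
    single prediction x: features outside a random coalition are replaced
    with values drawn from background data, and the marginal effect of
    adding `feature` to the coalition is averaged over many permutations."""
    rng = np.random.default_rng(seed)
    n_features = x.shape[0]
    contributions = np.empty(n_samples)
    for i in range(n_samples):
        perm = rng.permutation(n_features)          # random feature order
        pos = int(np.where(perm == feature)[0][0])  # position of our feature
        coalition = perm[:pos]                      # features "already present"
        background = X_background[rng.integers(len(X_background))]
        # Start from a background sample, keep the coalition's values from x.
        x_without = background.copy()
        x_without[coalition] = x[coalition]
        # Same instance, but with the studied feature also taken from x.
        x_with = x_without.copy()
        x_with[feature] = x[feature]
        contributions[i] = (model.predict(x_with[None, :])[0]
                            - model.predict(x_without[None, :])[0])
    return contributions.mean()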

The analysis of feature importance using Shapley values is a universal tool that we have used in almost every project, across a range of verticals. In the retail sector, we have used the method to identify key drivers of conversion. We have also implemented Shapley values in a tool for scientists that tells them what to change about an experiment in order to optimize a certain outcome.

To set up the algorithm, the SHAP Python package may be used. However, this package is still in an experimental state, so you have to watch out for bugs appearing from one version to the next. Of course, you can also implement the routine by hand. Whichever route you take, keep in mind that the reliability of any insights from the feature analysis is limited by the quality of the machine-learning model it is applied to: if the model's prediction quality is poor, the feature analysis won't be of much value either.
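A typical workflow with the package might look like the following sketch. Here, `model` and `X_test` are placeholders for your own fitted model and feature matrix; for non-tree models, shap.KernelExplainer or the newer shap.Explainer interface can be substituted for shap.TreeExplainer.

import shap

# Assumptions for this sketch: `model` is a fitted tree-based estimator
# (e.g. from scikit-learn or XGBoost) and `X_test` is the feature matrix
# (a pandas DataFrame or NumPy array) whose predictions we want to explain.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)

# Beeswarm-style overview: features ranked by importance, points colored
# by feature value -- the kind of figure discussed in the next section.
shap.summary_plot(shap_values, X_test)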

Interpretation of results

An example evaluation is given in the figure below.

[Figure: SHAP summary plot for an RNN model]
In this figure, the 25 most important features of the machine-learning model under scrutiny are listed in descending order of importance. At the same time, the figure shows whether a high or low feature value leads to an increase or a decrease in the target value: points to the right of the vertical line have positive Shapley values and increase the target value, while points to the left have negative Shapley values and decrease it.

Feature 1 will serve as an illustration here, for the case of a customer-churn model. Suppose Feature 1 indicates how many days ago a customer last interacted with the company under study: a low feature value indicates a recent interaction with the company, a high value indicates longer inactivity. Since the red dots (high feature values) all lie to the left of the vertical line, a high feature value (longer inactivity) evidently lowers the probability that the customer is "alive" (the target variable). Conversely, a low feature value (a recent interaction) increases the probability that the customer is alive. There can also be Boolean features, for instance whether or not a customer is a member of a loyalty program. In that case, the Shapley analysis will confirm the expectation that loyalty-program members have a reduced probability of churning.
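To drill into a single feature such as the inactivity feature described above, the same package offers a dependence plot. In this sketch, "days_since_last_interaction" is a hypothetical column name standing in for Feature 1, and shap_values and X_test are the objects from the earlier example.

import shap

# Scatter the Shapley values of one feature against its raw values to see
# at which point longer inactivity starts to lower the "alive" probability.
# "days_since_last_interaction" is a placeholder; use your own column name.
shap.dependence_plot(
    "days_since_last_interaction",
    shap_values,   # array returned by explainer.shap_values(X_test)
    X_test,        # the feature matrix the values were computed on
)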


