An information-theoretic perspective on model interpretation

In the ninth seminar of our XAI series, Kristof Schröder, Senior Research Engineer at appliedAI, will discuss how maximizing the mutual information between selected features and the response variable can aid model interpretation, offering an information-theoretic perspective on AI models.

Abstract: Providing explainability in a model-agnostic way is a challenging task. We take a look at the work of Chen et al. [Che18L], known as L2X, which tackles the problem from an information-theoretic point of view. The key idea is to stack an explainer model and a variational approximation in order to maximize a relaxation of the mutual information objective, which enables instance-wise feature selection.
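To make the idea concrete, here is a minimal PyTorch sketch of the L2X scheme under stated assumptions: the network shapes, the names (L2XSketch, sample_mask) and the hyperparameters are illustrative, not the paper's exact setup. An explainer network produces per-feature selection logits, a relaxed k-hot mask is sampled via the Gumbel-softmax trick, and a variational network q predicts the response from the masked input; training q this way maximizes a variational lower bound on the mutual information I(X_S; Y) between the selected features X_S and the response Y.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class L2XSketch(nn.Module):
    """Explainer and variational approximation, stacked as in L2X."""

    def __init__(self, d_in: int, n_classes: int, k: int, tau: float = 0.5):
        super().__init__()
        self.k, self.tau = k, tau
        # Explainer: maps an instance to per-feature selection logits.
        self.explainer = nn.Sequential(
            nn.Linear(d_in, 64), nn.ReLU(), nn.Linear(64, d_in))
        # Variational approximation q(Y | X_S) of the conditional response.
        self.q = nn.Sequential(
            nn.Linear(d_in, 64), nn.ReLU(), nn.Linear(64, n_classes))

    def sample_mask(self, logits: torch.Tensor) -> torch.Tensor:
        # Relaxed k-hot subset sampling: draw k independent Gumbel-softmax
        # samples over the features and take their element-wise maximum.
        samples = torch.stack(
            [F.gumbel_softmax(logits, tau=self.tau) for _ in range(self.k)])
        return samples.max(dim=0).values

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        mask = self.sample_mask(self.explainer(x))
        return self.q(x * mask)  # logits of q(Y | X_S)

# One training step on toy data: minimizing the cross-entropy of the
# masked prediction maximizes a variational lower bound on I(X_S; Y).
model = L2XSketch(d_in=20, n_classes=2, k=5)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x, y = torch.randn(32, 20), torch.randint(0, 2, (32,))
loss = F.cross_entropy(model(x), y)
opt.zero_grad()
loss.backward()
opt.step()
```

At explanation time, the k features with the largest explainer logits for a given instance serve as its instance-wise explanation.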

References

[Che18L] Jianbo Chen, Le Song, Martin J. Wainwright, Michael I. Jordan. Learning to Explain: An Information-Theoretic Perspective on Model Interpretation. ICML 2018.
