Shapley values for XAI: the good, the bad and the ugly

In this talk, Anes will ask questions such as: What is the true significance of Shapley values as feature importance measures? How can Shapley Residuals help us quantify their limits? How can we better understand global feature contributions with additive importance measures? Join us as we bring game theory together with machine learning in this session.
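As background for the discussion, here is a minimal sketch of the game-theoretic definition underlying Shapley-value feature importance: each feature is a "player", a value function scores every coalition of features, and a feature's Shapley value is its weighted average marginal contribution over all coalitions. The toy linear model, the zero-baseline value function, and all names below are illustrative assumptions, not material from the talk.

```python
from itertools import combinations
from math import factorial

def shapley_values(value, n_players):
    """Exact Shapley values via the classic formula:
    phi_i = sum over S not containing i of |S|!(n-|S|-1)!/n! * (value(S ∪ {i}) - value(S))."""
    players = list(range(n_players))
    phi = [0.0] * n_players
    for i in players:
        others = [p for p in players if p != i]
        for size in range(len(others) + 1):
            weight = factorial(size) * factorial(n_players - size - 1) / factorial(n_players)
            for S in combinations(others, size):
                phi[i] += weight * (value(set(S) | {i}) - value(set(S)))
    return phi

# Toy "feature importance" game (assumed for illustration): a linear model
# f(x) = 2*x0 + 1*x1 - 3*x2, where value(S) is the model output when only the
# features in coalition S take their observed values and the rest are held at
# a zero baseline -- one common way to simulate feature "absence".
weights = [2.0, 1.0, -3.0]
x = [1.0, 4.0, 2.0]

def value(S):
    return sum(w * xi for j, (w, xi) in enumerate(zip(weights, x)) if j in S)

print(shapley_values(value, 3))  # [2.0, 4.0, -6.0]: each feature's additive contribution
```

Because the toy model is additive, each Shapley value here equals the feature's own term (w_i * x_i); for models with interactions the attributions are less clear-cut, which is where questions about the limits of Shapley values come in.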
