Feature Importance: A Closer Look at Shapley Values and LOCO
Isabella Verdinelli, Larry Wasserman
Statist. Sci. 39(4): 623-636 (November 2024). DOI: 10.1214/24-STS937

Abstract

There has been much recent interest in explainability in statistics and machine learning. One aspect of explainability is quantifying the importance of various features (or covariates). Two popular methods for defining variable importance are LOCO (Leave Out COvariates) and Shapley values. We examine the properties of these methods and their advantages and disadvantages. We are particularly interested in the effect of correlation between features, which can obscure interpretability. Contrary to some claims, Shapley values do not eliminate feature correlation. We critique the game-theoretic axioms for Shapley values and question their relevance for assessing feature importance. We propose new, more statistically oriented axioms for feature importance, along with some measures that satisfy them. However, correcting for correlation is a Faustian bargain: removing the effect of correlation creates other forms of bias. Ultimately, we recommend a slightly modified version of LOCO. We also briefly consider how to modify Shapley values to better address feature correlation.
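The LOCO idea described in the abstract can be sketched in a few lines: fit a predictor on all features, then refit with each covariate left out, and report the resulting increase in predictive risk. Below is a minimal illustration under a linear-model assumption with squared-error risk; the function name `loco_importance` and the least-squares fitting step are illustrative choices, not the paper's exact estimator (the paper also discusses modifications and inference, which this sketch omits).

```python
import numpy as np

def loco_importance(X_train, y_train, X_test, y_test):
    """LOCO sketch: importance of feature j is the increase in test
    mean-squared error when the model is refit without feature j.
    Uses ordinary least squares with an intercept as the predictor."""
    def risk(cols):
        # Fit OLS on the selected columns, evaluate MSE on the test set.
        A = np.c_[np.ones(len(X_train)), X_train[:, cols]]
        beta, *_ = np.linalg.lstsq(A, y_train, rcond=None)
        pred = np.c_[np.ones(len(X_test)), X_test[:, cols]] @ beta
        return np.mean((y_test - pred) ** 2)

    d = X_train.shape[1]
    full_risk = risk(list(range(d)))
    # Drop each covariate in turn and record the excess risk.
    return np.array(
        [risk([k for k in range(d) if k != j]) - full_risk for j in range(d)]
    )

# Toy example: y depends on the first feature only, so LOCO should
# assign it a much larger score than the irrelevant second feature.
rng = np.random.default_rng(0)
X_tr, X_te = rng.normal(size=(300, 2)), rng.normal(size=(300, 2))
y_tr = 2 * X_tr[:, 0] + 0.1 * rng.normal(size=300)
y_te = 2 * X_te[:, 0] + 0.1 * rng.normal(size=300)
scores = loco_importance(X_tr, y_tr, X_te, y_te)
```

Note that when features are correlated, dropping one covariate lets its correlated partners absorb its predictive contribution, which is exactly the interpretability issue the paper analyzes.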

Acknowledgments

The authors thank Art Owen and the reviewers for helpful comments.

Citation

Isabella Verdinelli and Larry Wasserman. "Feature Importance: A Closer Look at Shapley Values and LOCO." Statist. Sci. 39(4): 623-636, November 2024. https://doi.org/10.1214/24-STS937

Information

Published: November 2024
First available in Project Euclid: 30 October 2024

Digital Object Identifier: 10.1214/24-STS937

Keywords: feature importance, interpretability, LOCO, Shapley values

Rights: Copyright © 2024 Institute of Mathematical Statistics
