Open Access
2022 Interpretable machine learning: Fundamental principles and 10 grand challenges
Cynthia Rudin, Chaofan Chen, Zhi Chen, Haiyang Huang, Lesia Semenova, Chudi Zhong
Statist. Surv. 16: 1-85 (2022). DOI: 10.1214/21-SS133

Abstract

Interpretability in machine learning (ML) is crucial for high-stakes decisions and troubleshooting. In this work, we provide fundamental principles for interpretable ML, and dispel common misunderstandings that dilute the importance of this crucial topic. We also identify 10 technical challenge areas in interpretable machine learning and provide history and background on each problem. Some of these problems are classically important, and some have arisen only in the last few years. These problems are: (1) Optimizing sparse logical models such as decision trees; (2) Optimizing scoring systems; (3) Placing constraints into generalized additive models to encourage sparsity and better interpretability; (4) Modern case-based reasoning, including neural networks and matching for causal inference; (5) Complete supervised disentanglement of neural networks; (6) Complete or even partial unsupervised disentanglement of neural networks; (7) Dimensionality reduction for data visualization; (8) Machine learning models that can incorporate physics and other generative or causal constraints; (9) Characterization of the “Rashomon set” of good models; and (10) Interpretable reinforcement learning. This survey is suitable as a starting point for statisticians and computer scientists interested in working in interpretable machine learning.

Funding Statement

Partial support provided by grants DOE DE-SC0021358, NSF DGE-2022040, NSF CCF-1934964, and NIDA DA054994-01.

Acknowledgments

We thank Leonardo Lucio Custode for pointing out several useful references for Challenge 10, and David Page for providing useful references on early explainable ML. We also thank the anonymous reviewers, whose comments were extremely helpful.

Citation

Cynthia Rudin, Chaofan Chen, Zhi Chen, Haiyang Huang, Lesia Semenova, Chudi Zhong. "Interpretable machine learning: Fundamental principles and 10 grand challenges." Statist. Surv. 16: 1-85 (2022). https://doi.org/10.1214/21-SS133

Information

Received: 1 March 2021; Published: 2022
First available in Project Euclid: 10 January 2022

arXiv: 2103.11251
MathSciNet: MR4361744
zbMATH: 07471610
Digital Object Identifier: 10.1214/21-SS133

Subjects:
Primary: 68T01
Secondary: 62-02

Keywords: explainable machine learning, interpretable machine learning
