Pearson’s ρ is the most widely used measure of statistical dependence. It gives a complete characterization of dependence in the Gaussian case, and it also works well in some non-Gaussian situations. It is well known, however, that it has a number of shortcomings, in particular for heavy-tailed distributions and in nonlinear situations, where it may produce misleading, or even disastrous, results. In recent years, a number of alternatives have been proposed. In this paper, we survey these developments, especially results obtained in the last couple of decades. Among the measures discussed are the copula, distribution-based measures, the distance covariance, the HSIC measure popular in machine learning and, finally, the local Gaussian correlation, which is a local version of Pearson’s ρ. Throughout, we put the emphasis on conceptual developments and on comparing them. We point out relevant references to technical details, as well as to comparative empirical and simulated experiments. There is a broad selection of references under each topic treated.
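To illustrate the contrast between Pearson’s ρ and one of the alternatives surveyed here, the following is a minimal sketch (not the authors’ code) that computes the sample distance correlation of Székely and Rizzo via double-centered distance matrices, and compares it with Pearson’s ρ on a purely nonlinear relationship, Y = X², where the population Pearson correlation is zero despite Y being a deterministic function of X:

```python
import numpy as np

def distance_correlation(x, y):
    """Sample distance correlation (Szekely-Rizzo), univariate case."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    # Pairwise Euclidean distance matrices.
    a = np.abs(x[:, None] - x[None, :])
    b = np.abs(y[:, None] - y[None, :])
    # Double-centering: subtract row and column means, add back the grand mean.
    A = a - a.mean(axis=0) - a.mean(axis=1)[:, None] + a.mean()
    B = b - b.mean(axis=0) - b.mean(axis=1)[:, None] + b.mean()
    dcov2 = (A * B).mean()          # squared sample distance covariance
    dvar_x = (A * A).mean()         # squared distance variances
    dvar_y = (B * B).mean()
    return np.sqrt(dcov2 / np.sqrt(dvar_x * dvar_y))

rng = np.random.default_rng(0)
x = rng.standard_normal(500)
y = x ** 2                          # strong but nonlinear dependence
r = np.corrcoef(x, y)[0, 1]         # Pearson's rho: near zero here
dc = distance_correlation(x, y)     # distance correlation: clearly positive
```

Here Pearson’s ρ fails to detect the dependence, while the distance correlation, which is zero if and only if X and Y are independent, picks it up.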
This work has been partially supported by the Finance Market Fund (Norway).
We are grateful to the Editor and to two anonymous referees for a number of valuable comments and suggestions.
"Statistical Dependence: Beyond Pearson’s ρ." Statist. Sci. 37(1): 90–109, February 2022. https://doi.org/10.1214/21-STS823