Abstract
The log-concave maximum likelihood estimator of a density on the real line based on a sample of size $n$ is known to attain the minimax optimal rate of convergence of $O(n^{-4/5})$ with respect to, for example, squared Hellinger distance. In this paper, we show that it also enjoys attractive adaptation properties, in the sense that it achieves a faster rate of convergence when the logarithm of the true density is $k$-affine (i.e., made up of $k$ affine pieces), or close to $k$-affine, provided in each case that $k$ is not too large. Our results use two different techniques: the first relies on a new Marshall’s inequality for log-concave density estimation, and reveals that when the true density is close to log-linear on its support, the log-concave maximum likelihood estimator can achieve the parametric rate of convergence in total variation distance. Our second approach depends on local bracketing entropy methods, and allows us to prove a sharp oracle inequality, which implies in particular a risk bound with respect to various global loss functions, including Kullback–Leibler divergence, of $O(\frac{k}{n}\log^{5/4}(en/k))$ when the true density is log-concave and its logarithm is close to $k$-affine.
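To make the notion of a $k$-affine logarithm concrete, a standard example is the Laplace density $f(x) = \tfrac{1}{2}e^{-|x|}$, whose logarithm is concave and consists of two affine pieces meeting at the origin. The following minimal numeric sketch (not from the paper; grid size and tolerance are illustrative choices) verifies both properties:

```python
import numpy as np

# Laplace density f(x) = 0.5 * exp(-|x|); its logarithm
# log f(x) = -log 2 - |x| is concave and "2-affine".
def log_density(x):
    return -np.log(2.0) - np.abs(x)

x = np.linspace(-20.0, 20.0, 400001)   # fine grid; tails beyond +/-20 are negligible
dx = x[1] - x[0]
f = np.exp(log_density(x))

# Riemann-sum check that f integrates to (approximately) 1.
mass = float(np.sum(f) * dx)

# Concavity check: second differences of log f are non-positive
# (zero on each affine piece, strictly negative at the kink x = 0).
second_diff = np.diff(log_density(x), 2)

print(round(mass, 4))                        # ~1.0
print(bool(np.all(second_diff <= 1e-12)))    # True
```

Densities such as this, where the log is piecewise affine with few pieces, are exactly the regime in which the abstract's faster-than-$n^{-4/5}$ adaptation rates apply.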
Citation
Arlene K. H. Kim, Adityanand Guntuboyina and Richard J. Samworth. "Adaptation in log-concave density estimation." Ann. Statist. 46 (5), 2279–2306, October 2018. https://doi.org/10.1214/17-AOS1619