The Annals of Statistics

Adaptation in log-concave density estimation

Arlene K. H. Kim, Adityanand Guntuboyina, and Richard J. Samworth

The log-concave maximum likelihood estimator of a density on the real line based on a sample of size $n$ is known to attain the minimax optimal rate of convergence of $O(n^{-4/5})$ with respect to, for example, squared Hellinger distance. In this paper, we show that it also enjoys attractive adaptation properties, in the sense that it achieves a faster rate of convergence when the logarithm of the true density is $k$-affine (i.e., made up of $k$ affine pieces), or close to $k$-affine, provided in each case that $k$ is not too large. Our results use two different techniques: the first relies on a new Marshall’s inequality for log-concave density estimation, and reveals that when the true density is close to log-linear on its support, the log-concave maximum likelihood estimator can achieve the parametric rate of convergence in total variation distance. Our second approach depends on local bracketing entropy methods, and allows us to prove a sharp oracle inequality, which implies in particular a risk bound with respect to various global loss functions, including Kullback–Leibler divergence, of $O(\frac{k}{n}\log^{5/4}(en/k))$ when the true density is log-concave and its logarithm is close to $k$-affine.
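To make the object of study concrete: the log-concave MLE maximizes the log-likelihood over all densities whose logarithm is concave, and its log-density is piecewise affine with knots at the data points. The sketch below is not the authors' method or the active-set algorithm of the logcondens package; it is a minimal numerical illustration that optimizes the finite-dimensional objective directly, with the parameterization, solver choice (SciPy's SLSQP), and function names being our own assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def logcon_mle(x):
    """Illustrative fit of the univariate log-concave MLE.

    Since the MLE's log-density is piecewise affine with knots at the
    data, it suffices to optimize over the values phi_i at the sorted
    observations: maximize (1/n) sum_i phi_i - int exp(phi), subject
    to concavity.  The integral term makes the optimum a genuine
    density (it integrates to one at stationarity).
    """
    x = np.sort(np.asarray(x, dtype=float))
    dx = np.diff(x)  # assumes distinct observations

    def seg_integrals(phi):
        # closed-form integral of exp(affine interpolant) per interval;
        # expm1 keeps this stable when the endpoint values are close
        a, d = phi[:-1], np.diff(phi)
        safe = np.where(np.abs(d) < 1e-12, 1.0, d)
        return np.where(np.abs(d) < 1e-12,
                        dx * np.exp(a),
                        dx * np.exp(a) * np.expm1(safe) / safe)

    def neg_objective(phi):
        return -(phi.mean() - seg_integrals(phi).sum())

    def concavity(phi):
        # successive slopes must be non-increasing: entries >= 0
        slopes = np.diff(phi) / dx
        return slopes[:-1] - slopes[1:]

    phi0 = np.full(len(x), -np.log(x[-1] - x[0]))  # uniform start, feasible
    res = minimize(neg_objective, phi0, method="SLSQP",
                   constraints=[{"type": "ineq", "fun": concavity}],
                   options={"maxiter": 500, "ftol": 1e-10})
    return x, res.x

rng = np.random.default_rng(0)
sample = rng.laplace(size=40)   # true log-density is 2-affine
knots, phi = logcon_mle(sample)
```

A Laplace sample is used deliberately: its log-density consists of two affine pieces ($k=2$), so it falls in the adaptation regime the abstract describes, where faster-than-$n^{-4/5}$ rates are available.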

Article information

Ann. Statist., Volume 46, Number 5 (2018), 2279–2306.

Received: September 2016
Revised: July 2017
First available in Project Euclid: 17 August 2018


Primary: 62G07 (density estimation); 62G20 (asymptotic properties)

Keywords: adaptation; bracketing entropy; log-concavity; maximum likelihood estimation; Marshall’s inequality


Kim, Arlene K. H.; Guntuboyina, Adityanand; Samworth, Richard J. Adaptation in log-concave density estimation. Ann. Statist. 46 (2018), no. 5, 2279–2306. doi:10.1214/17-AOS1619.


References
  • Baraud, Y. and Birgé, L. (2016). RHO-estimators for shape restricted estimation. Stochastic Process. Appl. 126 3888–3912.
  • Barlow, R. E., Bartholomew, D. J., Bremner, J. M. and Brunk, H. D. (1972). Statistical Inference Under Order Restrictions. Wiley, London.
  • Birgé, L. (1987). Estimating a density under order restrictions: Nonasymptotic minimax risk. Ann. Statist. 15 995–1012.
  • Chatterjee, S., Guntuboyina, A. and Sen, B. (2015). On risk bounds in isotonic and other shape restricted regression problems. Ann. Statist. 43 1774–1800.
  • Chatterjee, S., Guntuboyina, A. and Sen, B. (2018). On matrix estimation under monotonicity constraints. Bernoulli 24 1072–1100.
  • Chen, Y. and Samworth, R. J. (2016). Generalised additive and index models with shape constraints. J. Roy. Statist. Soc. Ser. B 78 729–754.
  • Chen, Y. and Wellner, J. A. (2016). On convex least squares estimation when the truth is linear. Electron. J. Stat. 10 171–209.
  • Cule, M. and Samworth, R. (2010). Theoretical properties of the log-concave maximum likelihood estimator of a multidimensional density. Electron. J. Stat. 4 254–270.
  • Cule, M., Samworth, R. and Stewart, M. (2010). Maximum likelihood estimation of a multi-dimensional log-concave density. J. Roy. Statist. Soc. Ser. B 72 545–607.
  • Doss, C. and Wellner, J. A. (2016). Global rates of convergence of the MLEs of log-concave and $s$-concave densities. Ann. Statist. 44 954–981.
  • Dümbgen, L. and Rufibach, K. (2009). Maximum likelihood estimation of a log-concave density and its distribution function: Basic properties and uniform consistency. Bernoulli 15 40–68.
  • Dümbgen, L. and Rufibach, K. (2011). logcondens: Computations related to univariate log-concave density estimation. J. Stat. Softw. 39 1–28.
  • Dümbgen, L., Rufibach, K. and Wellner, J. A. (2007). Marshall’s lemma for convex density estimation. In Asymptotics: Particles, Processes and Inverse Problems. Institute of Mathematical Statistics Lecture Notes—Monograph Series 55 101–107. IMS, Beachwood, OH.
  • Dümbgen, L., Samworth, R. and Schuhmacher, D. (2011). Approximation by log-concave distributions, with applications to regression. Ann. Statist. 39 702–730.
  • Dvoretzky, A., Kiefer, J. and Wolfowitz, J. (1956). Asymptotic minimax character of the sample distribution function and of the classical multinomial estimator. Ann. Math. Stat. 27 642–669.
  • Grenander, U. (1956). On the theory of mortality measurement. II. Skand. Aktuarietidskr. 39 125–153.
  • Groeneboom, P. and Jongbloed, G. (2014). Nonparametric Estimation Under Shape Constraints. Cambridge Univ. Press, Cambridge.
  • Groeneboom, P., Jongbloed, G. and Wellner, J. A. (2001). Estimation of a convex function: Characterizations and asymptotic theory. Ann. Statist. 29 1653–1698.
  • Han, Q. and Wellner, J. A. (2016). Multivariate convex regression: Global risk bounds and adaptation. Available at
  • Hildreth, C. (1954). Point estimates of ordinates of concave functions. J. Amer. Statist. Assoc. 49 598–619.
  • Kim, A. K. H., Guntuboyina, A. and Samworth, R. J. (2018). Supplement to “Adaptation in log-concave density estimation.” DOI:10.1214/17-AOS1619SUPP.
  • Kim, A. K. H. and Samworth, R. J. (2016). Global rates of convergence in log-concave density estimation. Ann. Statist. 44 2756–2779.
  • Lim, E. and Glynn, P. W. (2012). Consistency of multidimensional convex regression. Oper. Res. 60 196–208.
  • Marshall, A. W. (1970). Discussion of Barlow and van Zwet’s paper. In Nonparametric Techniques in Statistical Inference. Proceedings of the First International Symposium on Nonparametric Techniques Held at Indiana University (M. L. Puri, ed.) 174–176. Cambridge Univ. Press, Cambridge.
  • Massart, P. (1990). The tight constant in the Dvoretzky–Kiefer–Wolfowitz inequality. Ann. Probab. 18 1269–1283.
  • Pal, J., Woodroofe, M. and Meyer, M. (2007). Estimating a Polya frequency function. In Complex Datasets and Inverse Problems: Tomography, Networks and Beyond (R. Liu, W. Strawderman and C.-H. Zhang, eds.). IMS Lecture Notes—Monograph Series 54 239–249.
  • Samworth, R. J. and Yuan, M. (2012). Independent component analysis via nonparametric maximum likelihood estimation. Ann. Statist. 40 2973–3002.
  • Schuhmacher, D. and Dümbgen, L. (2010). Consistency of multivariate log-concave density estimators. Statist. Probab. Lett. 80 376–380.
  • Seijo, E. and Sen, B. (2011). Nonparametric least squares estimation of a multivariate convex regression. Ann. Statist. 39 1633–1657.
  • Seregin, A. and Wellner, J. A. (2010). Nonparametric estimation of multivariate convex-transformed densities. Ann. Statist. 38 3751–3781.
  • van de Geer, S. (2000). Applications of Empirical Process Theory. Cambridge Univ. Press, Cambridge.
  • Van Eeden, C. (1956). Maximum likelihood estimation of ordered probabilities. In Indagationes Mathematicae (Proceedings) 59 444–455. North-Holland, Amsterdam.
  • Walther, G. (2002). Detecting the presence of mixing with multiscale maximum likelihood. J. Amer. Statist. Assoc. 97 508–513.
  • Zhang, C.-H. (2002). Risk bounds in isotonic regression. Ann. Statist. 30 528–555.

Supplemental materials

  • Supplement to “Adaptation in log-concave density estimation”. Auxiliary results.