Open Access
August 2020
Nonparametric regression using deep neural networks with ReLU activation function
Johannes Schmidt-Hieber
Ann. Statist. 48(4): 1875-1897 (August 2020). DOI: 10.1214/19-AOS1875


Consider the multivariate nonparametric regression model. It is shown that estimators based on sparsely connected deep neural networks with ReLU activation function and properly chosen network architecture achieve the minimax rates of convergence (up to $\log n$-factors) under a general composition assumption on the regression function. The framework includes many well-studied structural constraints such as (generalized) additive models. While there is a lot of flexibility in the network architecture, the tuning parameter is the sparsity of the network. Specifically, we consider large networks with the number of potential network parameters exceeding the sample size. The analysis gives some insights into why multilayer feedforward neural networks perform well in practice. Interestingly, for the ReLU activation function the depth (number of layers) of the neural network architecture plays an important role, and our theory suggests that for nonparametric regression, scaling the network depth with the sample size is natural. It is also shown that under the composition assumption wavelet estimators can only achieve suboptimal rates.
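To make the regime described above concrete, the following is a minimal numerical sketch (my own illustration, not the paper's estimator): a feedforward ReLU network whose depth grows logarithmically with the sample size $n$, whose total number of potential parameters exceeds $n$, and in which a random sparsity mask leaves only a small fraction of connections active. All architectural choices here (width 64, 5% sparsity, depth $\lceil \log_2 n \rceil$) are illustrative assumptions.

```python
import numpy as np

def relu(x):
    """ReLU activation, applied elementwise."""
    return np.maximum(x, 0.0)

def build_sparse_relu_net(d, width, depth, keep_frac, rng):
    """Random weights with most entries zeroed out, mimicking a
    sparsely connected architecture (illustrative only)."""
    dims = [d] + [width] * depth + [1]
    layers = []
    for fan_in, fan_out in zip(dims[:-1], dims[1:]):
        W = rng.standard_normal((fan_out, fan_in))
        b = rng.standard_normal(fan_out)
        mask = rng.random(W.shape) < keep_frac  # keep only a fraction of weights
        layers.append((W * mask, b))
    return layers

def forward(layers, x):
    """Forward pass: ReLU on hidden layers, linear output layer."""
    h = x
    for W, b in layers[:-1]:
        h = relu(W @ h + b)
    W, b = layers[-1]
    return W @ h + b

n = 500                               # sample size
depth = int(np.ceil(np.log2(n)))      # depth scaling with n, as the theory suggests
rng = np.random.default_rng(0)
net = build_sparse_relu_net(d=4, width=64, depth=depth, keep_frac=0.05, rng=rng)

total_params = sum(W.size + b.size for W, b in net)
active_params = sum(int((W != 0).sum()) + b.size for W, b in net)
# total_params exceeds n, while only ~5% of the weights are nonzero
```

Here the network has tens of thousands of potential parameters for $n = 500$ observations, yet the mask keeps only a small active subset, which is the sense in which sparsity, rather than the raw architecture size, acts as the tuning parameter.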




Received: 1 May 2018; Revised: 1 March 2019; Published: August 2020
First available in Project Euclid: 14 August 2020

MathSciNet: MR4134774
Digital Object Identifier: 10.1214/19-AOS1875

Primary: 62G08

Keywords: additive models, minimax estimation risk, multilayer neural networks, nonparametric regression, ReLU activation function, wavelets

Rights: Copyright © 2020 Institute of Mathematical Statistics

Vol. 48 • No. 4 • August 2020