Abstract
Deep learning has achieved notable success in various fields, including image and speech recognition. One factor behind this successful performance is its high feature extraction ability. In this study, we focus on the adaptivity of deep learning; accordingly, we treat the variable exponent Besov space, whose smoothness differs depending on the input location x. In other words, the difficulty of the estimation is not uniform over the domain. We analyze the general approximation error on the variable exponent Besov space, as well as the approximation and estimation errors of deep learning. We note that the improvement owing to adaptivity is remarkable when the region on which the target function is less smooth is small and the dimension is large. Moreover, superiority over linear estimators is shown with respect to the convergence rate of the estimation error.
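The phrase "smoothness depending on the input location x" can be made concrete with a schematic comparison. The notation below is an illustrative sketch following standard conventions in the variable exponent literature, not a definition quoted from the paper itself:

```latex
% Classical Besov space: a single smoothness exponent s for the whole domain.
f \in B^{s}_{p,q}(\Omega)
% Variable exponent Besov space: the exponents are functions of the location x,
f \in B^{s(\cdot)}_{p(\cdot),q(\cdot)}(\Omega),
  \qquad s : \Omega \to (0,\infty),
% so f may be very smooth on most of \Omega while s(x) is small only on a
% small subregion \Omega_0 \subset \Omega, i.e. estimation is hard only there.
```

In this setting, an adaptive estimator such as deep learning can concentrate its capacity on the low-smoothness subregion, which is the source of the improvement the abstract describes.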
Funding Statement
TS was partially supported by JSPS KAKENHI (18K19793, 18H03201, and 20H00576), Japan Digital Design, and JST CREST.
Acknowledgments
We would like to thank Sho Sonoda, Koichi Taniguchi, Masahiro Ikeda, Mitsuo Izuki, and Takahiro Noi for the discussions. We would also like to thank Editage (www.editage.com) for English language editing.
Citation
Kazuma Tsuji, Taiji Suzuki. "Estimation error analysis of deep learning on the regression problem on the variable exponent Besov space." Electron. J. Statist. 15 (1) 1869–1908, 2021. https://doi.org/10.1214/21-EJS1828