Open Access
Estimation error analysis of deep learning on the regression problem on the variable exponent Besov space
Kazuma Tsuji, Taiji Suzuki
Electron. J. Statist. 15(1): 1869-1908 (2021). DOI: 10.1214/21-EJS1828


Deep learning has achieved notable success in various fields, including image and speech recognition. One factor behind this success is its high feature extraction ability. In this study, we focus on the adaptivity of deep learning; to this end, we treat the variable exponent Besov space, whose smoothness varies with the input location x. In other words, the difficulty of the estimation is not uniform over the domain. We analyze the general approximation error of the variable exponent Besov space and the approximation and estimation errors of deep learning. We show that the improvement due to adaptivity is remarkable when the region on which the target function is less smooth is small and the input dimension is large. Moreover, we show the superiority of deep learning to linear estimators with respect to the convergence rate of the estimation error.
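To make the "smoothness depending on the input location x" idea concrete, here is a toy sketch (not from the paper; the exponent function alpha below is a hypothetical choice for illustration): a target function of the form |x|^alpha(x), which is rough near x = 0, where alpha(x) is small, and smooth elsewhere. Variable exponent Besov spaces formalize exactly this kind of location-dependent regularity.

```python
def alpha(x):
    # Hypothetical local smoothness exponent: close to 0.3 near x = 0
    # (rough region), growing to 2.0 away from 0 (smooth region).
    return 0.3 + 1.7 * min(abs(x), 1.0)

def f(x):
    # Toy target function; its local Hoelder regularity at x is
    # governed by alpha(x), so the estimation difficulty is not
    # uniform over the domain.
    return abs(x) ** alpha(x)

for x in (-1.0, -0.1, 0.0, 0.1, 1.0):
    print(f"f({x:+.1f}) = {f(x):.4f}")
```

The abstract's point is that a deep network can allocate its capacity adaptively, concentrating on the small rough region near 0, whereas a linear estimator must use one fixed resolution everywhere.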

Funding Statement

TS was partially supported by JSPS KAKENHI (18K19793, 18H03201, and 20H00576), Japan Digital Design, and JST CREST.


Acknowledgments

We would like to thank Sho Sonoda, Koichi Taniguchi, Masahiro Ikeda, Mitsuo Izuki, and Takahiro Noi for the discussions. We would like to thank Editage for English language editing.


Download Citation

Kazuma Tsuji. Taiji Suzuki. "Estimation error analysis of deep learning on the regression problem on the variable exponent Besov space." Electron. J. Statist. 15 (1) 1869 - 1908, 2021.


Received: 1 October 2020; Published: 2021
First available in Project Euclid: 26 March 2021

Digital Object Identifier: 10.1214/21-EJS1828

Keywords: adaptive approximation, deep learning, neural network, nonparametric regression, variable exponent Besov
