Empirical likelihood ratio tests for non-nested model selection based on predictive losses
Jiancheng Jiang, Xuejun Jiang, Haofeng Wang
Bernoulli 30(2): 1458-1481 (May 2024). DOI: 10.3150/23-BEJ1640


We propose an empirical likelihood ratio (ELR) test for comparing any two supervised learning models, which may be nested, non-nested, overlapping, misspecified, or correctly specified. The test compares the prediction losses of the models based on cross-validation. We determine the asymptotic null and alternative distributions of the ELR test for comparing two nonparametric learning models under a general framework of convex loss functions. However, the prediction losses from cross-validation involve repeatedly fitting the models with one observation left out, which leads to a heavy computational burden. We introduce an easy-to-implement ELR test which requires fitting the models only once and shares the same asymptotics as the original one. The proposed tests are applied to compare additive models with varying-coefficient models. Furthermore, a scalable distributed ELR test is proposed for testing the importance of a group of variables in possibly misspecified additive models with massive data. Simulations show that the proposed tests work well and have favorable finite-sample performance compared to some existing approaches. The methodology is validated on an empirical application.
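To illustrate the general idea, the following is a minimal sketch, not the paper's procedure: it computes leave-one-out prediction losses for two simple competing models (an intercept-only model and a simple linear regression, both hypothetical stand-ins for the nonparametric learners the paper studies) and applies a classical empirical likelihood ratio test of whether the mean loss difference is zero. The Newton solver and chi-squared calibration below follow Owen's standard EL test for a mean, which need not match the null distributions derived in the paper.

```python
import numpy as np

def el_logratio(d, tol=1e-10, max_iter=100):
    """Empirical log-likelihood ratio statistic for H0: E[d] = 0.

    Solves sum_i d_i / (1 + lam*d_i) = 0 for lam by damped Newton,
    then returns -2 log R = 2 * sum_i log(1 + lam*d_i).
    Requires 0 to lie strictly inside the range of d.
    """
    d = np.asarray(d, dtype=float)
    if d.min() >= 0 or d.max() <= 0:
        raise ValueError("0 must be interior to the convex hull of d")
    lam = 0.0
    for _ in range(max_iter):
        w = 1.0 + lam * d
        g = np.sum(d / w)            # derivative of the dual in lam
        h = -np.sum((d / w) ** 2)    # second derivative, always negative
        step = g / h
        lam_new = lam - step
        # halve the step until all implied EL weights stay positive
        while np.any(1.0 + lam_new * d <= 0):
            lam_new = (lam + lam_new) / 2.0
        lam = lam_new
        if abs(step) < tol:
            break
    return 2.0 * np.sum(np.log(1.0 + lam * d))

# Toy data: y depends linearly on x, so the linear model should win.
rng = np.random.default_rng(0)
n = 80
x = rng.normal(size=n)
y = 1.0 + 0.5 * x + rng.normal(size=n)

loss_a = np.empty(n)  # squared LOO loss, intercept-only model
loss_b = np.empty(n)  # squared LOO loss, simple linear model
for i in range(n):
    mask = np.ones(n, dtype=bool)
    mask[i] = False
    xt, yt = x[mask], y[mask]
    loss_a[i] = (y[i] - yt.mean()) ** 2
    b1 = np.cov(xt, yt, bias=True)[0, 1] / xt.var()
    b0 = yt.mean() - b1 * xt.mean()
    loss_b[i] = (y[i] - (b0 + b1 * x[i])) ** 2

d = loss_a - loss_b  # per-observation loss differences
stat = el_logratio(d)
print(f"ELR statistic: {stat:.3f}  (chi^2_1 5% critical value: 3.841)")
```

The leave-one-out loop above is exactly the computational burden the abstract refers to: each of the n fits excludes one observation. The paper's easy-to-implement variant avoids this by fitting each model only once.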

Version Information

Corresponding author was noted in the Acknowledgement section.


All authors contributed equally to this work. The research of Xuejun Jiang was supported by NSFC grants 11871263 and 12271238, Guangdong NSF Fund 2017A030313012, and Shenzhen Sci-Tech Fund JCYJ20210324104803010. Xuejun Jiang is the corresponding author.


Download Citation

Jiancheng Jiang. Xuejun Jiang. Haofeng Wang. "Empirical likelihood ratio tests for non-nested model selection based on predictive losses." Bernoulli 30(2): 1458-1481, May 2024.


Received: 1 September 2022; Published: May 2024
First available in Project Euclid: 31 January 2024

MathSciNet: MR4699560
Digital Object Identifier: 10.3150/23-BEJ1640

Keywords: cross-validation, nonparametric smoothing, scalable distributed test


