Open Access
Robust nearest-neighbor methods for classifying high-dimensional data
Yao-ban Chan, Peter Hall
Ann. Statist. 37(6A): 3186-3203 (December 2009). DOI: 10.1214/08-AOS591


We suggest a robust nearest-neighbor approach to classifying high-dimensional data. The method enhances sensitivity by employing a threshold, and it truncates the data to a sequence of zeros and ones in order to reduce the deleterious impact of heavy tails. Empirical rules are suggested for choosing the threshold. They require the bare minimum of data: only one data vector is needed from each population. Theoretical and numerical aspects of performance are explored, paying particular attention to the impacts of correlation and heterogeneity among data components. On the theoretical side, it is shown that our truncated, thresholded, nearest-neighbor classifier enjoys the same classification boundary as more conventional, nonrobust approaches, which require finite moments in order to achieve good performance. In particular, the greater robustness of our approach does not come at the price of reduced effectiveness. Moreover, when both training sample sizes equal 1, our new method can have performance equal to that of optimal classifiers that require independent and identically distributed data with known marginal distributions; yet, our classifier does not itself need conditions of this type.
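The core idea of the abstract can be sketched in a few lines: each data vector is reduced to a zero–one sequence by thresholding the absolute value of its components, and a new observation is assigned to the population whose (single) training vector is nearest in Hamming distance. This is only an illustrative sketch, not the authors' exact rule; in particular, the threshold `t` is supplied by the user here, whereas the paper proposes empirical rules for choosing it.

```python
import numpy as np

def threshold_binarize(x, t):
    """Truncate a data vector to zeros and ones:
    component j becomes 1 if |x_j| exceeds the threshold t."""
    return (np.abs(np.asarray(x)) > t).astype(int)

def classify(z, x, y, t):
    """Assign the new vector z to population 0 (training vector x)
    or population 1 (training vector y), whichever thresholded
    binary sequence is nearer to z's in Hamming distance.
    Ties are broken in favor of population 0."""
    zb = threshold_binarize(z, t)
    dx = np.sum(zb != threshold_binarize(x, t))  # Hamming distance to x
    dy = np.sum(zb != threshold_binarize(y, t))  # Hamming distance to y
    return 0 if dx <= dy else 1
```

Because the comparison uses only the binary exceedance pattern, a single grossly outlying component contributes at most 1 to the distance, which is the source of the robustness to heavy-tailed data described above.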




Published: December 2009
First available in Project Euclid: 17 August 2009

zbMATH: 1191.62113
MathSciNet: MR2549557
Digital Object Identifier: 10.1214/08-AOS591

Primary: 62H30

Keywords: classification boundary, detection boundary, false discovery rate, heterogeneous components, higher criticism, optimal classification, threshold, zero–one data

Rights: Copyright © 2009 Institute of Mathematical Statistics

Vol. 37 • No. 6A • December 2009