Workshop on Structure Adapting Methods - Abstract

Picard, Dominique

LOL: Learning out of leaders

Joint work with: G. Kerkyacharian, M. Mougeot, K. Tribouley

In this talk, we consider the general linear model, and we are mostly interested in the case where the number $p$ of regressors is large compared with the number $n$ of observations (although there is no such restriction). We construct an algorithm (called LOL) which consists of two thresholding steps. Since we do not perform any optimization step, the procedure is one of the simplest, and it is important to determine in which domains it is competitive with more sophisticated algorithms, especially those performing one- or two-step $l_1$ minimization. We investigate the properties of the LOL procedure from two different points of view: the prediction problem and the estimation problem. More precisely, we establish that the LOL procedure has a prediction error which tends to zero in probability at exponential rates. We also establish that our procedure performs well with regard to detection, since the numbers of false negatives and false positives tend to zero in probability at fast rates. From both a theoretical and a practical point of view, when the coherence of the data is small, we prove that the LOL procedure is as powerful as the best available procedures.
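To make the two-step structure concrete, here is a minimal sketch of a procedure of this flavor, under the assumption that the first thresholding step screens regressors by the magnitude of their empirical correlation with the response (the "leaders") and the second step fits least squares on the leaders and thresholds the resulting coefficients. The function name `lol_sketch` and the thresholds `lambda1`, `lambda2` are illustrative choices, not taken from the abstract; the actual LOL procedure and its tuning are specified in the paper.

```python
import numpy as np

def lol_sketch(X, y, lambda1=0.25, lambda2=0.5):
    """Hypothetical two-step thresholding in the spirit of LOL.

    Step 1: keep the "leaders", i.e. regressors whose normalized empirical
            correlation with the response exceeds lambda1.
    Step 2: ordinary least squares on the leaders only, then threshold the
            fitted coefficients at lambda2.
    """
    n, p = X.shape
    Xn = X / np.linalg.norm(X, axis=0)             # unit-norm columns
    corr = np.abs(Xn.T @ (y / np.linalg.norm(y)))  # correlations in [0, 1]
    leaders = np.flatnonzero(corr > lambda1)       # first thresholding step
    beta = np.zeros(p)
    if leaders.size:
        coef, *_ = np.linalg.lstsq(X[:, leaders], y, rcond=None)
        coef[np.abs(coef) <= lambda2] = 0.0        # second thresholding step
        beta[leaders] = coef
    return beta

# Toy usage: sparse signal with p > n.
rng = np.random.default_rng(0)
n, p = 100, 500
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:5] = 3.0
y = X @ beta_true + 0.5 * rng.standard_normal(n)
beta_hat = lol_sketch(X, y)
print("selected regressors:", np.flatnonzero(beta_hat))
```

As the abstract notes, no optimization problem is solved anywhere: both steps reduce to correlations, one restricted least-squares fit, and thresholding, which is what makes the procedure so simple compared with $l_1$-minimization methods.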