Research Group "Stochastic Algorithms and Nonparametric Statistics"
Research Seminar "Mathematical Statistics" Winter Semester 2017/2018
|
|
18.10.17 | Prof. Dr. Vladimir Spokoiny (WIAS and HU Berlin) |
Big ball probability with applications in statistical inference We derive bounds on the Kolmogorov distance between the probabilities of two Gaussian elements hitting a ball in a Hilbert space. The key property of these bounds is that they are dimension-free and depend on the nuclear (Schatten-one) norm of the difference between the covariance operators of the elements. We are also interested in anti-concentration bounds for the squared norm of a non-centered Gaussian element in a Hilbert space. All bounds are sharp and cannot be improved in general. We also provide a list of motivating examples and applications of the derived results in statistical inference. (Joint work with Götze, Naumov and Ulyanov.) |
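To fix ideas, the central quantity can be written as follows (notation ours, not taken from the talk): for Gaussian elements ξ and η in a Hilbert space with covariance operators Σ_ξ and Σ_η, and balls B(a, r) of radius r around a fixed center a, the bounds control
\[
\sup_{r>0}\bigl|\,\mathbb{P}\bigl(\xi\in B(a,r)\bigr)-\mathbb{P}\bigl(\eta\in B(a,r)\bigr)\,\bigr|
\;\lesssim\;\|\Sigma_\xi-\Sigma_\eta\|_{1},
\qquad
\|A\|_{1}:=\sum_{k} s_k(A),
\]
where the s_k(A) are singular values, up to factors depending on the leading eigenvalues of the covariance operators (and on the difference of the means, omitted here) but not on the dimension. This is only a schematic form of the dimension-free bounds described in the abstract.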
|
25.10.17 | Debarghya Ghoshdastidar (Universität Tübingen) |
Two-sample tests for inhomogeneous random graphs In this work, we consider the problem of testing between two populations of inhomogeneous random graphs defined on the same set of vertices. We are particularly interested in the high-dimensional setting where the population size is potentially much smaller than the graph size, and may even be constant. It is known that this setting cannot be tackled if the separation between the two models is quantified in terms of total variation distance. Hence, we study two-sample testing problems where the separation between models is quantified by the Frobenius or operator norms of the difference between the population adjacency matrices. We derive upper and lower bounds for the minimax separation rate for these problems. Interestingly, the proposed near-optimal tests are uniformly consistent in both the "large graph, small sample" and "small graph, large sample" regimes. This is a joint work with Maurilio Gutzeit, Alexandra Carpentier and Ulrike von Luxburg. |
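In symbols (our formalization of the abstract): writing P and Q for the population adjacency (edge-probability) matrices of the two models on the same n vertices, and m for the number of observed graphs per population, the testing problem with Frobenius separation reads
\[
H_0:\ P=Q
\qquad\text{vs.}\qquad
H_1:\ \|P-Q\|_F\ \ge\ \rho
\quad\bigl(\text{or } \|P-Q\|_{\mathrm{op}}\ \ge\ \rho\bigr),
\]
and the minimax separation rate is the smallest ρ, as a function of n and m, for which some test keeps both error probabilities small.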
|
01.11.17 | Prof. Dr. Denis Belomestny (Universität Duisburg-Essen) |
Statistical inference for McKean-Vlasov SDEs McKean-Vlasov SDEs provide a very rich modelling framework for large complex systems. They appear naturally in the modelling and simulation of turbulent flows by the fluid-particle method, and in biomathematics a McKean-Vlasov SDE model for neuronal networks has been proposed. Although potentially very powerful, the lack of efficient statistical procedures prevents the further expansion of these models into application areas. When proposing a McKean-Vlasov SDE model, one of the main challenges is the appropriate choice of the coefficients. In this talk, we study the problem of nonparametric estimation of the McKean-Vlasov diffusion coefficients from low-frequency observations. |
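For orientation, a McKean-Vlasov SDE can be written in the standard form (generic notation, not specific to the talk)
\[
dX_t \;=\; b(X_t,\mu_t)\,dt+\sigma(X_t,\mu_t)\,dW_t,
\qquad
\mu_t=\mathrm{Law}(X_t),
\]
so the drift b and the diffusion coefficient σ depend on the law of the solution itself; the statistical task described above is to estimate these coefficients nonparametrically from discrete, low-frequency observations of X.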
|
08.11.17 | Prof. Arnak Dalalyan (ENSAE ParisTech) |
On the exponentially weighted aggregate with the Laplace prior In this talk, we will present some results on the statistical behaviour of the Exponentially Weighted Aggregate (EWA) in the problem of high-dimensional regression with fixed design. Under the assumption that the underlying regression vector is sparse, it is reasonable to use the Laplace distribution as a prior. The resulting estimator and, specifically, a particular instance of it referred to as the Bayesian lasso, has already been used in the statistical literature because of its computational convenience, even though no thorough mathematical analysis of its statistical properties had been carried out. The results of this talk fill this gap by establishing sharp oracle inequalities for the EWA with the Laplace prior. These inequalities show that if the temperature parameter is small, the EWA with the Laplace prior satisfies the same type of oracle inequality as the lasso estimator, as long as the quality of estimation is measured by the prediction loss. Extensions of the proposed methodology to the problem of prediction with low-rank matrices will be discussed as well. (Based on joint work with Edwin Grappin and Quentin Paris.) |
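A minimal sketch of the estimator in standard notation (ours, not a transcript of the talk): for observations Y = Xθ* + ξ with fixed design matrix X, the EWA with Laplace prior is the pseudo-posterior mean
\[
\hat\theta_{\mathrm{EWA}}
=\int_{\mathbb{R}^p}\theta\,\hat\pi(\theta)\,d\theta,
\qquad
\hat\pi(\theta)\ \propto\ \exp\!\Bigl(-\tfrac{1}{\beta}\,\|Y-X\theta\|_2^2\Bigr)\,
\exp\!\bigl(-\lambda\|\theta\|_1\bigr),
\]
with temperature β and prior scale λ; the oracle inequalities mentioned above concern the regime where the temperature β is small.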
|
15.11.17 | Prof. Enkelejd Hashorva (Universität Lausanne) |
From classical to Parisian ruin in Gaussian risk models This talk is concerned with Gaussian risk models, which approximate reasonably well the risk process of an insurance company. Such models incorporate various financial elements related to inflation/deflation and taxation. Of interest, also from the probabilistic point of view, is the approximation of the ruin probability and the ruin time when the initial capital is large. The concept of Parisian ruin is quite new and appealing for mathematical models of insurance risks. However, the calculation of the Parisian ruin probability and the Parisian ruin time is a hard problem. Recent research has also focused on the investigation of multi-valued risk models, analysing the ruin probability and the ruin time. Currently, due to the lack of appropriate tools, results are available only for the Brownian risk model. In this talk, various approximations of ruin probabilities and ruin times for both the classical and the Parisian case will be discussed, including results for the multi-valued Brownian risk model. Joint work with K. Debicki, University of Wroclaw, and L. Ji, University of Lausanne. |
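To fix ideas, the two ruin concepts can be formalized as follows (generic definitions, our notation): for a risk process R_t = u + ct − X_t with initial capital u, premium rate c and claim process X,
\[
\psi(u)=\mathbb{P}\bigl(\exists\,t\ge 0:\ R_t<0\bigr),
\qquad
\psi_S(u)=\mathbb{P}\Bigl(\exists\,t\ge 0:\ \sup_{s\in[t,\,t+S]}R_s<0\Bigr),
\]
so classical ruin only requires the surplus to drop below zero at some time, while Parisian ruin requires an excursion below zero lasting at least S.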
|
22.11.17 | We celebrate the 50th anniversary of the MSS |
|
|
29.11.17 | no seminar |
|
|
06.12.17 | Prof. Dr. Alexander Meister (Universität Rostock) |
Nonparametric density estimation for intentionally corrupted functional data We consider statistical models where, in order to satisfy privacy constraints, functional data are artificially contaminated by independent Wiener processes. We show that the corrupted observations have a Wiener density, which determines the distribution of the original functional random variables uniquely. We construct a nonparametric estimator of the functional density and study its asymptotic properties. We provide an upper bound on its mean integrated squared error which yields polynomial convergence rates, and we establish lower bounds on the minimax convergence rates which are close to the rates attained by our estimator. Our estimator requires the choice of a basis and of two smoothing parameters. We propose data-driven ways of choosing them and prove that the asymptotic quality of our estimator is not significantly affected by the empirical parameter selection. We apply our technique to a classification problem of real data and provide some numerical results. This talk is based on a joint work with A. Delaigle (University of Melbourne). |
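In symbols (our notation), the observation scheme described above is
\[
Y_i(t)=X_i(t)+W_i(t),\qquad t\in[0,1],\ \ i=1,\dots,n,
\]
where X_1, …, X_n are the original functional data and W_1, …, W_n are independent Wiener processes added to satisfy the privacy constraints; the goal is to estimate the density of the X_i nonparametrically from the contaminated curves Y_i alone.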
|
13.12.17 | Dr. Fabian Dunker (University of Canterbury, NZ) |
Multiscale tests for shape constraints in linear random coefficient models A popular way to model unobserved heterogeneity is the linear random coefficient model \(Y_i = \beta_{i,1}X_{i,1} + \beta_{i,2}X_{i,2} + \dots + \beta_{i,d}X_{i,d}\). We assume that the observations \((X_i, Y_i)\), \(i = 1, \dots, n\), are i.i.d., where \(X_i = (X_{i,1}, \dots, X_{i,d})\) is a d-dimensional vector of regressors. The random coefficients \(\beta_i = (\beta_{i,1}, \dots, \beta_{i,d})\), \(i = 1, \dots, n\), are unobserved i.i.d. realizations of an unknown d-dimensional distribution with density f, independent of \(X_i\). We propose and analyze a nonparametric multi-scale test for shape constraints of the random coefficient density f. In particular, we are interested in confidence sets for slopes and modes of the density. The test uses the connection between the model and the d-dimensional Radon transform and is based on Gaussian approximation of empirical processes. This is a joint work with K. Eckle, K. Proksch, and J. Schmidt-Hieber. |
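The connection to the Radon transform mentioned above can be sketched as follows (standard reasoning for this model, in our notation): conditionally on \(X_i = x\), the response is \(Y_i = \langle\beta_i, x\rangle\), so for the unit direction \(\theta = x/\|x\|\) and \(s\in\mathbb{R}\),
\[
\frac{\partial}{\partial s}\,
\mathbb{P}\Bigl(\tfrac{Y_i}{\|X_i\|}\le s\,\Big|\,\tfrac{X_i}{\|X_i\|}=\theta\Bigr)
=\int_{\{b:\ \langle b,\theta\rangle=s\}} f(b)\,db
=(Rf)(\theta,s),
\]
the d-dimensional Radon transform of the coefficient density f; estimating and testing features of f is therefore an ill-posed inverse problem in R.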
|
20.12.17 | |
|
|
27.12.17 | |
|
|
03.01.18 | |
|
|
10.01.18 | Prof. Antoine Chambaz (Université Paris Descartes) |
An introduction to targeted learning Coined by Mark van der Laan and Dan Rubin in 2006, targeted learning is a general approach to learning from data that reconciles machine learning and statistical inference. On the one hand, "machine learning" refers to the estimation of infinite-dimensional features of the law of the data, P, for instance a regression function. Machine learning algorithms are versatile, and produce (possibly highly) data-adaptive estimators. Driven by the need to make accurate predictions, they do not care so much about the assessment of prediction uncertainty. On the other hand, "statistical inference" refers to the estimation of finite-dimensional parameters of P, for instance a measure of association with a causal interpretation. It focuses on the construction of confidence regions or the development of hypothesis tests. Emphasis is placed on robustness (guaranteeing that one goes to the truth even under mild and reasonable assumptions on P), efficiency (trying to draw as much information from the data as possible), and controlling the asymptotic levels or type I errors. |
|
17.01.18 | Prof. Anthony Nouy (École Centrale Nantes) |
Learning high-dimensional functions with tree tensor networks Tensor methods are among the most prominent tools for the approximation of high-dimensional functions. Such approximation problems naturally arise in statistical learning, stochastic analysis and uncertainty quantification. In many practical situations, the approximation of high-dimensional functions is made computationally tractable by using rank-structured approximations. In this talk, we give an introduction to tree-based (hierarchical) tensor formats, which can be interpreted as deep neural networks with particular architectures. Then we present adaptive algorithms for approximation in these formats using statistical methods. |
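As a toy illustration of rank-structured formats (our sketch in Python, not the speaker's code; the helper tt_eval is hypothetical): the tensor train, the simplest tree tensor network, stores a function of d discrete variables as d small cores, so evaluation is a chain of small matrix products instead of a lookup in a full n^d table.

    import numpy as np

    # Evaluate f(i_1, ..., i_d) = G_1[i_1] @ G_2[i_2] @ ... @ G_d[i_d],
    # where core G_k has shape (r_{k-1}, n_k, r_k) and r_0 = r_d = 1.
    # Storage grows linearly in d, versus n**d for the full tensor.
    def tt_eval(cores, idx):
        v = cores[0][:, idx[0], :]              # shape (1, r_1)
        for G, i in zip(cores[1:], idx[1:]):
            v = v @ G[:, i, :]                  # contract the shared rank index
        return v.item()                         # final shape (1, 1)

    # Toy usage: a random rank-2 tensor train over d = 4 ternary variables.
    rng = np.random.default_rng(0)
    ranks, n = [1, 2, 2, 2, 1], 3
    cores = [rng.standard_normal((ranks[k], n, ranks[k + 1])) for k in range(4)]
    print(tt_eval(cores, (0, 2, 1, 0)))

General tree tensor networks replace this chain by contractions along a dimension tree, which is what underlies the neural-network interpretation mentioned in the abstract.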
|
24.01.18 | Andreas Maurer (München) |
Concentration for functions of bounded interaction Some multivariate functions have the property that their variation in any one argument does not change too much when another argument is modified. The talk gives some examples and concentrates on the random variable W obtained by applying such a function to a vector of independent variables. Functions with weakly interacting arguments share some important properties with sums: the expectation of W can be estimated by a version of Bernstein's inequality, and its variance can be tightly estimated in terms of an i.i.d. sample which has only one datum more than the function has arguments. There is also a version of the central limit theorem. |
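One way to formalize this property (our hedged reading of the abstract, not necessarily the speaker's exact definitions): let D_k f measure the maximal change of f under resampling of its k-th argument,
\[
(D_k f)(x)\;=\;\sup_{y,\,y'}\Bigl[f(\dots,x_{k-1},y,x_{k+1},\dots)-f(\dots,x_{k-1},y',x_{k+1},\dots)\Bigr];
\]
the classical bounded-differences condition asks that each D_k f be small, while a bounded-interaction condition additionally asks that D_k f itself varies little in every other argument j ≠ k, i.e. that the second-order differences D_j D_k f are small.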
|
31.01.18 | Prof. Elisabeth Gassiat (Université Paris-Sud) |
Estimation of the proportion of explained variation in high dimensions The estimation of the heritability of a phenotypic trait based on genetic data may be set as the estimation of the proportion of explained variation in high-dimensional linear models. I will be interested in understanding the impact, on minimax estimation of heritability, of (i) not knowing the sparsity of the regression parameter and (ii) not knowing the variance matrix of the covariates. In the situation where the variance of the design is known, I will present an estimation procedure that adapts to unknown sparsity. When the variance of the design is unknown and no prior estimator of it is available, I will show that consistent estimation of heritability is impossible. (Joint work with N. Verzelen, and PhD thesis of A. Bonnet.) |
|
07.02.18 | Prof. Dr. Gitta Kutyniok (TU Berlin) |
Optimal approximation with sparsely connected deep neural networks Despite the outstanding success of deep neural networks in real-world applications, most of the related research is empirically driven and a mathematical foundation is almost completely missing. One central task of a neural network is to approximate a function, which for instance encodes a classification task. In this talk, we will be concerned with the question of how well a function can be approximated by a neural network with sparse connectivity. Using methods from approximation theory and applied harmonic analysis, we will derive a fundamental lower bound on the sparsity of a neural network. By explicitly constructing neural networks based on certain representation systems, so-called α-shearlets, we will then demonstrate that this lower bound can in fact be attained. Finally, we present numerical experiments, which surprisingly show that already the standard backpropagation algorithm generates deep neural networks obeying those optimal approximation rates. This is joint work with H. Bölcskei (ETH Zurich), P. Grohs (Uni Vienna), and P. Petersen (TU Berlin). |
|
14.02.18 | Prof. Jean-Pierre Florens (Université Toulouse) |
Is completeness necessary? Penalized estimation in non-identified linear models Identification is an important issue in many econometric models. This paper studies potentially non-identified and/or weakly identified ill-posed inverse models. The leading examples are the nonparametric IV regression and the functional linear IV regression. We show that in the case of identification failures, a very general family of continuously-regularized estimators is consistent for the best approximation of the parameter of interest. We obtain L2 and L1 convergence rates for this general class of regularization schemes, including Tikhonov, iterated Tikhonov, spectral cut-off, and Landweber-Fridman. Unlike in the identified case, estimation of the operator has a non-negligible impact on convergence rates and inference. We develop inferential methods for linear functionals in such models. Lastly, we demonstrate the discontinuity in the asymptotic distribution in the case of weak identification. In particular, the estimator has a degenerate U-statistic type behavior in the extreme case of a weak instrument. |
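For orientation, the leading example takes the standard nonparametric IV form (our notation, not the paper's):
\[
Y=\varphi(X)+U,\quad \mathbb{E}[U\mid W]=0
\;\Longrightarrow\;
\mathbb{E}[Y\mid W]=(T\varphi)(W),\qquad (T\varphi)(w):=\mathbb{E}[\varphi(X)\mid W=w],
\]
an ill-posed linear inverse problem in the conditional-expectation operator T. A Tikhonov-type regularized estimator, schematically \(\hat\varphi_\alpha=(\alpha I+\hat T^*\hat T)^{-1}\hat T^*\hat r\) with \(\hat r\) an estimate of \(\mathbb{E}[Y\mid W]\), remains consistent for the best approximation of φ even when T is not injective, i.e. when completeness fails.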
|
last reviewed: January 30, 2018, by Christine Schneider