Research Group "Stochastic Algorithms and Nonparametric Statistics"
Research Seminar "Mathematical Statistics" Winter Semester 2024/25
16.10.2024 | Botond Szabo (Bocconi Milan) |
HVP 11 a, R.313 | Privacy constrained semiparametric inference
For semiparametric problems, differentially private estimators are typically constructed on a case-by-case basis. In this work we develop a privacy-constrained semiparametric plug-in approach that can be used generally, across a collection of semiparametric problems. We derive minimax lower and matching upper bounds for this approach and provide an adaptive procedure in the case of irregular (atomic) functionals. Joint work with Lukas Steinberger (Vienna) and Thibault Randrianarisoa (Toronto, Vector Institute). |
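To fix ideas, here is a minimal Python sketch of a differentially private plug-in estimator: a histogram density is privatised with the Laplace mechanism and a functional is then evaluated on it. The functional psi(f) = int f^2, the bin count, and eps below are illustrative choices, not the construction from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

def private_histogram(x, bins, eps):
    """Release a histogram density under eps-differential privacy.
    Replacing one sample changes two bin counts by 1 each, so the
    L1 sensitivity of the count vector is 2; the Laplace mechanism
    therefore adds Laplace(2/eps) noise per bin."""
    counts, edges = np.histogram(x, bins=bins)
    noisy = np.clip(counts + rng.laplace(scale=2.0 / eps, size=counts.size), 0, None)
    dens = noisy / (noisy.sum() * np.diff(edges))  # renormalise to a density
    return dens, edges

# Plug-in estimate of the (illustrative) quadratic functional psi(f) = int f^2.
x = rng.normal(size=2000)
dens, edges = private_histogram(x, bins=30, eps=1.0)
psi_hat = np.sum(dens**2 * np.diff(edges))
print("private plug-in estimate of int f^2:", psi_hat)
```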
23.10.2024 | Prof. Dr. Weining Wang (University of Groningen) |
HVP 11 a, R.313 | Conditional nonparametric variable screening by neural factor regression
High-dimensional covariates often admit a linear factor structure. To effectively screen correlated covariates in high dimensions, we propose a conditional variable screening test based on nonparametric regression using neural networks, chosen for their representational power. We ask whether individual covariates contribute additionally given the latent factors or, more generally, a set of variables. Our test statistic is based on the estimated partial derivative of the regression function with respect to the candidate variable for screening, with an observable proxy standing in for the latent factors. Hence, our test reveals how much a predictor contributes to the nonparametric regression beyond what the latent factors account for. Our derivative estimator is the convolution of a deep neural network regression estimator with a smoothing kernel. We demonstrate that when the neural network size diverges with the sample size, unlike when estimating the regression function itself, it is necessary to smooth the partial derivative of the neural network estimator to recover the desired convergence rate for the derivative. Moreover, our screening test achieves asymptotic normality under the null after a fine centering of the test statistic that makes the bias negligible, as well as consistency against local alternatives under mild conditions. We demonstrate the performance of our test in a simulation study and two real-world applications. |
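A one-dimensional toy version in Python may help with the smoothing step: the derivative of the kernel-smoothed network fit is computed as a convolution with the kernel's derivative. The network size, kernel, bandwidth, and toy regression function are all illustrative; the talk's latent-factor setting and test statistic are not reproduced.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

# Toy model: m(x) = sin(2x), so the target derivative is m'(x) = 2cos(2x).
n = 2000
X = rng.uniform(-2, 2, size=(n, 1))
y = np.sin(2 * X[:, 0]) + 0.3 * rng.normal(size=n)

net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                   random_state=0).fit(X, y)

def smoothed_derivative(x0, h=0.15, grid_n=801):
    """Derivative of the kernel-smoothed network fit at x0:
    d/dx int K_h(x - u) m_hat(u) du = int K_h'(x - u) m_hat(u) du,
    with a Gaussian kernel, evaluated by a Riemann sum on a grid."""
    u = np.linspace(-2, 2, grid_n)
    du = u[1] - u[0]
    m_hat = net.predict(u.reshape(-1, 1))
    t = x0 - u
    k_prime = -t / h**3 * np.exp(-0.5 * (t / h) ** 2) / np.sqrt(2 * np.pi)
    return np.sum(k_prime * m_hat) * du

print(smoothed_derivative(0.5), "vs true", 2 * np.cos(1.0))
```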
30.10.2024 | Olga Klopp (ESSEC Business School, Paris) |
HVP 11 a, R.313 | Adaptive density estimation under low-rank constraints
In this talk, we address the challenge of bivariate probability density estimation under low-rank constraints for both discrete and continuous distributions. For discrete distributions, we model the target as a low-rank probability matrix. In the continuous case, we assume the density function is Lipschitz continuous over an unknown compact rectangular support and can be decomposed into a sum of K separable components, each represented as a product of two one-dimensional functions. We introduce an estimator that leverages these low-rank constraints, achieving significantly improved convergence rates. We also derive lower bounds for both discrete and continuous cases, demonstrating that our estimators achieve minimax optimal convergence rates within logarithmic factors. |
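As a toy Python illustration of the discrete case, a rank-K truncated SVD of the empirical probability matrix already captures the idea; the clip-and-renormalise projection below is a crude stand-in for the estimator analysed in the talk, and all sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def low_rank_pmf(counts, K):
    """Rank-K truncated SVD of the empirical probability matrix,
    followed by a crude projection back onto the simplex
    (clip negatives, renormalise)."""
    P_emp = counts / counts.sum()
    U, s, Vt = np.linalg.svd(P_emp, full_matrices=False)
    P_K = (U[:, :K] * s[:K]) @ Vt[:K]
    P_K = np.clip(P_K, 0, None)
    return P_K / P_K.sum()

# Toy data: a true rank-2 pmf as a mixture of two product distributions.
p1, q1 = np.array([0.6, 0.3, 0.1]), np.array([0.1, 0.2, 0.7])
p2, q2 = np.array([0.1, 0.2, 0.7]), np.array([0.5, 0.4, 0.1])
P_true = 0.5 * np.outer(p1, q1) + 0.5 * np.outer(p2, q2)

idx = rng.choice(P_true.size, size=5000, p=P_true.ravel())
counts = np.bincount(idx, minlength=P_true.size).reshape(P_true.shape)

P_hat = low_rank_pmf(counts.astype(float), K=2)
print("L1 error:", np.abs(P_hat - P_true).sum())
```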
06.11.2024 | Prof. Dr. Vladimir Spokoiny (WIAS and HU Berlin) |
HVP 11 a, R.313 | Regression estimation and inference in high dimension
The talk discusses a general non-asymptotic and non-minimax approach to statistical estimation and inference, with applications to nonlinear regression. The main results provide finite-sample Fisher and Wilks expansions for the maximum-likelihood estimator with an explicit remainder in terms of the effective dimension of the problem. |
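Schematically, and in notation common to this line of work rather than the talk's exact statement, the two expansions take the following form, with theta* the target, tilde-theta the MLE, D^2 the Fisher information, xi the normalised score, and remainders explicit in the effective dimension p_e:

```latex
% Schematic Fisher and Wilks expansions (notation illustrative, not the
% talk's exact statement): \theta^* the target, \tilde\theta the MLE,
% D^2 the Fisher information, \xi = D^{-1}\nabla L(\theta^*),
% p_e the effective dimension, n the sample size.
\bigl\| D(\tilde\theta - \theta^*) - \xi \bigr\| \le \Diamond(p_e, n)
  \quad\text{(Fisher)},
\qquad
\Bigl| 2\bigl\{ L(\tilde\theta) - L(\theta^*) \bigr\} - \|\xi\|^2 \Bigr|
  \le \Diamond'(p_e, n)
  \quad\text{(Wilks)}.
```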
13.11.2024 | Workshop: Foundations of Modern Nonparametric Statistics |
Takes place in Adlershof! | |
20.11.2024 | Bernhard Stankewitz (Universität Potsdam) |
HVP 11 a, R.313 | Contraction rates for conjugate gradient and Lanczos approximate posteriors in Gaussian process regression |
27.11.2024 | Julien Chhor (Toulouse School of Economics) |
HVP 11 a, R.313 | Locally sharp goodness-of-fit testing in sup norm for high-dimensional counts |
04.12.2024 | N.N. |
HVP 11 a, R.313 | |
11.12.2024 | Dr. Chloé Rouyer (Universität Potsdam) |
HVP 11 a, R.313 | Foundations of online learning for easy and worst-case data
Online learning is a well-studied framework used to represent learning problems where the learner only has access to one data point at a time and has to learn sequentially. This problem is particularly challenging in the bandit framework, which is a repeated game between the learner and the environment. In this game, the learner is faced with a list of actions and the environment generates losses associated with these actions. The learner then repeatedly needs to play an action from this list in order to minimize their cumulative loss, but they can only observe the loss associated with the action they played. This means that at each round, the learner has to balance exploration (gathering information on less-studied actions) and exploitation (using the already gathered information to play an action with a presumably small loss). Developing learner strategies for this problem depends on the assumptions made about the environment. There have been two major lines of research in this field, one assuming that the losses follow some unknown stochastic distributions and the other only assuming that the losses are bounded and independent of the learner's actions. In this talk, we introduce the recent field of best-of-both-worlds sequential learning, which aims to develop algorithms that are optimal for both types of losses simultaneously. |
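For concreteness, here is a minimal Python sketch of the bandit protocol with EXP3, the classical algorithm for the adversarial (bounded-loss) regime; best-of-both-worlds methods such as Tsallis-INF refine this scheme, and the environment and parameters below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

def exp3(loss_fn, K, T, eta=0.05):
    """EXP3 for bandit feedback with losses in [0, 1]: sample an arm
    from exponential weights, observe only that arm's loss, and update
    with the importance-weighted (unbiased) loss estimate."""
    L_hat = np.zeros(K)                 # cumulative loss estimates
    total = 0.0
    for t in range(T):
        w = np.exp(-eta * (L_hat - L_hat.min()))   # stabilised weights
        p = w / w.sum()
        a = rng.choice(K, p=p)          # exploration comes from sampling
        loss = loss_fn(t, a)            # only the played arm is revealed
        L_hat[a] += loss / p[a]         # importance weighting
        total += loss
    return total

# Stochastic toy environment: Bernoulli losses, arm 0 is best.
means = np.array([0.3, 0.5, 0.6])
total = exp3(lambda t, a: float(rng.random() < means[a]), K=3, T=5000)
print("incurred loss:", total, "| best-arm expectation:", 0.3 * 5000)
```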
18.12.2024 | N.N. |
HVP 11 a, R.313 | |
08.01.2025 | Prof. Dr. Johannes Schmidt-Hieber (University of Twente) |
HVP 11 a, R.313 | Statistical estimation using zeroth-order optimization
In this talk, we study statistical properties of zeroth-order optimization schemes, which do not have access to the gradient of the loss and rely solely on evaluating the loss function. Such methods are often considered to be suboptimal for high-dimensional problems, as their convergence rates to the minimizer of the objective function are typically slower than those of gradient-based methods. This performance gap becomes more pronounced as the number of parameters increases. Considering the linear model, we show that reusing the same data point for multiple zeroth-order updates can overcome the gap in the estimation rates. Additionally, we demonstrate that zeroth-order optimization methods can achieve the optimal estimation rate when only queries from the linear regression model are available. Special attention will be given to the non-standard minimax lower bound in the query model. This is joint work with Thijs Bos, Niklas Dexheimer and Wouter Koolen. |
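A Python sketch of the mechanism under discussion, with illustrative constants: per-sample losses of a linear model are only queried, a two-point finite difference along a random direction serves as a surrogate gradient, and each data point is reused for several updates; this is a toy rendering of the idea, not the paper's procedure.

```python
import numpy as np

rng = np.random.default_rng(4)

d, n = 20, 5000
theta_star = rng.normal(size=d) / np.sqrt(d)
X = rng.normal(size=(n, d))
y = X @ theta_star + 0.5 * rng.normal(size=n)

def zo_sgd(reps=20, lr=0.5, mu=1e-4):
    """Zeroth-order SGD for the linear model: the per-sample loss is
    only queried, never differentiated. A two-point finite difference
    along a random direction u gives a surrogate gradient, and each
    data point is reused for `reps` updates."""
    theta = np.zeros(d)
    for i in range(n):
        f = lambda t: 0.5 * (y[i] - X[i] @ t) ** 2   # loss query only
        for _ in range(reps):
            u = rng.normal(size=d)
            g = (f(theta + mu * u) - f(theta - mu * u)) / (2 * mu) * u
            theta -= lr / (d * (i + 1)) * g          # decaying step size
    return theta

theta_hat = zo_sgd()
print("estimation error:", np.linalg.norm(theta_hat - theta_star))
```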
15.01.2025 | Xiaorui Zuo (National University of Singapore) |
HVP 11 a, R.313 | Cryptos have rough volatility and correlated jumps |
22.01.2025 | |
HVP 11 a, R.313 | |
29.01.2025 | |
HVP 11 a, R.313 | |
last reviewed: January 3, 2025 by Christine Schneider