Research Group "Stochastic Algorithms and Nonparametric Statistics"

Research Seminar "Mathematical Statistics" Winter Semester 2018/2019

  • Place: Weierstrass-Institute for Applied Analysis and Stochastics, Erhard-Schmidt-Hörsaal, Mohrenstraße 39, 10117 Berlin
  • Time: Wednesdays, 10.00 a.m. - 12.30 p.m.
10.10.18

17.10.18 Mark Podolskij (Aarhus)
Statistical inference for fractional models
In recent years, fractional and moving-average-type models have gained popularity in economics and finance. The most popular examples include fractional Brownian/stable motion, rough volatility models and Hawkes processes. In this talk we will review some existing estimation methods and present new theoretical results.
24.10.18 Evgeny Stepanov (Steklov Mathem. Institute of the Russian Academy of Sciences, St. Petersburg)
Hybrid control in a model from molecular biology: Between continuous and discrete
The central topic of the talk is a rather modern concept in automated control: hybrid control through finite state machines in continuous dynamical models. It will be shown that hybrid systems exhibit highly complicated dynamics. Then a class of largely open problems arising from molecular biology will be presented and studied, where the hybrid system emerges from an extraordinarily simple and completely classical model based on ODEs, without any discrete object incorporated.
31.10.18 N. N.

07.11.18 N. N.

14.11.18 Jürgen Pilz (Universität Klagenfurt)
The interplay between random field models for Bayesian spatial prediction and the design of computer experiments
In the first part of my talk, I will give an overview of recent work with my colleagues G. Spoeck and H. Kazianka in the area of Bayesian spatial prediction and design [1]-[5]. The Bayesian approach not only offers more flexibility in modeling but also allows us to deal with uncertain distribution parameters, and it leads to more realistic estimates of the predicted variances. Moreover, I will demonstrate how to apply copula methodology to Bayesian spatial modeling and use it to derive predictive distributions. I will also report on recent results for determining objective priors for the crucial nugget and range parameters of the widely used Matérn family of covariance functions. Briefly, I will also consider the problem of choosing an "optimal" spatial design, i.e. finding an optimal spatial configuration of the observation sites minimizing the total mean squared error of prediction over an area of interest. Our results will be illustrated by modeling environmental phenomena and designing a hydrogeological monitoring network in Upper Austria.
In the second part of my talk I will report on modifying and transferring spatial random field models to make them accessible to the analysis of complex computer code. Over the last three decades, the design of computer experiments has rapidly developed as a statistical discipline at the intersection of the well-established theories of DoE, stochastic processes, stochastic simulation and statistical parameter estimation, with the aim of approximating complex computer models to reproduce the behaviour of engineering, physical, biological, environmental and social science processes. We will focus on the use of Gaussian processes (GPs) for the approximation of computer models, thereby stepping from simple parametric setups to using GPs as basis functions of additive models. Then we discuss the numerical problems associated with the estimation of the model parameters, in particular the second-order (variance and correlation) parameters. To overcome these problems I will highlight recent joint work with my colleague N. Vollert [6] using Bayesian regularization, based on objective (reference) priors for the parameters. Finally, we will consider design problems associated with the search for numerical robustness of the estimation procedures. We illustrate our findings by modeling the magnetic field of a magnetic linear position detection system as used in the automotive industry.
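As an illustration of the GP-surrogate idea described above, here is a minimal sketch (with an assumed squared-exponential kernel and an assumed toy test function, not the speakers' actual models) of emulating an expensive computer model from a handful of evaluations:

```python
import numpy as np

def rbf_kernel(X1, X2, lengthscale=0.3):
    """Squared-exponential covariance between two sets of 1-d inputs."""
    d = X1[:, None] - X2[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_posterior(X_train, y_train, X_test, noise=1e-6):
    """Posterior mean and pointwise variance of a zero-mean GP."""
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    K_s = rbf_kernel(X_train, X_test)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = K_s.T @ alpha
    v = np.linalg.solve(L, K_s)
    var = 1.0 - np.sum(v ** 2, axis=0)   # prior variance is 1 on the diagonal
    return mean, var

# "Computer experiment": a few evaluations of an assumed expensive model
f = lambda x: np.sin(3 * x) + 0.5 * x
X = np.linspace(0.0, 2.0, 10)
Xs = np.linspace(0.0, 2.0, 50)
mean, var = gp_posterior(X, f(X), Xs)
```

The posterior variance is what design-of-computer-experiments criteria act on: new evaluation points are placed where the emulator is most uncertain.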
21.11.18 Markus Bibinger (Universität Marburg)
4th floor, room 406
Statistical analysis of path properties of volatility
In this talk, we review recent contributions to statistical theory for inferring path properties of volatility. The interest is in the latent volatility of an Itô semimartingale, the latter being discretely observed over a fixed time horizon. We consider tests to discriminate continuous paths from paths with volatility jumps. Both a local test for jumps at specified times and a global test for jumps over the whole observation interval are discussed. We establish consistency and optimality properties under infill asymptotics, also for observations with additional additive noise. Recently, there has been great interest in the smoothness regularity of the volatility process, as conflicting models are proposed in the literature. To address this point, we consider inference on the Hurst exponent of fractional stochastic volatility processes. Even though the regularity of the volatility determines optimal spot volatility estimation methods, forecasting techniques and the volatility persistence, identifiability is an unsolved question in high-frequency statistics. We discuss a first approach which can reveal whether path properties are stable over time or changing. Eventually, we discuss some recent considerations and conjectures on this open question. The related, easier problem of inference on the Hurst exponent from direct discrete observations of a fractional Brownian motion is also visited.
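The "easier problem" mentioned at the end can be made concrete. A minimal sketch (with an assumed simulation setup, not from the talk): simulate fractional Brownian motion exactly via a Cholesky factorisation of its covariance, then estimate the Hurst exponent by comparing quadratic variations at two observation frequencies:

```python
import numpy as np

def fbm_cholesky(n, H, T=1.0, seed=None):
    """Exact simulation of fractional Brownian motion on a grid via Cholesky."""
    rng = np.random.default_rng(seed)
    t = np.linspace(T / n, T, n)
    s, u = np.meshgrid(t, t)
    cov = 0.5 * (s ** (2 * H) + u ** (2 * H) - np.abs(s - u) ** (2 * H))
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(n))
    return np.concatenate(([0.0], L @ rng.standard_normal(n)))

def hurst_cof(x):
    """Change-of-frequency estimator: since E[(X_{t+d}-X_t)^2] = d^{2H},
    the ratio of quadratic variations at lags 2 and 1 is 2^{2H}."""
    v1 = np.mean(np.diff(x) ** 2)
    v2 = np.mean((x[2:] - x[:-2]) ** 2)
    return 0.5 * np.log2(v2 / v1)

path = fbm_cholesky(1000, H=0.3, seed=42)
H_hat = hurst_cof(path)
```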
28.11.18 Melanie Schienle (Karlsruher Institut für Technologie)
Determination of vector error correction models for different types of high-dimensionality
We provide a shrinkage-type methodology which allows for simultaneous model selection and estimation of vector error correction models (VECM) when the dimension is large and can increase with sample size. Model determination is treated as a joint selection problem of cointegrating rank and autoregressive lags under respective practically valid sparsity assumptions. We show consistency of the selection mechanism of the resulting Lasso-VECM estimator under very general assumptions on dimension, rank and error terms. Moreover, with the computational complexity of a linear programming problem only, the procedure remains computationally tractable in high dimensions. For the subcase of finite dimensions, we can modify and tailor the procedure to achieve elementwise selection consistency. For ultra-high dimensions we suggest a completely different pre-screening approach. We demonstrate the effectiveness of the proposed techniques in simulations and in an empirical application to recent CDS data after the financial crisis.
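For readers unfamiliar with shrinkage-based selection, a minimal coordinate-descent Lasso on a toy regression (plain Lasso, not the Lasso-VECM procedure of the talk) shows how soft thresholding performs selection and estimation simultaneously:

```python
import numpy as np

def soft_threshold(z, g):
    """Shrink z towards zero by g; exact zeros perform variable selection."""
    return np.sign(z) * np.maximum(np.abs(z) - g, 0.0)

def lasso_cd(X, y, lam, n_sweeps=200):
    """Coordinate descent for min_b 1/(2n) ||y - Xb||^2 + lam * ||b||_1."""
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = np.sum(X ** 2, axis=0) / n
    for _ in range(n_sweeps):
        for j in range(p):
            r = y - X @ beta + X[:, j] * beta[j]   # partial residual
            rho = X[:, j] @ r / n
            beta[j] = soft_threshold(rho, lam) / col_sq[j]
    return beta

rng = np.random.default_rng(0)
n, p = 200, 10
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:2] = [3.0, -2.0]                        # sparse truth
y = X @ beta_true + 0.1 * rng.standard_normal(n)
beta_hat = lasso_cd(X, y, lam=0.1)
```

The irrelevant coefficients are set exactly to zero, which is the selection mechanism whose consistency the abstract refers to (in the much harder VECM setting).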
05.12.18 Matthias Löffler (Cambridge)
4th floor, room 406
Spectral thresholding for Markov chain transition operators
We consider estimation of the transition operator P of a Markov chain and its transition density p, where the eigenvalues of P are assumed to decay exponentially fast. This is for instance the case for periodised multi-dimensional diffusions observed at low frequency. We investigate the performance of a spectral hard-thresholded Galerkin-type estimator for P and p, discarding most of the estimated eigenpairs. Our main contribution is to show its statistical optimality by establishing matching minimax upper and lower bounds in L^2-loss. In particular, the effect of the dimension d on the nonparametric rate improves from 2d to d.
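A finite-state toy version of the idea (a hypothetical 3-state chain, not the nonparametric diffusion setting of the talk): estimate the transition matrix from observed transitions, then keep only the eigenpairs whose eigenvalues exceed a threshold, discarding the rest:

```python
import numpy as np

def empirical_transition(chain, k):
    """Row-normalised transition-count estimate for a k-state chain."""
    C = np.zeros((k, k))
    for a, b in zip(chain[:-1], chain[1:]):
        C[a, b] += 1
    return C / np.maximum(C.sum(axis=1, keepdims=True), 1)

def spectral_threshold(P_hat, tau):
    """Keep eigenpairs with |eigenvalue| >= tau; drop the rest."""
    vals, vecs = np.linalg.eig(P_hat)
    keep = np.abs(vals) >= tau
    W = np.linalg.inv(vecs)                 # rows are left eigenvectors
    return np.real(vecs[:, keep] @ np.diag(vals[keep]) @ W[keep, :])

# Simulate a chain from an assumed 3-state transition matrix
P = np.array([[0.90, 0.05, 0.05],
              [0.10, 0.80, 0.10],
              [0.05, 0.15, 0.80]])
rng = np.random.default_rng(1)
chain = [0]
for _ in range(20000):
    chain.append(rng.choice(3, p=P[chain[-1]]))
P_hat = empirical_transition(chain, 3)
P_thr = spectral_threshold(P_hat, tau=0.75)   # keeps a rank-2 approximation
```

Thresholding at tau=0.75 here retains the two leading eigenpairs (including the eigenvalue 1 with constant right eigenvector, so row sums are preserved) and yields a low-rank estimate, which is the mechanism behind the improved dimension dependence.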
12.12.18 Botond Szabo (Leiden)
On the fundamental understanding of distributed computation
In recent years, the amount of available information has become so vast in certain fields of application that it is infeasible or undesirable to carry out the computations on a single server. This has motivated the design and study of distributed statistical or learning methods. In distributed methods, the data is split among different administrative units and computations are carried out locally, in parallel to each other. The outcomes of the local computations are then aggregated into a final result on a central machine. In this talk we will consider the limitations and guarantees of distributed methods in general under communication constraints (i.e. only a limited number of bits is allowed to be transmitted between the machines) in the random design regression model. We derive minimax lower bounds, matching upper bounds and provide adaptive estimators reaching these limits. This is ongoing joint work with Harry van Zanten.
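A minimal sketch of the split-compute-aggregate pattern (mean estimation with b-bit messages, an assumed toy setup rather than the regression model of the talk): each machine quantises its local estimate to a fixed number of bits before transmission, and the central machine averages the messages:

```python
import numpy as np

def local_estimate(data, bits, lo=-1.0, hi=1.0):
    """Each machine sends its sample mean quantised to `bits` bits on [lo, hi]."""
    levels = 2 ** bits
    mean = np.clip(np.mean(data), lo, hi)
    q = int(np.round((mean - lo) / (hi - lo) * (levels - 1)))
    return lo + q * (hi - lo) / (levels - 1)   # value the b-bit message decodes to

def distributed_mean(data, m, bits):
    """Split the data over m machines and aggregate the quantised local means."""
    parts = np.array_split(data, m)
    return np.mean([local_estimate(p, bits) for p in parts])

rng = np.random.default_rng(0)
x = 0.3 + 0.5 * rng.standard_normal(10000)
est = distributed_mean(x, m=20, bits=8)
```

The trade-off studied in the talk is visible here: with very few bits the quantisation error dominates, while with enough bits the estimator behaves like the pooled sample mean.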
19.12.18 N. N.

09.01.19 Frank Werner (Georg-August-Universität Göttingen)
Empirical risk minimization as parameter choice rule for filter-based regularization methods
We consider a posteriori parameter choice rules for filter-based linear regularization methods in the statistical inverse problem setting. In particular, we investigate the choice of the regularization parameter by minimizing an unbiased estimate of the predictive risk. This parameter choice rule and its usage are well known in the literature, but oracle inequalities and optimality results in this general setting are unknown. We prove a (generalized) oracle inequality, which relates the direct risk to the minimal prediction risk. From this oracle inequality we are then able to conclude that filter-based regularization methods with the investigated parameter choice rule achieve convergence rates of optimal order with respect to the mean integrated squared error in mildly ill-posed problems. Finally, we also present numerical simulations, which support the order optimality of the method and the finite sample performance of the parameter choice. In these simulations, we also investigate the behavior of different a posteriori parameter choice methods in exponentially ill-posed problems. This is joint work with Housen Li (University of Göttingen).
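As a toy illustration of the parameter choice rule (Tikhonov regularization on an assumed diagonal, mildly ill-posed problem, not the talk's general filter setting), one can minimise an unbiased estimate of the predictive risk over a grid of regularization parameters:

```python
import numpy as np

def upre_tikhonov(A, y, sigma, alphas):
    """Tikhonov regularization with the parameter chosen by minimising an
    unbiased predictive-risk estimate: ||A x_a - y||^2/n + 2 sigma^2 tr(H_a)/n - sigma^2."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    b = U.T @ y
    n = len(y)
    best = None
    for a in alphas:
        f = s ** 2 / (s ** 2 + a)                      # Tikhonov filter factors
        resid = np.sum(((1 - f) * b) ** 2) + np.sum(y ** 2) - np.sum(b ** 2)
        risk = resid / n + 2 * sigma ** 2 * np.sum(f) / n - sigma ** 2
        if best is None or risk < best[0]:
            best = (risk, a)
    a = best[1]
    f = s ** 2 / (s ** 2 + a)
    return Vt.T @ (f * b / s), a

# Assumed mildly ill-posed toy problem: singular values ~ 1/k
rng = np.random.default_rng(0)
n = 100
A = np.diag(1.0 / (1 + np.arange(n)))
x_true = 1.0 / (1 + np.arange(n)) ** 2
sigma = 1e-3
y = A @ x_true + sigma * rng.standard_normal(n)
x_hat, alpha = upre_tikhonov(A, y, sigma, np.logspace(-10, 0, 50))
```

The oracle inequality of the talk addresses exactly the gap this sketch glosses over: minimising the *prediction* risk estimate, while the quantity of interest is the *direct* risk of x_hat.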
16.01.19 Mathias Trabs (Universität Hamburg)
Parameter estimation for stochastic PDEs based on discrete observations in time and space
Motivated by random phenomena in the natural sciences as well as by mathematical finance, stochastic partial differential equations (SPDEs) have been intensively studied during the last fifty years with a main focus on theoretical analytic and probabilistic aspects. Thanks to the exploding number of available data and the fast progress in information technology, SPDE models are nowadays increasingly popular among practitioners, for instance to model neuronal systems or interest rate fluctuations. Consequently, statistical methods are required to calibrate this class of complex models. We study parameter estimation for parabolic, linear, second-order SPDEs, observing a mild solution on a discrete grid in time and space. Focusing first on volatility estimation and assuming a high-frequency regime in time, we provide an explicit and easy-to-implement method of moments estimator based on squared time increments of the process. If the observation frequency in time is finer than in space, the estimator is consistent and admits a central limit theorem. This is moreover established for the estimation of the integrated volatility in a semi-parametric framework. In a second step, we consider not only time increments of the solution field but also space increments as well as space-time increments. This allows for the construction of estimators which are robust with respect to the sampling regime, i.e., they are also applicable if the observation grid in space is finer than in time. Finally, we discuss the estimation of the parameters in the differential operator which determines the SPDE. This talk is based on joint work with Markus Bibinger and Florian Hildebrandt.
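The "squared time increments" idea is familiar from the simpler semimartingale case. A minimal sketch (a scalar diffusion rather than an SPDE, where the increments would instead scale like the square root of the time step):

```python
import numpy as np

def realized_volatility(x, T):
    """Method-of-moments volatility estimate from squared time increments.

    For dX_t = sigma dW_t observed on [0, T], the sum of squared increments
    divided by T is consistent for sigma^2."""
    return np.sum(np.diff(x) ** 2) / T

rng = np.random.default_rng(7)
n, T, sigma = 10000, 1.0, 0.8
increments = sigma * np.sqrt(T / n) * rng.standard_normal(n)
path = np.concatenate(([0.0], np.cumsum(increments)))
sigma_hat = np.sqrt(realized_volatility(path, T))
```

For the SPDE setting of the talk, the squared time increments of the mild solution at a fixed spatial point have a different (slower) scaling in the time step, which is why the normalisation and the limit theory differ from this diffusion sketch.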
23.01.19 N. N.

30.01.19 Jean-Pierre Florens (Toulouse)
Is completeness necessary? Penalized estimation in non-identified linear models
06.02.19 Christian Clason (Essen)

13.02.19 Albert Cohen (Paris)
Optimal non-intrusive methods in high-dimension
Motivated by non-intrusive approaches for high-dimensional parametric PDEs, we consider the approximation of an unknown arbitrary function in any dimension from the data of point samples, where the approximants are picked from given or adaptively chosen finite-dimensional spaces. One principal objective is to obtain an approximation which performs as well as the orthogonal projection using a sampling budget that is linear in the dimension of the approximating space. Using a particular sampling measure, this objective turns out to be met by both least-squares and pseudo-spectral methods in some probabilistic sense, however with some notable distinctions that will be discussed in this talk.
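A minimal sketch of the least-squares variant (the univariate Chebyshev case with arcsine sampling, an assumed toy setup; the optimal measure of the talk is more general): the sample budget is a small multiple of the dimension of the approximation space:

```python
import numpy as np

def chebyshev_ls_fit(f, degree, n_samples, seed=None):
    """Least squares in the Chebyshev basis with arcsine-distributed samples,
    under which the basis is orthonormal (close to the 'optimal measure' idea)."""
    rng = np.random.default_rng(seed)
    u = rng.uniform(0, 1, n_samples)
    x = np.cos(np.pi * u)                      # arcsine-distributed on [-1, 1]

    def design(t):
        # Orthonormal Chebyshev basis w.r.t. the arcsine measure: T_0, sqrt(2) T_k
        B = np.cos(np.arange(degree + 1)[None, :] * np.arccos(t)[:, None])
        B[:, 1:] *= np.sqrt(2)
        return B

    coef, *_ = np.linalg.lstsq(design(x), f(x), rcond=None)
    return lambda t: design(t) @ coef

f = lambda x: np.exp(x)
# Sample budget a small multiple of the space dimension (degree + 1 = 11)
p = chebyshev_ls_fit(f, degree=10, n_samples=100, seed=0)
t = np.linspace(-1.0, 1.0, 200)
```

Sampling from the right measure keeps the random Gram matrix close to the identity, which is what makes the least-squares fit behave like the orthogonal projection.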


last reviewed: January 18, 2019 by Christine Schneider