Research Group "Stochastic Algorithms and Nonparametric Statistics"

Seminar "Modern Methods in Applied Stochastics and Nonparametric Statistics" Winter Semester 2020/2021

01.09.2020 Franziska Bielert (Humboldt-Universität zu Berlin)
Estimation of quadratic functionals in a non-parametric regression model and Gaussian white noise model
27.10.2020 Carsten Hartmann (BTU Cottbus-Senftenberg)
Convergence to equilibrium in underdamped Langevin dynamics: Fluctuation-dissipation relations, entropy production and optimal control
We study the convergence to equilibrium of an underdamped Langevin equation that is controlled by a linear feedback force. Specifically, we are interested in sampling the possibly multimodal invariant probability distribution of a Langevin system at small noise (or low temperature) that is characterised by metastability and slow convergence to equilibrium. We follow an approach proposed by Pavon and co-workers [1] and consider a Langevin equation that is simulated at a high temperature, with the control playing the role of a friction that balances the additional noise so as to restore the original invariant measure at a lower temperature. We discuss different limits as the temperature ratio goes to infinity and prove convergence to a limit dynamics. It turns out that, depending on whether the lower ("target") or the higher ("simulation") temperature is fixed, the controlled dynamics converges either to the overdamped Langevin equation or to a deterministic gradient flow. This implies that (a) the ergodic limit and the large temperature separation limit do not commute in general, and that (b) it is not possible to accelerate the speed of convergence to the ergodic limit by making the temperature separation larger and larger. We discuss the implications of these observations from the perspective of stochastic optimisation algorithms and enhanced sampling schemes in molecular dynamics. This is joint work with Tobias Breiten, Lara Neureither and Upanshu Sharma [2].
[1] Y. Chen, T.T. Georgiou and M. Pavon. Fast cooling for a system of stochastic oscillators. J. Math. Phys. 56:113302, 2015.
[2] T. Breiten, C. Hartmann, L. Neureither and U. Sharma. Stochastic gradient descent and fast relaxation to thermodynamic equilibrium: a stochastic control approach. Preprint, 2020.
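As a concrete illustration of the metastability discussed in this abstract, the following is a minimal simulation sketch (not the controlled dynamics of the talk; the double-well potential and all parameter values are illustrative assumptions):

```python
import numpy as np

# Euler-type simulation of underdamped Langevin dynamics
#   dq = p dt,   dp = (-grad V(q) - gamma * p) dt + sqrt(2 * gamma * T) dW,
# with the illustrative double-well potential V(q) = (q^2 - 1)^2.

def grad_V(q):
    return 4.0 * q * (q**2 - 1.0)

def simulate(T=0.2, gamma=1.0, dt=1e-3, n_steps=200_000, seed=0):
    rng = np.random.default_rng(seed)
    q, p = 1.0, 0.0
    traj = np.empty(n_steps)
    noise = np.sqrt(2.0 * gamma * T * dt)
    for i in range(n_steps):
        p += (-grad_V(q) - gamma * p) * dt + noise * rng.standard_normal()
        q += p * dt
        traj[i] = q
    return traj

# At low temperature T the trajectory is metastable: it lingers in one well
# for long stretches, which is the slow convergence the talk addresses.
print("fraction of time with q > 0:", np.mean(simulate() > 0))
```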
03.11.2020 Darina Dvinskikh (WIAS Berlin)
Improved complexity bounds in Wasserstein barycenter problem
We focus on computational aspects of the Wasserstein barycenter problem and provide two algorithms to compute a barycenter. The first, based on mirror prox with a specific norm, matches the complexity of the celebrated accelerated Iterative Bregman Projections (IBP) algorithm but, unlike (accelerated) IBP, does not become numerically unstable when the regularization parameter is small. The second, based on area-convexity and dual extrapolation, improves on the previously best-known convergence rates for the Wasserstein barycenter problem.
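For reference, here is a minimal sketch of the IBP baseline mentioned above (Benamou et al.'s scheme; the grid, cost matrix and regularization parameter eps are illustrative assumptions). The Gibbs kernel exp(-C/eps) underflows as eps shrinks, which is exactly the instability the talk's algorithms avoid:

```python
import numpy as np

# Entropic Wasserstein barycenter of histograms P[k] on a common grid via
# Iterative Bregman Projections. K = exp(-C / eps) underflows as eps -> 0.

def ibp_barycenter(P, C, eps=0.05, n_iter=1000, weights=None):
    m, n = P.shape
    w = np.full(m, 1.0 / m) if weights is None else np.asarray(weights)
    K = np.exp(-C / eps)              # Gibbs kernel
    v = np.ones((m, n))
    for _ in range(n_iter):
        u = P / (v @ K.T)             # enforce the first marginals p_k
        marg = v * (u @ K)            # current second marginals
        b = np.exp(w @ np.log(marg))  # geometric mean = barycenter iterate
        v *= b / marg                 # enforce the common second marginal b
    return b

x = np.linspace(0, 1, 100)
C = (x[:, None] - x[None, :]) ** 2
P = np.vstack([np.exp(-(x - 0.25) ** 2 / 0.005),
               np.exp(-(x - 0.75) ** 2 / 0.005)])
P /= P.sum(axis=1, keepdims=True)
print("barycenter peaks near x =", x[np.argmax(ibp_barycenter(P, C))])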
10.11.2020 Simon Breneis (WIAS Berlin)
Functions of bounded variation in one and multiple dimensions
We first recall univariate functions of bounded variation and present a new result on the Hölder continuity of functions of bounded variation and of their variation functions. Then we discuss possible extensions of the concept of bounded variation to functions of several (though finitely many) variables, compare these generalizations, and discuss their uses and shortcomings.
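For orientation, the univariate notion in code (a minimal sketch with illustrative examples): on a fixed grid, the sum of absolute increments is the variation over that partition and approaches the total variation V(f) as the grid is refined, for continuous f.

```python
import numpy as np

# Total variation of a univariate function approximated from grid samples.

def total_variation(samples):
    return np.abs(np.diff(samples)).sum()

x = np.linspace(0, 1, 10_001)
print(total_variation(np.sin(2 * np.pi * x)))  # ~4 over one full period
print(total_variation(np.sign(x - 0.5)))       # 2: a single jump of height 2
```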
17.11.2020 n.n.

24.11.2020 Alexander Gasnikov (WIAS Berlin)
Parallel and distributed algorithms for ML problems
01.12.2020 Roman Kravchenko (HU Berlin)
Distributed optimization with quantization for computing Wasserstein barycenters
08.12.2020 n.n.

15.12.2020 n.n.

12.01.2021 Pavel Dvurechensky (WIAS Berlin)
Accelerated alternating minimization methods
We combine alternating minimization (AM) and Nesterov-type momentum acceleration and propose a generic accelerated alternating minimization method with a $1/k^2$ convergence rate in terms of the objective for convex problems and $1/k$ in terms of the squared gradient norm for non-convex problems, where $k$ is the iteration counter. Our method requires no knowledge of either the convexity of the problem or function parameters such as the smoothness constant, i.e., it is adaptive to convexity and smoothness. Further, we develop a primal-dual modification for convex problems with linear constraints. We consider two applications of our methods to highlight their properties. The first is the non-convex collaborative filtering problem; the second is optimal transport, where our primal-dual method takes the form of an accelerated Sinkhorn algorithm or an accelerated Iterative Bregman Projections algorithm. In both applications, we show numerically that our methods outperform the baselines. In the second application, we also obtain state-of-the-art complexity results for optimal transport problems.
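For contrast with the accelerated method of the talk, here is the plain (non-accelerated) Sinkhorn iteration for entropic optimal transport, in which each update solves one dual block exactly, i.e. is an exact alternating-minimization step; marginals, cost and eps are illustrative assumptions:

```python
import numpy as np

# Plain Sinkhorn for entropic optimal transport on a common 1D grid.

def sinkhorn(p, q, C, eps=0.05, n_iter=500):
    K = np.exp(-C / eps)
    u = np.ones_like(p)
    v = np.ones_like(q)
    for _ in range(n_iter):
        v = q / (K.T @ u)               # exact minimization in the v-block
        u = p / (K @ v)                 # exact minimization in the u-block
    return u[:, None] * K * v[None, :]  # transport plan

x = np.linspace(0, 1, 50)
C = (x[:, None] - x[None, :]) ** 2
p = np.exp(-(x - 0.3) ** 2 / 0.01); p /= p.sum()
q = np.exp(-(x - 0.7) ** 2 / 0.01); q /= q.sum()
print("entropic transport cost:", (sinkhorn(p, q, C) * C).sum())
```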
19.01.2021 Dmitrii M. Ostrovskii (University of Southern California)
Please note: At 4 pm due to time shift! Near-optimal model discrimination with non-disclosure (online talk)
26.01.2021 John Schoenmakers (WIAS Berlin)
From optimal martingales to randomized dual optimal stopping (online talk)
In this talk we study and classify dual optimal martingales in the dual formulation of optimal stopping problems. In this respect, we distinguish between optimal and surely optimal martingales. It is shown that the family of optimal and surely optimal martingales may generally be quite large. On the other hand, it is shown that the Doob martingale, i.e. the martingale part of the Snell envelope, is in a certain sense the most robust surely optimal martingale under random perturbations. This new insight has led to a novel randomized dual martingale minimization algorithm that does not require nested simulation. As a main feature, within a possibly large family of optimal martingales the algorithm efficiently sorts out a martingale that is close to the Doob martingale. As a result, one obtains in an efficient way a dual estimate for the optimal stopping problem with low variance. (Joint work with Denis Belomestny.)
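A toy sketch of the objects in this abstract (not the randomized algorithm itself; the binomial model and its parameters are illustrative assumptions): compute the Snell envelope Y by backward induction, accumulate its Doob martingale M along each path, and verify that max_t (Z_t - M_t) equals Y_0 on every single path, i.e. the Doob martingale is surely optimal and the corresponding dual estimator has zero variance.

```python
from itertools import product

# Bermudan put in a T-step binomial model (r = 0); parameters illustrative.
T, S0, strike, u, d = 5, 100.0, 100.0, 1.1, 0.9
p = (1.0 - d) / (u - d)                      # risk-neutral up-probability

def payoff(s):
    return max(strike - s, 0.0)

# Snell envelope Y[t][j] by backward induction (j = number of up-moves)
Y = [dict() for _ in range(T + 1)]
for j in range(T + 1):
    Y[T][j] = payoff(S0 * u**j * d**(T - j))
for t in reversed(range(T)):
    for j in range(t + 1):
        cont = p * Y[t + 1][j + 1] + (1 - p) * Y[t + 1][j]
        Y[t][j] = max(payoff(S0 * u**j * d**(t - j)), cont)

# Doob martingale M of Y along every path; dual value max_t (Z_t - M_t)
for path in product([0, 1], repeat=T):
    M, j = 0.0, 0
    duals = [payoff(S0)]                     # Z_0 - M_0 with M_0 = 0
    for t, step in enumerate(path):
        cont = p * Y[t + 1][j + 1] + (1 - p) * Y[t + 1][j]
        j += step
        M += Y[t + 1][j] - cont              # increment Y_{t+1} - E[Y_{t+1}|F_t]
        duals.append(payoff(S0 * u**j * d**(t + 1 - j)) - M)
    assert abs(max(duals) - Y[0][0]) < 1e-9  # surely optimal: zero variance
print("Y_0 =", Y[0][0])
```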
02.02.2021 Egor Gladin (Moscow Institute of Physics and Technology)
Solving convex min-min problems with smoothness and strong convexity in one variable group and small dimension of the other (online talk)
Logistic regression with a prior on one of the parameter groups is an example of a convex min-min problem with strong convexity in only one of the two variable groups. The talk is devoted to approaches to such problems that achieve linear convergence. The outer minimization problem is solved using Vaidya's cutting plane method, and the inner problem (smooth and strongly convex) is solved using Nesterov's fast gradient method. Due to the importance of machine learning applications, we also consider the case where the objective function is a sum of a large number of functions. In this case, a variance-reduced accelerated gradient algorithm is used instead of Nesterov's fast gradient method.
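The inner solver named above, in a minimal form (constant-momentum variant of Nesterov's fast gradient method for smooth, strongly convex objectives; the quadratic test problem is an illustrative assumption):

```python
import numpy as np

# Nesterov's fast gradient method (constant momentum) for an L-smooth,
# mu-strongly convex objective, tested on f(x) = 0.5 x'Ax - b'x.

def nesterov_fgm(grad, x0, L, mu, n_iter=300):
    beta = (np.sqrt(L) - np.sqrt(mu)) / (np.sqrt(L) + np.sqrt(mu))
    x = y = x0.copy()
    for _ in range(n_iter):
        x_new = y - grad(y) / L           # gradient step at extrapolated point
        y = x_new + beta * (x_new - x)    # momentum extrapolation
        x = x_new
    return x

rng = np.random.default_rng(0)
Q = rng.standard_normal((20, 20))
A = Q.T @ Q + np.eye(20)                  # strongly convex: eigenvalues >= 1
b = rng.standard_normal(20)
eig = np.linalg.eigvalsh(A)
x = nesterov_fgm(lambda z: A @ z - b, np.zeros(20), L=eig[-1], mu=eig[0])
print("final gradient norm:", np.linalg.norm(A @ x - b))  # linear convergence
```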
09.02.2021 Franz Besold (WIAS Berlin)
Adaptive weights community detection (online talk)
We study a likelihood-based approach to the stochastic block model, proposed by Kirill Efimov, Larisa Adamyan and Vladimir Spokoiny. We propose a modification of the algorithm that improves its rate of consistency.
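For context, a minimal stochastic block model experiment with a baseline spectral method (not the adaptive-weights likelihood procedure of the talk; all parameters are illustrative assumptions):

```python
import numpy as np

# Two-community stochastic block model and baseline spectral recovery.

rng = np.random.default_rng(0)
n, p_in, p_out = 200, 0.10, 0.02
labels = np.repeat([0, 1], n // 2)
prob = np.where(labels[:, None] == labels[None, :], p_in, p_out)
A = (rng.random((n, n)) < prob).astype(float)
A = np.triu(A, 1)
A = A + A.T                                   # symmetric, no self-loops

# The eigenvector of the second-largest adjacency eigenvalue splits the blocks
eigvals, eigvecs = np.linalg.eigh(A)
pred = (eigvecs[:, -2] > 0).astype(int)
acc = max(np.mean(pred == labels), np.mean(pred != labels))  # up to label flip
print("spectral clustering accuracy:", acc)
```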
16.02.2021 Alexandre Pannier (Imperial College London)
Large and moderate deviations for stochastic Volterra systems (online talk)
We provide a unified treatment of pathwise large and moderate deviations principles for a general class of multidimensional stochastic Volterra equations with singular kernels, not necessarily of convolution form. Our methodology is based on the weak convergence approach by Budhiraja, Dupuis and Ellis. We show in particular how this framework encompasses most rough volatility models used in mathematical finance, yields pathwise moderate deviations for the first time and generalises many recent results in the literature. This is joint work with Antoine Jacquier.
23.02.2021 Christian Bayer (WIAS Berlin)
A pricing BSPDE for rough volatility (online talk)
In this talk, we study option pricing problems for rough volatility models. As the framework is non-Markovian, the value function of a European option is not deterministic; rather, it is random and satisfies a backward stochastic partial differential equation (BSPDE). The existence and uniqueness of a weak solution are proved for general nonlinear BSPDEs with unbounded random leading coefficients, and connections with certain forward-backward stochastic differential equations are derived as well. These BSPDEs are then used to approximate American option prices. A deep learning-based method is also investigated for the numerical approximation of such BSPDEs and the associated non-Markovian pricing problems. Finally, examples of rough Bergomi type are computed numerically for both European and American options.
02.03.2021 Yunfei Huang (LMU München)
Advanced data analysis for traction force microscopy and data-driven discovery of physical equations
The plummeting cost of collecting and storing data and the increasingly available computational power in the last decade have led to the emergence of new data analysis approaches in various scientific fields. Frequently, the new statistical methodology is employed for analyzing data involving incomplete or unknown information. In this thesis, new statistical approaches are developed for improving the accuracy of traction force microscopy (TFM) and for data-driven discovery of physical equations.

TFM is a versatile method for the reconstruction of a spatial image of the traction forces exerted by cells on elastic gel substrates. The traction force field is calculated from a linear mechanical model connecting the measured substrate displacements with the sought-for cell-generated stresses in real or Fourier space, which is an inverse and ill-posed problem. This inverse problem is commonly solved using regularization methods. Here, we systematically test the performance of new regularization methods and of Bayesian inference for quantifying parameter uncertainty in TFM. We compare two classical schemes, L1- and L2-regularization, with three previously untested schemes, namely Elastic Net regularization, Proximal Gradient Lasso, and Proximal Gradient Elastic Net. We find that Elastic Net regularization, which combines L1 and L2 regularization, outperforms all other methods with regard to accuracy of traction reconstruction. Next, we develop two methods, Bayesian L2 regularization and Advanced Bayesian L2 regularization, for automatic, optimal L2 regularization. We further combine Bayesian L2 regularization with the computational speed of Fast Fourier Transform algorithms to develop a fully automated method for noise reduction and robust, standardized traction-force reconstruction that we call Bayesian Fourier transform traction cytometry (BFTTC). This method is made freely available as a software package with a graphical user interface for intuitive usage. Using synthetic and experimental data, we show that these Bayesian methods enable robust reconstruction of traction without requiring a difficult selection of regularization parameters for each data set.

Next, we employ our methodology developed for the solution of inverse problems for automated, data-driven discovery of ordinary differential equations (ODEs), partial differential equations (PDEs), and stochastic differential equations (SDEs). To find the equations governing a measured time-dependent process, we construct dictionaries of non-linear candidate equations. These candidate equations are evaluated using the measured data, which allows one to construct a likelihood function for the candidate equations. Optimization yields a linear inverse problem that is to be solved under a sparsity constraint. We combine Bayesian compressive sensing using Laplace priors with automated thresholding to develop a new approach, namely automatic threshold sparse Bayesian learning (ATSBL). ATSBL is a robust method to identify ODEs, PDEs, and SDEs involving Gaussian noise, which is also referred to as type I noise. We extensively test the method with synthetic datasets describing physical processes. For SDEs, we combine data-driven inference using ATSBL with a novel entropy-based heuristic for discarding data points with high uncertainty. Finally, we develop an automatic iterative sampling optimization technique akin to umbrella sampling. We thereby demonstrate that data-driven inference of SDEs can be substantially improved through feedback during the inference process if the stochastic process under investigation can be manipulated either experimentally or in simulations.
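The dictionary-based discovery step can be illustrated with a simpler relative of ATSBL, sequential thresholded least squares as used in SINDy (a minimal sketch; the system, noise level and threshold are illustrative assumptions):

```python
import numpy as np

# Recover dx/dt = a*x + b*x^3 from noisy data with a nonlinear dictionary and
# sequential thresholded least squares (STLSQ). Synthetic data throughout.

rng = np.random.default_rng(1)
t = np.linspace(0, 2, 400)
dt = t[1] - t[0]
a_true, b_true = -2.0, -1.0
x = np.empty_like(t)
x[0] = 1.5
for i in range(len(t) - 1):                  # Euler integration of the ODE
    x[i + 1] = x[i] + dt * (a_true * x[i] + b_true * x[i] ** 3)
dxdt = np.gradient(x, t) + 0.01 * rng.standard_normal(len(t))

Theta = np.column_stack([x**k for k in range(5)])  # dictionary: 1, x, ..., x^4

def stlsq(Theta, y, threshold=0.1, n_iter=10):
    xi = np.linalg.lstsq(Theta, y, rcond=None)[0]
    for _ in range(n_iter):
        small = np.abs(xi) < threshold       # prune small coefficients,
        xi[small] = 0.0                      # then refit the survivors
        xi[~small] = np.linalg.lstsq(Theta[:, ~small], y, rcond=None)[0]
    return xi

print("recovered coefficients:", stlsq(Theta, dxdt).round(3))  # ~[0,-2,0,-1,0]
```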
09.03.2021 Caroline Geiersbach (WIAS Berlin)
Stochastic approximation with applications to PDE-constrained optimization under uncertainty
16.03.2021 Kostas Papafitsoros (WIAS Berlin)
Optimization with learning-informed differential equation constraints and its applications
23.03.2021 n.n.

30.03.2021 n.n.

06.04.2021 n.n.

13.04.2021 Yangwen Sun (HU Berlin)
Please note: At 4 pm due to time shift! Graph-spanning ratio test with application to change-point detection problem
20.04.2021 Caroline Geiersbach (WIAS Berlin)
Stochastic approximation with applications to PDE-constrained optimization under uncertainty


last reviewed: March 30, 2021 by Christine Schneider