Research Group "Stochastic Algorithms and Nonparametric Statistics"
Seminar "Modern Methods in Applied Stochastics and Nonparametric Statistics" Winter Semester 19/20
08.10.2019 | Dr. Alexander Gasnikov
Introduction Lecture in Non-Convex Optimization
In this talk we survey modern numerical methods for non-convex optimization problems. In particular, we concentrate on gradient-type methods, high-order methods and randomization of sum-type methods. Most of the results are taken from papers published in the last 2-3 years.
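Illustration (not part of the talk): a minimal sketch of the gradient-type methods mentioned in the abstract, running plain gradient descent on a toy non-convex function until an approximately stationary point is reached. The objective, step size and tolerance are arbitrary choices for the example.

```python
import numpy as np

# Non-convex toy objective f(x) = x^4/4 - x^2 (two minima, a stationary point at 0)
def f(x):
    return 0.25 * x**4 - x**2

def grad_f(x):
    return x**3 - 2.0 * x

def gradient_descent(x0, step=0.05, tol=1e-6, max_iter=10_000):
    """Plain gradient descent until the gradient is below tol.

    For non-convex f this only guarantees an approximately stationary point,
    not a global minimum -- the typical convergence statement for
    gradient-type methods in non-convex optimization.
    """
    x = x0
    for _ in range(max_iter):
        g = grad_f(x)
        if abs(g) < tol:
            break
        x -= step * g
    return x

# Different starting points may end in different stationary points.
for x0 in (-2.0, 0.1, 3.0):
    x_star = gradient_descent(x0)
    print(f"start {x0:+.1f} -> stationary point {x_star:+.4f}, f = {f(x_star):.4f}")
```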
15.10.2019 | Alexandra Suvorikova
Shape-based domain adaptation
22.10.2019 | Dr. Oleg Butkovsky (WIAS Berlin)
Regularization by noise for SDEs and SPDEs with applications to numerical methods
29.10.2019 | Tim Jahn (Universität Frankfurt am Main)
Beyond the Bakushinskii veto
05.11.2019 | Roman Kravchenko (WIAS and HU Berlin)
Optimal transport for imaging
12.11.2019 | Dr. Olga Krivorotko (Russian Academy of Sciences, Novosibirsk)
Regularization of multi-parametric inverse problems for differential equations arising in immunology, epidemiology and economics
Mathematical models in immunology, epidemiology and economics based on mass balance laws are described by systems of nonlinear ordinary differential equations (ODE) or stochastic differential equations (SDE). The considered models depend on many parameters, such as the coefficients of the ODE/SDE, initial conditions, sources, etc., which play a key role in the predictive properties of the models. Most of these parameters are unknown or can only be roughly estimated. The inverse problem consists in identifying the parameters of the mathematical models from additional measurements of some components of the direct problem at fixed times. The considered inverse problems are ill-posed, i.e. their solutions can be non-unique and unstable with respect to data errors. An identifiability analysis is used to construct a regularization method for solving the inverse problems.
The inverse problems are formulated as minimization problems for a loss function. To find the global minimum, a combination of machine learning (ML), heuristic and deterministic approaches is implemented. ML methods such as artificial neural networks, support vector machines, stochastic algorithms, etc., identify the region of the global minimum, are easily parallelized and do not use features of the loss function. Deterministic gradient methods then locate the global minimum within this region with guaranteed accuracy. On the other hand, the loss function can be represented as a multi-scale tensor to which the tensor train (TT) decomposition can be applied; the TT method is easily parallelized and exploits the structure of the loss function. Confidence intervals for controlling the accuracy of the approximate solution of the inverse problem are constructed and analyzed. Numerical results for inverse problems for mathematical models of immunology, epidemiology and economics are presented and discussed.
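Illustration (not from the talk): a rough sketch of formulating ODE parameter identification as minimization of a least-squares loss, here fitting the parameters of a toy SIR-type epidemic model to noisy synthetic observations with scipy. The model, the data and the solver choices are assumptions made only for this example.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

# Toy SIR-type model: dS/dt = -beta*S*I, dI/dt = beta*S*I - gamma*I
def sir_rhs(t, y, beta, gamma):
    S, I = y
    return [-beta * S * I, beta * S * I - gamma * I]

def simulate(params, t_obs, y0):
    beta, gamma = params
    sol = solve_ivp(sir_rhs, (t_obs[0], t_obs[-1]), y0,
                    t_eval=t_obs, args=(beta, gamma), rtol=1e-8)
    return sol.y[1]  # observed quantity: infected fraction I(t)

# Synthetic "measurements": simulate with true parameters and add noise
rng = np.random.default_rng(0)
t_obs = np.linspace(0.0, 25.0, 30)
y0 = [0.99, 0.01]
true_params = (0.5, 0.1)
data = simulate(true_params, t_obs, y0) + 0.005 * rng.standard_normal(t_obs.size)

# Inverse problem: minimize the least-squares loss over (beta, gamma)
def residuals(params):
    return simulate(params, t_obs, y0) - data

fit = least_squares(residuals, x0=[1.0, 0.5], bounds=([0.0, 0.0], [5.0, 5.0]))
print("estimated (beta, gamma):", fit.x)
```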
19.11.2019 | Yangwen Sun (HU Berlin)
Online change-point detection for high-dimensional data using graphs
26.11.2019 | Franz Besold (WIAS Berlin)
Manifold clustering
03.12.2019 | No talk
10.12.2019 | Dr. Valeriy Avanesov (WIAS Berlin)
How to gamble with non-stationary X-armed bandits and have no regrets
In the X-armed bandit problem an agent sequentially interacts with an environment which yields a reward based on the vector input the agent provides. The agent's goal is to maximise the sum of these rewards over some number of time steps. The problem and its variations have been the subject of numerous studies, suggesting sub-linear and sometimes optimal strategies. The paper presented in this talk introduces a new variation of the problem: we consider an environment which can abruptly change its behaviour an unknown number of times. To that end we propose a novel strategy and prove that it attains sub-linear cumulative regret. Moreover, the obtained regret bound matches the best known bound for GP-UCB in the stationary case and approaches the minimax lower bound in the case of a highly smooth relation between an action and the corresponding reward. The theoretical result is supported by an experimental study.
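Illustration (not from the talk, stationary case only): a minimal GP-UCB-style sketch in which the agent repeatedly plays the action with the highest upper confidence bound under a Gaussian-process posterior over the reward function. The reward function, kernel, confidence parameter beta and candidate grid are arbitrary choices for the example.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

# Unknown reward function; the agent only sees noisy evaluations
def reward(x):
    return np.sin(3.0 * x) + 0.5 * x

candidates = np.linspace(0.0, 2.0, 200).reshape(-1, 1)  # discretized action set X
beta = 2.0        # width of the upper confidence bound (tuning parameter)
noise_std = 0.1

# Start from one random action, then act greedily w.r.t. the UCB index
X_hist = [candidates[rng.integers(len(candidates))]]
y_hist = [reward(X_hist[0][0]) + noise_std * rng.standard_normal()]

# Fixed kernel hyperparameters (optimizer=None) to keep the sketch simple
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3),
                              alpha=noise_std**2, optimizer=None)

for t in range(30):
    gp.fit(np.array(X_hist), np.array(y_hist))
    mean, std = gp.predict(candidates, return_std=True)
    ucb = mean + beta * std                  # optimism in the face of uncertainty
    x_next = candidates[np.argmax(ucb)]
    y_next = reward(x_next[0]) + noise_std * rng.standard_normal()
    X_hist.append(x_next)
    y_hist.append(y_next)

best_idx = int(np.argmax(y_hist))
print("best observed action:", X_hist[best_idx][0], "reward:", y_hist[best_idx])
```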
17.12.2019 | No talk
14.01.2020 | Dr. Alexander Gasnikov (MIPT) | Lecture room: ESH
An overview of distributed optimization
21.01.2020 | Dr. Pavel Dvurechensky (WIAS Berlin)
On the complexity of optimal transport problems
28.01.2020 | Anastasia Ivanova (MIPT)
Optimization methods for resource allocation problem
25.02.2020 | Håkan Hoel (RWTH Aachen)
Multilevel ensemble Kalman filtering algorithms
last reviewed: March 31, 2020 by Pavel Dvurechensky