Research Group "Stochastic Algorithms and Nonparametric Statistics"
Seminar "Modern Methods in Applied Stochastics and Nonparametric Statistics" Sommer Semester 2022
19.04.2022 | N.N. |

26.04.2022 | N.N. |

03.05.2022 | Jun.-Prof. Dr. Martin Redmann (Martin-Luther-Universität Halle-Wittenberg) |
Solving high-dimensional optimal stopping problems using model order reduction (hybrid talk)
Solving optimal stopping problems by backward induction in high dimensions is often very complex since the computation of conditional expectations is required. Typically, such computations are based on regression, a method that suffers from the curse of dimensionality. Therefore, the objective of this presentation is to establish dimension reduction schemes for large-scale asset price models and to solve related optimal stopping problems (e.g. Bermudan option pricing) in the reduced setting, where regression is feasible. We illustrate the benefit of our approach in several numerical experiments, in which Bermudan option prices are determined.
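As a point of reference for the regression step described above, the following is a minimal sketch of regression-based backward induction (a Longstaff-Schwartz-type scheme) for a one-dimensional Bermudan put. The geometric Brownian motion dynamics, strike, basis, and all parameter values are illustrative assumptions and merely stand in for the reduced-order asset model of the talk.

```python
# Hedged sketch: Longstaff-Schwartz-type backward induction for a Bermudan put.
# All model choices (GBM dynamics, strike, maturity, polynomial basis) are
# assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(0)
S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0   # assumed market parameters
n_steps, n_paths = 10, 50_000                        # exercise dates, Monte Carlo paths
dt = T / n_steps

# Simulate geometric Brownian motion paths between the exercise dates.
Z = rng.standard_normal((n_paths, n_steps))
S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * Z, axis=1))
S = np.concatenate([np.full((n_paths, 1), S0), S], axis=1)

payoff = lambda s: np.maximum(K - s, 0.0)            # Bermudan put payoff
value = payoff(S[:, -1])                             # cashflow if held to maturity

# Backward induction: regress discounted future cashflows on a polynomial basis
# to estimate the continuation value, then compare with immediate exercise.
for t in range(n_steps - 1, 0, -1):
    value *= np.exp(-r * dt)                         # discount one period
    itm = payoff(S[:, t]) > 0                        # regress on in-the-money paths only
    if itm.sum() < 4:
        continue
    X = np.vander(S[itm, t], 4)                      # cubic polynomial features
    coef, *_ = np.linalg.lstsq(X, value[itm], rcond=None)
    exercise = payoff(S[itm, t])
    stop = exercise > X @ coef                       # exercise where payoff beats continuation
    value[np.where(itm)[0][stop]] = exercise[stop]

price = np.exp(-r * dt) * value.mean()
print(f"Estimated Bermudan put price: {price:.3f}")
```

In high dimensions the regression step above is exactly what becomes infeasible, which is what motivates reducing the asset price model to a low-dimensional one before pricing.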
10.05.2022 | Prof. Vladimir Spokoiny (WIAS and HU Berlin) |
Laplace's approximation in high dimension (hybrid talk)

17.05.2022 |

24.05.2022 |

31.05.2022 | Prof. Vladimir Spokoiny (WIAS and HU Berlin) |
Laplace's approximation in high dimension, Part 2 (hybrid talk)

07.06.2022 |

14.06.2022 | Priv.-Doz. Dr. John Schoenmakers (WIAS Berlin) |
Dual randomization and empirical dual optimization for optimal stopping (online talk)

21.06.2022 | Grigory Malinovsky (KAUST) |
ProxSkip: Yes! Local gradient steps provably lead to communication acceleration! Finally! (online talk)
We introduce ProxSkip - a surprisingly simple and provably efficient method for minimizing the sum of a smooth and an expensive nonsmooth proximable function. The canonical approach to solving such problems is via the proximal gradient descent (ProxGD) algorithm, which is based on the evaluation of the gradient of the smooth part and the prox operator of the composite term in each iteration. In this work we are specifically interested in the regime in which the evaluation of prox is costly relative to the evaluation of the gradient, which is the case in many applications. ProxSkip allows for the expensive prox operator to be skipped in most iterations while preserving the iteration complexity. Our main motivation comes from federated learning, where evaluation of the gradient operator corresponds to taking a local GD step independently on all devices, and evaluation of prox corresponds to (expensive) communication in the form of gradient averaging. In this context, ProxSkip offers an effective acceleration of communication complexity. Unlike other local gradient-type methods, such as FedAvg, SCAFFOLD, S-Local-GD and FedLin, whose theoretical communication complexity is worse than, or at best matching, that of vanilla GD in the heterogeneous data regime, we obtain a provable and large improvement without any heterogeneity-bounding assumptions.
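To make the update rule concrete, here is a minimal sketch of a ProxSkip-style iteration on a toy composite problem: a shifted gradient step on the smooth part, with the (nominally expensive) prox applied only with probability p and a control variate h updated in between. The least-squares-plus-l1 objective, the step size, and the value of p are illustrative assumptions, not values from the paper.

```python
# Hedged sketch: a ProxSkip-style iteration for min_x 0.5*||Ax - b||^2 + lam*||x||_1.
# The objective, step size and probability p are illustrative assumptions; the prox
# (soft thresholding) is cheap here and only plays the role of the "expensive"
# operator (a communication round, in the federated learning reading).
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 20))
b = rng.standard_normal(50)
lam = 0.1

grad_f = lambda x: A.T @ (A @ x - b)                                    # gradient of the smooth part
prox = lambda x, s: np.sign(x) * np.maximum(np.abs(x) - s * lam, 0.0)   # prox of s*lam*||.||_1

L = np.linalg.norm(A, 2) ** 2          # smoothness constant of f
gamma, p = 1.0 / L, 0.2                # step size; prox is taken only with probability p

x = np.zeros(20)
h = np.zeros(20)                       # control variate correcting the skipped prox steps
for _ in range(2000):
    x_hat = x - gamma * (grad_f(x) - h)              # shifted gradient step
    if rng.random() < p:                             # occasionally take the expensive prox...
        x_new = prox(x_hat - (gamma / p) * h, gamma / p)
    else:                                            # ...and skip it the rest of the time
        x_new = x_hat
    h += (p / gamma) * (x_new - x_hat)               # control-variate update
    x = x_new

obj = 0.5 * np.linalg.norm(A @ x - b) ** 2 + lam * np.abs(x).sum()
print(f"Final composite objective: {obj:.4f}")
```

In the federated-learning reading of the abstract, the prox call plays the role of a communication round, so taking it only with probability p is what yields the communication savings.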
28.06.2022 | Karsten Tabelow (WIAS Berlin) |
In-vivo tissue properties from magnetic resonance data (hybrid talk)

05.07.2022 | Alexander Marx (WIAS Berlin) |
Random interactions in the mean-field Ising model (hybrid talk)

12.07.2022 |

19.07.2022 |

26.07.2022 |

02.08.2022 | Vaios Laschos (WIAS Berlin) |
Risk-sensitive partially observable Markov decision processes as fully observable multivariate utility optimization problems

09.08.2022 |

16.08.2022 | Mathias Staudigl (Maastricht University) |
Stochastic relaxed inertial forward-backward-forward splitting for monotone inclusions in Hilbert spaces

23.08.2022 |

30.08.2022 | Adeline Fermanian (Mines ParisTech) |
Framing RNN as a kernel method: A neural ODE approach
Building on the interpretation of a recurrent neural network (RNN) as a continuous-time neural differential equation, we show, under appropriate conditions, that the solution of an RNN can be viewed as a linear function of a specific feature set of the input sequence, known as the signature. This connection allows us to frame an RNN as a kernel method in a suitable reproducing kernel Hilbert space. As a consequence, we obtain theoretical guarantees on generalization and stability for a large class of recurrent networks. Our results are illustrated on simulated datasets.
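The "specific feature set of the input sequence, known as the signature" can be illustrated with a small numerical experiment: below is a hedged sketch that computes the depth-2 signature of piecewise-linear paths by hand and fits a linear (ridge) regression on these features, as a stand-in for the kernel/RKHS view of the talk. The toy paths, the regression target, and the truncation depth are assumptions made for illustration.

```python
# Hedged sketch: depth-2 signature features of piecewise-linear paths, used in a
# plain ridge regression. Toy paths, target and truncation depth are assumptions.
import numpy as np

def signature_depth2(path):
    """Depth-2 signature of a piecewise-linear path given as an array (length, dim)."""
    inc = np.diff(path, axis=0)                              # increments dX_k
    level1 = inc.sum(axis=0)                                 # S^i = X^i_T - X^i_0
    run = np.cumsum(np.vstack([np.zeros(path.shape[1]), inc]), axis=0)[:-1]  # X_{k-1} - X_0
    # S^{ij} = sum_k (X_{k-1} - X_0)^i dX^j_k + 0.5 dX^i_k dX^j_k (exact for linear pieces)
    level2 = run.T @ inc + 0.5 * (inc.T @ inc)
    return np.concatenate([level1, level2.ravel()])

rng = np.random.default_rng(2)
n, length, dim = 200, 50, 2
paths = rng.standard_normal((n, length, dim)).cumsum(axis=1)          # toy random-walk paths
# Target: product of the total increments of both coordinates -- by the shuffle
# identity this is exactly a linear functional of the depth-2 signature.
y = (paths[:, -1, 0] - paths[:, 0, 0]) * (paths[:, -1, 1] - paths[:, 0, 1])

X = np.stack([signature_depth2(p) for p in paths])
alpha = 1e-6                                                          # small ridge penalty
w = np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ y)
rmse = np.sqrt(np.mean((X @ w - y) ** 2))
print(f"Linear fit on signature features, train RMSE: {rmse:.2e}")
```

The chosen target is exactly a linear functional of the depth-2 signature, so the linear fit is essentially exact here; generic functionals of the path are only approximated as the truncation depth grows, which is the regime the RNN result concerns.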
06.09.2022 |

13.09.2022 |

20.09.2022 |

27.09.2022 | Dr. Mao Fabrice Djete (École Polytechnique, Paris) |
Non-regular McKean--Vlasov equations and calibration problem in local stochastic volatility models (online talk)
In this talk, motivated by the calibration problem in local stochastic volatility models, we will investigate some McKean--Vlasov equations beyond the usual requirement of continuity of the coefficients in the measure variable for the Wasserstein topology. We will first provide an existence result for this type of McKean--Vlasov equation and explain the main idea behind the proof. We will then show a particle-system approximation for this type of equation, a result almost never rigorously proven in the literature in this context.
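For orientation, the particle-system approximation mentioned in the abstract can be illustrated in the classical regular case, where the coefficients depend continuously on the law, here simply through its mean. The sketch below simulates such a system with an Euler scheme; the drift, noise level, and initial law are illustrative assumptions, and the non-regular, conditional-expectation-type dependence arising in local stochastic volatility calibration is precisely what the talk addresses beyond this setting.

```python
# Hedged sketch: Euler particle-system approximation of a McKean--Vlasov SDE in the
# regular case dX_t = -(X_t - E[X_t]) dt + sigma dW_t, where the law enters only
# through its mean. All parameters and the initial law are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)
n_particles, n_steps, T, sigma = 10_000, 200, 1.0, 0.3
dt = T / n_steps

X = 1.0 + rng.standard_normal(n_particles)        # particles from the assumed initial law N(1, 1)
for _ in range(n_steps):
    mean_field = X.mean()                          # empirical measure replaces E[X_t]
    X = X - (X - mean_field) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_particles)

print(f"Terminal empirical mean {X.mean():.3f}, std {X.std():.3f}")
```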
last reviewed: October 12, 2022 by Christine Schneider