Publications
Monographs

P. Friz, M. Hairer, A Course on Rough Paths: With an Introduction to Regularity Structures, Universitext, Springer International Publishing, Basel, 2020, 346 pages, (Monograph Published), DOI 10.1007/978-3-030-41556-3 .

J. Polzehl, K. Tabelow, Magnetic Resonance Brain Imaging: Modeling and Data Analysis using R, Series: Use R!, Springer International Publishing, Cham, 2019, 231 pages, (Monograph Published), DOI 10.1007/978-3-030-29184-6 .
Abstract
This book discusses the modeling and analysis of magnetic resonance imaging (MRI) data acquired from the human brain. The data processing pipelines described rely on R. The book is intended for readers from two communities: Statisticians who are interested in neuroimaging and looking for an introduction to the acquired data and typical scientific problems in the field; and neuroimaging students wanting to learn about the statistical modeling and analysis of MRI data. Offering a practical introduction to the field, the book focuses on those problems in data analysis for which implementations within R are available. It also includes fully worked examples and as such serves as a tutorial on MRI analysis with R, from which the readers can derive their own data processing scripts. The book starts with a short introduction to MRI and then examines the process of reading and writing common neuroimaging data formats to and from the R session. The main chapters cover three common MR imaging modalities and their data modeling and analysis problems: functional MRI, diffusion MRI, and Multi-Parameter Mapping. The book concludes with extended appendices providing details of the nonparametric statistics used and the resources for R and MRI data. The book also addresses the issues of reproducibility and topics like data organization and description, as well as open data and open science. It relies solely on dynamic report generation with knitr and uses neuroimaging data publicly available in data repositories. The PDF was created by executing the R code in the chunks and then running LaTeX, which means that almost all figures, numbers, and results were generated while producing the PDF from the sources. 
P. Friz, W. König, Ch. Mukherjee, S. Olla, eds., Probability and Analysis in Interacting Physical Systems. In Honor of S.R.S. Varadhan, Berlin, August 2016, volume 283 of Springer Proceedings in Mathematics & Statistics, Springer International Publishing, Cham, 2019, 294 pages, (Collection Published), DOI 10.1007/978-3-030-15338-0 .
Articles in Refereed Journals

O. Butkovsky, A. Kulik, M. Scheutzow, Generalized couplings and ergodic rates for SPDEs and other Markov models, The Annals of Applied Probability, 30 (2020), pp. 1–39, DOI 10.1214/19-AAP1485 .
Abstract
We establish verifiable general sufficient conditions for exponential or subexponential ergodicity of Markov processes that may lack the strong Feller property. We apply the obtained results to show exponential ergodicity of a variety of nonlinear stochastic partial differential equations with additive forcing, including 2D stochastic Navier–Stokes equations. Our main tool is a new version of the generalized coupling method. 
N. Tapia, L. Zambotti, The geometry of the space of branched rough paths, Proceedings of the London Mathematical Society. Third Series, 121 (2020), pp. 220–251, DOI 10.1112/plms.12311 .
Abstract
We construct an explicit transitive free action of a Banach space of Hölder functions on the space of branched rough paths, which yields in particular a bijection between these two spaces. This endows the space of branched rough paths with the structure of a principal homogeneous space over a Banach space and allows one to characterize its automorphisms. The construction is based on the Baker–Campbell–Hausdorff formula, on a constructive version of the Lyons–Victoir extension theorem and on the Hairer–Kelly map, which allows us to describe branched rough paths in terms of anisotropic geometric rough paths. 
S. Athreya, O. Butkovsky, L. Mytnik, Strong existence and uniqueness for stable stochastic differential equations with distributional drift, The Annals of Probability, 48 (2020), pp. 178–210, DOI 10.1214/19-AOP1358 .

D. Belomestny, M. Kaledin, J.G.M. Schoenmakers, Semitractability of optimal stopping problems via a weighted stochastic mesh algorithm, Mathematical Finance. An International Journal of Mathematics, Statistics and Financial Economics, published online on 27.05.2020, DOI 10.1111/mafi.12271 .
Abstract
In this article we propose a Weighted Stochastic Mesh (WSM) algorithm for approximating the value of discrete and continuous time optimal stopping problems. It is shown that in the discrete time case the WSM algorithm leads to semitractability of the corresponding optimal stopping problem in the sense that its complexity is bounded in order by $\varepsilon^{-4}\log^{d+2}(1/\varepsilon)$ with $d$ being the dimension of the underlying Markov chain. Furthermore, we study the WSM approach in the context of continuous time optimal stopping problems and derive the corresponding complexity bounds. Although we cannot prove semitractability in this case, our bounds turn out to be the tightest ones among the complexity bounds known in the literature. We illustrate our theoretical findings by a numerical example. 
D. Belomestny, J.G.M. Schoenmakers, V. Spokoiny, B. Zharkynbay, Optimal stopping via reinforced regression, Communications in Mathematical Sciences, 18 (2020), pp. 109–121, DOI 10.4310/CMS.2020.v18.n1.a5 .
Abstract
In this note we propose a new approach towards solving numerically optimal stopping problems via boosted regression based Monte Carlo algorithms. The main idea of the method is to boost standard linear regression algorithms in each backward induction step by adding new basis functions based on previously estimated continuation values. The proposed methodology is illustrated by several numerical examples from finance. 
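To make the idea above concrete, here is a minimal sketch (not the authors' implementation) of regression-based backward induction for a toy Bermudan put under Black–Scholes, in which the continuation value fitted at the later exercise date is reused as an extra basis function in the next regression; all parameters and the quadratic basis are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy Bermudan put under Black-Scholes; parameters chosen for illustration only
S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
n_steps, n_paths = 10, 20000
dt = T / n_steps
disc = np.exp(-r * dt)

# simulate geometric Brownian motion paths at the exercise dates
Z = rng.standard_normal((n_paths, n_steps))
S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * Z, axis=1))

def payoff(s):
    return np.maximum(K - s, 0.0)

# Backward induction: regress discounted cashflows on basis functions.
# The "boosting" step adds the continuation value estimated at the previous
# (later) exercise date as an additional regressor.
cash = payoff(S[:, -1])
prev_cont = np.zeros(n_paths)
for t in range(n_steps - 2, -1, -1):
    cash = disc * cash
    x = S[:, t]
    basis = np.column_stack([np.ones_like(x), x, x**2, prev_cont])
    coef, *_ = np.linalg.lstsq(basis, cash, rcond=None)
    cont = basis @ coef                     # estimated continuation value
    exercise_value = payoff(x)
    stop = exercise_value > np.maximum(cont, 0.0)
    cash = np.where(stop, exercise_value, cash)
    prev_cont = cont
price = disc * cash.mean()
print(f"Bermudan put price estimate: {price:.3f}")
```

Here the reinforcement is reduced to a single extra regressor per step; the paper iterates this construction with richer basis families and reports several numerical examples from finance.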
D. Belomestny, J.G.M. Schoenmakers, Optimal stopping of McKean–Vlasov diffusions via regression on particle systems, SIAM Journal on Control and Optimization, 58 (2020), pp. 529–550, DOI 10.1137/18M1195590 .
Abstract
In this note we consider the problem of using regression on interacting particles to compute conditional expectations for McKean–Vlasov SDEs. We prove a general result on the convergence of linear regression algorithms and establish the corresponding rates of convergence. Applications to optimal stopping and variance reduction are considered. 
J. Diehl, K. Ebrahimi-Fard, N. Tapia, Time-warping invariants of multidimensional time series, Acta Applicandae Mathematicae. An International Survey Journal on Applying Mathematics and Mathematical Applications, (2020), published online on 14.05.2020, DOI 10.1007/s10440-020-00333-x .
Abstract
In data science, one is often confronted with a time series representing measurements of some quantity of interest. Usually, in a first step, features of the time series need to be extracted. These are numerical quantities that aim to succinctly describe the data and to dampen the influence of noise. In some applications, these features are also required to satisfy some invariance properties. In this paper, we concentrate on time-warping invariants. We show that these correspond to a certain family of iterated sums of the increments of the time series, known as quasisymmetric functions in the mathematics literature. We present these invariant features in an algebraic framework, and we develop some of their basic properties. 
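The invariance discussed above is easy to verify numerically: iterated sums of increments are unchanged when entries of the series are repeated, since a discrete time warping only inserts zero increments. A small sketch (function names are ours, and only the first two levels are computed):

```python
import numpy as np

def iterated_sums(x):
    """First two levels of iterated sums of increments of a 1-d series:
    s1 = sum_i dx_i and s2 = sum_{i<j} dx_i dx_j (quasisymmetric-function flavor)."""
    dx = np.diff(np.asarray(x, dtype=float))
    s1 = float(dx.sum())
    prefix = np.concatenate([[0.0], np.cumsum(dx)[:-1]])  # sum of increments before j
    s2 = float((prefix * dx).sum())
    return s1, s2

def time_warp(x, reps):
    """Repeat entry x[i] reps[i] times: a discrete time warping, inserting zero increments."""
    return [v for v, r in zip(x, reps) for _ in range(r)]

x = [0.0, 1.0, 0.5, 2.0]
warped = time_warp(x, [1, 3, 2, 1])
print(iterated_sums(x), iterated_sums(warped))  # the two pairs coincide
```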
D. Kamzolov, P. Dvurechensky, A. Gasnikov, Universal intermediate gradient method for convex problems with inexact oracle, Optimization Methods & Software, published online on 17.01.2020, DOI 10.1080/10556788.2019.1711079 .

Y.Y. Park, J. Polzehl, S. Chatterjee, A. Brechmann, M. Fiecas, Semiparametric modeling of time-varying activation and connectivity in task-based fMRI data, Computational Statistics & Data Analysis, 150 (2020), pp. 107006/1–107006/14, DOI 10.1016/j.csda.2020.107006 .
Abstract
In functional magnetic resonance imaging (fMRI), there is rising evidence that time-varying functional connectivity, or dynamic functional connectivity (dFC), which measures changes in the synchronization of brain activity, provides additional information on brain networks not captured by time-invariant (i.e., static) functional connectivity. While there have been many developments for statistical models of dFC in resting-state fMRI, there remains a gap in the literature on how to simultaneously model both dFC and time-varying activation when the study participants are undergoing experimental tasks designed to probe a cognitive process of interest. A method is proposed to estimate dFC between two regions of interest (ROIs) in task-based fMRI where the activation effects are also allowed to vary over time. The proposed method, called TVAAC (time-varying activation and connectivity), uses penalized splines to model both time-varying activation effects and time-varying functional connectivity and uses the bootstrap for statistical inference. Simulation studies show that TVAAC can estimate both static and time-varying activation and functional connectivity, while ignoring time-varying activation effects would lead to poor estimation of dFC. An empirical illustration is provided by applying TVAAC to analyze two subjects from an event-related fMRI learning experiment. 
N. Puchkin, V. Spokoiny, An adaptive multiclass nearest neighbor classifier, ESAIM. Probability and Statistics, 24 (2020), pp. 69–99, DOI 10.1051/ps/2019021 .

Ch. Bayer, D. Belomestny, M. Redmann, S. Riedel, J.G.M. Schoenmakers, Solving linear parabolic rough partial differential equations, Journal of Mathematical Analysis and Applications, 490 (2020), 124236, DOI 10.1016/j.jmaa.2020.124236 .
Abstract
We study linear rough partial differential equations in the setting of [Friz and Hairer, Springer, 2014, Chapter 12]. More precisely, we consider a linear parabolic partial differential equation driven by a deterministic rough path W of Hölder regularity α with ⅓ < α ≤ ½. Based on a stochastic representation of the solution of the rough partial differential equation, we propose a regression Monte Carlo algorithm for spatio-temporal approximation of the solution. We provide a full convergence analysis of the proposed approximation method, which essentially relies on new bounds for the higher order derivatives of the solution in space. Finally, a comprehensive simulation study showing the applicability of the proposed algorithm is presented. 
Ch. Bayer, Ch.B. Hammouda, R. Tempone, Hierarchical adaptive sparse grids and quasi-Monte Carlo for option pricing under the rough Bergomi model, Quantitative Finance, published online on 20.04.2020, DOI 10.1080/14697688.2020.1744700 .
Abstract
The rough Bergomi (rBergomi) model, introduced recently in Bayer et al. [Pricing under rough volatility. Quant. Finance, 2016, 16(6), 887–904], is a promising rough volatility model in quantitative finance. It is a parsimonious model depending on only three parameters, and yet remarkably fits empirical implied volatility surfaces. In the absence of analytical European option pricing methods for the model, and due to the non-Markovian nature of the fractional driver, the prevalent option is to use Monte Carlo (MC) simulation for pricing. Despite recent advances in the MC method in this context, pricing under the rBergomi model is still a time-consuming task. To overcome this issue, we have designed a novel hierarchical approach based on: (i) adaptive sparse grids quadrature (ASGQ), and (ii) quasi-Monte Carlo (QMC). Both techniques are coupled with a Brownian bridge construction and a Richardson extrapolation on the weak error. By uncovering the available regularity, our hierarchical methods demonstrate substantial computational gains with respect to the standard MC method. They reach a sufficiently small relative error tolerance in the price estimates across different parameter constellations, even for very small values of the Hurst parameter. Our work opens a new research direction in this field, i.e. to investigate the performance of methods other than Monte Carlo for pricing and calibrating under the rBergomi model. 
Ch. Bayer, R.F. Tempone, S. Wolfers, Pricing American options by exercise rate optimization, Quantitative Finance, published online on 07.07.2020, DOI 10.1080/14697688.2020.1750678 .
Abstract
We present a novel method for the numerical pricing of American options based on Monte Carlo simulation and the optimization of exercise strategies. Previous solutions to this problem either explicitly or implicitly determine so-called optimal exercise regions, which consist of points in time and space at which a given option is exercised. In contrast, our method determines the exercise rates of randomized exercise strategies. We show that the supremum of the corresponding stochastic optimization problem provides the correct option price. By integrating analytically over the random exercise decision, we obtain an objective function that is differentiable with respect to perturbations of the exercise rate even for finitely many sample paths. The global optimum of this function can be approached gradually when starting from a constant exercise rate. Numerical experiments on vanilla put options in the multivariate Black–Scholes model and a preliminary theoretical analysis underline the efficiency of our method, both with respect to the number of time-discretization steps and the required number of degrees of freedom in the parametrization of the exercise rates. Finally, we demonstrate the flexibility of our method through numerical experiments on max-call options in the classical Black–Scholes model, and vanilla put options in both the Heston model and the non-Markovian rough Bergomi model. 
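As a toy illustration of the randomized-strategy idea (not the authors' code): restrict to a constant exercise rate λ, integrate analytically over the random exercise decision on a fixed set of simulated paths, and maximize the resulting smooth function of λ by grid search; all parameters are invented:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy Bermudan-style put under Black-Scholes, exercisable at n_steps dates
S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
n_steps, n_paths = 50, 20000
dt = T / n_steps
t = dt * np.arange(1, n_steps + 1)

Z = rng.standard_normal((n_paths, n_steps))
S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * Z, axis=1))
G = np.maximum(K - S, 0.0) * np.exp(-r * t)     # discounted payoff at each date

def value(lam):
    """Expected discounted payoff of the strategy exercising with constant
    rate lam, averaged analytically over the random exercise decision."""
    p = 1.0 - np.exp(-lam * dt)                 # exercise prob per date, given survival
    surv = np.concatenate([[1.0], np.cumprod(np.full(n_steps - 1, 1.0 - p))])
    w = surv * p                                # prob of exercising exactly at date k
    w[-1] += surv[-1] * (1.0 - p)               # if never exercised, settle at maturity
    return float((G * w).mean(axis=0).sum())

lams = np.linspace(0.0, 5.0, 51)
vals = [value(lam) for lam in lams]
best = lams[int(np.argmax(vals))]
print(f"best constant rate {best:.2f}, value {max(vals):.3f}")
```

At λ = 0 the strategy never exercises early, so the value reduces to the European price estimate; the paper parametrizes time- and state-dependent rates and optimizes them with gradient methods rather than a grid search.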
P. Dvurechensky, E. Gorbunov, A. Gasnikov, An accelerated directional derivative method for smooth stochastic convex optimization, European Journal of Operational Research, (2020), published online on 20.08.2020, DOI 10.1016/j.ejor.2020.08.027 .
Abstract
We consider smooth stochastic convex optimization problems in the context of algorithms which are based on directional derivatives of the objective function. This context can be considered as an intermediate one between derivative-free optimization and gradient-based optimization. We assume that at any given point and for any given direction, a stochastic approximation for the directional derivative of the objective function at this point and in this direction is available with some additive noise. The noise is assumed to be of an unknown nature, but bounded in absolute value. We underline that we consider directional derivatives in any direction, as opposed to coordinate descent methods which use only derivatives in coordinate directions. For this setting, we propose a non-accelerated and an accelerated directional derivative method and provide their complexity bounds. Even though our algorithms do not use gradient information, our non-accelerated algorithm has a complexity bound which is, up to a factor logarithmic in the problem dimension, similar to the complexity bound of gradient-based algorithms. Our accelerated algorithm has a complexity bound which coincides with the complexity bound of the accelerated gradient-based algorithm up to a factor of the square root of the problem dimension, whereas for existing directional derivative methods this factor is of the order of the problem dimension. We also extend these results to strongly convex problems. Finally, we consider derivative-free optimization as a particular case of directional derivative optimization with noise in the directional derivative and obtain complexity bounds for non-accelerated and accelerated derivative-free methods. Complexity bounds for these algorithms inherit the gain in the dimension-dependent factors from our directional derivative methods. 
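The non-accelerated scheme can be caricatured in a few lines: draw a random direction, query a noisy directional derivative, and step along that direction. A sketch on a toy quadratic (step size, noise level and iteration count are our choices, not the paper's tuned parameters):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5                                    # problem dimension

def f(x):
    return 0.5 * float(x @ x)            # toy smooth convex objective

def dir_derivative_oracle(x, e, noise_level=1e-3):
    """Noisy directional derivative <grad f(x), e>; noise bounded in absolute value."""
    return float(x @ e) + noise_level * (2 * rng.random() - 1)

x = np.ones(n)
alpha = 1.0                              # step size, chosen for this toy problem
for _ in range(400):
    e = rng.standard_normal(n)
    e /= np.linalg.norm(e)               # uniform random direction on the sphere
    g = dir_derivative_oracle(x, e)
    x = x - alpha * g * e                # move along the sampled direction
print(f"f(x0) = {f(np.ones(n)):.3f}, f(x_final) = {f(x):.6f}")
```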
P. Friz, T. Nilssen, W. Stannat, Existence, uniqueness and stability of semilinear rough partial differential equations, Journal of Differential Equations, 268 (2020), pp. 1686–1721, DOI 10.1016/j.jde.2019.09.033 .

O. Butkovsky, L. Mytnik, Regularization by noise and flows of solutions for a stochastic heat equation, The Annals of Probability, 47 (2019), pp. 165–212.

M. Coghi, B. Gess, Stochastic nonlinear Fokker–Planck equations, Nonlinear Analysis. An International Mathematical Journal, 187 (2019), pp. 259–278, DOI 10.1016/j.na.2019.05.003 .
Abstract
The existence and uniqueness of measure-valued solutions to stochastic nonlinear, nonlocal Fokker–Planck equations is proven. This type of stochastic PDE is shown to arise in the mean field limit of weakly interacting diffusions with common noise. The uniqueness of solutions is obtained without any higher moment assumption on the solution by means of a duality argument to a backward stochastic PDE. 
P. Pigato, Extreme at-the-money skew in a local volatility model, Finance and Stochastics, 23 (2019), pp. 827–859, DOI 10.1007/s00780-019-00406-2 .

D.R. Baimurzina, A. Gasnikov, E.V. Gasnikova, P. Dvurechensky, E.I. Ershov, M.B. Kubentaeva, A.A. Lagunovskaya, Universal method of searching for equilibria and stochastic equilibria in transportation networks, Computational Mathematics and Mathematical Physics, 59 (2019), pp. 19–33.

H. Bessaih, M. Coghi, F. Flandoli, Mean field limit of interacting filaments for 3D Euler equations, Journal of Statistical Physics, 174 (2019), pp. 562–578, DOI 10.1007/s10955-018-2189-4 .

M.F. Callaghan, A. Lutti, J. Ashburner, E. Balteau, N. Corbin, B. Draganski, G. Helms, F. Kherif, T. Leutritz, S. Mohammadi, Ch. Phillips, E. Reimer, L. Ruthotto, M. Seif, K. Tabelow, G. Ziegler, N. Weiskopf, Example dataset for the hMRI toolbox, Data in Brief, 25 (2019), pp. 104132/1–104132/6, DOI 10.1016/j.dib.2019.104132 .

E.A. Vorontsova, A. Gasnikov, E.A. Gorbunov, P. Dvurechensky, Accelerated gradient-free optimization methods with a non-Euclidean proximal operator, Automation and Remote Control, 80 (2019), pp. 1487–1501.

C. Améndola, P. Friz, B. Sturmfels, Varieties of signature tensors, Forum of Mathematics. Sigma, 7 (2019), pp. e10/1–e10/54, DOI 10.1017/fms.2019.3 .
Abstract
The signature of a parametric curve is a sequence of tensors whose entries are iterated integrals. This construction is central to the theory of rough paths in stochastic analysis. It is examined here through the lens of algebraic geometry. We introduce varieties of signature tensors for both deterministic paths and random paths. For the former, we focus on piecewise linear paths, on polynomial paths, and on varieties derived from free nilpotent Lie groups. For the latter, we focus on Brownian motion and its mixtures. 
A. Lejay, P. Pigato, Maximum likelihood drift estimation for a threshold diffusion, Scandinavian Journal of Statistics, published online on 23.10.2019, DOI 10.1111/sjos.12417 .
Abstract
We study the maximum likelihood estimator of the drift parameters of a stochastic differential equation, with both drift and diffusion coefficients constant on the positive and negative axis, yet discontinuous at zero. This threshold diffusion is called the drifted oscillating Brownian motion. The asymptotic behavior of the positive and negative occupation times governs that of the estimators. Unlike most known results in the literature, we do not restrict ourselves to the ergodic framework: indeed, depending on the signs of the drift, the process may be ergodic, transient or null recurrent. For each regime, we establish whether or not the estimators are consistent; if they are, we prove the convergence in long time of the properly rescaled difference of the estimators towards a normal or mixed normal distribution. These theoretical results are backed by numerical simulations. 
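A simulation sketch of the ergodic case (Euler scheme; a simplified version of the occupation-time-normalized drift estimators studied in the paper, with invented parameters):

```python
import numpy as np

rng = np.random.default_rng(0)

# Drifted oscillating Brownian motion: drift and volatility switch at zero.
mu_plus, mu_minus = -1.0, 1.0        # ergodic regime: drift pushes towards zero
sig_plus, sig_minus = 1.0, 0.8
dt, n = 0.01, 200_000                # horizon T = 2000

x = np.empty(n + 1)
x[0] = 0.0
dW = np.sqrt(dt) * rng.standard_normal(n)
for i in range(n):
    if x[i] > 0.0:
        mu, sig = mu_plus, sig_plus
    else:
        mu, sig = mu_minus, sig_minus
    x[i + 1] = x[i] + mu * dt + sig * dW[i]

pos = x[:-1] > 0.0
dx = np.diff(x)
# MLE-type estimators: sum of increments normalized by occupation time per side
mu_plus_hat = dx[pos].sum() / (pos.sum() * dt)
mu_minus_hat = dx[~pos].sum() / ((~pos).sum() * dt)
print(f"mu+ estimate: {mu_plus_hat:.3f}, mu- estimate: {mu_minus_hat:.3f}")
```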
D. Belomestny, R. Hildebrand, J.G.M. Schoenmakers, Optimal stopping via pathwise dual empirical maximisation, Applied Mathematics and Optimization. An International Journal with Applications to Stochastics, 79 (2019), pp. 715–741, DOI 10.1007/s00245-017-9454-9 .
Abstract
The optimal stopping problem arising in the pricing of American options can be tackled by the so-called dual martingale approach. In this approach, a dual problem is formulated over the space of martingales. A feasible solution of the dual problem yields an upper bound for the solution of the original primal problem. In practice, the optimization is performed over a finite-dimensional subspace of martingales. A sample of paths of the underlying stochastic process is produced by a Monte Carlo simulation, and the expectation is replaced by the empirical mean. As a rule, the resulting optimization problem, which can be written as a linear program, yields a martingale such that the variance of the obtained estimator can be large. In order to decrease this variance, a penalizing term can be added to the objective function of the pathwise optimization problem. In this paper, we provide a rigorous analysis of the optimization problems obtained by adding different penalty functions. In particular, a convergence analysis implies that it is better to minimize the empirical maximum instead of the empirical mean. Numerical simulations confirm the variance reduction effect of the new approach. 
Y. Bruned, I. Chevyrev, P. Friz, R. Preiss, A rough path perspective on renormalization, Journal of Functional Analysis, 277 (2019), pp. 108283/1–108283/60, DOI 10.1016/j.jfa.2019.108283 .
Abstract
We develop the algebraic theory of rough path translation. Particular attention is given to the case of branched rough paths, whose underlying algebraic structure (Connes–Kreimer, Grossman–Larson) makes it a useful model case of a regularity structure in the sense of Hairer. Pre-Lie structures are seen to play a fundamental role which allows a direct understanding of the translated (i.e. renormalized) equation under consideration. This construction is also novel with regard to the algebraic renormalization theory for regularity structures due to Bruned–Hairer–Zambotti (2016), the links with which are discussed in detail. 
I. Chevyrev, P. Friz, Canonical RDEs and general semimartingales as rough paths, The Annals of Probability, 47 (2019), pp. 420–463.

K. Efimov, L. Adamyan, V. Spokoiny, Adaptive nonparametric clustering, IEEE Transactions on Information Theory, 65 (2019), pp. 4875–4892, DOI 10.1109/TIT.2019.2903113 .
Abstract
This paper presents a new approach to nonparametric cluster analysis called adaptive weights clustering. The method is fully adaptive and does not require specifying the number of clusters or their structure. The clustering results are not sensitive to noise and outliers, and the procedure is able to recover different clusters with sharp edges or manifold structure. The method is also scalable and computationally feasible. Our intensive numerical study shows state-of-the-art performance of the method in various artificial examples and applications to text data. The idea of the method is to identify the clustering structure by checking at different points and for different scales for departure from local homogeneity. The proposed procedure describes the clustering structure in terms of weights $w_{ij}$, each of which measures the degree of local inhomogeneity for two neighboring local clusters using statistical tests of “no gap” between them. The procedure starts from a very local scale, and then the parameter of locality grows by some factor at each step. We also provide a rigorous theoretical study of the procedure and state its optimal sensitivity to deviations from local homogeneity. 
A. Gasnikov, P. Dvurechensky, F. Stonyakin, A.A. Titov, An adaptive proximal method for variational inequalities, Computational Mathematics and Mathematical Physics, 59 (2019), pp. 836–841.

F. Götze, A. Naumov, V. Spokoiny, V. Ulyanov, Large ball probabilities, Gaussian comparison and anti-concentration, Bernoulli. Official Journal of the Bernoulli Society for Mathematical Statistics and Probability, 25 (2019), pp. 2538–2563, DOI 10.3150/18-BEJ1062 .
Abstract
We derive tight non-asymptotic bounds for the Kolmogorov distance between the probabilities of two Gaussian elements to hit a ball in a Hilbert space. The key property of these bounds is that they are dimension-free and depend on the nuclear (Schatten-one) norm of the difference between the covariance operators of the elements and on the norm of the mean shift. The obtained bounds significantly improve the bound based on Pinsker's inequality via the Kullback–Leibler divergence. We also establish an anti-concentration bound for a squared norm of a non-centered Gaussian element in Hilbert space. The paper presents a number of examples motivating our results and applications of the obtained bounds to statistical inference and to high-dimensional CLT. 
P. Goyal, M. Redmann, Time-limited H2-optimal model order reduction, Applied Mathematics and Computation, 355 (2019), pp. 184–197, DOI 10.1016/j.amc.2019.02.065 .

S. Guminov, Y. Nesterov, P. Dvurechensky, A. Gasnikov, Accelerated primal-dual gradient descent with linesearch for convex, nonconvex, and nonsmooth optimization problems, Doklady Mathematics. Maik Nauka/Interperiodica Publishing, Moscow. English. Translation of the Mathematics Section of: Doklady Akademii Nauk. (Formerly: Russian Academy of Sciences. Doklady. Mathematics)., 99 (2019), pp. 125–128.

B. Hofmann, S. Kindermann, P. Mathé, Penalty-based smoothness conditions in convex variational regularization, Journal of Inverse and Ill-Posed Problems, 27 (2019), pp. 283–300, DOI 10.1515/jiip-2018-0039 .
Abstract
The authors study Tikhonov regularization of linear ill-posed problems with a general convex penalty defined on a Banach space. It is well known that the error analysis requires smoothness assumptions. Here such assumptions are given in the form of inequalities involving only the family of noise-free minimizers along the regularization parameter and the (unknown) penalty-minimizing solution. These inequalities control, respectively, the defect of the penalty, or likewise, the defect of the whole Tikhonov functional. The main results provide error bounds for a Bregman distance, which split into two summands: the first smoothness-dependent term does not depend on the noise level, whereas the second term includes the noise level. This resembles the situation of standard quadratic Tikhonov regularization in Hilbert spaces. It is shown that variational inequalities, as studied recently, imply the validity of the assumptions made here. Several examples highlight the results in specific applications. 
A. Lejay, P. Pigato, A threshold model for local volatility: Evidence of leverage and mean reversion effects on historical data, International Journal of Theoretical and Applied Finance, 22 (2019), pp. 1950017/1–1950017/24, DOI 10.1142/S0219024919500171 .

A. Naumov, V. Spokoiny, V. Ulyanov, Bootstrap confidence sets for spectral projectors of sample covariance, Probability Theory and Related Fields, 174 (2019), pp. 1091–1132, DOI 10.1007/s00440-018-0877-2 .

Ch. Bayer, J. Häppölä, R. Tempone, Implied stopping rules for American basket options from Markovian projection, Quantitative Finance, 19 (2019), pp. 371–390.
Abstract
This work addresses the problem of pricing American basket options in a multivariate setting, which includes, among others, the Bachelier and the Black–Scholes models. In high dimensions, nonlinear partial differential equation methods for solving the problem become prohibitively costly due to the curse of dimensionality. Instead, this work proposes to use a stopping rule that depends on the dynamics of a low-dimensional Markovian projection of the given basket of assets. It is shown that the ability to approximate the original value function by a lower-dimensional approximation is a feature of the dynamics of the system and is unaffected by the path-dependent nature of the American basket option. Assuming that we know the density of the forward process and using the Laplace approximation, we first efficiently evaluate the diffusion coefficient corresponding to the low-dimensional Markovian projection of the basket. Then, we approximate the optimal early-exercise boundary of the option by solving a Hamilton–Jacobi–Bellman partial differential equation in the projected, low-dimensional space. The resulting near-optimal early-exercise boundary is used to produce an exercise strategy for the high-dimensional option, thereby providing a lower bound for the price of the American basket option. A corresponding upper bound is also provided. These bounds allow one to assess the accuracy of the proposed pricing method. Indeed, our approximate early-exercise strategy provides a straightforward lower bound for the American basket option price. Following a duality argument due to Rogers, we derive a corresponding upper bound solving only the low-dimensional optimal control problem. Numerically, we show the feasibility of the method using baskets with dimensions up to fifty. In these examples, the resulting option price relative errors are only of the order of a few percent. 
Ch. Bayer, P. Friz, P. Gassiat, J. Martin, B. Stemper, A regularity structure for rough volatility, Mathematical Finance. An International Journal of Mathematics, Statistics and Financial Economics, published online on 19.11.2019, DOI 10.1111/mafi.12233 .

Ch. Bayer, P. Friz, A. Gulisashvili, B. Horvath, B. Stemper, Short-time near-the-money skew in rough fractional volatility models, Quantitative Finance, 19 (2019), pp. 779–798, DOI 10.1080/14697688.2018.1529420 .
Abstract
We consider rough stochastic volatility models where the driving noise of volatility has fractional scaling, in the "rough" regime of Hurst parameter H < ½. This regime has recently attracted a lot of attention both from the statistical and the option pricing point of view. With focus on the latter, we sharpen the large deviation results of Forde–Zhang (2017) in a way that allows us to zoom in around the money while maintaining full analytical tractability. More precisely, this amounts to proving higher order moderate deviation estimates, only recently introduced in the option pricing context. This in turn allows us to push the applicability range of known at-the-money skew approximation formulae from CLT-type log-moneyness deviations of order t^{1/2} (recent works of Alòs, León & Vives and Fukasawa) to the wider moderate deviations regime. 
P. Mathé, Bayesian inverse problems with non-commuting operators, Mathematics of Computation, 88 (2019), pp. 2897–2912, DOI 10.1090/mcom/3439 .
Abstract
The Bayesian approach to ill-posed operator equations in Hilbert space has recently gained traction. In this context, and when the prior distribution is Gaussian, two operators play a significant role: the one which governs the operator equation, and the one which describes the prior covariance. Typically it is assumed that these operators commute. Here we extend this analysis to non-commuting operators, replacing the commutativity assumption by a link condition. We discuss its relation to the commuting case, and we indicate that this allows us to use interpolation-type results to obtain tight bounds for the contraction of the posterior Gaussian distribution towards the data-generating element. 
V. Spokoiny, N. Willrich, Bootstrap tuning in Gaussian ordered model selection, The Annals of Statistics, 47 (2019), pp. 1351–1380, DOI 10.1214/18-AOS1717 .
Abstract
In the problem of model selection for a given family of linear estimators, ordered by their variance, we offer a new “smallest accepted” approach motivated by Lepski's device and the multiple testing idea. The procedure selects the smallest model which satisfies the acceptance rule based on comparison with all larger models. The method is completely data-driven and does not use any prior information about the variance structure of the noise: its parameters are adjusted to the underlying possibly heterogeneous noise by the so-called “propagation condition” using a bootstrap multiplier method. The validity of the bootstrap calibration is proved for finite samples with an explicit error bound. We provide a comprehensive theoretical study of the method and describe in detail the set of possible values of the selector $\hat{m}$. We also establish some precise oracle error bounds for the corresponding estimator $\hat{\theta} = \tilde{\theta}_{\hat{m}}$, which equally apply to estimation of the whole parameter vector, its subvector or a linear mapping, as well as estimation of a linear functional. 
K. Tabelow, E. Balteau, J. Ashburner, M.F. Callaghan, B. Draganski, G. Helms, F. Kherif, T. Leutritz, A. Lutti, Ch. Phillips, E. Reimer, L. Ruthotto, M. Seif, N. Weiskopf, G. Ziegler, S. Mohammadi, hMRI – A toolbox for quantitative MRI in neuroscience and clinical research, NeuroImage, 194 (2019), pp. 191–210, DOI 10.1016/j.neuroimage.2019.01.029.
Abstract
Quantitative magnetic resonance imaging (qMRI) finds increasing application in neuroscience and clinical research due to its sensitivity to microstructural properties of brain tissue, e.g. axon, myelin, iron and water concentration. We introduce the hMRI-toolbox, an easy-to-use open-source tool for handling and processing of qMRI data, presented together with an example dataset. This toolbox allows the estimation of high-quality multi-parameter qMRI maps (longitudinal and effective transverse relaxation rates R1 and R2*, proton density PD and magnetisation transfer MT) that can be used for the calculation of standard and novel MRI biomarkers of tissue microstructure as well as improved delineation of subcortical brain structures. Embedded in the Statistical Parametric Mapping (SPM) framework, it can be readily combined with existing SPM tools for estimating diffusion MRI parameter maps and benefits from the extensive range of available tools for high-accuracy spatial registration and statistical inference. As such, the hMRI-toolbox provides an efficient, robust and simple framework for using qMRI data in neuroscience and clinical research.
Contributions to Collected Editions

D. Dvinskikh, E. Gorbunov, A. Gasnikov, P. Dvurechensky, C.A. Uribe, On primal and dual approaches for distributed stochastic convex optimization over networks, in: 2019 IEEE 58th Conference on Decision and Control (CDC), IEEE Xplore, 2020, pp. 7435–7440, DOI 10.1109/CDC40024.2019.
Abstract
We introduce a primal-dual stochastic gradient oracle method for distributed convex optimization problems over networks. We show that the proposed method is optimal in terms of communication steps. Additionally, we propose a new analysis method for the rate of convergence in terms of the duality gap and the probability of large deviations. This analysis is based on a new technique that allows us to bound the distance between the iteration sequence and the optimal point. By a proper choice of batch size, we can guarantee that this distance equals (up to a constant factor) the distance between the starting point and the solution. 
P. Dvurechensky, A. Gasnikov, E. Nurminski, F. Stonyakin, Advances in low-memory subgradient optimization, in: Numerical Nonsmooth Optimization, A.M. Bagirov, M. Gaudioso, N. Karmitsa, M.M. Mäkelä, S. Taheri, eds., Springer International Publishing, Cham, 2019, pp. 19–59, DOI 10.1007/978-3-030-34910-3_2.

P. Dvurechensky, A. Gasnikov, S. Omelchenko, A. Tiurin, A stable alternative to Sinkhorn's algorithm for regularized optimal transport, in: Mathematical Optimization Theory and Operations Research, A. Kononov, M. Khachay, V.A. Kalyagin, P. Pardalos, eds., Theoretical Computer Science and General Issues, Springer International Publishing, Basel, 2020, pp. 406–423, DOI 10.1007/978-3-030-49988-4.

E. Celledoni, P.E. Lystad, N. Tapia, Signatures in shape analysis: An efficient approach to motion identification, in: Geometric Science of Information. GSI 2019, F. Nielsen, F. Barbaresco, eds., 11712 of Lecture Notes in Computer Science, Springer International Publishing AG, Cham, pp. 21–30 (published online on 02.08.2019), DOI 10.1007/978-3-030-26980-7.

J. Ebert, V. Spokoiny, A. Suvorikova, Elements of statistical inference in 2-Wasserstein space, in: Topics in Applied Analysis and Optimisation, M. Hintermüller, J.F. Rodrigues, eds., CIM Series in Mathematical Sciences, Springer Nature Switzerland AG, Cham, 2019, pp. 139–158, DOI 10.1007/978-3-030-33116-0.

F. Stonyakin, D. Dvinskikh, P. Dvurechensky, A. Kroshnin, O. Kuznetsova, A. Agafonov, A. Gasnikov, A. Tyurin, C.A. Uribe, D. Pasechnyuk, S. Artamonov, Gradient methods for problems with inexact model of the objective, in: Proceedings of the 18th International Conference on Mathematical Optimization Theory and Operations Research (MOTOR 2019), M. Khachay, Y. Kochetov, P. Pardalos, eds., 11548 of Lecture Notes in Computer Science, Springer Nature Switzerland AG, Cham, 2019, pp. 97–114, DOI 10.1007/978-3-030-22629-9_8.
Abstract
We consider optimization methods for convex minimization problems under inexact information on the objective function. We introduce an inexact model of the objective, which as particular cases includes the inexact oracle [16] and the relative smoothness condition [36]. We analyze a gradient method which uses this inexact model and obtain convergence rates for convex and strongly convex problems. To show potential applications of our general framework we consider three particular problems. The first one is clustering by the electoral model introduced in [41]. The second one is approximating the optimal transport distance, for which we propose a Proximal Sinkhorn algorithm. The third one is devoted to approximating the optimal transport barycenter, for which we propose a Proximal Iterative Bregman Projections algorithm. We also illustrate the practical performance of our algorithms by numerical experiments. 
Th. Koprucki, A. Maltsi, T. Niermann, T. Streckenbach, K. Tabelow, J. Polzehl, On a database of simulated TEM images for In(Ga)As/GaAs quantum dots with various shapes, in: Proceedings of the 19th International Conference on Numerical Simulation of Optoelectronic Devices – NUSOD 2019, J. Piprek, K. Hinze, eds., IEEE Conference Publications Management Group, Piscataway, 2019, pp. 13–14, DOI 10.1109/NUSOD.2019.8807025.
Preprints, Reports, Technical Reports

J. Diehl, K. Ebrahimi-Fard, N. Tapia, Tropical time series, iterated-sum signatures and quasisymmetric functions, Preprint no. 2760, WIAS, Berlin, 2020, DOI 10.20347/WIAS.PREPRINT.2760.
Abstract, PDF (236 kByte)
Driven by the need for principled extraction of features from time series, we introduce the iterated-sums signature over an arbitrary commutative semiring. The case of the tropical semiring is a central, and our motivating, example, as it leads to features of (real-valued) time series that are not easily available using existing signature-type objects. 
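The semiring idea in this abstract can be illustrated with a toy computation. The sketch below (not the authors' code; all names are hypothetical) evaluates one depth-two iterated-sums entry, sum over i < j of dx_i * dx_j, with semiring addition and multiplication passed in as functions, so the same routine covers both the ordinary reals and the max-plus (tropical) semiring:

```python
# Illustrative sketch of an iterated-sums entry over a commutative semiring.

def iterated_sum_2(increments, add, mul, zero):
    """Compute the level-2 entry  sum_{i<j} dx_i * dx_j  in a semiring
    with addition `add`, multiplication `mul` and additive unit `zero`."""
    total = zero
    for i in range(len(increments)):
        for j in range(i + 1, len(increments)):
            total = add(total, mul(increments[i], increments[j]))
    return total

# Ordinary (plus, times) semiring over the reals:
real = iterated_sum_2([1, 2, 3], add=lambda a, b: a + b,
                      mul=lambda a, b: a * b, zero=0)

# Max-plus (tropical) semiring: "addition" is max, "multiplication" is +,
# and the additive unit is -infinity.
trop = iterated_sum_2([1, 2, 3], add=max,
                      mul=lambda a, b: a + b, zero=float("-inf"))
```

In the tropical case the entry becomes the maximal sum of two increments taken in order, which is the kind of feature a multiplicative signature cannot produce.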
CH. Bayer, F. Harang, P. Pigato, Log-modulated rough stochastic volatility models, Preprint no. 2752, WIAS, Berlin, 2020, DOI 10.20347/WIAS.PREPRINT.2752.
Abstract, PDF (595 kByte)
We propose a new class of rough stochastic volatility models obtained by modulating the power-law kernel defining the fractional Brownian motion (fBm) by a logarithmic term, such that the kernel retains square integrability even in the limit case of vanishing Hurst index H. The so-obtained log-modulated fractional Brownian motion (log-fBm) is a continuous Gaussian process even for H = 0. As a consequence, the resulting super-rough stochastic volatility models can be analysed over the whole range of Hurst indices between 0 and 1/2, including H = 0, without the need for further normalization. We obtain the usual power-law explosion of the skew as maturity T goes to 0, modulated by a logarithmic term, so no flattening of the skew occurs as H goes to 0. 
CH. Bayer, J. Qiu, Y. Yao, Pricing options under rough volatility with backward SPDEs, Preprint no. 2745, WIAS, Berlin, 2020, DOI 10.20347/WIAS.PREPRINT.2745 .
Abstract, PDF (553 kByte)
In this paper, we study option pricing problems for rough volatility models. As the framework is non-Markovian, the value function for a European option is not deterministic; rather, it is random and satisfies a backward stochastic partial differential equation (BSPDE). The existence and uniqueness of weak solutions is proved for general nonlinear BSPDEs with unbounded random leading coefficients, whose connections with certain forward-backward stochastic differential equations are derived as well. These BSPDEs are then used to approximate American option prices. A deep learning-based method is also investigated for the numerical approximation of such BSPDEs and the associated non-Markovian pricing problems. Finally, examples of rough Bergomi type are numerically computed for both European and American options. 
J. Diehl, K. Ebrahimi-Fard, N. Tapia, Iterated-sums signature, quasisymmetric functions and time series analysis, Preprint no. 2736, WIAS, Berlin, 2020, DOI 10.20347/WIAS.PREPRINT.2736.
Abstract, PDF (261 kByte)
We survey and extend results on a recently defined character on the quasi-shuffle algebra. This character, termed the iterated-sums signature, appears in the context of time series analysis and originates from a problem in dynamic time warping. Algebraically, it relates to (multidimensional) quasisymmetric functions as well as (deformed) quasi-shuffle algebras. 
S. Riedel, Semi-implicit Taylor schemes for stiff rough differential equations, Preprint no. 2734, WIAS, Berlin, 2020, DOI 10.20347/WIAS.PREPRINT.2734.
Abstract, PDF (538 kByte)
We study a class of semi-implicit Taylor-type numerical methods that are easy to implement and designed to solve multidimensional stochastic differential equations driven by a general rough noise, e.g. a fractional Brownian motion. In the multiplicative noise case, the equation is understood as a rough differential equation in the sense of T. Lyons. We focus on equations for which the drift coefficient may be unbounded and satisfies a one-sided Lipschitz condition only. We prove well-posedness of the methods, provide a full analysis, and deduce their convergence rate. Numerical experiments show that our schemes are particularly useful in the case of stiff rough stochastic differential equations driven by a fractional Brownian motion. 
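The paper's schemes are Taylor-type methods for rough differential equations; as a much-simplified illustration of why treating the drift implicitly pays off for stiff equations, here is a drift-implicit Euler step for a scalar linear SDE with Brownian noise (a sketch under strong simplifying assumptions, not the author's scheme; all names are hypothetical):

```python
import random

def semi_implicit_euler(y0, lam, sigma, h, n_steps, rng):
    """Drift-implicit Euler for dY = -lam*Y dt + sigma dW.
    The linear drift is evaluated at the new time point, so the update
    y_{n+1} = (y_n + sigma*dW) / (1 + lam*h) stays bounded for any h > 0,
    whereas the explicit scheme diverges once lam*h > 2."""
    y = y0
    for _ in range(n_steps):
        dw = rng.gauss(0.0, h ** 0.5)
        y = (y + sigma * dw) / (1.0 + lam * h)
    return y

rng = random.Random(0)
# Stiff drift (lam*h = 50): the semi-implicit iteration still contracts.
y_end = semi_implicit_euler(y0=10.0, lam=500.0, sigma=0.1,
                            h=0.1, n_steps=100, rng=rng)
```

The same stabilizing effect is what semi-implicitness buys in the rough-noise setting, where the driver is rougher than Brownian motion.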
CH. Bayer, P. Friz, N. Tapia, Stability of deep neural networks via discrete rough paths, Preprint no. 2732, WIAS, Berlin, 2020, DOI 10.20347/WIAS.PREPRINT.2732 .
Abstract, PDF (365 kByte)
Using rough path techniques, we provide a priori estimates for the output of deep residual neural networks. In particular, we derive stability bounds in terms of the total p-variation of the trained weights for any p ≥ 1. 
V. Avanesov, Data-driven confidence bands for distributed nonparametric regression, Preprint no. 2729, WIAS, Berlin, 2020, DOI 10.20347/WIAS.PREPRINT.2729.
Abstract, PDF (316 kByte)
Gaussian Process Regression and Kernel Ridge Regression are popular nonparametric regression approaches. Unfortunately, they suffer from high computational complexity, rendering them inapplicable to modern massive datasets. To that end a number of approximations have been suggested, some of them allowing for a distributed implementation. One of them is the divide-and-conquer approach, which splits the data into a number of partitions, obtains the local estimates and finally averages them. In this paper we suggest a novel computationally efficient, fully data-driven algorithm quantifying the uncertainty of this method and yielding frequentist $L_2$-confidence bands. We rigorously demonstrate the validity of the algorithm. Another contribution of the paper is a minimax-optimal high-probability bound for the averaged estimator, complementing and generalizing the known risk bounds. 
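The divide-and-conquer estimator described in this abstract can be sketched in a few lines: fit kernel ridge regression separately on each partition and average the predictions. This is a minimal illustration of the averaging step only (not the paper's algorithm, which additionally constructs confidence bands); the function names and the bandwidth choice are assumptions for this toy example:

```python
import numpy as np

def krr_fit_predict(X, y, X_test, lam=1e-2):
    """Kernel ridge regression with a Gaussian kernel on one data partition."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-d2 / 0.1)  # fixed bandwidth, chosen for this toy example
    alpha = np.linalg.solve(k(X, X) + lam * np.eye(len(X)), y)
    return k(X_test, X) @ alpha

def divide_and_conquer_krr(X, y, X_test, n_parts, lam=1e-2, seed=0):
    """Split the data, fit KRR locally on each partition, average predictions."""
    idx = np.array_split(np.random.default_rng(seed).permutation(len(X)), n_parts)
    preds = [krr_fit_predict(X[i], y[i], X_test, lam) for i in idx]
    return np.mean(preds, axis=0)

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(400, 1))
y = np.sin(3 * X[:, 0]) + 0.05 * rng.standard_normal(400)
X_test = np.linspace(-0.8, 0.8, 50)[:, None]
pred = divide_and_conquer_krr(X, y, X_test, n_parts=4)
```

Each partition only needs to invert a kernel matrix of its own size, which is where the computational saving over a single global fit comes from.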
R.J.A. Laeven, J.G.M. Schoenmakers, N.F.F. Schweizer, M. Stadje, Robust multiple stopping – A pathwise duality approach, Preprint no. 2728, WIAS, Berlin, 2020, DOI 10.20347/WIAS.PREPRINT.2728.
Abstract, PDF (576 kByte)
In this paper we develop a solution method for general optimal stopping problems. Our general setting allows for multiple exercise rights, i.e., optimal multiple stopping, for a robust evaluation that accounts for model uncertainty, and for general reward processes driven by multidimensional jump-diffusions. Our approach relies on first establishing robust martingale dual representation results for the multiple stopping problem which satisfy appealing pathwise optimality (almost sure) properties. Next, we exploit these theoretical results to develop upper and lower bounds which, as we formally show, not only converge to the true solution asymptotically, but also constitute genuine upper and lower bounds. We illustrate the applicability of our general approach in a few examples and analyze the impact of model uncertainty on optimal multiple stopping strategies. 
A. Ivanova, A. Gasnikov, P. Dvurechensky, D. Dvinskikh, A. Tyurin, E. Vorontsova, D. Pasechnyuk, Oracle complexity separation in convex optimization, Preprint no. 2711, WIAS, Berlin, 2020, DOI 10.20347/WIAS.PREPRINT.2711 .
Abstract, PDF (424 kByte)
Regularized empirical risk minimization problems, ubiquitous in machine learning, are often composed of several blocks which can be treated using different types of oracles, e.g., full gradient, stochastic gradient or coordinate derivative. Optimal oracle complexity is known and achievable separately for the full gradient case, the stochastic gradient case, etc. We propose a generic framework to combine optimal algorithms for different types of oracles in order to achieve a separate optimal oracle complexity for each block, i.e. for each block the corresponding oracle is called the optimal number of times for a given accuracy. As a particular example, we demonstrate that for a combination of a full gradient oracle and either a stochastic gradient oracle or a coordinate descent oracle our approach leads to the optimal number of oracle calls separately for the full gradient part and the stochastic/coordinate descent part. 
D. Kamzolov, A. Gasnikov, P. Dvurechensky, On the optimal combination of tensor optimization methods, Preprint no. 2710, WIAS, Berlin, 2020, DOI 10.20347/WIAS.PREPRINT.2710 .
Abstract, PDF (284 kByte)
We consider the problem of minimizing a sum of functions having Lipschitz p-th order derivatives with different Lipschitz constants. In this case, to accelerate optimization, we propose a general framework allowing us to obtain near-optimal oracle complexity for each function in the sum separately, meaning, in particular, that the oracle for a function with a lower Lipschitz constant is called a smaller number of times. As a building block, we extend the current theory of tensor methods and show how to generalize near-optimal tensor methods to work with an inexact tensor step. Further, we investigate the situation when the functions in the sum have Lipschitz derivatives of different orders. For this situation, we propose a generic way to separate the oracle complexity between the parts of the sum. Our method is not optimal, which leads to the open problem of the optimal combination of oracles of different orders. 
F. Stonyakin, A. Gasnikov, A. Tyurin, D. Pasechnyuk, A. Agafonov, P. Dvurechensky, D. Dvinskikh, S. Artamonov, V. Piskunova, Inexact relative smoothness and strong convexity for optimization and variational inequalities by inexact model, Preprint no. 2709, WIAS, Berlin, 2020, DOI 10.20347/WIAS.PREPRINT.2709 .
Abstract, PDF (463 kByte)
In this paper we propose a general algorithmic framework for first-order methods in optimization in a broad sense, including minimization problems, saddle-point problems and variational inequalities. This framework allows us to obtain many known methods as special cases, the list including the accelerated gradient method, composite optimization methods, level-set methods, and Bregman proximal methods. The idea of the framework is based on constructing an inexact model of the main problem component, i.e. the objective function in optimization or the operator in variational inequalities. Besides reproducing known results, our framework allows us to construct new methods, which we illustrate by constructing a universal conditional gradient method and a universal method for variational inequalities with composite structure. These methods work for smooth and nonsmooth problems with optimal complexity without a priori knowledge of the problem smoothness. As a particular case of our general framework, we introduce relative smoothness for operators and propose an algorithm for variational inequalities with such an operator. We also generalize our framework to relatively strongly convex objectives and strongly monotone variational inequalities. 
M. Redmann, S. Riedel, Runge-Kutta methods for rough differential equations, Preprint no. 2708, WIAS, Berlin, 2020, DOI 10.20347/WIAS.PREPRINT.2708.
Abstract, PDF (393 kByte)
We study Runge-Kutta methods for rough differential equations, which can be used to calculate solutions to stochastic differential equations driven by processes that are rougher than a Brownian motion. We use a Taylor series representation (B-series) for both the numerical scheme and the solution of the rough differential equation in order to determine conditions that guarantee the desired order of the local error for the underlying Runge-Kutta method. Subsequently, we prove the order of the global error given the local rate. In addition, we simplify the numerical approximation by introducing a Runge-Kutta scheme that is based on the increments of the driver of the rough differential equation. This simplified method can be easily implemented and is computationally cheap since it is derivative-free. We provide a full characterization of this implementable Runge-Kutta method, meaning that we provide necessary and sufficient algebraic conditions for an optimal order of convergence in the case that the driver is, e.g., a fractional Brownian motion with Hurst index 1/4 < H ≤ 1/2. We conclude this paper by conducting numerical experiments verifying the theoretical rate of convergence. 
M. Redmann, Ch. Bayer, P. Goya, Low-dimensional approximations of high-dimensional asset price models, Preprint no. 2706, WIAS, Berlin, 2020, DOI 10.20347/WIAS.PREPRINT.2706.
Abstract, PDF (363 kByte)
We consider high-dimensional asset price models that are reduced in their dimension in order to reduce the complexity of the problem or the effect of the curse of dimensionality in the context of option pricing. We apply model order reduction (MOR) to obtain a reduced system. MOR has previously been studied for asymptotically stable controlled stochastic systems with zero initial conditions. However, stochastic differential equations modeling price processes are uncontrolled, have nonzero initial states and are often unstable. Therefore, we extend MOR schemes and combine ideas of techniques known for deterministic systems. This leads to a method providing a good pathwise approximation. After explaining the reduction procedure, the error of the approximation is analyzed and the performance of the algorithm is demonstrated by conducting several numerical experiments. Within the numerics section, the benefit of the algorithm in the context of option pricing is pointed out. 
M. Ghani Varzaneh, S. Riedel, A dynamical theory for singular stochastic delay differential equations II: Nonlinear equations and invariant manifolds, Preprint no. 2701, WIAS, Berlin, 2020, DOI 10.20347/WIAS.PREPRINT.2701 .
Abstract, PDF (383 kByte)
Building on results obtained in [GVRS], we prove Local Stable and Unstable Manifold Theorems for nonlinear, singular stochastic delay differential equations. The main tools are rough path theory and a semi-invertible Multiplicative Ergodic Theorem for cocycles acting on measurable fields of Banach spaces obtained in [GVR]. 
CH. Bayer, D. Belomestny, P. Hager, P. Pigato, J.G.M. Schoenmakers, Randomized optimal stopping algorithms and their convergence analysis, Preprint no. 2697, WIAS, Berlin, 2020, DOI 10.20347/WIAS.PREPRINT.2697.
Abstract, PDF (367 kByte)
In this paper we study randomized optimal stopping problems and consider corresponding forward and backward Monte Carlo based optimization algorithms. In particular we prove the convergence of the proposed algorithms and derive the corresponding convergence rates. 
C. Bellinger, A. Djurdjevac, P. Friz, N. Tapia, Transport and continuity equations with (very) rough noise, Preprint no. 2696, WIAS, Berlin, 2020, DOI 10.20347/WIAS.PREPRINT.2696 .
Abstract, PDF (320 kByte)
Existence and uniqueness for rough flows, transport and continuity equations driven by general geometric rough paths are established. 
S. Guminov, P. Dvurechensky, A. Gasnikov, On accelerated alternating minimization, Preprint no. 2695, WIAS, Berlin, 2020, DOI 10.20347/WIAS.PREPRINT.2695 .
Abstract, PDF (343 kByte)
Alternating minimization (AM) optimization algorithms have been known for a long time and are of importance in machine learning problems, among which we are mostly motivated by approximating optimal transport distances. AM algorithms assume that the decision variable is divided into several blocks and that minimization in each block can be done explicitly or cheaply with high accuracy. The ubiquitous Sinkhorn's algorithm can be seen as an alternating minimization algorithm for the dual of the entropy-regularized optimal transport problem. We introduce an accelerated alternating minimization method with a $1/k^2$ convergence rate, where $k$ is the iteration counter. This improves over the known bound $1/k$ for general AM methods and for Sinkhorn's algorithm. Moreover, our algorithm converges faster than gradient-type methods in practice as it is free of the choice of the step size and is adaptive to the local smoothness of the problem. We show that the proposed method is primal-dual, meaning that if we apply it to a dual problem, we can reconstruct the solution of the primal problem with the same convergence rate. We apply our method to the entropy-regularized optimal transport problem and show experimentally that it outperforms Sinkhorn's algorithm. 
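The view of Sinkhorn's algorithm as a two-block alternating minimization, mentioned in this abstract, can be made concrete: each scaling update minimizes the dual of the entropy-regularized transport problem exactly in one block of dual variables. The sketch below is a minimal textbook Sinkhorn iteration (not the paper's accelerated method; names and parameter values are illustrative):

```python
import numpy as np

def sinkhorn(a, b, C, eps=0.1, n_iter=200):
    """Sinkhorn iterations for entropy-regularized optimal transport between
    marginals a and b with cost matrix C. Each update solves one block of
    the dual problem exactly, i.e. this is two-block alternating minimization."""
    K = np.exp(-C / eps)          # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)         # exact minimization in the second block
        u = a / (K @ v)           # exact minimization in the first block
    return u[:, None] * K * v[None, :]   # approximate transport plan

a = np.array([0.5, 0.5])
b = np.array([0.5, 0.5])
C = np.array([[0.0, 1.0],
              [1.0, 0.0]])
P = sinkhorn(a, b, C)
```

Because the last update rescales the rows, the row marginals of the returned plan match `a` exactly, while the column marginals converge to `b` as the iterations proceed.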
P. Dvurechensky, A. Gasnikov, P. Ostroukhov, C.A. Uribe, A. Ivanova, Near-optimal tensor methods for minimizing gradient norm, Preprint no. 2694, WIAS, Berlin, 2020, DOI 10.20347/WIAS.PREPRINT.2694.
Abstract, PDF (423 kByte)
Motivated by convex problems with linear constraints and, in particular, by entropy-regularized optimal transport, we consider the problem of finding approximate stationary points, i.e. points with the norm of the objective gradient less than a small error, of convex functions with Lipschitz p-th order derivatives. Lower complexity bounds for this problem were recently proposed in [Grapiglia and Nesterov, arXiv:1907.07053]. However, the methods presented in the same paper do not have optimal complexity bounds. We propose two methods, optimal up to logarithmic factors, with complexity bounds with respect to the initial objective residual and the distance between the starting point and the solution, respectively. 
P. Dvurechensky, M. Staudigl, C.A. Uribe, Generalized self-concordant Hessian-barrier algorithms, Preprint no. 2693, WIAS, Berlin, 2020, DOI 10.20347/WIAS.PREPRINT.2693.
Abstract, PDF (647 kByte)
Many problems in statistical learning, imaging, and computer vision involve the optimization of a nonconvex objective function with singularities at the boundary of the feasible set. For such challenging instances, we develop a new interior-point technique building on the Hessian-barrier algorithm recently introduced in Bomze, Mertikopoulos, Schachinger and Staudigl [SIAM J. Opt. 2019 29(3), pp. 2100–2127], where the Riemannian metric is induced by a generalized self-concordant function. This class of functions is sufficiently general to include most of the commonly used barrier functions in the literature on interior point methods. We prove global convergence of the method to an approximate stationary point, and in cases where the feasible set admits an easily computable self-concordant barrier, we verify worst-case optimal iteration complexity of the method. Applications in nonconvex statistical estimation and $L_p$ minimization are discussed to demonstrate the efficiency of the method. 
N. Tupitsa, P. Dvurechensky, A. Gasnikov, S. Guminov, Alternating minimization methods for strongly convex optimization, Preprint no. 2692, WIAS, Berlin, 2020, DOI 10.20347/WIAS.PREPRINT.2692 .
Abstract, PDF (387 kByte)
We consider alternating minimization procedures for convex optimization problems with the decision variable divided into many blocks, each block being amenable to minimization with respect to its variables while the other blocks are held fixed. In the case of two blocks, we prove a linear convergence rate for the alternating minimization procedure under the Polyak–Łojasiewicz condition, which can be seen as a relaxation of the strong convexity assumption. Under the strong convexity assumption in the many-block setting, we provide an accelerated alternating minimization procedure with a linear rate depending on the square root of the condition number, as opposed to the condition number for the non-accelerated method. 
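The two-block setting of this abstract admits a very small worked example. For the strongly convex function f(x, y) = x^2 + x*y + y^2, each block minimization has a closed form (argmin over x is -y/2, argmin over y is -x/2), and one full sweep contracts the iterate by a factor of 4, i.e. the linear rate the abstract refers to. A minimal sketch, purely for illustration:

```python
def alternating_minimization(x, y, n_sweeps):
    """Exact two-block AM for f(x, y) = x**2 + x*y + y**2 (strongly convex).
    Each sweep minimizes f exactly in x, then exactly in y; the iterate
    shrinks geometrically, so convergence to the minimizer (0, 0) is linear."""
    for _ in range(n_sweeps):
        x = -y / 2.0   # argmin_x f(x, y)
        y = -x / 2.0   # argmin_y f(x, y)
    return x, y

x, y = alternating_minimization(1.0, 1.0, 20)
```

Tracking |y| across sweeps shows the geometric decay y_k = y_0 / 4**k, which is the hallmark of the linear rate.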
E. Gorbunov, D. Dvinskikh, A. Gasnikov, Optimal decentralized distributed algorithms for stochastic convex optimization, Preprint no. 2691, WIAS, Berlin, 2020, DOI 10.20347/WIAS.PREPRINT.2691 .
Abstract, PDF (616 kByte)
We consider stochastic convex optimization problems with affine constraints and develop several methods using either the primal or the dual approach to solve them. In the primal case we use a special penalization technique to make the initial problem more convenient for the use of optimization methods. We propose algorithms to solve it based on the Similar Triangles Method with Inexact Proximal Step for convex smooth and strongly convex smooth objective functions, and methods based on the Gradient Sliding algorithm to solve the same problems in the nonsmooth case. We prove convergence guarantees in the smooth convex case with a deterministic first-order oracle. We propose and analyze three novel methods to handle stochastic convex optimization problems with affine constraints: SPDSTM, RRRMAACSA and SSTM_sc. All methods use a stochastic dual oracle. SPDSTM is the stochastic primal-dual modification of STM and is applied to the dual problem when the primal functional is strongly convex and Lipschitz continuous on some ball. RRRMAACSA is an accelerated stochastic method based on restarts of RRMAACSA, and SSTM_sc is simply stochastic STM for strongly convex problems. Both methods are applied to the dual problem when the primal functional is strongly convex, smooth and Lipschitz continuous on some ball and use a stochastic dual first-order oracle. We develop a convergence analysis for these methods for unbiased and biased oracles respectively. Finally, we apply all the aforementioned results and approaches to solve a decentralized distributed optimization problem and discuss the optimality of the obtained results in terms of communication rounds and the number of oracle calls per node. 
F. Stonyakin, D. Dvinskikh, P. Dvurechensky, A. Kroshnin, O. Kuznetsova, A. Agafonov, A. Gasnikov, A. Tyurin, C.A. Uribe, D. Pasechnyuk, S. Artamonov, Gradient methods for problems with inexact model of the objective, Preprint no. 2688, WIAS, Berlin, 2020, DOI 10.20347/WIAS.PREPRINT.2688 .
Abstract, PDF (785 kByte)
We consider optimization methods for convex minimization problems under inexact information on the objective function. We introduce an inexact model of the objective, which as particular cases includes the inexact oracle [19] and the relative smoothness condition [43]. We analyze a gradient method which uses this inexact model and obtain convergence rates for convex and strongly convex problems. To show potential applications of our general framework we consider three particular problems. The first one is clustering by the electoral model introduced in [49]. The second one is approximating the optimal transport distance, for which we propose a Proximal Sinkhorn algorithm. The third one is devoted to approximating the optimal transport barycenter, for which we propose a Proximal Iterative Bregman Projections algorithm. We also illustrate the practical performance of our algorithms by numerical experiments. 
V. Avanesov, Nonparametric change point detection in regression, Preprint no. 2687, WIAS, Berlin, 2020, DOI 10.20347/WIAS.PREPRINT.2687 .
Abstract, PDF (329 kByte)
This paper considers the prominent problem of change-point detection in regression. The study suggests a novel testing procedure featuring a fully data-driven calibration scheme. The method is essentially a black box, requiring no tuning from the practitioner. The approach is investigated from both theoretical and practical points of view. The theoretical study demonstrates proper control of the Type I error rate under H0 and power approaching 1 under H1. The experiments conducted on synthetic data fully support the theoretical claims. In conclusion, the method is applied to financial data, where it detects sensible change-points. Techniques for change-point localization are also suggested and investigated. 
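To fix ideas on what change-point detection and localization look like in the simplest setting, the sketch below applies a classical CUSUM-type scaled mean-difference statistic to a series with a single abrupt shift. This is a standard textbook construction, not the calibration procedure of the preprint; all names are hypothetical:

```python
import random

def detect_change_point(y, margin=5):
    """Return the split point maximizing the scaled mean-difference
    (CUSUM-type) statistic  sqrt(t*(n-t)/n) * |mean(y[:t]) - mean(y[t:])|."""
    n = len(y)
    best_t, best_stat = None, -1.0
    for t in range(margin, n - margin):
        left = sum(y[:t]) / t
        right = sum(y[t:]) / (n - t)
        stat = (t * (n - t) / n) ** 0.5 * abs(left - right)
        if stat > best_stat:
            best_t, best_stat = t, stat
    return best_t, best_stat

# Synthetic series: mean 0 for 50 points, then an abrupt jump to mean 2.
rng = random.Random(42)
y = [rng.gauss(0.0, 0.3) for _ in range(50)] + \
    [rng.gauss(2.0, 0.3) for _ in range(50)]
t_hat, stat = detect_change_point(y)
```

The maximizing split localizes the change, and comparing the statistic to a calibrated threshold (the hard part, which the preprint addresses in a data-driven way) turns this into a test.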
V. Avanesov, How to gamble with non-stationary X-armed bandits and have no regrets, Preprint no. 2686, WIAS, Berlin, 2020, DOI 10.20347/WIAS.PREPRINT.2686.
Abstract, PDF (287 kByte)
In the X-armed bandit problem an agent sequentially interacts with an environment which yields a reward based on the vector input the agent provides. The agent's goal is to maximise the sum of these rewards across some number of time steps. The problem and its variations have been a subject of numerous studies, suggesting sublinear and sometimes optimal strategies. This paper introduces a new variation of the problem. We consider an environment which can abruptly change its behaviour an unknown number of times. To that end we propose a novel strategy and prove that it attains sublinear cumulative regret. Moreover, the obtained regret bound matches the best known bound for GP-UCB in the stationary case, and approaches the minimax lower bound in the case of a highly smooth relation between an action and the corresponding reward. The theoretical result is supported by an experimental study. 
A. Maltsi, T. Niermann, T. Streckenbach, K. Tabelow, Th. Koprucki, Numerical simulation of TEM images for In(Ga)As/GaAs quantum dots with various shapes, Preprint no. 2682, WIAS, Berlin, 2020, DOI 10.20347/WIAS.PREPRINT.2682 .
Abstract, PDF (7946 kByte)
We present a mathematical model and a tool chain for the numerical simulation of TEM images of semiconductor quantum dots (QDs). This includes elasticity theory to obtain the strain profile, coupled with the Darwin–Howie–Whelan equations describing the propagation of the electron wave through the sample. We perform a simulation study on indium gallium arsenide QDs with different shapes and compare the resulting TEM images to experimental ones. This tool chain can be applied to generate a database of simulated TEM images, which is a key element of a novel concept for model-based geometry reconstruction of semiconductor QDs involving machine learning techniques. 
F. Stonyakin, A. Gasnikov, A. Tyurin, D. Pasechnyuk, A. Agafonov, P. Dvurechensky, D. Dvinskikh, V. Piskunova, Inexact model: A framework for optimization and variational inequalities, Preprint no. 2679, WIAS, Berlin, 2020, DOI 10.20347/WIAS.PREPRINT.2679 .
Abstract, PDF (414 kByte)
In this paper we propose a general algorithmic framework for first-order methods in optimization in a broad sense, including minimization problems, saddle-point problems and variational inequalities. This framework allows us to obtain many known methods as special cases, the list including the accelerated gradient method, composite optimization methods, level-set methods, and proximal methods. The idea of the framework is based on constructing an inexact model of the main problem component, i.e. the objective function in optimization or the operator in variational inequalities. Besides reproducing known results, our framework allows us to construct new methods, which we illustrate by constructing a universal method for variational inequalities with composite structure. This method works for smooth and nonsmooth problems with optimal complexity without a priori knowledge of the problem smoothness. We also generalize our framework to strongly convex objectives and strongly monotone variational inequalities. 
K. Ebrahimi-Fard, F. Patras, N. Tapia, L. Zambotti, Wick polynomials in noncommutative probability: A group-theoretical approach, Preprint no. 2677, WIAS, Berlin, 2020, DOI 10.20347/WIAS.PREPRINT.2677.
Abstract, PDF (292 kByte)
Wick polynomials and Wick products are studied in the context of noncommutative probability theory. It is shown that free, Boolean and conditionally free Wick polynomials can be defined and related through the action of the group of characters over a particular Hopf algebra. These results generalize our previous developments of a Hopf-algebraic approach to cumulants and Wick products in classical probability theory. 
P. Dvurechensky, A. Gasnikov, E. Nurminski, F. Stonyakin, Advances in low-memory subgradient optimization, Preprint no. 2676, WIAS, Berlin, 2020, DOI 10.20347/WIAS.PREPRINT.2676 .
Abstract, PDF (413 kByte)
One of the main goals in the development of nonsmooth optimization is to cope with high-dimensional problems by decomposition, duality or Lagrangian relaxation, which greatly reduces the number of variables at the cost of worsening the differentiability of the objective or constraints. The small or medium dimensionality of the resulting nonsmooth problems allows one to use bundle-type algorithms to achieve higher rates of convergence and obtain higher accuracy, which of course comes at the cost of additional memory requirements, typically of the order of n², where n is the number of variables of the nonsmooth problem. However, with the rapid development of more and more sophisticated models in industry, economics, finance, and so on, such memory requirements are becoming too hard to satisfy. This raised interest in subgradient-based low-memory algorithms, and later developments in this area significantly improved over the early variants while still preserving O(n) memory requirements. To review these developments, this chapter is devoted to black-box subgradient algorithms with minimal requirements for the storage of the auxiliary results needed to execute them. To provide historical perspective, the survey starts with the original result of N.Z. Shor, which opened this field with an application to the classical transportation problem. The theoretical complexity bounds for smooth and nonsmooth convex and quasiconvex optimization problems are then briefly exposed to introduce the relevant fundamentals of nonsmooth optimization. Special attention in this section is given to the adaptive step-size policy, which aims to attain the lowest complexity bounds. 
Unfortunately, the nondifferentiability of the objective function in convex optimization essentially worsens the theoretical lower bounds for the rate of convergence in subgradient optimization compared to the smooth case, but there are various modern techniques that allow one to solve nonsmooth convex optimization problems faster than the lower complexity bounds would suggest. In this work, particular attention is given to Nesterov's smoothing technique, Nesterov's universal approach, and the Legendre (saddle-point) representation approach. The new results on universal Mirror Prox algorithms represent the original parts of the survey. To demonstrate the application of nonsmooth convex optimization algorithms to the solution of huge-scale extremal problems, we consider convex optimization problems with nonsmooth functional constraints and propose two adaptive Mirror Descent methods. The first method is of primal-dual variety and is proved to be optimal in terms of lower oracle bounds for the class of Lipschitz-continuous convex objectives and constraints. The advantages of applying this method to the sparse Truss Topology Design problem are discussed in some detail. The second method can be applied to the solution of convex and quasiconvex optimization problems and is optimal in the sense of complexity bounds. The concluding part of the survey contains important references that characterize recent developments in nonsmooth convex optimization. 
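The classical scheme the survey departs from fits in a few lines. The following Python sketch (illustrative only, not code from the survey) implements a black-box subgradient method with the divergent-series step size h_k = R/√(k+1) along the normalized subgradient, storing only the current iterate and the best value seen, i.e. O(n) memory:

```python
import numpy as np

def subgradient_method(f, subgrad, x0, n_iter=5000, R=3.0):
    """Black-box subgradient method with step size R / sqrt(k + 1).

    Only the current iterate and the best point seen so far are stored,
    so the memory footprint is O(n). A sketch of the classical scheme
    reviewed in the survey, not the authors' code.
    """
    x = np.asarray(x0, dtype=float)
    x_best, f_best = x.copy(), f(x)
    for k in range(n_iter):
        g = subgrad(x)
        # move along the normalized subgradient with a divergent-series step
        x = x - (R / np.sqrt(k + 1)) * g / max(np.linalg.norm(g), 1e-12)
        if f(x) < f_best:
            x_best, f_best = x.copy(), f(x)
    return x_best, f_best
```

For example, minimizing the nonsmooth function f(x) = ||x - a||₁ with the subgradient sign(x - a) drives the best recorded value toward zero at the classical O(1/√k) rate.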
A. Kroshnin, D. Dvinskikh, P. Dvurechensky, A. Gasnikov, N. Tupitsa, C.A. Uribe, On the complexity of approximating Wasserstein barycenter, Preprint no. 2665, WIAS, Berlin, 2019, DOI 10.20347/WIAS.PREPRINT.2665 .
Abstract, PDF (386 kByte)
We study the complexity of approximating the Wasserstein barycenter of discrete measures, or histograms, by contrasting two alternative approaches that both use entropic regularization. We provide a novel analysis for our approach based on the Iterative Bregman Projections (IBP) algorithm to approximate the original non-regularized barycenter. We also obtain the complexity bound for an alternative accelerated-gradient-descent-based approach and compare it with the bound obtained for IBP. As a byproduct, we show that the regularization parameter in both approaches has to be proportional to ε, which causes instability of both algorithms when the desired accuracy is high. To overcome this issue, we propose a novel proximal-IBP algorithm, which can be seen as a proximal gradient method that uses IBP on each iteration to make a proximal step. We also consider the question of the scalability of these algorithms using approaches from distributed optimization and show that the first algorithm can be implemented in a centralized distributed setting (master/slave), while the second one is amenable to a more general decentralized distributed setting with an arbitrary network topology. 
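A minimal Python sketch of the plain (non-proximal) IBP iteration for the entropy-regularized barycenter may help fix ideas; the function name, the fixed regularization parameter, and the stopping rule here are illustrative, not the paper's implementation:

```python
import numpy as np

def ibp_barycenter(hists, C, gamma=0.05, weights=None, n_iter=200):
    """Entropy-regularized Wasserstein barycenter via Iterative Bregman
    Projections (a sketch in the style of Benamou et al.).

    hists : (m, n) array of m histograms on a common n-point support.
    C     : (n, n) ground cost matrix on that support.
    """
    m, n = hists.shape
    w = np.full(m, 1.0 / m) if weights is None else weights
    K = np.exp(-C / gamma)           # Gibbs kernel
    v = np.ones((m, n))
    for _ in range(n_iter):
        u = hists / (K @ v.T).T      # project onto the marginal constraints
        Ktu = (K.T @ u.T).T          # shape (m, n)
        # weighted geometric mean across measures = current barycenter
        q = np.exp(w @ np.log(np.maximum(Ktu, 1e-300)))
        v = q / Ktu
    return q
```

The instability discussed in the abstract is visible here: for small gamma the kernel K = exp(-C/gamma) underflows, which is what the proposed proximal-IBP variant is designed to avoid.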
A. Ogaltsov, D. Dvinskikh, P. Dvurechensky, A. Gasnikov, V. Spokoiny, Adaptive gradient descent for convex and nonconvex stochastic optimization, Preprint no. 2655, WIAS, Berlin, 2019, DOI 10.20347/WIAS.PREPRINT.2655 .
Abstract, PDF (538 kByte)
In this paper we propose several adaptive gradient methods for stochastic optimization. Our methods are based on Armijo-type line search, and they simultaneously adapt to the unknown Lipschitz constant of the gradient and to the variance of the stochastic approximation of the gradient. We consider an accelerated gradient descent for convex problems and gradient descent for nonconvex problems. In the experiments we demonstrate the superiority of our methods over existing adaptive methods, e.g. AdaGrad and Adam. 
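The backtracking idea can be illustrated with a deterministic-gradient Python sketch (names and constants are illustrative, not the authors' code): an estimate L of the gradient's Lipschitz constant is doubled until a quadratic upper model holds at the trial point, then optimistically halved for the next step.

```python
import numpy as np

def adaptive_gd(f, grad, x0, n_steps=50, L0=1.0):
    """Gradient descent with an Armijo-type backtracking line search.

    The estimate L is doubled until the quadratic upper model
    f(y) <= f(x) + <g, y - x> + (L/2) |y - x|^2 holds at the trial
    point, so no Lipschitz constant needs to be known a priori.
    """
    x = np.asarray(x0, dtype=float)
    L = L0
    for _ in range(n_steps):
        g = grad(x)
        while True:
            x_new = x - g / L
            d = x_new - x
            # Armijo-type acceptance test (small slack for float safety)
            if f(x_new) <= f(x) + g.dot(d) + 0.5 * L * d.dot(d) + 1e-12:
                break
            L *= 2.0
        x = x_new
        L = max(L / 2.0, 1e-10)  # optimistic decrease for the next step
    return x
```

On a smooth convex quadratic this locates the minimizer while discovering the curvature on the fly; the stochastic versions in the paper additionally adapt to the gradient noise.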
M. Coghi, T. Nilssen, Rough nonlocal diffusions, Preprint no. 2619, WIAS, Berlin, 2019, DOI 10.20347/WIAS.PREPRINT.2619 .
Abstract, PDF (397 kByte)
We consider a nonlinear Fokker–Planck equation driven by a deterministic rough path which describes the conditional probability of a McKean–Vlasov diffusion with "common" noise. To study the equation we build a self-contained framework of nonlinear rough integration theory which we use to study McKean–Vlasov equations perturbed by rough paths. We construct an appropriate notion of solution of the corresponding Fokker–Planck equation and prove well-posedness. 
M. Coghi, J.D. Deuschel, P. Friz, M. Maurelli, Pathwise McKean–Vlasov theory with additive noise, Preprint no. 2618, WIAS, Berlin, 2019, DOI 10.20347/WIAS.PREPRINT.2618 .
Abstract, PDF (348 kByte)
We take a pathwise approach to classical McKean–Vlasov stochastic differential equations with additive noise, as exposed, e.g., in Sznitman [34]. Our study was prompted by some concrete problems in battery modelling [19], and also by recent progress on rough-pathwise McKean–Vlasov theory, notably Cass–Lyons [9], and then Bailleul, Catellier and Delarue [4]. Such a "pathwise McKean–Vlasov theory" can be traced back to Tanaka [36]. This paper can be seen as an attempt to advertise the ideas, power and simplicity of the pathwise approach, not so easily extracted from [4, 9, 36]. As novel applications we discuss mean-field convergence without a priori independence and exchangeability assumptions, common noise, and reflecting boundaries. Last but not least, we generalize Dawson–Gärtner large deviations to a non-Brownian noise setting.
Talks, Posters

V. Avanesov, Data-driven confidence bands for distributed nonparametric regression (online talk), The 33rd Annual Conference on Learning Theory (COLT 2020), July 9–12, 2020, Graz, Austria, July 10, 2020.

O. Butkovsky, Exponential ergodicity of order-preserving SPDEs in the hypoelliptic setting via new coupling techniques (online talk), Bernoulli-IMS One World Symposium 2020 (Online Event), August 24–28, 2020, August 25, 2020.

O. Butkovsky, Regularization by noise for SDEs and related systems: A tale of two approaches, Eighth Bielefeld–SNU Joint Workshop in Mathematics, February 24–26, 2020, Universität Bielefeld, Fakultät für Mathematik, February 24, 2020.

O. Butkovsky, Skew stochastic heat equation and regularization by noise for PDEs (online talk), Bernoulli-IMS One World Symposium 2020 (Online Event), August 24–28, 2020, August 27, 2020.

S. Riedel, Optimal stopping: A signature approach (online talk), 13th Annual ERC Berlin–Oxford Young Researchers Meeting on Applied Stochastic Analysis, WIAS Berlin, June 9, 2020.

S. Riedel, Runge–Kutta methods for rough differential equations (online talk), The DNA Seminar (spring 2020), Norwegian University of Science and Technology, Department of Mathematical Sciences, Trondheim, Norway, June 24, 2020.

N. Tapia, Free Wick polynomials (online talk), Arbeitsgruppenseminar Analysis, Universität Potsdam, Institut für Mathematik, April 24, 2020.

N. Tapia, Higher order iterated-sums signatures (online talk), DataSig Seminar, University of Oxford, Mathematical Institute, UK, April 2, 2020.

N. Tapia, Transport and continuity equations with rough noise (online talk), DNA Seminar, Norwegian University of Science and Technology, Department of Mathematical Sciences, Trondheim, Norway, April 22, 2020.

N. Tapia, Transport equations with low regularity rough noise, Young researchers between geometry and stochastic analysis, February 12–14, 2020, University of Bergen, Norway, February 13, 2020.

A. Kroshnin, A. Suvorikova, V. Spokoiny, Statistical inference on Bures–Wasserstein space: From theory to practice, Math of Machine Learning 2020, Sochi, Russian Federation, February 19–22, 2020.

D. Dvinskikh, A. Gasnikov, Lecture 1: Two approaches for population Wasserstein barycenter problem: Stochastic averaging versus sample average approximation (online talk), XII Summer School on Operational Research, Data and Decision Making, Faculty of Informatics, Mathematics and Computer Science, Nizhny Novgorod, Russian Federation, May 20, 2020.

D. Dvinskikh, A. Gasnikov, Lecture 2: Two approaches for population Wasserstein barycenter problem: Stochastic averaging versus sample average approximation (online talk), XII Summer School on Operational Research, Data and Decision Making, Faculty of Informatics, Mathematics and Computer Science, Nizhny Novgorod, Russian Federation, May 20, 2020.

D. Dvinskikh, A. Gasnikov, SA vs SAA for population Wasserstein barycenter calculation, Math of Machine Learning 2020, Sochi, Russian Federation, February 19–22, 2020.

A. Suvorikova, Change point detection in high-dimensional data (online talk), Joint Aramco-HSE Research Seminar, Higher School of Economics, Faculty of Computer Science, Moscow, Russian Federation, April 15, 2020.

A. Suvorikova, Shape-based domain adaptation, Statistical Seminar of HDI Lab, Higher School of Economics, Faculty of Computer Science, Moscow, Russian Federation, March 2, 2020.

A. Suvorikova, Shape-based domain adaptation via optimal transportation (online talk), Machine Learning Online Seminar, Max-Planck-Institut für Mathematik in den Naturwissenschaften (MiS), Leipzig, April 1, 2020.

CH. Bayer, Pricing American options by exercise rate optimization, Research Seminar on Insurance Mathematics and Stochastic Finance, Eidgenössische Technische Hochschule Zürich, Switzerland, January 9, 2020.

CH. Bayer, Pricing American options by exercise rate optimization, Mathrisk INRIA/LPSM Paris-Diderot Seminar, Inria Paris Research Centre, Mathrisk Research Team, France, February 6, 2020.

CH. Bayer, Pricing American options by exercise rate optimization, Lunch at the Lab, University of Calgary, Department of Mathematics and Statistics, Canada, March 3, 2020.

P. Mathé, Bayesian inverse problems with noncommuting operators, University of Edinburgh, School of Mathematics, UK, February 14, 2020.

V. Spokoiny, Advanced Statistical Methods, February 11–March 3, 2020, Higher School of Economics, Faculty of Computer Science, Moscow, Russian Federation.

V. Spokoiny, Bayes inference for nonlinear inverse problems, Statistics meets Machine Learning, January 26–31, 2020, Mathematisches Forschungsinstitut Oberwolfach (MFO), January 28, 2020.

A. Maltsi, Th. Koprucki, T. Streckenbach, K. Tabelow, J. Polzehl, Model-based geometry reconstruction of quantum dots from TEM, Microscopy Conference 2019, Poster session IM 4, Berlin, September 1–5, 2019.

A. Maltsi, Th. Koprucki, T. Streckenbach, K. Tabelow, J. Polzehl, Model-based geometry reconstruction of quantum dots from TEM, BMS Summer School 2019: Mathematics of Deep Learning, Berlin, August 19–30, 2019.

V. Avanesov, Nonparametric change point detection in regression, SFB 1294 Spring School 2019, Dierhagen, March 18–22, 2019.

F. Besold, Adaptive manifold clustering, Rencontres de Statistiques Mathématiques, December 16–20, 2019, Centre International de Rencontres Mathématiques (CIRM), Luminy, France, December 19, 2019.

F. Besold, Manifold clustering, Pennsylvania State University, Department of Mathematics, University Park, PA, USA, October 28, 2019.

F. Besold, Manifold clustering with adaptive weights, Structural Inference in High-Dimensional Models 2, National Research University Higher School of Economics, HDILab, St. Petersburg, Russian Federation, August 26–30, 2019.

F. Besold, Manifold clustering with adaptive weights, Joint Workshop of BBDC, BZML and RIKEN AIP, Fraunhofer Institute HHI, September 9–10, 2019.

F. Besold, Minimax clustering with adaptive weights, New Frontiers in High-dimensional Probability and Statistics 2, February 20–23, 2019, Higher School of Economics, Moscow, Russian Federation, February 23, 2019.

O. Butkovsky, Approximation of SDE, LSA Winter Meeting 2019, December 2–6, 2019, Higher School of Economics, National Research University, Laboratory of Stochastic Analysis and its Applications, Moscow, Russian Federation, December 3, 2019.

O. Butkovsky, New coupling techniques for exponential ergodicity of stochastic delay equations and SPDEs, Probability Seminar, Swansea University, Department of Mathematics, UK, December 9, 2019.

O. Butkovsky, New coupling techniques for exponential ergodicity of SPDEs in hypoelliptic and effectively elliptic settings, Oberseminar Stochastik, Universität Bonn, Hausdorff Research Center, Institut für Angewandte Mathematik (IAM), November 28, 2019.

O. Butkovsky, Numerical methods for SDEs: A stochastic sewing approach, 12th Oxford–Berlin Young Researchers Meeting on Applied Stochastic Analysis, December 5–6, 2019, University of Oxford, Mathematical Institute, UK, December 6, 2019.

O. Butkovsky, Regularization by noise for SDEs and SPDEs with applications to numerical methods, Seminar Wahrscheinlichkeitstheorie, Universität Mannheim, Probability & Statistics Group, October 16, 2019.

O. Butkovsky, Regularization by noise for SDEs and related systems: A tale of two approaches, Hausdorff Junior Trimester, Universität Bonn, Hausdorff Research Institute for Mathematics (HIM), November 26, 2019.

M. Coghi, Mean field limit of interacting filaments for 3D Euler equations, Second Italian Meeting on Probability and Mathematical Statistics, June 17–20, 2019, Università degli Studi di Salerno, Dipartimento di Matematica, Vietri sul Mare, Italy, June 20, 2019.

M. Coghi, Pathwise McKean–Vlasov theory, Oberseminar Partielle Differentialgleichungen, Universität Konstanz, Fachbereich Mathematik und Statistik, February 6, 2019.

M. Coghi, Rough nonlocal diffusions, Recent Trends in Stochastic Analysis and SPDEs, July 17–20, 2019, University of Pisa, Department of Mathematics, Italy, July 18, 2019.

M. Coghi, Stochastic nonlinear Fokker–Planck equations, 11th Annual ERC Berlin–Oxford Young Researchers Meeting on Applied Stochastic Analysis, May 23–25, 2019, WIAS Berlin, May 23, 2019.

P. Pigato, Applications of stochastic analysis to volatility modelling, Università degli Studi di Roma ``Tor Vergata'', Dipartimento di Economia e Finanza, Italy, September 27, 2019.

P. Pigato, Density and tube estimates for diffusion processes under Hörmander-type conditions, Statistics Seminars, Università di Bologna, Italy, February 28, 2019.

P. Pigato, Parameter estimation in a threshold diffusion, 62nd ISI World Statistics Congress 2019, IPS26 ``Perspectives on Statistical Methods for Time Dependent Processes'', August 18–23, 2019, Kuala Lumpur, Malaysia, August 21, 2019.

P. Pigato, Precise asymptotics of rough stochastic volatility models, 11th Annual ERC Berlin–Oxford Young Researchers Meeting on Applied Stochastic Analysis, May 23–25, 2019, WIAS Berlin, May 23, 2019.

P. Pigato, Precise asymptotics: Robust stochastic volatility models, Forschungsseminar Wahrscheinlichkeitstheorie, Universität Potsdam, July 1, 2019.

P. Pigato, Rough stochastic volatility models, Università degli Studi di Roma ``Tor Vergata'', Dipartimento di Economia e Finanza, Italy, June 26, 2019.

M. Redmann, Energy estimates and model order reduction for stochastic bilinear systems, 12th International Workshop on Stochastic Models and Control, March 19–22, 2019, Cottbus, March 21, 2019.

M. Redmann, Model reduction for stochastic bilinear systems, 9th International Congress on Industrial and Applied Mathematics (ICIAM), July 15–19, 2019, Valencia, Spain, July 17, 2019.

M. Redmann, Numerical approximations for rough and stochastic differential equations, Technische Universität Bergakademie Freiberg, Fakultät für Mathematik und Informatik, April 1, 2019.

M. Redmann, Numerical approximations for rough and stochastic differential equations, Technische Universität Dresden, Fakultät Mathematik, April 12, 2019.

N. Tapia, Noncommutative Wick polynomials, Rencontre GDR Renormalisation, September 30–October 4, 2019, L'Université du Littoral Côte d'Opale, Laboratoire de Mathématiques Pures et Appliquées Joseph Liouville, Calais, France, October 3, 2019.

N. Tapia, Algebraic aspects of signatures, SciCADE 2019, International Conference on Scientific Computation and Differential Equations, July 22–26, 2019, Innsbruck, Austria, July 24, 2019.

N. Tapia, Iterated-sums signature, quasisymmetric functions and time series analysis, 12th Oxford–Berlin Young Researchers Meeting on Applied Stochastic Analysis, December 4–6, 2019, University of Oxford, Mathematical Institute, UK, December 5, 2019.

N. Tapia, Signatures in shape analysis, 4th Conference on Geometric Science of Information (GSI 2019), August 27–29, 2019, École Nationale de l'Aviation Civile, Toulouse, France, August 27, 2019.

A. Gasnikov, P. Dvurechensky, E. Gorbunov, E. Vorontsova, D. Selikhanovych, C.A. Uribe, Optimal tensor methods in smooth convex and uniformly convex optimization, Conference on Learning Theory, COLT 2019, Phoenix, Arizona, USA, June 24–28, 2019.

A. Kroshnin, N. Tupitsa, D. Dvinskikh, P. Dvurechensky, A. Gasnikov, C.A. Uribe, On the complexity of approximating Wasserstein barycenters, Thirty-sixth International Conference on Machine Learning, ICML 2019, Long Beach, CA, USA, June 9–15, 2019.

M. Opper, S. Reich, V. Spokoiny, V. Avanesov, D. Maoutsa, P. Rozdeba, Approximative Bayesian inference and model selection for stochastic differential equations, CRC 1294 Annual Meeting 2019, Universität Potsdam, Campus Griebnitzsee, September 23, 2019.

D. Dvinskikh, Complexity bounds for optimal distributed primal and dual methods for finite sum minimization problems, New frontiers in high-dimensional probability and statistics 2, February 22–23, 2019, Higher School of Economics, Moscow, Russian Federation, February 23, 2019.

D. Dvinskikh, Complexity rates for accelerated primal-dual gradient method for stochastic optimisation problem, ICCOPT 2019 – Sixth International Conference on Continuous Optimization, Session ``Primal-Dual Methods for Structured Optimization'', August 5–8, 2019, Berlin, August 7, 2019.

D. Dvinskikh, Decentralized and parallelized primal and dual accelerated methods, Structural Inference in High-Dimensional Models 2, National Research University Higher School of Economics, HDILab, St. Petersburg, Russian Federation, August 26–30, 2019.

D. Dvinskikh, Distributed decentralized (stochastic) optimization for dual friendly functions, Optimization and Statistical Learning, Les Houches, France, March 24–29, 2019.

D. Dvinskikh, Introduction to decentralized optimization, Summer School ``Big Data'', July 15–18, 2019, Sirius Educational Centre, Sochi, Russian Federation, July 16, 2019.

CH. Bayer, A regularity structure for rough volatility, Vienna Seminar in Mathematical Finance and Probability, Technische Universität Wien, Research Unit of Financial and Actuarial Mathematics, Austria, January 10, 2019.

CH. Bayer, Calibration of rough volatility models by deep learning, Rough Workshop 2019, September 4–6, 2019, Technische Universität Wien, Financial and Actuarial Mathematics, Austria.

CH. Bayer, Deep calibration of rough volatility models, New Directions in Stochastic Analysis: Rough Paths, SPDEs and Related Topics, WIAS und TU Berlin, March 18, 2019.

CH. Bayer, Deep calibration of rough volatility models, SIAM Conference on Financial Mathematics & Engineering, June 4–7, 2019, Society for Industrial and Applied Mathematics, Toronto, Ontario, Canada, June 7, 2019.

CH. Bayer, Learning rough volatility, Algebraic and Analytic Perspectives in the Theory of Rough Paths and Signatures, November 14–15, 2019, University of Oslo, Department of Mathematics, Norway, November 14, 2019.

CH. Bayer, Numerics for rough volatility, Stochastic Processes and Related Topics, February 21–22, 2019, Kansai University, Senriyama Campus, Osaka, Japan, February 22, 2019.

CH. Bayer, Pricing American options by exercise rate optimization, Workshop on Financial Risks and Their Management, February 19–20, 2019, Ryukoku University, Wagenkan, Kyoto, Japan, February 19, 2019.

CH. Bayer, Pricing American options by exercise rate optimization, ICCOPT 2019 – Sixth International Conference on Continuous Optimization, Session ``Stochastic Optimization and Its Applications (Part III)'', August 5–8, 2019, Berlin, August 7, 2019.

CH. Bayer, Pricing American options by exercise rate optimization, Seminar, Imperial College London, Mathematical Finance Department, UK, December 16, 2019.

P. Dvurechensky, A unifying framework for accelerated randomized optimization methods, ICCOPT 2019 – Sixth International Conference on Continuous Optimization, Session ``Large-Scale Stochastic First-Order Optimization (Part I)'', August 5–8, 2019, Berlin, August 6, 2019.

P. Dvurechensky, Distributed calculation of Wasserstein barycenters, Huawei, Shanghai, China, June 6, 2019.

P. Dvurechensky, Distributed optimization for Wasserstein barycenter, Optimization and Statistical Learning, Les Houches, France, March 24–29, 2019.

P. Dvurechensky, HDI Lab: Optimization methods for optimal transport, HSE-Yandex Autumn School on Generative Models, November 26–29, 2019, Higher School of Economics, National Research University, Moscow, Russian Federation.

P. Dvurechensky, Near-optimal method for highly smooth convex optimization, Conference on Learning Theory, COLT 2019, June 24–28, 2019, Phoenix, Arizona, USA, June 27, 2019.

P. Dvurechensky, On the complexity of approximating Wasserstein barycenters, Thirty-sixth International Conference on Machine Learning, ICML 2019, June 9–15, 2019, Long Beach, CA, USA, June 12, 2019.

P. Dvurechensky, On the complexity of optimal transport problems, Computational and Mathematical Methods in Data Science, Berlin, October 24–25, 2019.

P. Dvurechensky, On the complexity of optimal transport problems, Optimal Transportation Meeting, September 23–27, 2019, Higher School of Economics, Moscow, Russian Federation, September 26, 2019.

P. Friz, Multiscale systems, homogenization and rough paths, CRC 1114 Colloquium & Lectures, Collaborative Research Center CRC 1114 ``Scaling Cascades in Complex Systems'', Freie Universität Berlin, June 13, 2019.

P. Friz, On differential equations with singular forcing, Berliner Oberseminar Nichtlineare partielle Differentialgleichungen (Langenbach-Seminar), WIAS Berlin, January 9, 2019.

P. Friz, Rough paths, rough volatility and regularity structures, Minicourse consisting of two sessions, Mathematics and CS Seminar, July 4–5, 2019, Institute of Science and Technology Austria, Klosterneuburg, Austria.

P. Friz, Rough paths, rough volatility, regularity structures, Rough Workshop 2019, September 4–6, 2019, Technische Universität Wien, Financial and Actuarial Mathematics, Austria.

P. Friz, Rough semimartingales, Paths between Probability, PDEs, and Physics: Conference 2019, July 1–5, 2019, Imperial College London, July 2, 2019.

P. Friz, Rough transport, revisited, Algebraic and Analytic Perspectives in the Theory of Rough Paths and Signatures, November 14–15, 2019, University of Oslo, Department of Mathematics, Norway, November 14, 2019.

P. Friz, Some perspectives on harmonic analysis and rough paths, Harmonic Analysis and Rough Paths, November 18–19, 2019, Hausdorff Research Institute for Mathematics, Bonn, November 18, 2019.

P. Mathé, Relating direct and inverse Bayesian problems via the modulus of continuity, Stochastic Computation and Complexity (ibcparis2019), April 15–16, 2019, Institut Henri Poincaré, Paris, France, April 16, 2019.

P. Mathé, Relating direct and inverse problems via the modulus of continuity, The Chemnitz Symposium on Inverse Problems 2019, September 30–October 2, 2019, Technische Universität Chemnitz, Fakultät für Mathematik, Frankfurt a. M., October 1, 2019.

P. Mathé, The role of the modulus of continuity in inverse problems, Forschungsseminar Inverse Probleme, Technische Universität Chemnitz, Fachbereich Mathematik, August 13, 2019.

J. Polzehl, K. Tabelow, Analyzing neuroimaging experiments within R, 2019 OHBM Annual Meeting, Organization for Human Brain Mapping, Rome, Italy, June 9–13, 2019.

J. Polzehl, R Introduction, visualization and package management / Exploring functional data, Leibniz MMS Summer School 2019, October 28–November 1, 2019, Mathematisches Forschungsinstitut Oberwolfach.

J.G.M. Schoenmakers, Tractability of continuous time optimal stopping problems, DynStoch 2019, June 12–15, 2019, Delft University of Technology, Institute of Applied Mathematics, Netherlands, June 14, 2019.

J.G.M. Schoenmakers, Tractability of continuous time optimal stopping problems, Séminaire du Groupe de Travail ``Finance Mathématique, Probabilités Numériques et Statistique des Processus'', Université Paris Diderot, LPSM, Equipe Mathématiques Financières et Actuarielles, Probabilités Numériques, France, June 27, 2019.

V. Spokoiny, Advanced statistical methods, April 9–11, 2019, Higher School of Economics, National Research University, Moscow, Russian Federation.

V. Spokoiny, Bayesian inference for nonlinear inverse problems, Rencontres de Statistiques Mathématiques, December 16–20, 2019, Centre International de Rencontres Mathématiques (CIRM), Luminy, France, December 19, 2019.

V. Spokoiny, Bayesian inference vs stochastic optimization, HSE-Yandex Autumn School on Generative Models, November 26–29, 2019, Higher School of Economics, National Research University, Moscow, Russian Federation, November 29, 2019.

V. Spokoiny, Inference for spectral projectors, RTG Kolloquium, Universität Heidelberg, Institut für Angewandte Mathematik, January 10, 2019.

V. Spokoiny, Optimal stopping and control via reinforced regression, Optimization and Statistical Learning, March 25–28, 2019, Les Houches School of Physics, France, March 26, 2019.

V. Spokoiny, Optimal stopping via reinforced regression, HUB-NUS FinTech Workshop, March 18–21, 2019, National University of Singapore, Institute for Mathematical Science, Singapore, March 21, 2019.

V. Spokoiny, Statistical inference for barycenters, Optimal Transportation Meeting, September 23–27, 2019, Higher School of Economics, National Research University, Moscow, Russian Federation, September 26, 2019.

K. Tabelow, Adaptive smoothing of data from multiparameter mapping, 7th Nordic-Baltic Biometric Conference, June 3–5, 2019, Vilnius University, Faculty of Medicine, Lithuania, June 5, 2019.

K. Tabelow, Model-based imaging for quantitative MRI, KoMSO Challenge-Workshop Mathematical Modeling of Biomedical Problems, December 12–13, 2019, Friedrich-Alexander-Universität Erlangen-Nürnberg, December 12, 2019.

K. Tabelow, Quantitative MRI for in-vivo histology, Neuroimmunological Colloquium, Charité-Universitätsmedizin Berlin, November 11, 2019.

K. Tabelow, Quantitative MRI for in-vivo histology, Doktorandenseminar, Berlin School of Mind and Brain, April 1, 2019.

K. Tabelow, Speaker of Neuroimaging Workshop, Workshop in Advanced Statistics: Good Scientific Practice for Neuroscientists, February 13–14, 2019, University of Zurich, Center for Reproducible Science, Switzerland.

K. Tabelow, Version control using git / Dynamic documents in R, Leibniz MMS Summer School 2019, October 28–November 1, 2019, Mathematisches Forschungsinstitut Oberwolfach.
Preprints Published Elsewhere

O. Butkovsky, K. Dareiotis, M. Gerencsér, Approximation of SDEs: A stochastic sewing approach, Preprint no. arXiv:1909.07961, Cornell University, 2020.

C. Bellingeri, P. Friz, M. Gerencsér, Singular paths spaces and applications, Preprint no. arXiv:2003.03352, Cornell University, 2020.

A. Rastogi, P. Mathé, Inverse learning in Hilbert scales, Preprint no. arXiv:2002.10208, Cornell University Library, arXiv.org, 2020.

I. Shibaev, P. Dvurechensky, A. Gasnikov, Zeroth-order methods for noisy Hölder-gradient functions, Preprint no. arXiv:2006.11857, Cornell University, 2020.

D. Tiapkin, A. Gasnikov, P. Dvurechensky, Stochastic saddle-point optimization for Wasserstein barycenters, Preprint no. arXiv:2006.06763, Cornell University, 2020.
Abstract
We study the computation of non-regularized Wasserstein barycenters of probability measures supported on a finite set. The first result gives a stochastic optimization algorithm for a discrete distribution over the probability measures, which is comparable with the current best algorithms. The second result extends the previous one to an arbitrary distribution using kernel methods. Moreover, this new algorithm has a better total complexity than the Stochastic Averaging approach via the Sinkhorn algorithm in many cases. 
N. Tupitsa, P. Dvurechensky, A. Gasnikov, C.A. Uribe, Multimarginal optimal transport by accelerated alternating minimization, Preprint no. arXiv:2004.02294, Cornell University Library, arXiv.org, 2020.
Abstract
We consider a multimarginal optimal transport problem, which includes as a particular case the Wasserstein barycenter problem. In this problem one has to find an optimal coupling between m probability measures, which amounts to finding a tensor of order m. We propose an accelerated method based on accelerated alternating minimization and estimate its complexity to find an approximate solution to the problem. We use entropic regularization with a sufficiently small regularization parameter and apply accelerated alternating minimization to the dual problem. A novel primal-dual analysis is used to reconstruct the approximately optimal coupling tensor. Our algorithm exhibits a better computational complexity than the state-of-the-art methods for some regimes of the problem parameters. 
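For the two-marginal case, plain (non-accelerated) alternating minimization on the entropic dual is exactly the Sinkhorn iteration. The Python sketch below shows that building block only, under illustrative parameter choices; it is not the paper's accelerated m-marginal method:

```python
import numpy as np

def sinkhorn(p, q, C, gamma=0.05, n_iter=500):
    """Alternating minimization on the entropic dual of two-marginal OT.

    Each update is the exact minimizer in one dual block with the other
    block fixed; the returned coupling has row marginal p exactly (the
    u-block is updated last) and column marginal q approximately.
    """
    K = np.exp(-C / gamma)              # Gibbs kernel
    u = np.ones_like(p, dtype=float)
    v = np.ones_like(q, dtype=float)
    for _ in range(n_iter):
        v = q / (K.T @ u)               # optimal v-block with u fixed
        u = p / (K @ v)                 # optimal u-block with v fixed
    return u[:, None] * K * v[None, :]  # approximately optimal coupling
```

The accelerated method in the paper replaces these greedy block updates with an accelerated scheme and couples them with a primal-dual analysis to recover the transport tensor.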
D. Dvinskikh, A. Ogaltsov, A. Gasnikov, P. Dvurechensky, A. Tyurin, V. Spokoiny, Adaptive gradient descent for convex and nonconvex stochastic optimization, Preprint no. arXiv:1911.08380, Cornell University, 2020.

D. Dvinskikh, Stochastic approximation versus sample average approximation for population Wasserstein barycenter calculation, Preprint no. arXiv:2001.07697, Cornell University, 2020.

P. Dvurechensky, S. Shtern, M. Staudigl, P. Ostroukhov, K. Safin, Self-concordant analysis of Frank-Wolfe algorithms, Preprint no. arXiv:2002.04320, Cornell University, 2020.
Abstract
Projection-free optimization via different variants of the Frank-Wolfe (FW) method has become one of the cornerstones of optimization for machine learning, since in many cases the linear minimization oracle is much cheaper to implement than projections and some sparsity needs to be preserved. In a number of applications, e.g., Poisson inverse problems or quantum state tomography, the loss is given by a self-concordant (SC) function having unbounded curvature, implying the absence of theoretical guarantees for the existing FW methods. We use the theory of SC functions to provide a new adaptive step size for FW methods and prove a global convergence rate O(1/k), k being the iteration counter. If the problem can be represented by a local linear minimization oracle, we are the first to propose a FW method with linear convergence rate without assuming either strong convexity or a Lipschitz continuous gradient. 
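The linear minimization oracle at the heart of FW methods is easy to see on the probability simplex, where it simply returns a vertex. A minimal sketch with the classical 2/(k+2) step size (not the paper's adaptive self-concordant step; the quadratic objective is illustrative):

```python
import numpy as np

def frank_wolfe_simplex(grad, x0, n_iter=1000):
    """Frank-Wolfe over the probability simplex: no projections,
    only a linear minimization oracle (a vertex pick)."""
    x = x0.copy()
    for k in range(n_iter):
        g = grad(x)
        s = np.zeros_like(x)
        s[np.argmin(g)] = 1.0        # LMO: vertex minimizing <g, s>
        gamma = 2.0 / (k + 2.0)      # classical step size
        x = (1.0 - gamma) * x + gamma * s
    return x

# minimize ||x - c||^2 over the simplex; c already lies in the simplex
c = np.array([0.2, 0.3, 0.5])
x = frank_wolfe_simplex(lambda x: 2.0 * (x - c), np.array([1.0, 0.0, 0.0]))
```

The iterates stay feasible by construction, being convex combinations of simplex vertices.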
P. Friz, J. Gatheral, R. Radoičić, Forests, cumulants, martingales, Preprint no. arXiv:2002.01448, Cornell University Library, arXiv.org, 2020.
Abstract
This work is concerned with forest and cumulant type expansions of general random variables on a filtered probability space. We establish a "broken exponential martingale" expansion that generalizes and unifies the exponentiation result of Alòs, Gatheral, and Radoičić and the cumulant recursion formula of Lacoin, Rhodes, and Vargas. Specifically, we exhibit the two previous results as lower dimensional projections of the same generalized forest expansion, subsequently related by forest reordering. Our approach also leads to sharp integrability conditions for validity of the cumulant formula, as required by many of our examples, including iterated stochastic integrals, Lévy area, Bessel processes, KPZ with smooth noise, Wiener-Itô chaos and "rough" stochastic (forward) variance models. 
P. Friz, P. Pigato, J. Seibel, The step stochastic volatility model (SSVM), Preprint, May 7, 2020, available at SSRN's eLibrary: https://ssrn.com/abstract=3595408 or http://dx.doi.org/10.2139/ssrn.3595408.
Abstract
Stochastic Volatility Models (SVMs) are ubiquitous in quantitative finance. But is there a Markovian SVM capable of producing extreme (T^(-1/2)) short-dated implied volatility skew? We here propose a modification of a given SVM "backbone", Heston for instance, to achieve just this, without adding jumps or non-Markovian "rough" fractional volatility dynamics. This is achieved via a non-smooth leverage function, such as a step function. The resulting Step Stochastic Volatility Model (SSVM) is thus a parametric example of a local stochastic volatility model (LSVM). From an IT perspective, SSVM amounts to trivial modifications in the code of existing SVM implementations. From a QF perspective, SSVM offers new flexibility in smile modelling and towards assessing model risk. For comparison, we then exhibit the market-induced leverage function for LSVM, calibrated with the particle method. 
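A step leverage function is indeed a trivial modification to an existing Monte Carlo implementation. A toy Euler-Maruyama sketch (a single-factor local-volatility caricature with an assumed barrier level and volatility levels, not the paper's calibrated SSVM):

```python
import numpy as np

def simulate_step_vol(x0=1.0, s_lo=0.1, s_hi=0.4, barrier=1.0,
                      T=1.0, n_steps=1000, n_paths=5000, seed=0):
    """Euler-Maruyama for dX = L(X) X dW with a step leverage function L."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    X = np.full(n_paths, x0)
    for _ in range(n_steps):
        vol = np.where(X < barrier, s_lo, s_hi)   # the step leverage function
        X = X * (1.0 + vol * np.sqrt(dt) * rng.standard_normal(n_paths))
    return X

X_T = simulate_step_vol()   # terminal values; the price process stays a martingale
```

Swapping a smooth leverage function for the step function is a one-line change in `vol`, which is the "trivial modification" the abstract refers to.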
V. Avanesov, How to gamble with non-stationary X-armed bandits and have no regrets, Preprint no. arXiv:1908.07636, Cornell University Library, arXiv.org, 2019.
Abstract
In the X-armed bandit problem an agent sequentially interacts with an environment, which yields a reward based on the vector input the agent provides. The agent's goal is to maximise the sum of these rewards across some number of time steps. The problem and its variations have been the subject of numerous studies, suggesting sublinear and sometimes optimal strategies. This paper introduces a novel variation of the problem. We consider an environment which can abruptly change its behaviour an unknown number of times. To that end, we propose a novel strategy and prove that it attains sublinear cumulative regret. Moreover, in the case of a highly smooth relation between an action and the corresponding reward, the method is nearly optimal. The theoretical results are supported by an experimental study. 
V. Avanesov, Nonparametric change point detection in regression, Preprint no. arXiv:1903.02603, Cornell University Library, arXiv.org, 2019.
Abstract
This paper considers the prominent problem of change-point detection in regression. The study suggests a novel testing procedure featuring a fully data-driven calibration scheme. The method is essentially a black box, requiring no tuning from the practitioner. The approach is investigated from both theoretical and practical points of view. The theoretical study demonstrates proper control of the type I error rate under H0 and power approaching 1 under H1. The experiments conducted on synthetic data fully support the theoretical claims. In conclusion, the method is applied to financial data, where it detects sensible change-points. Techniques for change-point localization are also suggested and investigated. 
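The flavor of such tests can be illustrated with a much simpler, non-calibrated sliding-window mean-shift statistic (the paper's data-driven calibration and black-box test are not reproduced; window size and the synthetic signal are illustrative):

```python
import numpy as np

def window_stat(y, w=30):
    """Scaled absolute difference of means between adjacent windows."""
    n = len(y)
    stats = np.zeros(n)
    for t in range(w, n - w):
        stats[t] = abs(y[t - w:t].mean() - y[t:t + w].mean()) * np.sqrt(w / 2.0)
    return stats

# synthetic signal with a mean shift at t = 100
rng = np.random.default_rng(0)
y = np.concatenate([rng.normal(0.0, 1.0, 100), rng.normal(3.0, 1.0, 100)])
t_hat = int(np.argmax(window_stat(y)))   # estimated change-point location
```

The statistic peaks near the true change-point; a real test additionally needs a calibrated threshold to control the type I error, which is exactly what the paper's procedure provides.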
F. Besold, V. Spokoiny, Adaptive manifold clustering, Preprint no. arXiv:1912.04869, Cornell University Library, arXiv.org, 2019.

O. Butkovsky, K. Dareiotis, M. Gerencsér, Approximation of SDEs: A stochastic sewing approach, Preprint no. arXiv:1909.07961, Cornell University Library, arXiv.org, 2019.

O. Butkovsky, A. Kulik, M. Scheutzow, Generalized couplings and ergodic rates for SPDEs and other Markov models, Preprint no. arXiv:1806.00395, Cornell University Library, arXiv.org, 2019.
Abstract
We establish verifiable general sufficient conditions for exponential or sub-exponential ergodicity of Markov processes that may lack the strong Feller property. We apply the obtained results to show exponential ergodicity of a variety of nonlinear stochastic partial differential equations with additive forcing, including 2D stochastic Navier-Stokes equations. Our main tool is a new version of the generalized coupling method. 
O. Butkovsky, M. Scheutzow, Couplings via comparison principle and exponential ergodicity of SPDEs in the hypoelliptic setting, Preprint no. arXiv:1907.03725, Cornell University Library, arXiv.org, 2019.

O. Butkovsky, F. Wunderlich, Asymptotic strong Feller property and local weak irreducibility via generalized couplings, Preprint no. arXiv:1912.06121, Cornell University Library, arXiv.org, 2019.
Abstract
In this short note we show how the asymptotic strong Feller property (ASF) and local weak irreducibility can be established via generalized couplings. We also prove that a stronger form of ASF together with local weak irreducibility implies uniqueness of an invariant measure. The latter result is optimal in a certain sense and complements some of the corresponding results of Hairer, Mattingly (2008). 
M. Redmann, An $L^2_T$-error bound for time-limited balanced truncation, Preprint no. arXiv:1907.05478, Cornell University Library, arXiv.org, 2019.

Y.W. Sun, K. Papagiannouli, V. Spokoiny, Online graphbased changepoint detection for high dimensional data, Preprint no. arXiv:1906.03001, Cornell University Library, arXiv.org, 2019.
Abstract
Online change-point detection (OCPD) is important for applications in various areas such as finance, biology, and the Internet of Things (IoT). However, OCPD faces major challenges due to high dimensionality, and it is still rarely studied in the literature. In this paper, we propose a novel online graph-based change-point detection algorithm to detect changes of distribution in low- to high-dimensional data. We introduce a similarity measure, derived from the graph-spanning ratio, to test statistically whether a change occurs. Through a numerical study using artificial online datasets, our data-driven approach demonstrates high detection power for high-dimensional data, while the false alarm rate (type I error) is controlled at a nominal significance level. In particular, our graph-spanning approach has desirable power with small and multiple scanning windows, which allows timely detection of change-points in the online setting. 
M. Alkousa, D. Dvinskikh, F. Stonyakin, A. Gasnikov, Accelerated methods for composite non-bilinear saddle point problem, Preprint no. arXiv:1906.03620, Cornell University Library, arXiv.org, 2019.

S. Athreya, O. Butkovsky, L. Mytnik, Strong existence and uniqueness for stable stochastic differential equations with distributional drift, Preprint no. arXiv:1801.03473, Cornell University Library, arXiv.org, 2019.

J. Diehl, K. EbrahimiFard, N. Tapia, Time warping invariants of multidimensional time series, Preprint no. arXiv:1906.05823, Cornell University Library, arXiv.org, 2019.
Abstract
In data science, one is often confronted with a time series representing measurements of some quantity of interest. Usually, in a first step, features of the time series need to be extracted. These are numerical quantities that aim to succinctly describe the data and to dampen the influence of noise. In some applications, these features are also required to satisfy some invariance properties. In this paper, we concentrate on time-warping invariants. We show that these correspond to a certain family of iterated sums of the increments of the time series, known as quasi-symmetric functions in the mathematics literature. We present these invariant features in an algebraic framework, and we develop some of their basic properties. 
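The invariance is easy to check numerically: time warping repeats values, which inserts zero increments, and zero increments do not contribute to iterated sums. A small sketch for a scalar series and the first two iterated sums:

```python
import numpy as np

def iterated_sums(x):
    """First two iterated sums of increments: sum_i dx_i and
    sum_{i<j} dx_i * dx_j.  Zero increments contribute nothing,
    hence the time-warping invariance."""
    dx = np.diff(np.asarray(x, dtype=float))
    s1 = dx.sum()
    s2 = sum(dx[i] * dx[j]
             for i in range(len(dx)) for j in range(i + 1, len(dx)))
    return s1, s2

a = iterated_sums([1.0, 2.0, 4.0, 3.0])
b = iterated_sums([1.0, 2.0, 2.0, 4.0, 4.0, 3.0])   # same series, values repeated
```

Both calls return identical feature values, illustrating the invariance for these two low-order members of the family.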
E. Gorbunov, D. Dvinskikh, A. Gasnikov, Optimal decentralized distributed algorithms for stochastic convex optimization, Preprint no. arXiv:1911.07363, Cornell University Library, arXiv.org, 2019.

A. Kroshnin, V. Spokoiny, A. Suvorikova, Statistical inference for Bures-Wasserstein barycenters, Preprint no. arXiv:1901.00226, Cornell University Library, arXiv.org, 2019.

N. Puchkin, V. Spokoiny, Structure-adaptive manifold estimation, Preprint no. arXiv:1906.05014, Cornell University Library, arXiv.org, 2019.
Abstract
We consider a problem of manifold estimation from noisy observations. Many manifold learning procedures locally approximate a manifold by a weighted average over a small neighborhood. However, in the presence of large noise, the assigned weights become so corrupted that the averaged estimate shows very poor performance. We suggest a novel computationally efficient structure-adaptive procedure, which simultaneously reconstructs a smooth manifold and estimates projections of the point cloud onto this manifold. The proposed approach iteratively refines the weights at each step, using the structural information obtained at previous steps. After several iterations, we obtain nearly "oracle" weights, so that the final estimates are nearly efficient even in the presence of relatively large noise. In our theoretical study we establish tight lower and upper bounds proving asymptotic optimality of the method for manifold estimation under the Hausdorff loss. Our finite sample study confirms a very reasonable performance of the procedure in comparison with other methods of manifold estimation. 
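The local weighted averaging that the procedure refines can be illustrated in a single step on a noisy circle (bandwidth and noise level are illustrative; the paper's iterative weight refinement and projection estimates are not reproduced):

```python
import numpy as np

def local_average(Y, h=0.2):
    """Replace each point by a kernel-weighted mean of its neighbors."""
    D = np.sum((Y[:, None, :] - Y[None, :, :]) ** 2, axis=-1)
    W = np.exp(-D / h ** 2)
    W /= W.sum(axis=1, keepdims=True)
    return W @ Y

# noisy sample from a circle, a 1-d manifold in R^2
rng = np.random.default_rng(2)
t = rng.uniform(0.0, 2.0 * np.pi, 400)
Y = np.c_[np.cos(t), np.sin(t)] + 0.05 * rng.normal(size=(400, 2))
Y_hat = local_average(Y)   # points pulled toward the circle
```

One such step already reduces the radial noise; with large noise the weights themselves get corrupted, which is what the paper's iterative refinement addresses.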
A. Rastogi, G. Blanchard, P. Mathé, Convergence analysis of Tikhonov regularization for nonlinear statistical inverse learning problems, Preprint no. arXiv:1902.05404, Cornell University Library, arXiv.org, 2019.
Abstract
We study a nonlinear statistical inverse learning problem, where we observe the noisy image of a quantity through a nonlinear operator at some random design points. We consider the widely used Tikhonov regularization (or method of regularization, MOR) approach to reconstruct the estimator of the quantity for the nonlinear ill-posed inverse problem. The estimator is defined as the minimizer of a Tikhonov functional, which is the sum of a data misfit term and a quadratic penalty term. We develop a theoretical analysis for the minimizer of the Tikhonov regularization scheme using the ansatz of reproducing kernel Hilbert spaces. We discuss optimal rates of convergence for the proposed scheme, uniformly over classes of admissible solutions, defined through appropriate source conditions. 
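In the simpler case where the operator is the identity (plain nonparametric regression), the Tikhonov/RKHS estimator is kernel ridge regression, solvable in closed form via the representer theorem. A minimal sketch (Gaussian kernel; bandwidth and regularization parameter are illustrative):

```python
import numpy as np

def kernel_ridge(X, y, lam=1e-3, gamma=50.0):
    """Minimizer of (1/n) sum (f(x_i) - y_i)^2 + lam * ||f||_K^2
    over the RKHS of the Gaussian kernel."""
    D = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-gamma * D)                                  # Gram matrix
    alpha = np.linalg.solve(K + lam * len(y) * np.eye(len(y)), y)
    def predict(x_new):
        d = np.sum((x_new[None, :] - X) ** 2, axis=-1)
        return float(np.exp(-gamma * d) @ alpha)
    return predict

X = np.linspace(0.0, 1.0, 20)[:, None]
y = np.sin(2.0 * np.pi * X[:, 0])     # noiseless smooth target
f = kernel_ridge(X, y)
```

The quadratic penalty term here is the squared RKHS norm; in the paper's setting the data misfit additionally involves a nonlinear forward operator, which removes the closed-form solution.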
F. Stonyakin, A. Gasnikov, A. Tyurin, D. Pasechnyuk, A. Agafonov, P. Dvurechensky, D. Dvinskikh, A. Kroshnin, V. Piskunova, Inexact model: A framework for optimization and variational inequalities, Preprint no. arXiv:1902.00990, Cornell University Library, arXiv.org, 2019.

F. Stonyakin, D. Dvinskikh, P. Dvurechensky, A. Kroshnin, O. Kuznetsova, A. Agafonov, A. Gasnikov, A. Tyurin, C.A. Uribe, D. Pasechnyuk, S. Artamonov, Gradient methods for problems with inexact model of the objective, Preprint no. arXiv:1902.09001, Cornell University Library, arXiv.org, 2019.

N. Tupitsa, P. Dvurechensky, A. Gasnikov, S. Guminov, Alternating minimization methods for strongly convex optimization, Preprint no. arXiv:1911.08987, Cornell University Library, arXiv.org, 2019.
Abstract
We consider alternating minimization procedures for convex optimization problems with variables divided into many blocks, each block being amenable to minimization with respect to its variables with the other variable blocks frozen. In the case of two blocks, we prove a linear convergence rate for the alternating minimization procedure under the Polyak-Łojasiewicz condition, which can be seen as a relaxation of the strong convexity assumption. Under the strong convexity assumption in the many-block setting, we provide an accelerated alternating minimization procedure with a linear rate depending on the square root of the condition number, as opposed to the condition number itself for the non-accelerated method. 
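For a two-block least-squares objective, each block update has a closed form, and alternating them converges linearly to the joint minimizer. A minimal sketch (the accelerated many-block variant is not reproduced; problem sizes and data are illustrative):

```python
import numpy as np

def alt_min_lstsq(A1, A2, b, n_iter=500):
    """Minimize ||A1 x + A2 y - b||^2 by exact minimization in x
    with y frozen, then in y with x frozen."""
    x = np.zeros(A1.shape[1])
    y = np.zeros(A2.shape[1])
    for _ in range(n_iter):
        x = np.linalg.lstsq(A1, b - A2 @ y, rcond=None)[0]
        y = np.linalg.lstsq(A2, b - A1 @ x, rcond=None)[0]
    return x, y

rng = np.random.default_rng(1)
A1 = rng.normal(size=(12, 3))
A2 = rng.normal(size=(12, 3))
b = rng.normal(size=12)
x, y = alt_min_lstsq(A1, A2, b)   # residual matches the joint least-squares fit
```

The per-iteration contraction factor depends on the angles between the column spaces of `A1` and `A2`; this conditioning dependence is what the accelerated variant in the paper improves to its square root.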
D. Dvinskikh, A. Gasnikov, Decentralized and parallelized primal and dual accelerated methods for stochastic convex programming problems, Preprint no. arXiv:1904.09015, Cornell University Library, arXiv.org, 2019.
Abstract
We introduce primal and dual stochastic gradient oracle methods for decentralized convex optimization problems. The proposed methods are optimal in terms of communication steps for primal and dual oracles. However, optimality in terms of oracle calls per node holds in all cases only up to a logarithmic factor and up to the notion of smoothness (the worst case vs. the average one). All the methods for the stochastic oracle can additionally be parallelized on each node due to the batching technique. 
D. Dvinskikh, E. Gorbunov, A. Gasnikov, P. Dvurechensky, C.A. Uribe, On dual approach for distributed stochastic convex optimization over networks, Preprint no. arXiv:1903.09844, Cornell University Library, arXiv.org, 2019.
Abstract
We introduce dual stochastic gradient oracle methods for distributed stochastic convex optimization problems over networks. We estimate the complexity of the proposed method in terms of the probability of large deviations. This analysis is based on a new technique that allows us to bound the distance between the iteration sequence and the solution point. By a proper choice of batch size, we can guarantee that this distance equals (up to a constant) the distance between the starting point and the solution. 
P. Dvurechensky, A. Gasnikov, P. Ostroukhov, C.A. Uribe, A. Ivanova, Near-optimal tensor methods for minimizing the gradient norm of a convex function, Preprint no. arXiv:1912.03381, Cornell University Library, arXiv.org, 2019.

V. Spokoiny, M. Panov, Accuracy of Gaussian approximation in nonparametric Bernstein-von Mises theorem, Preprint no. arXiv:1910.06028, Cornell University Library, arXiv.org, 2019.