Research Group "Stochastic Algorithms and Nonparametric Statistics"
Seminar "Modern Methods in Applied Stochastics and Nonparametric
Statistics" Summer Semester 2012
last reviewed: June 5, 2012, Christine Schneider
06.03.12
Speaker: Niklas Willrich (HU Berlin)
Solutions of martingale problems for Lévy-type operators with discontinuous parameters and existence of weak solutions for associated stochastic differential equations
Abstract:
Starting from the idea of modelling a one-dimensional stochastic process
(X_t)_{t≥0}, which behaves like one Lévy process (L^1_t)_{t≥0} on the upper half-line
and like another Lévy process (L^2_t)_{t≥0} on the lower half-line, we study the SDE

X_t = X_0 + ∫_0^t I_{(−∞,0]}(X_{s−}) dL^1_s + ∫_0^t I_{(0,+∞)}(X_{s−}) dL^2_s,   t ≥ 0.   (1)

We show the existence of a weak solution of (1) by means of the existence
of a solution for an associated martingale problem under the condition

lim_{|ξ|→∞} Re q_i(ξ) / log(1 + |ξ|) = ∞,   i ∈ {1, 2},   (2)

where q_1, q_2 denote the characteristic exponents of the Lévy processes. The
theorem developed to show the existence of a solution for the associated
martingale problem can also be used to show the existence of solutions of
martingale problems associated with the operators

A f(x) = ∫_{R\{0}} (f(x + y) − f(x)) h(x) / |y|^{1+α(x)} dy,   x ∈ R,   (3)

with domain of definition D(A) = C_c^∞(R) and α: R → R a measurable
function with 0 < inf_{x∈R} α(x) ≤ sup_{x∈R} α(x) < 2 whose set of discontinuities
has countable closure. In the case of a smooth enough function α the
martingale problem has already been shown to be well-posed, and the
solutions are called stable-like processes (in the sense of Bass [1]).
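As a toy illustration of the dynamics in (1), the following Python sketch (all model choices and parameters are illustrative assumptions, not taken from the talk) runs an Euler-type simulation in which the increment of one Lévy process is used while X is nonpositive and the increment of another while X is positive; both drivers are Brownian motions with drift plus compound Poisson jumps.

import numpy as np

# Toy Euler-type simulation of an SDE of the form (1): the increment of L^1 is
# used while X is on the non-positive half-line and the increment of L^2 while
# X is positive. Both drivers (Brownian motion with drift plus compound
# Poisson jumps) and all parameters are illustrative choices.

rng = np.random.default_rng(0)
T, n = 5.0, 5000
dt = T / n

def levy_increments(drift, vol, jump_rate, jump_scale):
    """Increments over the grid of a Lévy process: drift + Brownian part
    + compound Poisson jumps with Gaussian jump sizes."""
    gauss = drift * dt + vol * np.sqrt(dt) * rng.standard_normal(n)
    n_jumps = rng.poisson(jump_rate * dt, size=n)
    jumps = np.array([rng.normal(0.0, jump_scale, k).sum() for k in n_jumps])
    return gauss + jumps

dL1 = levy_increments(drift=0.5, vol=1.0, jump_rate=1.0, jump_scale=0.5)   # used on (-inf, 0]
dL2 = levy_increments(drift=-0.5, vol=0.3, jump_rate=5.0, jump_scale=0.2)  # used on (0, +inf)

X = np.empty(n + 1)
X[0] = 0.0
for k in range(n):
    # the indicators in (1), evaluated at the left endpoint X_{s-}
    X[k + 1] = X[k] + (dL1[k] if X[k] <= 0.0 else dL2[k])
print("terminal value X_T:", X[-1])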
10.04.12
Speaker: tba
tba
Abstract: tba
24.04.12
Speaker: Marcel Ladkau (WIAS Berlin)
Multilevel policy iteration for pricing American options
Abstract: In this talk we propose a multilevel simulation based approach for
computing lower bounds of American options.
By constructing a sequence of Monte Carlo based policy iterations with
different levels of accuracy, we obtain a multilevel version of
policy iteration with significantly improved complexity. In this
context, we will present new convergence results regarding the bias and
variance of simulation based Howard iteration and show that the multilevel
complexity is superior to the standard one. This is joint work
with Denis Belomestny and John Schoenmakers.
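As a schematic sketch only (a toy Bermudan put under geometric Brownian motion, a single simulation-based policy-improvement step, and an ad-hoc allocation of samples across levels; this is not the authors' algorithm), the following Python code combines lower bounds computed at different accuracy levels, given by the number of inner simulations, via a multilevel telescoping sum.

import numpy as np

rng = np.random.default_rng(1)
S0, K, r, sigma, T, n_dates = 100.0, 100.0, 0.05, 0.2, 1.0, 8
dt = T / n_dates

def payoff(s):
    return np.maximum(K - s, 0.0)

def simulate_outer(n_outer):
    """Outer GBM paths on the exercise grid, shape (n_dates + 1, n_outer)."""
    paths = np.empty((n_dates + 1, n_outer))
    paths[0] = S0
    for t in range(n_dates):
        z = rng.standard_normal(n_outer)
        paths[t + 1] = paths[t] * np.exp((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z)
    return paths

def base_policy_value(t, s, n_inner):
    """Nested-simulation estimate, per state in s, of the value of the base
    policy 'stop when in the money' applied from date t + 1 onward."""
    cur = np.repeat(s, n_inner)
    val = np.zeros(cur.size)
    alive = np.ones(cur.size, dtype=bool)
    for j in range(t + 1, n_dates + 1):
        z = rng.standard_normal(cur.size)
        cur = cur * np.exp((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z)
        stop = alive & ((payoff(cur) > 0.0) | (j == n_dates))
        val[stop] = np.exp(-r * (j - t) * dt) * payoff(cur)[stop]
        alive &= ~stop
    return val.reshape(s.size, n_inner).mean(axis=1)

def lower_bound(paths, n_inner):
    """Lower bound from one Howard improvement step: exercise as soon as the
    payoff beats the estimated continuation value of the base policy."""
    val = np.zeros(paths.shape[1])
    alive = np.ones(paths.shape[1], dtype=bool)
    for t in range(1, n_dates + 1):
        pay = payoff(paths[t])
        stop = alive if t == n_dates else alive & (pay >= base_policy_value(t, paths[t], n_inner))
        val[stop] = np.exp(-r * t * dt) * pay[stop]
        alive &= ~stop
    return val.mean()

# Multilevel combination over accuracy levels (number of inner simulations):
# many cheap outer paths at the coarse level, few at the expensive ones; the
# level differences share their outer paths.
inner, outer = [8, 32, 128], [4000, 1000, 250]
estimate = lower_bound(simulate_outer(outer[0]), inner[0])
for l in range(1, len(inner)):
    p = simulate_outer(outer[l])
    estimate += lower_bound(p, inner[l]) - lower_bound(p, inner[l - 1])
print("multilevel lower bound:", estimate)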
01.05.12
Speaker: Christian Bayer (WIAS Berlin)
Utility maximization in a binomial model with proportional transaction costs
Abstract: We study the classical
problem of maximizing the expected utility of
the terminal value of a portfolio in a binomial (Cox-Ross-Rubinstein)
model. By classical results [Merton 1969], both in discrete and
continuous time, the optimal portfolio strategy in a frictionless
market is given by keeping the proportion between the wealth
invested in the stock and the total portfolio wealth constant. For
markets with proportional transaction costs in continuous time, it is
known that the optimal trading strategy consists in keeping the wealth
proportion inside an interval, in the sense that no trade takes place
while the proportion remains in the interior of the interval. The size
of this interval is asymptotically proportional to the bid-ask-spread
to the power 1/3. In the discrete-time case, we show that the size of
the no-trade-region is now asymptotically proportional to the
bid-ask-spread itself.
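For the frictionless benchmark mentioned above, the following toy Python sketch (parameters chosen purely for illustration) computes the constant optimal stock proportion in a CRR step for log utility, for which the multi-period problem reduces to the one-period problem; the no-trade-region analysis under transaction costs, the subject of the talk, is not reproduced here.

import numpy as np

# Frictionless benchmark as a toy sketch: in a CRR model with log utility the
# optimal strategy holds a constant fraction of wealth in the stock, and (by
# the myopia of log utility) this fraction solves the one-period problem.
# All parameters below are illustrative.

p, u, d, rf = 0.6, 1.10, 0.90, 1.01   # up-probability, up/down factors, riskless factor

def expected_log_utility(pi):
    """One-period expected log utility when a fraction pi of wealth is held in
    the stock and the remainder in the riskless asset."""
    up = (1.0 - pi) * rf + pi * u
    down = (1.0 - pi) * rf + pi * d
    if up <= 0.0 or down <= 0.0:
        return -np.inf     # bankruptcy is excluded
    return p * np.log(up) + (1.0 - p) * np.log(down)

grid = np.linspace(-1.0, 3.0, 4001)
values = np.array([expected_log_utility(pi) for pi in grid])
pi_star = grid[values.argmax()]
print("optimal constant stock proportion (Merton line):", pi_star)
# Under proportional transaction costs one instead trades only when the
# proportion leaves an interval around pi_star (the no-trade region).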
15.05.12
Weining Wang (HU Berlin)
Tieing the straps: uniform bootstrap confidence interval for generalized linear models
Abstract: We consider a bootstrap "coupling" technique for nonparametric robust smoothers and quantile regression, and verify the bootstrap improvement. To cope with the curse of dimensionality, a different "coupling" bootstrap technique is developed for additive models with symmetric error distributions, with a further extension to
the quantile regression framework. Our bootstrap method can be used in many situations like constructing
confidence intervals and bands. We demonstrate the bootstrap improvement in simulations and in applications
to firm expenditures and the interaction of economic sectors and the stock market.
Keywords: Nonparametric Regression, Bootstrap, Quantile Regression, Confidence Bands, Additive Model, Robust Statistics
This is joint work with Wolfgang Karl Härdle and Yaacov Ritov.
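As a generic illustration of bootstrap confidence bands (a plain residual bootstrap for a Nadaraya-Watson smoother on toy data; not the "coupling" construction of the talk), the following Python sketch bootstraps the supremum deviation to obtain a uniform band.

import numpy as np

# Generic residual bootstrap for a Nadaraya-Watson smoother on toy data:
# the supremum deviation of the bootstrap fits yields a uniform confidence
# band. Bandwidth, kernel and data are illustrative choices.

rng = np.random.default_rng(2)
n, h = 300, 0.08
x = np.sort(rng.uniform(0.0, 1.0, n))
y = np.sin(2.0 * np.pi * x) + 0.3 * rng.standard_normal(n)
grid = np.linspace(0.05, 0.95, 60)

def nw(xq, xdata, ydata):
    """Nadaraya-Watson estimate at the query points xq (Gaussian kernel)."""
    w = np.exp(-0.5 * ((xq[:, None] - xdata[None, :]) / h) ** 2)
    return (w * ydata).sum(axis=1) / w.sum(axis=1)

fit_x, fit_grid = nw(x, x, y), nw(grid, x, y)
residuals = y - fit_x
residuals -= residuals.mean()

B = 500
sup_dev = np.empty(B)
for b in range(B):
    y_star = fit_x + rng.choice(residuals, size=n, replace=True)   # resampled data
    sup_dev[b] = np.max(np.abs(nw(grid, x, y_star) - fit_grid))

q = np.quantile(sup_dev, 0.95)
band_lower, band_upper = fit_grid - q, fit_grid + q   # uniform 95% band (sketch)
print("half-width of the uniform band:", q)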
22.05.12 NOTE (changed room and time!): 11:00 at ESH (Erhard-Schmidt-Hörsaal)
Peter Friz (WIAS Berlin/TU Berlin)
Rough Stochastic PDEs, a Primer
Abstract: In [Hairer, Comm.
Pure Appl. Math. 2011] the author studies a class of
non-linear stochastic partial differential equations, subject to
severe noise which causes standard theory to fail. As it turns out,
the well-posedness of such equations hinges on the interpretation of
the linearized equation as evolution in rough path space.
The talk will be aimed at introducing, in some detail, the necessary
rough path background.
Note: On Wed 6.6. Jan Maas (Bonn) will speak on related topics in the
Langenbach Seminar.
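As a small taste of the rough path background (not from the talk): a path is enhanced with its second-level iterated integrals, whose antisymmetric part is the Lévy area; for a piecewise-linear path these can be computed exactly via Chen's relation, as in the following Python sketch with a toy random-walk path.

import numpy as np

# Toy computation of the second-level iterated integrals
# S^{ij} = ∫_0^T (X^i_t − X^i_0) dX^j_t of a piecewise-linear path, accumulated
# piece by piece via Chen's relation; the antisymmetric part is the Lévy area.
# The random-walk path below is purely illustrative.

rng = np.random.default_rng(3)
d, n = 2, 1000
increments = rng.standard_normal((n, d)) / np.sqrt(n)
path = np.vstack([np.zeros(d), np.cumsum(increments, axis=0)])

def second_level(path):
    """Exact second-level iterated integrals of a piecewise-linear path."""
    S = np.zeros((path.shape[1], path.shape[1]))
    for k in range(path.shape[0] - 1):
        dx = path[k + 1] - path[k]
        # Chen's relation for one linear piece:
        # S_{0,k+1} = S_{0,k} + (X_k − X_0) ⊗ dx + (1/2) dx ⊗ dx
        S += np.outer(path[k] - path[0], dx) + 0.5 * np.outer(dx, dx)
    return S

S = second_level(path)
levy_area = 0.5 * (S - S.T)   # antisymmetric part
# sanity check: the symmetric part equals (1/2) (X_T − X_0) ⊗ (X_T − X_0)
increment = path[-1] - path[0]
assert np.allclose(S + S.T, np.outer(increment, increment))
print("Levy area (1,2 component):", levy_area[0, 1])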
05.06.12
Peter Mathé (WIAS Berlin)
Regularization of statistical inverse problems in Hilbert space
Abstract: We review linear inverse problems of the form
y^σ = A x + σξ
in Hilbert space, i.e., when the operator A: X → Y acts
between Hilbert spaces. The data y^σ are noisy, and the noise
ξ is assumed to be Gaussian white noise; σ denotes the
noise level. The stable reconstruction of x based on noisy data
requires regularization. This can be achieved by either
discretization, or by classical regularization
schemes. In general, the solution is sought within a parametric
family, say x_α^σ, α > 0, where α is a
regularization parameter. The quality of x_α^σ strongly
depends on a correct choice of α. In statistics this is related
to model selection. The survey study [2]
highlights such problems from a statistical perspective.
We will discuss two types of model selection. First we review the
Lepski parameter choice under discretization. However, if the operator
A is a Hilbert-Schmidt operator, then the (symmetrized) data
z^σ := A* y^σ belong to the Hilbert space X almost surely, and the
discrepancy ||A*(A x_α^σ - y^σ)|| is well defined. In this case
classical regularization theory proposes to use the discrepancy
principle, i.e., choosing α such that
||A*(A x_α^σ - y^σ)|| ≈ σ.
It turns out that this simple usage will not be optimal, and a
weighted discrepancy should be used instead.
This approach has recently been analyzed for
general linear regularization and also for conjugate
gradient iteration [1], and in [3].
The analysis of (variants of the) discrepancy principle highlights
interesting aspects of regularization theory, and we shall explain
this.
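As a toy numerical illustration (the discretized operator, the noise level and the constant tau are illustrative assumptions, and the weighted discrepancy of [1], [3] is not reproduced), the following Python sketch applies Tikhonov regularization to a discretized problem y^σ = A x + σξ and picks α by a discrepancy-type rule on the symmetrized residual A*(A x_α^σ − y^σ).

import numpy as np

# Toy sketch of Tikhonov regularization with a discrepancy-type choice of the
# regularization parameter for a discretized problem y^σ = A x + σξ. The
# operator (integration), noise level and constant tau are illustrative.

rng = np.random.default_rng(4)
n = 200
t = np.linspace(0.0, 1.0, n)
A = (t[:, None] >= t[None, :]).astype(float) / n      # discretized integration operator
x_true = np.sin(2.0 * np.pi * t)
sigma = 1e-3
y = A @ x_true + sigma * rng.standard_normal(n)

def tikhonov(alpha):
    """x_alpha = (A*A + alpha I)^{-1} A* y."""
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y)

# Discretized stand-in for the noise level of the symmetrized data A* y^σ:
# E||sigma A* xi||^2 = sigma^2 ||A||_HS^2.
noise_level = sigma * np.linalg.norm(A, "fro")
tau, alpha = 1.5, 1.0
while alpha > 1e-14:
    x_alpha = tikhonov(alpha)
    if np.linalg.norm(A.T @ (A @ x_alpha - y)) <= tau * noise_level:
        break                                          # discrepancy criterion met
    alpha *= 0.5

print("chosen alpha:", alpha)
print("relative error:", np.linalg.norm(x_alpha - x_true) / np.linalg.norm(x_true))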
References:
[1] Gilles Blanchard and Peter Mathé. Discrepancy principle for
statistical inverse problems with application to conjugate gradient
regularization. Technical report, University of Potsdam, 2011.
[2] L. Cavalier. Nonparametric statistical inverse problems.
Inverse Problems, 24(3):034004, 19 pp., 2008.
[3] Shuai Lu and Peter Mathé. Varying discrepancy principle as an
adaptive parameter selection in statistical inverse problems.
Submitted, 2012.
12.06.12
Joscha Diehl (TU Berlin)
Robust Filtering: Correlated Noise and
Multidimensional Observation
Abstract: The filtering problem is concerned
with the distribution of an Ito diffusion (the signal)
conditioned on another process (the observation).
In the case of independent signal and observation it was shown in Clark-Crisan
[On a robust version of the integral representation formula of
nonlinear filtering.
Probability Theory and Related Fields, 133, 2005.]
that there exists a version of the conditional distribution that depends
continuously (in supremum norm) on the observation process.
In the current work we show that in the
case of dependent signal and (multidimensional) observation
there exists a version that is continuous in rough path metric.
This is joint work with
Dan Crisan (Imperial College London),
Peter Friz (TU Berlin) and
Harald Oberhauser (TU Berlin).
No prior knowledge of rough path theory is required to follow the talk.
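As a minimal illustration of the underlying filtering problem (a bootstrap particle filter for a discretized one-dimensional model with independent signal and observation noise, i.e. the Clark-Crisan setting; the correlated, multidimensional case of the talk and its rough-path representation are not reproduced), consider the following Python sketch with toy parameters.

import numpy as np

# Bootstrap particle filter for a discretized one-dimensional model with
# independent signal and observation noise. Model and parameters are toy choices.

rng = np.random.default_rng(5)
T, n, n_particles = 5.0, 500, 2000
dt = T / n

# signal dX = -X dt + dW and observation increments dY = X dt + dV
X = np.zeros(n + 1)
dY = np.zeros(n)
for k in range(n):
    X[k + 1] = X[k] - X[k] * dt + np.sqrt(dt) * rng.standard_normal()
    dY[k] = X[k + 1] * dt + np.sqrt(dt) * rng.standard_normal()

# propagate - weight by the observation likelihood - resample
particles = np.zeros(n_particles)
estimates = np.zeros(n + 1)
for k in range(n):
    particles = particles - particles * dt + np.sqrt(dt) * rng.standard_normal(n_particles)
    log_w = -0.5 * (dY[k] - particles * dt) ** 2 / dt
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    particles = particles[rng.choice(n_particles, size=n_particles, p=w)]
    estimates[k + 1] = particles.mean()          # approximates E[X_k | observations]

print("RMSE of the filter mean:", np.sqrt(np.mean((estimates - X) ** 2)))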
20.06.12 14:00 (CHANGED date and time!)
Lorenzo Rosasco (IIT-MIT)
Learning Sets with Separating Kernels and Spectral Regularization
Abstract: Stability and regularization are often the key to successfully learn from random/noisy samples of high dimensional data. Different paradigms, beyond penalized empirical risk minimization, can be used to design learning algorithms that ensure stability and hence generalization. Indeed, these ideas, which are classical in the theory of ill-posed inverse problems, have been successfully applied in supervised learning (scalar or vector valued). In this talk we show that they can also be applied in the context of unsupervised learning.
We consider the problem of learning a set from random samples. We show how relevant geometric and topological properties of a set can be studied analytically using concepts from the theory of reproducing kernel Hilbert spaces. A new notion of reproducing kernel, that we call separating kernel, plays a crucial role in our study and is analyzed in detail. We prove a new analytic characterization of the support of a distribution, that naturally leads to a family of provably consistent regularized learning algorithms. The stability of these methods with respect to random sampling is studied under suitable prior assumptions. Numerical experiments show that the approach is competitive, and often better, than other state of the art techniques.
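As a generic illustration of the kernel set-learning idea (a sketch with a Tikhonov-type spectral filter and toy data; not the authors' exact separating-kernel algorithm), the following Python code declares a point to lie in the support when the regularized reconstruction error of its kernel feature is small.

import numpy as np

# Generic kernel-based set learning sketch with a Tikhonov-type spectral
# filter: a point x is declared to lie in the support when the regularized
# reconstruction error F(x) = k(x,x) - k_x^T (K + n*lam*I)^{-1} k_x is small.
# Data, kernel width, lam and the threshold rule are illustrative choices.

rng = np.random.default_rng(6)
n, lam, gamma = 400, 1e-3, 10.0

# toy data: samples concentrated on a ring (the set to be learned)
theta = rng.uniform(0.0, 2.0 * np.pi, n)
radius = 1.0 + 0.05 * rng.standard_normal(n)
X = np.column_stack([radius * np.cos(theta), radius * np.sin(theta)])

def gaussian_kernel(A, B):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

K = gaussian_kernel(X, X)
Kreg = np.linalg.inv(K + n * lam * np.eye(n))            # Tikhonov filter

def reconstruction_error(query):
    kq = gaussian_kernel(query, X)                       # shape (m, n)
    return 1.0 - np.einsum("ij,jk,ik->i", kq, Kreg, kq)  # k(x,x) = 1 here

tau = np.quantile(reconstruction_error(X), 0.95)         # threshold from the sample
test = np.array([[1.0, 0.0], [0.0, 0.0], [3.0, 0.0]])
# points on the ring should be flagged True, points far from it False
print(reconstruction_error(test) <= tau)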
Biography:
Lorenzo Rosasco is the team leader of the IIT-MIT Computational and Statistical Learning Lab, a joint laboratory between the Istituto Italiano di Tecnologia (IIT) and the Massachusetts Institute of Technology (MIT). He is an assistant professor at the Computer Science Department of the University of Genova, Italy, currently on leave of absence.
He received his PhD from the University of Genova in 2006, where he worked under the supervision of Alessandro Verri and Ernesto De Vito in the SLIPGURU group. He was a visiting student with Tomaso Poggio at the Center for Biological and Computational Learning (CBCL) at MIT, with Steve Smale at the Toyota Technological Institute at Chicago (TTI-Chicago) and with Sergei Pereverzev at the Johann Radon Institute for Computational and Applied Mathematics. Between 2006 and 2009 he was a postdoctoral fellow at CBCL working with Tomaso Poggio. His research focuses on the theory and algorithms of computational learning; he has developed and analyzed methods to learn from small as well as large samples of high-dimensional data, using analytic and probabilistic tools.
26.06.12
Ismael Bailleul (University of Cambridge)
Flows driven by rough paths
Abstract: We show in this work how the familiar Taylor formula can be used
in a simple way to reprove from scratch the main existence and
well-posedness results from rough paths theory; the explosion
question, convergence of Euler schemes and Taylor expansion are also
dealt with. Unlike other approaches, we work mainly with flows of maps
rather than with paths. We illustrate our approach by proving a
well-posedness result for some mean field stochastic rough
differential equation.
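As a minimal illustration of the Taylor/Euler theme (smooth driver and toy vector fields only; the rough-path refinements discussed in the talk are not reproduced), the following Python sketch runs the first-order Euler step y_{k+1} = y_k + V(y_k)(x_{k+1} − x_k) for a controlled ODE dy = V(y) dx.

import numpy as np

# First-order Euler/Taylor scheme for a controlled ODE dy = V(y) dx with a
# smooth driver; rough path theory adds the higher-order Taylor terms
# (iterated integrals of x) needed when the driver is rough.
# Driver, vector fields and parameters below are toy choices.

def euler_flow(V, x, y0):
    """Euler scheme along the driving path x, given as an array of shape (n+1, d)."""
    y = np.array(y0, dtype=float)
    out = [y.copy()]
    for k in range(len(x) - 1):
        dx = x[k + 1] - x[k]
        y = y + V(y) @ dx            # V(y): matrix whose columns are the driving vector fields
        out.append(y.copy())
    return np.array(out)

t = np.linspace(0.0, 5.0, 2001)
x = np.column_stack([t, np.sin(t)])              # a smooth 2-dimensional driver
V = lambda y: np.array([[1.0, y[1]],
                        [0.0, 1.0]])
trajectory = euler_flow(V, x, y0=[0.0, 1.0])
print("terminal value of the flow:", trajectory[-1])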
03.07.12
Sebastian Riedel (TU Berlin)
Gaussian rough paths and multilevel Monte Carlo
Abstract: We briefly review the structural conditions for a multi-dimensional Gaussian process to yield a naturally associated (random) rough path in the sense of Lyons. The resulting stochastic differential equations (or random rough differential equations) have been subject to intense investigations, especially in the case of fractional Brownian motion (fBM). Our first contribution is to establish somewhat definite a.s. and, in a second step, L^p rates of convergence for an implementable Milstein-type scheme, sharpening previous results of Hu-Nualart and Deya-Neuenkirch-Tindel. We then present a multilevel algorithm, introduced by Giles, which reduces the computational complexity when estimating expected values of functionals of SDE solutions. Adapting his analysis, we give an upper bound for the complexity of this multilevel Monte Carlo method when replacing the Brownian driving signal by random Gaussian rough paths.
This is joint work with Christian Bayer, Peter Friz and John Schoenmakers.
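As a point of reference for the complexity discussion (toy model and parameters; a Brownian driver with the Euler scheme rather than Gaussian rough paths with the Milstein-type scheme of the talk), the following Python sketch implements the standard multilevel Monte Carlo estimator of Giles.

import numpy as np

# Standard multilevel Monte Carlo estimator (Giles) of E[f(X_T)] for a
# Brownian-driven SDE with the Euler scheme. Model (geometric Brownian
# motion), payoff and sample allocation are toy choices.

rng = np.random.default_rng(7)
x0, mu, sigma, T = 1.0, 0.05, 0.2, 1.0
f = lambda x: np.maximum(x - 1.0, 0.0)

def level_samples(l, n_samples, m=2):
    """Samples of f(X_T^fine) - f(X_T^coarse) with shared Brownian increments
    (m**l fine steps); at level 0 the plain fine estimator is returned."""
    n_fine = m ** l
    dt = T / n_fine
    dW = np.sqrt(dt) * rng.standard_normal((n_samples, n_fine))
    x_fine = np.full(n_samples, x0)
    for k in range(n_fine):
        x_fine = x_fine + mu * x_fine * dt + sigma * x_fine * dW[:, k]
    if l == 0:
        return f(x_fine)
    dW_coarse = dW.reshape(n_samples, n_fine // m, m).sum(axis=2)
    x_coarse = np.full(n_samples, x0)
    for k in range(n_fine // m):
        x_coarse = x_coarse + mu * x_coarse * (m * dt) + sigma * x_coarse * dW_coarse[:, k]
    return f(x_fine) - f(x_coarse)

L = 5
n_per_level = [20000 // 2 ** l + 100 for l in range(L + 1)]   # crude allocation
estimate = sum(level_samples(l, n).mean() for l, n in enumerate(n_per_level))
print("MLMC estimate of E[f(X_T)]:", estimate)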
10.07.12
Jianing Zhang (WIAS Berlin)
tba
Abstract: tba