Collaborator: D. Belomestny, C. Bender, A. Kolodko, D. Kolyukhin, D. Mercurio, M. Reiß, O. Reiß, K.K. Sabelfeld, J. Schoenmakers, V. Spokoiny
Cooperation with: Part (A):
B. Coffey (Merrill Lynch, London, UK),
R. Denk (Universität Konstanz),
R.J. Elliott (University of Calgary, Canada),
H. Föllmer, P. Imkeller, W. Härdle, U. Küchler (Humboldt-Universität
(HU) zu Berlin),
H. Friebel (Finaris, Frankfurt am Main),
H. Haaf, U. Wystup (Commerzbank AG, Frankfurt am Main),
A.W. Heemink, D. Spivakovskaya (Technical University Delft, The
Netherlands),
F. Jamshidian (NIB Capital Den Haag, The Netherlands),
J. Kampen (Ruprecht-Karls-Universität, Heidelberg),
J. Kienitz (Postbank, Bonn),
P. Kloeden (Johann Wolfgang Goethe-Universität Frankfurt am Main),
M. Kohlmann (Universität Konstanz),
C. März, T. Sauder, S. Wernicke (Bankgesellschaft Berlin AG, Berlin),
S. Nair (Chapman & Hall, London, UK),
M. Schweizer (Universität München / ETH Zürich),
S. Schwalm (Reuters FS, Paris, France),
T. Sottinen (University of Helsinki, Finland),
D. Tasche (Deutsche Bundesbank, Frankfurt am Main),
E. Valkeila (Helsinki University of Technology, Finland),
G.N. Milstein (Ural State University, Russia)
Part (B):
P. Kramer (Rensselaer Polytechnic Institute, USA),
O. Kurbanmuradov (Physical Technical Institute, Turkmenian Academy of
Sciences, Ashkhabad, Turkmenistan),
T. Foken (Universität Bayreuth),
A. Levykin, I. Shalimova, N. Simonov (Institute of Computational
Mathematics and Mathematical Geophysics, Russian Academy of Sciences,
Novosibirsk, Russia),
M. Mascagni (Florida State University, Tallahassee, USA),
A. Onishuk (Institute of Chemical Kinetics and Combustion, Russian
Academy of Sciences, Novosibirsk, Russia),
O. Smidts (Université Libre de Bruxelles (ULB), Belgium),
H. Vereecken (Forschungszentrum (FZ) Jülich, Institut für Chemie und
Dynamik der Geosphäre),
T. Vesala, U. Rannik (Helsinki University, Finland),
W. Wagner (WIAS: Research Group 5)
Supported by: DFG Research Center MATHEON, project E5; Reuters Financial Software, Paris; Bankgesellschaft Berlin AG; NATO Linkage Grant N978912; DFG
Description:
The central theme of the project Applied mathematical finance and stochastic simulation is the quantitative treatment of problems raised by industry, based on innovative methods and algorithms developed in accordance with fundamental stochastic principles. The project concentrates on two main areas: applications in the financial industry (A) and the development of basic stochastic models and algorithms in computational physics (B). The problems in area (A) include stochastic modeling of financial data, valuation of complex derivative instruments (options), and risk analysis, whereas in area (B), stochastic algorithms for boundary value problems of electrostatic and elasticity potentials, random velocity field simulation, transport in porous media, and the behavior of complex particle systems, such as the formation of charged nanoparticles in a combustion process, are studied.
The valuation of financial derivatives based on arbitrage-free asset pricing involves non-trivial mathematical problems in martingale theory, stochastic differential equations, and partial differential equations. While its main principles are established (Harrison, Pliska, 1981), many numerical problems remain, such as the numerical valuation of (multidimensional) American equity options and the valuation of Bermudan-style derivatives involving the term structure of interest rates (LIBOR models) [11]. The valuation and optimal exercise of American and Bermudan derivatives is one of the most important problems both in theory and in practice, see, e.g., [1]. American options are options contingent on a set of underlyings which can be exercised at any time in some prespecified future time interval, whereas Bermudan options may be exercised at a prespecified discrete set of future exercise dates. A new development is the valuation of multiple callable products. In general, the fair price of an American- or Bermudan-style derivative can be represented as the solution of an optimal stopping problem.
In Kolodko and Schoenmakers (2004),
a new iterative procedure for solving the discrete
optimal stopping problem is developed.
The method is based on improving a canonical suboptimal policy
for Bermudan options: Exercise as soon as the cash flow dominates all
underlying Europeans ahead.
An iteration of this policy
produces monotonically increasing
approximations of the Snell envelope from below, which coincide with the
Snell envelope after finitely many steps. The method may be sketched as
follows:
The Bermudan option:
Let {0, 1, ..., k} be the set of Bermudan exercise dates. At one of these dates, say i, the Bermudan option holder may exercise and receive the cash flow $Z_i$.
Construction of the optimal stopping time:
Take a fixed integer $\kappa$, $1 \le \kappa \le k$, and an input stopping family $\tau = (\tau_i,\ 0 \le i \le k)$; $\tau_i$ is the stopping (exercise) time due to policy $\tau$ at exercise date $i$, provided the Bermudan option was not exercised before date $i$. Then consider the intermediate process
$$\widehat{Y}_i := \max_{i \le p \le (i+\kappa)\wedge k} E_i\, Z_{\tau_p}, \qquad 0 \le i \le k.$$
Using $\widehat{Y}$ as an exercise criterion, we define a new family of stopping indexes
$$\widehat{\tau}_i \;=\; \inf\bigl\{\, j :\ i \le j \le k,\ \widehat{Y}_j \le Z_j \,\bigr\} \;=\; \inf\Bigl\{\, j :\ i \le j \le k,\ \max_{j \le p \le (j+\kappa)\wedge k} E_j\, Z_{\tau_p} \le Z_j \,\Bigr\}, \qquad 0 \le i \le k.$$
By duality, the iterative method induces a convergent sequence of upper bounds as well. In contrast to backward dynamic programming, the presented iterative procedure makes it possible to calculate approximate solutions with only a few nestings of conditional expectations and is therefore tailor-made for a plain Monte Carlo implementation. The power of the procedure is demonstrated for high-dimensional Bermudan products, in particular for Bermudan swaptions in a full-factor LIBOR market model.
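To illustrate the flavor of the procedure, the following sketch performs a single policy-improvement step for a Bermudan put on one asset in a Black-Scholes model, with the European values estimated by nested Monte Carlo. The model, its parameters, and the simulation sizes are illustrative assumptions only, not the LIBOR setting of the cited work; the input policy is simply "exercise immediately".

```python
import numpy as np

# Minimal sketch (illustrative Black-Scholes parameters, not the LIBOR
# setting of the cited work): one policy-improvement step of the iteration
# described above for a Bermudan put on a single asset.  The input policy
# is "exercise immediately"; its improvement exercises as soon as the cash
# flow dominates all European values still ahead, which are estimated here
# by nested (inner) Monte Carlo simulation.

rng = np.random.default_rng(0)
S0, K, r, sigma = 100.0, 100.0, 0.05, 0.2
dates = np.linspace(0.1, 1.0, 10)          # Bermudan exercise dates
payoff = lambda s: np.maximum(K - s, 0.0)  # put cash flow Z_i

def gbm_step(s, dt, n):
    z = rng.standard_normal(n)
    return s * np.exp((r - 0.5 * sigma ** 2) * dt + sigma * np.sqrt(dt) * z)

def european_value(s, t, maturity, n_inner=1000):
    """Monte Carlo estimate at (t, s) of the value of the payoff paid at 'maturity'."""
    st = gbm_step(s, maturity - t, n_inner)
    return np.exp(-r * (maturity - t)) * payoff(st).mean()

def improved_policy_lower_bound(n_outer=500):
    total = 0.0
    for _ in range(n_outer):
        s, t = S0, 0.0
        for j, tj in enumerate(dates):
            s, t = gbm_step(s, tj - t, 1)[0], tj
            z = payoff(s)
            europeans = [european_value(s, tj, tp) for tp in dates[j + 1:]]
            # exercise as soon as the cash flow dominates all Europeans ahead;
            # at the last date the criterion holds trivially (empty maximum)
            if z >= max(europeans, default=0.0):
                total += np.exp(-r * tj) * z
                break
    return total / n_outer

print("lower bound from one improvement step:", improved_policy_lower_bound())
```

The returned value is a lower bound for the Bermudan price; iterating the improvement step, or combining it with the dual upper bound, tightens the approximation at the cost of deeper nestings of conditional expectations.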
In a subsequent study, Bender and Schoenmakers (2004) extend the iterative procedure of Kolodko and Schoenmakers to multiple callable options. Moreover, the stability of the algorithm is proved.
An approach for pricing both continuous-time and discrete-time American options, based on the fact that an American option is equivalent to a European option with a consumption process involved, is developed in Belomestny and Milstein (2004). This approach admits the construction of an upper bound (a lower bound) on the true price using some lower bound (some upper bound). The method requires the ability to compute the value of the lower bound at every position, for instance, by a Monte Carlo procedure. As an example, if f(t, x) is the payoff function of an American option, then the price of the corresponding European option with the same payoff provides a lower bound v(t, x).
In a research cooperation with J. Kampen at Heidelberg University, we aim to value Bermudan-style derivatives in the Libor market model based on higher-order approximations of Greenian kernels. These kernels are connected with the (high-dimensional) Libor process, and integration with respect to them will be implemented on sparse grids.
The classical theory of option pricing presupposes a frictionless market. For example, the market is assumed to be liquid, fractions of shares can be traded, and there are no restrictions on short selling stocks and borrowing money from the money market account. To incorporate constraints into a model, one can penalize violations of the constraints or prohibit violations completely. In Bender and Kohlmann (2004), several constructions of penalization processes for different classes of constraints are presented in the context of a generalized Black-Scholes market, which yield arbitrage-free, nonlinear pricing systems. The constraints may be time-dependent, random, and nonconvex. It is shown that with increasing weight of the penalization the option prices monotonically tend to the superhedging price (i.e. the seller's price, when violation of the constraint is prohibited) due to the monotonic limit theorem for BSDEs.
Fractional analogues of the Black-Scholes model have been proposed in recent years in order to incorporate long memory into market models. In Bender and Elliott (2004), a binary version of the Wick-fractional Black-Scholes model is considered and an arbitrage is constructed in the binary model. It is explained within the binary model why the no-arbitrage results in continuous time stem from replacing the self-financing condition by an economically doubtful condition rather than from the use of the Wick product in the stock price modeling.
Nowadays, the main focus of financial engineering is the development and application of probabilistic models that capture the main stylized facts of financial time series and at the same time allow robust and fast calibration and pricing methods. Soon after the introduction of the classical Black & Scholes model for asset prices, Merton (1976) proposed a more realistic model incorporating random jumps of the price process. Since then, the idea of incorporating jumps into the models has become very popular in mathematical finance, see, e.g., Cont and Tankov (2004), but empirical work has concentrated only on some very specific parametric models.
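As a small illustration of this model class, the following sketch simulates one path of Merton's jump-diffusion model; all parameter values are arbitrary.

```python
import numpy as np

# Illustrative sketch: one path of Merton's (1976) jump-diffusion model,
# i.e. a geometric Brownian motion whose log-price additionally receives
# Poisson-timed Gaussian jumps.  All parameter values are arbitrary and
# only meant to illustrate the model class.

rng = np.random.default_rng(1)
S0, mu, sigma = 100.0, 0.05, 0.2
lam, jump_mean, jump_std = 0.5, -0.1, 0.15   # jump intensity and log-jump law
T, n_steps = 1.0, 252
dt = T / n_steps

log_s = np.log(S0) + np.zeros(n_steps + 1)
for i in range(n_steps):
    n_jumps = rng.poisson(lam * dt)
    jump = rng.normal(jump_mean, jump_std, n_jumps).sum() if n_jumps else 0.0
    log_s[i + 1] = (log_s[i]
                    + (mu - 0.5 * sigma ** 2) * dt
                    + sigma * np.sqrt(dt) * rng.standard_normal()
                    + jump)
S = np.exp(log_s)
print("terminal price:", S[-1])
```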
In the work of Reiß and Belomestny (2004), a calibration method is developed for the classical exponential Lévy model of the asset price, $S_t = S_0\, e^{X_t}$, where $X$ is a Lévy process.
Our method is based upon an explicit nonlinear inversion formula for the price function in the spectral domain and uses the FFT algorithm. Due to the ill-posedness of the inversion, a spectral cut-off scheme is employed as regularization method. For an increasing number of observations the method is provably rate-optimal, but the rates, even for the parametric drift and diffusion part, correspond in general to a severely ill-posed problem. Although this seems discouraging for practical purposes, simulation studies show that for realistic model assumptions and observation designs, the calibration works reasonably well and captures the main features of the underlying parameters; see the figure.
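The role of the spectral cut-off can be illustrated on a generic toy problem, which is unrelated to the actual option-price inversion formula of the cited work: a function observed through noisy Fourier coefficients is reconstructed by keeping only the frequencies below a cut-off, trading approximation bias against amplified high-frequency noise.

```python
import numpy as np

# Generic illustration (not the estimator of the cited work): spectral
# cut-off regularization.  A function is observed through noisy Fourier
# coefficients; the reconstruction keeps only frequencies |k| <= k_max.

rng = np.random.default_rng(2)
n = 256
x = np.linspace(0, 1, n, endpoint=False)
f = np.exp(-50 * (x - 0.5) ** 2)            # "true" function
coeffs = np.fft.fft(f) / n
noisy = coeffs + 0.01 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

def cutoff_reconstruction(c, k_max):
    k = np.fft.fftfreq(n, d=1.0 / n)        # integer frequencies
    c_reg = np.where(np.abs(k) <= k_max, c, 0.0)
    return np.real(np.fft.ifft(c_reg * n))

for k_max in (5, 20, 100):
    err = np.linalg.norm(cutoff_reconstruction(noisy, k_max) - f) / np.sqrt(n)
    print(f"k_max={k_max:3d}  RMSE={err:.4f}")
```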
Since the seminal papers of [66] and [70], modeling the dynamic features of the variance of financial time series has become one of the most active fields of research in econometrics. New models, different applications, and extensions have been proposed, as can be seen by consulting, for example, the monographs [71]. The main idea behind this strand of research is that the volatility clustering effect displayed by stock or exchange rate returns can be modeled globally by a stationary process. This approach is somewhat restrictive and does not fit some characteristics of the data, in particular the fact that the volatility process appears to be ``almost integrated'', as can be seen from standard estimation results and from the very slow decay of the autocorrelations of squared returns. Other global parametric approaches have been proposed by [65] and by [72] in order to include these features in the model. Furthermore, continuous-time models, in particular diffusions and jump diffusions, have also been considered; see, for example, [69] and [64].
However, [79] showed that long memory effects of financial time series can be artificially generated by structural breaks in the parameters. This motivates another modeling approach which borrows its philosophy mainly from nonparametric statistics. The main idea consists in describing the volatility clustering effect only by a locally stationary process. Therefore, only the most recent data are considered, and weighting schemes, which can themselves be either global or local and data-driven, are suggested in order to decrease the dependence of the estimate on older observations. Some examples of this approach can be found in [67], [68], and [73]. Furthermore, [76] proposes a new locally adaptive volatility estimation (LAVE) of the unknown volatility from the conditionally heteroskedastic returns. The method is based on a pointwise data-driven selection of the interval of homogeneity for every time point. The numerical results demonstrate a reasonable performance of the new method; in particular, it slightly outperforms the standard GARCH(1,1) approach. [74] extend this method to estimating the volatility matrix of multiple returns, and [78] apply the same idea in the context of a regression problem.
Mercurio and Spokoiny (2004b) developed another procedure which applies a similar idea of a pointwise adaptive choice of the interval of homogeneity. The main differences between the LAVE approach and the new procedure are the way of testing the homogeneity of the interval candidate and the definition of the selected interval. The new approach is based on local change-point analysis. This means that every interval is tested on homogeneity against a change-point alternative. If the hypothesis is not rejected, a larger interval candidate is taken. If a change point is detected, then its location is used for defining the adaptive interval, whereas [76] suggested taking the latest non-rejected interval. The modified procedure improves the sensitivity of the method to changes of volatility by using the more powerful likelihood-ratio test statistic with a careful choice of the critical level. In addition, using the information about the location of the change point delivered by the change-point test helps to reduce the estimation bias. Finally, the interpretation of the procedure as a multiple test against a change-point alternative leads to a very natural method of tuning the parameters of the procedure.
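A strongly simplified sketch of this pointwise adaptive idea is given below; it is not the calibrated procedure of the cited work. Candidate intervals ending at the current time are tested against a single change point in the variance with a Gaussian likelihood-ratio statistic, and the detected change point delimits the interval used for estimation. The interval grid, the critical value, and the synthetic data are illustrative assumptions.

```python
import numpy as np

# Simplified sketch (illustrative tuning, not the calibrated procedure of
# the cited work) of a pointwise adaptive choice of the interval of
# homogeneity: growing candidate intervals ending "now" are tested against
# a single change point in the variance with a Gaussian likelihood-ratio
# statistic; the first detected change point delimits the adaptive interval.

rng = np.random.default_rng(3)
# synthetic returns with a volatility jump 80 observations before the end
returns = np.concatenate([0.01 * rng.standard_normal(220),
                          0.03 * rng.standard_normal(80)])

def lr_change_point(r):
    """Max log-likelihood-ratio over break positions for a variance change."""
    m = len(r)
    best, best_j = 0.0, None
    s_all = np.mean(r ** 2)
    for j in range(10, m - 10):               # keep minimal segment lengths
        s_l, s_r = np.mean(r[:j] ** 2), np.mean(r[j:] ** 2)
        lr = 0.5 * (m * np.log(s_all) - j * np.log(s_l) - (m - j) * np.log(s_r))
        if lr > best:
            best, best_j = lr, j
    return best, best_j

def adaptive_volatility(r, candidates=(40, 80, 160, 320), crit=6.0):
    """Estimate the current volatility from the detected homogeneous interval."""
    for m in candidates:
        window = r[-min(m, len(r)):]
        stat, j = lr_change_point(window)
        if stat > crit:                       # change detected: keep data after it
            return np.sqrt(np.mean(window[j:] ** 2))
    return np.sqrt(np.mean(r[-candidates[-1]:] ** 2))

print("adaptive volatility estimate:", adaptive_volatility(returns))
print("true current volatility     :", 0.03)
```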
Another important feature of the proposed procedure is that it can easily be extended to multiple volatility modeling, cf. [74]. Suppose that a number of financial time series are observed and the goal is to estimate the corresponding time-dependent volatility matrix. We again assume that the volatility matrix is nearly constant within some historical time interval which we identify from the data. The volatility matrix is then estimated in the usual way from the observations which belong to the detected interval.
The new procedure is shown to be rate-optimal in detecting the volatility changes and in estimating the smoothly varying volatility function. The numerical results indicate quite reasonable performance of the procedure.
The assumption of locally constant volatility is very useful in applications to Value-at-Risk (VaR) analysis. Indeed, the volatility matrix is assumed to be nearly constant in some historical interval, and this value can be extrapolated over the short future horizon of ten working days that is relevant for the VaR analysis. Mercurio and Spokoiny (2004) systematically applied this approach and compared the results with what standard GARCH(1,1) modeling delivers, see [75]. It turns out that the local constant method delivers better accuracy in volatility forecasting, leading to a better fit of the prescribed excess rate.
Many financial portfolios are influenced by a variety of underlyings which are, however, only locally correlated. The valuation of such products leads to high-dimensional integration, but the integrands possess a small effective dimension. Prototypically, we consider a portfolio on a set {S_1, S_2, ..., S_m} of m underlyings with respective shares $w_1, w_2, \dots, w_m$, which determine the present value $\sum_{j=1}^{m} w_j S_j$.
In cooperation with S. Jaschke (Bundesanstalt für Finanzdienstleistungsaufsicht, Bonn), a previous analysis of the Delta-Gamma-normal approximation of large portfolios by means of stratification, in particular using randomized orthogonal arrays, was extended to risk analysis. It could be shown that stratification is a general tool for enhancing known algorithms. This holds true for randomized orthogonal arrays and also for randomized digital nets, for which codes have recently become available in high dimensions. Specifically, the authors introduce a numerical quantity, depending on the chosen method ``meth'', the location x, and the sample size n, which describes the speedup of that method against plain Monte Carlo sampling at x. For specific methods, this quantity can be shown to be less than 1; thus such methods enhance Monte Carlo simulations.
In Jaschke and Mathé (2004), the theoretical results are combined with an extensive case study, see Figure 3, based on a portfolio of a German mid-sized bank.
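The general principle can be illustrated on a toy expected-loss functional, which is not the portfolio of the case study: stratifying a dominant risk direction reduces the variance of the Monte Carlo estimate relative to plain sampling. The exposures, the loss functional, and the number of strata below are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

# Illustrative comparison (not the portfolio of the cited case study):
# plain Monte Carlo versus stratification of the dominant risk direction
# for estimating an expected loss E[f(Z)], Z standard normal in d dimensions.
# Exposures and the loss functional are arbitrary assumptions.

rng = np.random.default_rng(4)
d, n, n_strata = 10, 4000, 40
weights = np.linspace(1.0, 0.1, d)            # hypothetical exposures

def loss(z):
    return np.maximum(z @ weights, 0.0) ** 2  # some convex loss functional

def plain_mc():
    return loss(rng.standard_normal((n, d))).mean()

def stratified_mc():
    # stratify the first coordinate over equiprobable strata of N(0,1)
    per = n // n_strata
    est = 0.0
    for s in range(n_strata):
        u = (s + rng.random(per)) / n_strata
        z = rng.standard_normal((per, d))
        z[:, 0] = norm.ppf(u)                 # stratified first coordinate
        est += loss(z).mean() / n_strata
    return est

reps = 50
print("plain MC std dev     :", np.std([plain_mc() for _ in range(reps)]))
print("stratified MC std dev:", np.std([stratified_mc() for _ in range(reps)]))
```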
The method of Milstein, Schoenmakers and Spokoiny [41]
developed in cooperation with the project Statistical
data analysis provides a new transition
density estimator for diffusion processes which is basically root-N
consistent for any dimension of the diffusion process. This forward-reverse density estimator was successfully applied this year to pollution particle models in the North Sea environment by Spivakovskaya, Heemink, Milstein, and Schoenmakers (2004). In this work, particle models are used to simulate the spreading of a pollutant in coastal waters in case of a calamity at sea. Many different particle trajectories starting at the point of release are generated in order to determine the particle concentration at some critical locations after the release. A standard Monte Carlo method for this essentially density-estimation problem would consume much CPU time; here it is shown that the forward-reverse estimator gives superior results, see the figure below.
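For illustration only, the following sketch shows the forward-reverse idea in the simplest possible case of a one-dimensional diffusion with constant drift and unit diffusion coefficient, where the reverse-time process has drift -a and carries unit weight; the general construction of the reverse process and its weights is the subject of [41]. All numerical values are illustrative.

```python
import numpy as np

# Minimal sketch of a forward-reverse density estimate for the simplest
# case dX = a dt + dW (constant coefficients, exact simulation); general
# diffusions require the reverse drift/weight construction of the cited
# work.  We estimate the transition density p(0, x0; T, y) and compare it
# with the exact Gaussian value.

rng = np.random.default_rng(5)
a, x0, y, T, t_star = 0.3, 0.0, 1.0, 1.0, 0.5
M = N = 2000
eps = 0.05                                   # kernel bandwidth

# forward paths over [0, t_star]; reverse paths (drift -a, unit weight)
# over a horizon of length T - t_star, started at y
X = x0 + a * t_star + np.sqrt(t_star) * rng.standard_normal(M)
Y = y - a * (T - t_star) + np.sqrt(T - t_star) * rng.standard_normal(N)

# Gaussian kernel applied to all M*N forward/reverse pairs
diff = X[:, None] - Y[None, :]
kernel = np.exp(-0.5 * (diff / eps) ** 2) / (np.sqrt(2 * np.pi) * eps)
estimate = kernel.mean()

exact = np.exp(-0.5 * (y - x0 - a * T) ** 2 / T) / np.sqrt(2 * np.pi * T)
print("forward-reverse estimate:", estimate, " exact:", exact)
```

Each of the M forward and N reverse trajectories is simulated only once, but the kernel is evaluated on all M·N pairs, which is the source of the estimator's efficiency.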
In 2004, the research concentrated on the development of new stochastic models and simulation techniques for solving high-dimensional boundary value problems with random coefficients related to transport in porous media, potential and elasticity problems, and to the numerical solution of the Smoluchowski equation governing ensembles of interacting charged particles generated in a combustion process. New fundamental results were obtained in random field simulation: novel spectral randomization methods and Fourier/wavelet-based representations for divergenceless random fields were developed. These models have nice ergodic properties and can be efficiently implemented in numerical calculations.
The results are presented in the published papers [25], [28], [55], [63], in the papers [13], [29], [30] in press, and in five talks at international conferences.
We analyze and compare the efficiency and accuracy of two simulation methods for homogeneous random fields with multiscale resolution: the Fourier-wavelet method and three variants of the randomization method: (A) without any stratified sampling of wavenumber space, (B) stratified sampling of wavenumbers with equal energy subdivision, (C) stratified sampling with a logarithmically uniform subdivision. We focus on fractal Gaussian random fields with Kolmogorov-type spectra. As known from the previous work [52], variants (A) and (B) of the randomization method are only able to generate a self-similar structure function over three to four decades with reasonable computational effort. By contrast, variant (C), along with the Fourier-wavelet method, is able to reproduce accurate self-similar scaling of the structure function over a number of decades increasing linearly with computational effort (for our examples we show that twelve decades can be reproduced). We provide some conceptual and numerical comparison of the various cost contributions to each random field simulation method (overhead, cost per realization, cost per evaluation). We expect ergodic properties to be important in simulating the solutions to partial differential equations with random field coefficients, and to this end, it is interesting to compare the Fourier-wavelet and various randomization methods in simulating the solution to the Darcy equation for porous media flow with the heterogeneous conductivity modeled as a homogeneous random field.
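As a small illustration of variant (C), the following sketch generates a scalar homogeneous random field on the line with a Kolmogorov-type spectrum S(k) ~ k^(-5/3), using a logarithmically uniform stratification of wavenumber space; the field is approximately Gaussian for a large number of strata. The spectral range and the number of strata are illustrative choices.

```python
import numpy as np

# Sketch of the randomization method, variant (C): a scalar homogeneous
# random field on the line with spectrum S(k) ~ k^(-5/3) on [k_min, k_max],
# using a logarithmically uniform stratification of wavenumber space.
# Parameters are illustrative only.

rng = np.random.default_rng(6)
k_min, k_max, n_strata = 1e-2, 1e3, 64
edges = np.geomspace(k_min, k_max, n_strata + 1)     # log-uniform strata

def stratum_energy(lo, hi):
    # integral of k^(-5/3) over the stratum (unnormalized spectrum)
    return 1.5 * (lo ** (-2 / 3) - hi ** (-2 / 3))

def sample_wavenumber(lo, hi):
    # inverse-CDF sampling from density ~ k^(-5/3) restricted to [lo, hi]
    u = rng.random()
    c = lo ** (-2 / 3) - u * (lo ** (-2 / 3) - hi ** (-2 / 3))
    return c ** (-1.5)

energies = np.array([stratum_energy(lo, hi) for lo, hi in zip(edges[:-1], edges[1:])])
waves = np.array([sample_wavenumber(lo, hi) for lo, hi in zip(edges[:-1], edges[1:])])
xi, eta = rng.standard_normal(n_strata), rng.standard_normal(n_strata)

def field(x):
    """One realization of the random field evaluated at the points x."""
    phases = np.outer(x, waves)
    return (np.sqrt(energies) * (xi * np.cos(phases) + eta * np.sin(phases))).sum(axis=1)

x = np.linspace(0.0, 10.0, 1000)
u = field(x)
print("realization generated; ensemble variance of the field:", energies.sum())
```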
Random functions (generally referred to as random fields) provide a useful mathematical framework for representing disordered heterogeneous media in theoretical and computational studies, [52]. One example is turbulent transport, where the velocity field $v(x,t)$ representing the turbulent flow is modeled as a random field with statistics encoding important empirical features, and the temporal dynamics of the position $X(t)$ of immersed particles is then governed by equations involving this random field, such as $\dot{X}(t) = v(X(t),t)$. Another example is flow through porous media, where the Darcy velocity $u(x)$ is determined by the hydraulic conductivity $K(x)$, modeled as a random field, and the hydraulic head $\phi(x)$ via
$$u(x) = -K(x)\,\operatorname{grad}\phi(x), \qquad \operatorname{div} u = 0.$$
Interesting insights into the dynamics of transport in disordered media can be achieved already through relatively simple random models for the velocity field, such as a finite superposition of Fourier modes, with each amplitude independently evolving according to an Ornstein-Uhlenbeck process. Here, efficient and accurate numerical simulations of the flow can be achieved through application of the well-developed literature on simulating stochastic ordinary differential equations. We focus instead on the question of simulating random fields which involve challenging multiscale structures such as those relevant to porous media and turbulent flow simulations. Many questions remain open for the case of Gaussian multiscale random fields, so we confine our attention to this class. The paper [30] devoted to this study is to appear in 2005.
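A minimal sketch of this simple model class is given below: a finite superposition of Fourier modes whose amplitudes evolve as independent Ornstein-Uhlenbeck processes (updated exactly), driving the position of a single immersed particle. All parameters are illustrative.

```python
import numpy as np

# Sketch of a simple random velocity model: a finite superposition of
# Fourier modes whose amplitudes evolve as independent Ornstein-Uhlenbeck
# processes (exact OU updates).  All parameters are illustrative.

rng = np.random.default_rng(7)
n_modes, tau, sigma = 8, 1.0, 0.5
k = rng.uniform(0.5, 5.0, n_modes)            # fixed random wavenumbers
A = sigma * rng.standard_normal(n_modes)      # stationary initial amplitudes
B = sigma * rng.standard_normal(n_modes)

def velocity(x, A, B):
    return (A * np.cos(k * x) + B * np.sin(k * x)).sum()

def ou_update(z, dt):
    decay = np.exp(-dt / tau)
    return decay * z + sigma * np.sqrt(1 - decay ** 2) * rng.standard_normal(z.shape)

# track one immersed particle: dX/dt = v(X(t), t)
x, dt = 0.0, 0.01
for _ in range(1000):
    x += velocity(x, A, B) * dt               # explicit Euler step for the particle
    A, B = ou_update(A, dt), ou_update(B, dt)
print("particle position after t = 10:", x)
```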
In this project, a stochastic model of a flow in incompressible porous media without the assumption of smallness of the hydraulic conductivity is constructed. The hydraulic conductivity is modeled as a divergenceless random field with a lognormal distribution by a Monte Carlo method based on the spectral representation of homogeneous Gaussian random fields described in the previous section of the present report. The Darcy law and continuity equation are solved numerically by the successive over-relaxation method which provides us with detailed statistical characteristics of the flow. The constructed numerical method is highly efficient so that we were able to outline the applicability of the spectral models derived in the first-order approximation. These results are published partly in [28] and the results of further study are to be published in [29].
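For illustration, the following sketch solves the head equation div(K grad h) = 0 on a small grid by SOR, with a crude (spatially uncorrelated) lognormal conductivity field and Dirichlet boundary heads taken from a linear profile. It only indicates the structure of the iteration and is neither the production solver nor the spectral conductivity model used in the project.

```python
import numpy as np

# Illustrative sketch, not the production solver: SOR iteration for the
# head equation div(K grad h) = 0 on the unit square, with a crude
# (spatially uncorrelated) lognormal conductivity and Dirichlet boundary
# heads from a linear profile (h = 1 on the left, h = 0 on the right).

rng = np.random.default_rng(8)
n = 40
K = np.exp(0.5 * rng.standard_normal((n, n)))   # lognormal cell conductivities
h = np.tile(np.linspace(1.0, 0.0, n), (n, 1))   # initial guess; boundary cells stay fixed
omega = 1.8                                     # SOR relaxation parameter

def face(a, b):
    return 2.0 * a * b / (a + b)                # harmonic mean at a cell face

for sweep in range(500):
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            kw, ke = face(K[i, j], K[i, j - 1]), face(K[i, j], K[i, j + 1])
            ks, kn = face(K[i, j], K[i - 1, j]), face(K[i, j], K[i + 1, j])
            gs = (kw * h[i, j - 1] + ke * h[i, j + 1]
                  + ks * h[i - 1, j] + kn * h[i + 1, j]) / (kw + ke + ks + kn)
            h[i, j] += omega * (gs - h[i, j])    # SOR update
print("mean head along the middle column:", h[:, n // 2].mean())
```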
Stochastic algorithms for solving high-dimensional Dirichlet problems for the Laplace equation and the system of Lamé equations are developed. The approach presented is based on the Poisson integral formula, written down for each of a family of overlapping discs in 2D and spheres in 3D. The original differential boundary value problem is reformulated as an equivalent system of integral equations defined on the intersection surfaces (arcs in 2D, caps in 3D). A random walk algorithm can then be applied directly to the obtained system of integral equations, where the random walks live on the intersection surfaces (arcs in 2D, caps in 3D). Another version of a stochastic method is constructed as a discrete random walk for a system of linear equations approximating our system of integral equations. We develop two classes of special Monte Carlo iterative methods for solving these systems of equations, which are stochastic versions of the Chebyshev iteration method and of the SOR method. In the case of classical potential theory, this approach improves the convergence of the well-known random walk on spheres method. What is, however, much more important is that this approach provides a first construction of a fast convergent finite-variance Monte Carlo method for the system of Lamé equations.
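For comparison, the following sketch implements the classical random walk on spheres for the Dirichlet problem of the Laplace equation in the unit disc, i.e. the baseline method whose convergence the approach described above improves in the case of potential theory; the boundary data are chosen so that the exact solution is known.

```python
import numpy as np

# Sketch of the classical random walk on spheres (the baseline method) for
# the Laplace equation in the unit disc with Dirichlet data g: from the
# current point, jump to a uniformly distributed point on the largest circle
# centred there that stays inside the domain, until the boundary is reached
# up to a tolerance eps; then score the boundary data at the exit point.

rng = np.random.default_rng(9)
g = lambda theta: np.cos(theta)            # boundary data; exact solution u(x) = x_1

def distance_to_boundary(x):
    return 1.0 - np.linalg.norm(x)         # unit disc

def wos_estimate(x0, n_walks=5000, eps=1e-3):
    total = 0.0
    for _ in range(n_walks):
        x = np.array(x0, dtype=float)
        while distance_to_boundary(x) > eps:
            r = distance_to_boundary(x)
            phi = 2 * np.pi * rng.random()
            x = x + r * np.array([np.cos(phi), np.sin(phi)])
        total += g(np.arctan2(x[1], x[0]))  # evaluate boundary data at the exit point
    return total / n_walks

x0 = (0.3, 0.2)
print("WOS estimate:", wos_estimate(x0), " exact:", x0[0])
```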
Consider a homogeneous isotropic medium $G \subset \mathbb{R}^n$ with boundary $\Gamma$, whose state in the absence of body forces is governed by the boundary value problem (see, e.g., [53])
$$\Delta u(x) + \alpha\, \operatorname{grad}\operatorname{div}\, u(x) = 0, \quad x \in G, \qquad u(y) = g(y), \quad y \in \Gamma, \qquad (2)$$
where $u(x) = (u_1(x_1,\dots,x_n),\dots,u_n(x_1,\dots,x_n))$ is the vector of displacements, whose components are real-valued regular functions. The elastic constant $\alpha = (\lambda + \mu)/\mu$ is expressed through the Lamé constants of elasticity $\lambda$ and $\mu$. Consider an arbitrary point $x$ with polar coordinates $(r, \theta)$ inside a disc $K(x_0, R)$. The point $y$ situated on the circle $S(x_0, R)$ has the coordinates $(R, \psi)$, where $\psi = \theta + \varphi$, and $z$ is defined by $z = y - x$; note that $\varphi$ is the angle between the vectors $x$ and $y$, and $\gamma$ is the angle between $x$ and $z$. Define also the angle $\beta$ by $\beta = \varphi + \gamma$.
Our method is based on the following solution representation, which we obtained in [53], [54]. The solution to equation (2) satisfies a Poisson-type mean value relation: for an arbitrary point $x$ in $K(x_0, R)$,
$$u(x) = \int_{S(x_0, R)} p(y; x)\, B(y; x)\, u(y)\, dS(y), \qquad (3)$$
where $B(y; x)$ is a matrix kernel depending on the elastic constant $\alpha$ and on the angles introduced above, and $p(y; x) = \bigl(R^2 - |x - x_0|^2\bigr) / \bigl(2\pi R\, |x - y|^2\bigr)$ is the kernel of the Poisson formula for the Laplace equation.
We apply this representation to two overlapping discs, $\gamma_1$ and $\gamma_2$ being the inner arcs of intersection, and $\Gamma_1$ and $\Gamma_2$ the corresponding external arcs. We can now derive a system of four integral equations defined on the arcs $\gamma_1$ and $\gamma_2$. Indeed, let us introduce the notations $v_1^{(1)}(x) = u_1(x)$ and $v_1^{(2)}(x) = u_2(x)$ for $x \in \gamma_1$, and $v_2^{(1)}(x) = u_1(x)$ and $v_2^{(2)}(x) = u_2(x)$ for $x \in \gamma_2$. Then this system of integral equations can be written in operator form as
$$v = \mathcal{K}\, v + F, \qquad (5)$$
where $v = (v_1^{(1)}, v_1^{(2)}, v_2^{(1)}, v_2^{(2)})$ and the free term $F$ is determined by the prescribed boundary values on the external arcs $\Gamma_1$ and $\Gamma_2$.
We have proved in [55] the following result:
The integral operator $\mathcal{K}$ of the system (5) is a Fredholm operator with kernels continuous in $x$ and $y$, having the same type of singularities at the points of intersection of the arcs $\gamma_1$ and $\gamma_2$ as in the case of the Laplace equation. The spectral radius of $\mathcal{K}$ is less than 1, which ensures the equivalence of system (5) and problem (2).
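Since the spectral radius of the integral operator is less than one, the Neumann series converges, and systems of this kind can be treated by discrete random walks. The following sketch shows a von Neumann-Ulam-type random walk estimator on a tiny illustrative linear system x = Ax + b, not on the discretized integral equations themselves.

```python
import numpy as np

# Sketch of a discrete random walk (von Neumann-Ulam type) estimator for a
# linear system x = A x + b whose iteration matrix has spectral radius well
# below one -- shown here on a tiny illustrative system.  At each step the
# walk scores the weighted source term, is absorbed with probability q, or
# moves to a uniformly chosen state while updating its weight.

rng = np.random.default_rng(10)
A = np.array([[0.10, 0.20, 0.05],
              [0.15, 0.05, 0.20],
              [0.05, 0.10, 0.15]])
b = np.array([1.0, 2.0, 0.5])
n = len(b)
q = 0.3                                        # absorption probability per step

def walk_estimate(i, n_walks=100000):
    total = 0.0
    for _ in range(n_walks):
        state, weight, score = i, 1.0, 0.0
        while True:
            score += weight * b[state]
            if rng.random() < q:               # absorb the walk
                break
            nxt = rng.integers(n)              # uniform transition
            weight *= A[state, nxt] * n / (1.0 - q)
            state = nxt
        total += score
    return total / n_walks

exact = np.linalg.solve(np.eye(n) - A, b)
print("Monte Carlo:", walk_estimate(0), " exact:", exact[0])
```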
This scheme is extended to the case of n overlapping spheres. The numerical algorithm based on this representation is highly efficient. For illustration, we have made calculations for an elastic polymer consisting of 17 overlapping discs. An accuracy of about 0.1% was achieved in 4.5 minutes on a personal computer.
The use of footprint functions in complex flows has been questioned because of anomalous behavior reported in recent model studies. We show that the concentration footprint can be identified with Green's function of the scalar concentration equation or the transition probability of a Lagrangian formulation of the same equation and so is well behaved and bounded by 0 and 1 in both simple and complex flows. The flux footprint in contrast is not a Green function but a functional of the concentration footprint and is not guaranteed to be similarly well behaved. We show that in simple, homogeneous shear flows, the flux footprint, defined as the vertical eddy flux induced by a unit point source, is bounded by 0 and 1 but that this is not true in more general flow fields. An analysis of recent model studies also shows that the negative flux footprints reported in homogeneous plant canopy flows are an artefact of reducing a canopy with a complex source-sink distribution in the vertical to a single layer but that in canopies on hilly topography the problems are more fundamental. Finally, we compare footprint inversion with the direct mass-balance method of measuring surface exchange. We conclude first that the flux footprint is an appropriate measure of the area influencing both eddy and advective fluxes on a tower but that the concentration footprint is the correct measure when the storage term is important. Second, we deduce that there are serious obstacles to inverting flux footprints in complex terrain. The results obtained in [31], [33], [34] were generalized and published in [63]. The work is done in cooperation with Prof. Dr. T. Vesala and Dr. U. Rannik (Helsinki University, Finland), Prof. T. Foken (Universität Bayreuth), and A. Levykin (Institute of Computational Mathematics and Mathematical Geophysics, Russian Academy of Sciences, Novosibirsk).
We deal here with the coagulation of charged particles, whose concentrations are described by the Smoluchowski equation
$$\frac{\partial n_{l,z}}{\partial t} \;=\; \frac{1}{2} \sum_{i+j=l}\ \sum_{m+k=z} K_{ij}^{mk}\, n_{i,m}\, n_{j,k} \;-\; n_{l,z} \sum_{i,\,m} K_{l i}^{z m}\, n_{i,m}, \qquad l \ge 2, \qquad (6)$$
where $n_{l,z}$ is the concentration of particles consisting of $l$ monomers and carrying the charge $z$, and $K_{ij}^{mk}$ are the coagulation coefficients.
It is seen from this formulation that the dimension of the problem is enormously increased compared to the standard Smoluchowski equation [26]. Therefore, we suggest a new method of stochastic simulation, which can be described as follows. First, we divide the set of all pairs of particles into subclasses so that in each subclass the von Neumann rejection method can be applied effectively. The subclass itself is sampled by direct simulation from a discrete distribution, for which the Walker (alias) method is applied, see [50].
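The following sketch illustrates Walker's alias method, the building block mentioned above for sampling the subclass from a discrete distribution in constant time per sample; the weights are purely illustrative.

```python
import numpy as np

# Sketch of Walker's alias method for O(1) sampling from a discrete
# distribution, the building block referred to above for choosing the
# subclass of particle pairs (the weights below are just an example).

rng = np.random.default_rng(11)

def build_alias(p):
    """Preprocess probabilities p into alias tables (prob, alias)."""
    p = np.asarray(p, dtype=float)
    n = len(p)
    scaled = p * n / p.sum()
    prob, alias = np.zeros(n), np.zeros(n, dtype=int)
    small = [i for i, v in enumerate(scaled) if v < 1.0]
    large = [i for i, v in enumerate(scaled) if v >= 1.0]
    while small and large:
        s, l = small.pop(), large.pop()
        prob[s], alias[s] = scaled[s], l
        scaled[l] -= 1.0 - scaled[s]
        (small if scaled[l] < 1.0 else large).append(l)
    for i in large + small:                    # leftovers have probability 1
        prob[i] = 1.0
    return prob, alias

def alias_sample(prob, alias, size):
    i = rng.integers(len(prob), size=size)
    accept = rng.random(size) < prob[i]
    return np.where(accept, i, alias[i])

weights = np.array([5.0, 1.0, 0.5, 2.0, 1.5])  # example subclass weights
prob, alias = build_alias(weights)
samples = alias_sample(prob, alias, 100000)
print("empirical frequencies:", np.bincount(samples) / 100000)
print("target probabilities :", weights / weights.sum())
```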
Some examples of simulations that we have made for the formation of charged aggregates of nanoparticles by combustion of aluminum droplets in air are given in [25], and some other aerosol formation applications are presented by us in [13].