Collaborator: S. Jaschke, A. Kolodko, G.N. Milstein, O. Reiß, J. Schoenmakers, V. Spokoiny, J.-H. Zacharias-Langhans
Cooperation with: P. Annesly (Riskwaters Group, London, UK), H. Föllmer, W. Härdle, U. Küchler, R. Stehle (Humboldt-Universität (HU) zu Berlin), H. Haaf, U. Wystup (Commerzbank AG, Frankfurt am Main), A.W. Heemink (Technical University Delft, The Netherlands), J. Kienitz, S. Schwalm (Reuters AG, Düsseldorf/Paris), K. Sundermann (Postbank AG, Bonn), P. Kloeden (Johann Wolfgang Goethe-Universität Frankfurt am Main), C. März, D. Dunuschat, T. Sauder, T. Valette, S. Wernicke (Bankgesellschaft Berlin AG, Berlin), O. Kurbanmuradov (Physics and Mathematics Research Center, Turkmenian State University, Ashkhabad), M. Schweizer (Technische Universität Berlin/Universität München), G. Stahl (Bundesaufsichtsamt für das Kreditwesen (BAFin) Bonn)
Supported by:
BMBF: ``Effiziente Methoden zur Bestimmung von Risikomaßen'' (Efficient methods for valuation of risk measures);
DFG: DFG-Forschungszentrum ``Mathematik für Schlüsseltechnologien'' (Research Center ``Mathematics for Key Technologies'');
SFB 373 ``Quantifikation und Simulation ökonomischer Prozesse'' (Quantification and simulation of economic processes);
Bankgesellschaft Berlin AG
Description:
The project ``Applied mathematical finance'' of the research group ``Stochastic Algorithms and Nonparametric Statistics'' is concerned with the stochastic modeling of financial data, the valuation of derivative instruments (options), and risk management for banks. The developed models are implemented and applied in practice in cooperation with financial institutions.
Since the Basel Committee's proposal for ``An internal model-based approach to market risk capital requirements'' (1995) was implemented in national laws, banks have been allowed to use internal models for estimating their market risk and have been able to compete in the innovation of risk management methodology. Since all banks are required to hold adequate capital reserves with regard to their outstanding risks, there has been a tremendous demand for risk management solutions. A similar ``internal ratings-based approach'' is planned for the controlling of credit risk in the ``Basel II'' process, which is due to be implemented in national laws by 2006. Meanwhile, credit derivatives play an important role as a vehicle for banks to transform credit risk into de jure market risk and to potentially lower the required reserves. Such problems of risk measurement and risk modeling are the subject of the research on ``Mathematical methods for risk management''. This research is supported by the BMBF project ``Efficient methods for valuation of risk measures'', which continued in 2002 in cooperation with, and with the support of, Bankgesellschaft Berlin AG. Problems of both market and credit risk from the viewpoint of supervisory authorities are being worked on in cooperation with the BAFin.
The valuation of financial derivatives involves non-trivial mathematical problems in martingale theory, stochastic differential equations, and partial differential equations. While its main principles are established (Harrison, Pliska 1981), many numerical problems remain, such as the numerical valuation of American options and the valuation of financial derivatives involving the term structure of interest rates (LIBOR models) or volatility surfaces. Continuing innovation in the financial industry gives rise to new problems again and again. In the ongoing research on interest rate (LIBOR) modeling and calibration [19, 32, 33], a crucial stability problem of direct least-squares calibration of LIBOR models has been uncovered. As a remedy, a stabilized procedure is proposed in [31], which was presented at Risk Europe 2002 (Paris) and at Risk Quantitative Finance 2002 (London). On this subject a consulting contract with Reuters Financial Software (Paris) has been set up.
The project ``Applied mathematical finance'' took part in the formation of the DFG Research Center ``Mathematics for Key Technologies''.
Although the basic principles of the evaluation of market risks are now more or less settled, in practice many thorny statistical and numerical issues remain to be solved. In particular, the industry standard, the approximation of portfolio risk by the so-called ``delta-gamma normal'' approach, can be criticized for its quadratic loss approximation and its Gaussian assumptions. Further, in the context of the ``Basel II'' consultations, fundamental questions arise in the area of credit risk modeling.
One of the problems that arose in the consulting with Bankgesellschaft Berlin led to a study of the Cornish-Fisher approximation in the context of delta-gamma normal approximations. This study was enhanced and completed [14]. The analysis shows a series of qualitative shortcomings of the method, while its quantitative behavior is satisfactory in specific situations. Regarding Bankgesellschaft's use of the Cornish-Fisher approximation, it is concluded that the method is a competitive technique if the portfolio distribution is relatively close to normal. It achieves sufficient accuracy potentially faster than the other numerical techniques (mainly Fourier inversion, saddle-point methods, and partial Monte Carlo) over a certain range of practical cases. One should beware, however, of the many qualitative shortcomings and the bad worst-case behavior. If one takes the worst-case view and cares about the corner cases, as we believe one should in the field of risk management, the potential errors from the quadratic approximation are much larger than the errors from the Cornish-Fisher expansion. Hence a full-valuation Monte Carlo technique should be used anyway to frequently check the suitability of the quadratic approximation. This also takes care of the ``bad'' cases for the Cornish-Fisher approximation.
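To make the method concrete, the following sketch (in Python; illustrative only, using the standard cumulant formulas for a quadratic form of Gaussian risk factors rather than the actual implementation studied in [14]) computes a Cornish-Fisher approximation of the 1% P&L quantile from the portfolio's delta, gamma, and factor covariance matrix.

    # Sketch: Cornish-Fisher quantile of a delta-gamma P&L approximation.
    # Illustrative only; standard formulas, not the implementation studied in [14].
    from math import factorial
    import numpy as np
    from scipy.stats import norm

    def delta_gamma_cumulants(delta, gamma, sigma):
        """First four cumulants of L = delta'X + 0.5 X'Gamma X with X ~ N(0, Sigma)."""
        gs = gamma @ sigma
        cums = [0.5 * np.trace(gs)]                               # kappa_1
        for r in (2, 3, 4):                                       # kappa_2 .. kappa_4
            cums.append(0.5 * factorial(r - 1) * np.trace(np.linalg.matrix_power(gs, r))
                        + 0.5 * factorial(r)
                        * (delta @ sigma @ np.linalg.matrix_power(gs, r - 2) @ delta))
        return cums

    def cornish_fisher_quantile(alpha, delta, gamma, sigma):
        k1, k2, k3, k4 = delta_gamma_cumulants(delta, gamma, sigma)
        s = k3 / k2 ** 1.5                                        # skewness
        kurt = k4 / k2 ** 2                                       # excess kurtosis
        z = norm.ppf(alpha)
        w = (z + (z ** 2 - 1) * s / 6 + (z ** 3 - 3 * z) * kurt / 24
             - (2 * z ** 3 - 5 * z) * s ** 2 / 36)                # 4th-order Cornish-Fisher
        return k1 + np.sqrt(k2) * w

    # Toy two-factor example: 1% quantile of the quadratic P&L approximation
    sigma = np.array([[1.0, 0.3], [0.3, 1.0]])
    delta = np.array([1.0, -0.5])
    gamma = np.array([[0.2, 0.0], [0.0, -0.4]])
    print(cornish_fisher_quantile(0.01, delta, gamma, sigma))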
In the context of delta-gamma approximations, the study of Fourier inversion techniques was continued. [15] is a worst-case error analysis of non-adaptive, FFT-based approximations to the Fourier inversion integral of the cumulative distribution function (minus the Gaussian CDF) in this context. The error analysis makes it possible to optimize certain parameters so as to achieve the asymptotically optimal rate of convergence. Empirical evidence is presented to show how the results of the error analysis can improve the performance over a plain-vanilla FFT inversion.
K = 2^6 evaluations of the characteristic function suffice to ensure an accuracy of one digit in the approximation of the 1% quantile over a sample of one- and two-factor cases; K = 2^9 function evaluations are needed for two digits of accuracy. In comparison, a straightforward (non-optimized) FFT inversion of the probability density needs about 2^13 function calls to achieve one digit of accuracy (see Figure 1).
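For orientation, the following sketch shows the basic Fourier inversion underlying these computations: the characteristic function of the delta-gamma P&L, written in diagonalized coordinates (eigenvalues lambda_j of Gamma*Sigma and transformed deltas delta_j, assumed given), is inverted with the Gil-Pelaez formula and the quantile is found by bisection. This is a plain, non-optimized quadrature given for illustration only; the FFT-based scheme analyzed in [15] reaches the same accuracy with far fewer evaluations of the characteristic function.

    # Sketch: Gil-Pelaez Fourier inversion for the delta-gamma P&L distribution.
    # Plain quadrature for illustration; the optimized FFT scheme of [15] needs
    # far fewer characteristic-function calls for the same accuracy.
    import numpy as np

    def delta_gamma_cf(t, theta, delta_d, lam):
        """Characteristic function of L = theta + sum_j (delta_j Z_j + 0.5 lam_j Z_j^2)."""
        t = np.atleast_1d(t).astype(complex)
        one = 1.0 - 1j * np.outer(t, lam)
        return (np.exp(1j * t * theta)
                * np.prod(one ** -0.5 * np.exp(-0.5 * np.outer(t, delta_d) ** 2 / one), axis=1))

    def cdf(x, theta, delta_d, lam, T=200.0, n=4000):
        """F(x) = 1/2 - (1/pi) int_0^inf Im[phi(t) exp(-itx)]/t dt, truncated midpoint rule."""
        h = T / n
        t = (np.arange(n) + 0.5) * h
        integrand = np.imag(delta_gamma_cf(t, theta, delta_d, lam) * np.exp(-1j * t * x)) / t
        return 0.5 - integrand.sum() * h / np.pi

    def quantile(alpha, *args, lo=-50.0, hi=50.0, tol=1e-6):
        """Bisection for the alpha-quantile of the delta-gamma distribution."""
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if cdf(mid, *args) < alpha else (lo, mid)
        return 0.5 * (lo + hi)

    # Toy example in diagonalized coordinates (theta, delta_j, lambda_j)
    print(quantile(0.01, 0.0, np.array([1.0, -0.5]), np.array([0.2, -0.4])))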
This error analysis required the characterization of the tail behavior of the probability distribution of quadratic forms of Gaussian vectors, which is the subject of [16]. It provides a complete analysis of the tail behavior of this class of distributions and solves a problem that remained open in [15].
An overview of the Fourier inversion, Monte Carlo simulation and Cornish-Fisher expansion in the context of delta-gamma normal models is given by [18].
In joint work with Gerhard Stahl (BAFin, Bonn) and Richard Stehle (HU Berlin), an empirical analysis of the forecast quality of the VaR models of the 13 German banks that use internal models for regulatory market risk capital is being performed, with the goal of assessing how well these internal models anticipate the banks' actual profit-and-loss outcomes.
In preparation for the lecture ``Risk Management for Financial Institutions'' (Risikomanagement für Banken), given by S. Jaschke in the winter semester 2001/2002, an extensive review of the general literature on the subject was carried out. The practical implementation of an enterprise-wide risk management system requires an understanding of the economic, statistical, numerical, social, and information technology aspects of the problem. The insights gained from the study of the general literature make it possible to assess not only the inner-mathematical relevance of new ideas and open problems, but also their practical relevance. The lecture notes are available from http://www.cofi.de/risk-lecture.html.
In the context of the BMBF project ``Efficient methods for the valuation of risk measures'', which is carried out in cooperation with Bankgesellschaft Berlin AG, we focus on the problem of estimating the market risk of large portfolios by the Monte Carlo method. Our first goal is the efficient estimation of the above-mentioned quantile-VaR, which is, from a practical point of view, the most important risk measure. The results obtained for the VaR shall subsequently be used for estimating more complex risk measures, such as the conditional Value at Risk and related quantities.
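As a point of reference, the plain Monte Carlo estimators of these two risk measures are sketched below (illustrative Python code; the project's work concerns making such estimators efficient for large portfolios, not the estimators themselves).

    # Sketch: quantile-VaR and conditional VaR from a Monte Carlo sample of losses.
    # Plain empirical estimators for illustration only.
    import numpy as np

    def var_and_cvar(losses, alpha=0.99):
        """Empirical VaR (alpha-quantile of the loss distribution) and the
        conditional VaR, i.e. the average of the losses at and beyond the VaR."""
        losses = np.sort(losses)
        k = int(np.ceil(alpha * len(losses))) - 1
        return losses[k], losses[k:].mean()

    # Toy example: losses from a skewed distribution
    rng = np.random.default_rng(0)
    sample = rng.standard_normal(100_000) + 0.5 * rng.standard_normal(100_000) ** 2
    print(var_and_cvar(sample, alpha=0.99))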
There are only two possibilities for accelerating the convergence of the Monte Carlo procedure: first, to reduce the number of steps (i.e. portfolio evaluations) necessary for reaching a given error level by variance reduction (importance sampling, stratified sampling, etc.); second, to reduce the time needed for a single step by using fast algorithms for pricing the portfolio's components. The latter possibility is described below. Concerning the first point, a well-known technique for variance reduction ([9]), which is based on the delta-gamma normal approximation, has been implemented. By construction, this method fails to reduce the variance if the portfolio behaves very differently from its delta-gamma approximation, which may happen, for example, if the portfolio is hedged. Therefore, we develop an adaptive sampling algorithm in which a Markov chain of Metropolis-Hastings type is used to create scenarios according to any given profit-and-loss distribution. Because of intrinsic difficulties, such as the lack of global information, it is not clear whether this procedure can be used as a method for variance reduction. On the other hand, it allows for a detailed analysis of critical scenarios, giving, for example, information about the implied correlation structure and about the risk inherent in changes of the correlation structure that the underlying processes are originally assumed to follow. Even small fluctuations of the correlations can cause huge losses, as the 1998 collapse of the hedge fund LTCM has impressively shown.

Part of our work was also the programming of a Java-based graphical interface which allows ``real-life'' portfolios to be valued dynamically. It organizes the dependent market data, supplies the mathematical structures for the developed numerical routines, and also monitors and statistically evaluates the outcome of the Monte Carlo scheme during its run. Most of the numerical routines developed in the context of this project are used in this program, which is currently running at a test level in the bank.
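A minimal sketch of such a variance reduction, using a simple mean shift of the risk factors derived from the linear (delta) part of the delta-gamma approximation, is given below. It is illustrative only; the technique of [9] is based on a more refined exponential twisting of the full quadratic approximation.

    # Sketch: mean-shift importance sampling for the tail probability P(loss > l),
    # with the shift derived from the delta part of the delta-gamma approximation.
    # Illustrative only; not the scheme of [9].
    import numpy as np

    def tail_prob_is(portfolio_loss, delta, sigma, threshold, n=100_000, seed=0):
        rng = np.random.default_rng(seed)
        chol = np.linalg.cholesky(sigma)
        # Shift along Sigma*delta (the most likely factor move for a given linear
        # loss), scaled so that the linearized loss at the new mean hits the threshold.
        direction = sigma @ delta
        mu = direction * threshold / (delta @ direction)
        sigma_inv_mu = np.linalg.solve(sigma, mu)
        x = mu + rng.standard_normal((n, len(delta))) @ chol.T          # X ~ N(mu, Sigma)
        weights = np.exp(-x @ sigma_inv_mu + 0.5 * mu @ sigma_inv_mu)   # dN(0,S)/dN(mu,S)
        return np.mean(weights * (portfolio_loss(x) > threshold))

    # Toy portfolio whose loss equals its own delta-gamma form
    delta = np.array([1.0, -0.5])
    gamma = np.array([[0.2, 0.0], [0.0, -0.4]])
    sigma = np.array([[1.0, 0.3], [0.3, 1.0]])
    loss = lambda x: x @ delta + 0.5 * np.einsum('ni,ij,nj->n', x, gamma, x)
    print(tail_prob_is(loss, delta, sigma, threshold=4.0))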
In cooperation with, and supported by, Bankgesellschaft Berlin AG we worked on the efficient valuation of complex financial instruments, for example American options and convertible bonds. By some modifications of the standard binomial tree model, we could significantly increase the speed and accuracy of the algorithm, reducing the computational effort from order N^2 to N^1.5, where N is the number of time steps used. Especially in the context of high-accuracy calculations, which are necessary at the trading level for a stable treatment of sensitivities, the effect of this improvement becomes significant. It permits the use of very large numbers of time steps, far beyond those typically used in comparative studies [1, 4]. Another modification, concerning the position of the tree nodes, which is interesting, for example, in view of the applicability of Richardson extrapolation techniques, led to an interpolation problem that could partially be solved, resulting in a smoothed convergence behavior of the algorithm. For standard American options, this method gives results comparable to the BBSR model described in [4]. By further improving the interpolation procedure, it seems possible to obtain even better results. A completely different algorithm, based on the Fast Fourier Transform, was also developed and analyzed. It was shown to be useful for the valuation of Bermudan-type options, but also for the standard American call on an underlying paying a large number of dividends, as is the case, for example, for index options. Another problem tackled in this context was how to incorporate credit risk in the valuation of instruments such as convertible bonds or ASCOTs. We implemented three distinct models, thereby enabling our cooperation partner to switch to the model best adapted to the specific situation.
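For reference, a plain (unmodified) binomial tree for an American put is sketched below; it exhibits the O(N^2) effort and the oscillating convergence that the modifications mentioned above are designed to overcome. The code is an illustrative baseline, not the improved algorithm developed in the project.

    # Sketch: plain Cox-Ross-Rubinstein binomial tree for an American put.
    # Baseline for illustration only.
    import numpy as np

    def american_put_crr(s0, strike, r, sigma, maturity, n):
        dt = maturity / n
        u = np.exp(sigma * np.sqrt(dt))
        d = 1.0 / u
        p = (np.exp(r * dt) - d) / (u - d)        # risk-neutral up-move probability
        disc = np.exp(-r * dt)
        j = np.arange(n + 1)                      # number of up-moves at maturity
        v = np.maximum(strike - s0 * u ** j * d ** (n - j), 0.0)
        for i in range(n - 1, -1, -1):            # backward induction
            j = np.arange(i + 1)
            s = s0 * u ** j * d ** (i - j)
            cont = disc * (p * v[1:] + (1 - p) * v[:-1])
            v = np.maximum(strike - s, cont)      # early exercise check
        return v[0]

    print(american_put_crr(100.0, 100.0, 0.05, 0.2, 1.0, 1000))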
The analysis of the delta-gamma normal algorithm for determining the Value at Risk has made substantial progress. In order to deal with rank-deficient or perturbed correlation matrices, a generalized Cholesky decomposition algorithm was developed [26]. To obtain the profit-and-loss distribution, adapted Fourier inversion algorithms have been designed, and the use of double-exponential integration has been analyzed. Error bounds based on the decay properties of the functions involved, in x-space as well as in Fourier space, have been derived.
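The following sketch illustrates the basic idea of a Cholesky-type factorization that tolerates rank deficiency by skipping (numerically) zero pivots. It is a simple illustration of the issue, not the generalized algorithm of [26].

    # Sketch: Cholesky-type factorization tolerating rank-deficient or slightly
    # perturbed correlation matrices by setting non-positive pivots to zero.
    # Illustration only; the generalized algorithm of [26] differs in detail.
    import numpy as np

    def semidefinite_cholesky(a, tol=1e-12):
        """Lower-triangular L with L @ L.T ~ a for positive semidefinite a."""
        a = np.array(a, dtype=float)
        n = a.shape[0]
        L = np.zeros_like(a)
        for j in range(n):
            d = a[j, j] - L[j, :j] @ L[j, :j]
            if d > tol:
                L[j, j] = np.sqrt(d)
                L[j + 1:, j] = (a[j + 1:, j] - L[j + 1:, :j] @ L[j, :j]) / L[j, j]
            # otherwise the pivot is (numerically) zero: leave the column at zero
        return L

    # Rank-deficient correlation matrix of three perfectly dependent factors
    c = np.ones((3, 3))
    L = semidefinite_cholesky(c)
    print(np.allclose(L @ L.T, c))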
One industry standard for handling credit risk is CreditRisk+, which was developed by Credit Suisse First Boston in 1997. Based on the improved techniques developed in the context of the delta-gamma normal method, this model has been analyzed, and it turned out that similar Fourier inversion techniques can improve this model, too. Furthermore, generalizations of this model have been introduced, and an incorporation of credit risk and market risk within such a generalized framework has been established [27]. The research on this topic is related to the research topic E5 ``Statistical and numerical methods in modeling and valuation of financial derivatives and portfolio risk'' of the DFG Research Center ``Mathematics for Key Technologies''.
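To illustrate the role of Fourier inversion in this model, the sketch below computes the loss distribution of a single-sector CreditRisk+ portfolio by evaluating its probability generating function at the roots of unity and inverting with an FFT. This is a simplified one-sector setting with integer exposures, given for illustration; the original CreditRisk+ recursion and the generalized framework of [27] go beyond this.

    # Sketch: loss distribution of a single-sector CreditRisk+ model via Fourier
    # inversion of its probability generating function.  Simplified illustration.
    import numpy as np

    def creditriskplus_loss_dist(default_prob, exposure, sector_var, n_grid=1024):
        """default_prob[A]: default probability, exposure[A]: loss in integer units,
        sector_var: variance of the gamma-distributed sector factor (mean 1)."""
        k = np.arange(n_grid)
        z = np.exp(2j * np.pi * k / n_grid)                        # n-th roots of unity
        # conditional log-PGF coefficient  t(z) = sum_A p_A (z^{nu_A} - 1)
        t = (default_prob[None, :] * (z[:, None] ** exposure[None, :] - 1.0)).sum(axis=1)
        pgf = (1.0 - sector_var * t) ** (-1.0 / sector_var)        # gamma mixing over the sector
        probs = np.real(np.fft.fft(pgf)) / n_grid                  # extracts coefficients of z^m
        return np.clip(probs, 0.0, None)

    # Toy portfolio: 100 obligors, 1% default probability, exposures 1..5 units
    default_prob = np.full(100, 0.01)
    exposure = np.arange(100) % 5 + 1
    probs = creditriskplus_loss_dist(default_prob, exposure, sector_var=0.5)
    print(probs[:5].sum())        # probability of losing at most 4 units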
A very popular interest rate model is the LIBOR market model [3, 13, 24], which is given by
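\[
  dL_i(t) \;=\; -\,L_i(t) \sum_{j=i+1}^{n-1} \frac{\delta_j L_j(t)}{1+\delta_j L_j(t)}\,\gamma_i(t)\cdot\gamma_j(t)\, dt \;+\; L_i(t)\,\gamma_i(t)\cdot dW(t), \qquad i=1,\dots,n-1. \tag{1}
\]
Here the L_i are the forward LIBOR rates of a given tenor structure, the \delta_j the corresponding day-count fractions, the \gamma_i(t) deterministic volatility vectors, and W a standard Brownian motion under the terminal measure; the scalar products \gamma_i\cdot\gamma_j encode the volatility norms and the correlation structure of the forward rates. (Equation (1) is reproduced here in its standard form; the notation in [3, 13, 24] may differ in detail.)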
Calibration of a LIBOR market model to liquidly traded instruments such as caps and swaptions has been a challenging problem for several years. In particular, calibration methods which avoid the use of historical data are very desirable, from both a practical and a more fundamental point of view. Previously we derived, on a conceptual basis, a variety of parsimonious correlation structures suitable for implementation in the LIBOR/EurIBOR market model (1), among them realistic two-parametric structures.
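As an illustration of what such a structure looks like, a simple full-rank two-parameter correlation family is, for example,
\[
  \rho_{ij} \;=\; \rho_\infty + (1-\rho_\infty)\,\exp\bigl(-\beta\,|T_i - T_j|\bigr), \qquad 0 \le \rho_\infty \le 1,\ \beta > 0,
\]
where T_i, T_j denote the fixing dates of the forward rates. This generic example is given for orientation only and is not necessarily the specific two-parametric structure derived in the project.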
These correlation structures, combined with suitable parametrizations of the volatility norms, form the cornerstones of our calibration procedure. However, we detected an intrinsic stability problem in the joint calibration of a multi-factor LIBOR market model with time-dependent volatility norms using the standard least-squares approach. This has led to the incorporation of a new concept, the so-called ``Market Swaption Formula'', an intuition-based approximation of swaption prices, in the objective function of the calibration routine [31]. With the method described in [31], stable calibration of a LIBOR market model to a whole system of caplet and swaption volatilities turns out to be feasible with a volatility structure involving only four parameters. Notably, this calibration remains stable even when the quality of the market data is poor. Further, a refined approximation procedure for swaption prices has been derived. This method takes into account the issue of differently settled caps and swaptions and improves upon the method of Jäckel and Rebonato (2001). The respective algorithms are implemented as Excel add-ins and are currently used for consulting purposes (Reuters Financial Software).

In a more economically motivated study we previously developed the concept of dealing with assets and interest rates in a unified model which is completely specified by the assets alone. This allowed for endogenous derivations of dynamic relations between assets and interest rates from global structural assumptions (homogeneity and some spherical symmetry) on the market. In particular, with respect to a rather general well-structured model, we derived a relationship between the so-called spherical index and the short rate which may be regarded as an extension of earlier results and has the following interpretation.
Further, when the spherical index satisfies the assumptions of the Capital Asset Pricing Model, we obtained the following:
- If R(t_0,T) > r_0, i.e. the yield curve is upward sloping (the most usual case), then the local correlation of the short rate r and the (spherical) index I is negative.
- If R(t_0,T) < r_0, i.e. the yield curve is downward sloping, then the local correlation of the short rate r and the (spherical) index I is positive.
The above research is currently the subject of an empirical study and was presented at the Bachelier Congress 2002 [28].
For the computation of option sensitivities we developed an analytical method in [29] and a Monte Carlo approach in [21]. The methods in [21] have been extended to determine the price and hedge of certain American options [20]. In this research we utilize more sophisticated algorithms for the simulation of stochastic differential equations in the neighborhood of a boundary [23].
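As a simple illustration of Monte Carlo sensitivities, the sketch below estimates an option delta with a generic pathwise estimator in the Black-Scholes model; it is given for orientation only and is not the methods of [21, 29].

    # Sketch: pathwise Monte Carlo estimator of a European call delta in the
    # Black-Scholes model.  Generic illustration of Monte Carlo sensitivities.
    import numpy as np

    def call_delta_pathwise(s0, strike, r, sigma, maturity, n=1_000_000, seed=0):
        rng = np.random.default_rng(seed)
        z = rng.standard_normal(n)
        st = s0 * np.exp((r - 0.5 * sigma ** 2) * maturity + sigma * np.sqrt(maturity) * z)
        # d/ds0 of the discounted payoff max(S_T - K, 0) equals (S_T / s0) on {S_T > K}
        return np.exp(-r * maturity) * np.mean((st > strike) * st / s0)

    print(call_delta_pathwise(100.0, 100.0, 0.05, 0.2, 1.0))   # close to the Black-Scholes delta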
The research on Bermudan-style interest rate derivatives has been placed in the context of the DFG Research Center ``Mathematics for Key Technologies''. In particular, we are currently investigating a connection between these types of derivatives and a method proposed by Rogers [30].
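Rogers' method rests on a dual representation of the optimal stopping problem underlying Bermudan options: writing Z_t for the discounted payoff process,
\[
  V_0 \;=\; \sup_{\tau} \mathbb{E}\,[Z_\tau] \;=\; \inf_{M}\; \mathbb{E}\Bigl[\,\max_{t}\,(Z_t - M_t)\Bigr],
\]
where the supremum is taken over stopping times and the infimum over martingales M with M_0 = 0 (stated here in its standard form, following [30]). Any candidate martingale therefore yields an upper bound on the Bermudan price, complementing the lower bounds obtained from approximate exercise strategies.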
References: