In probability theory and related fields, Malliavin calculus is a set of mathematical techniques and ideas that extend the mathematical field of calculus of variations from deterministic functions to stochastic processes. In particular, it allows the computation of derivatives of random variables. Malliavin calculus is also called the stochastic calculus of variations. Paul Malliavin first initiated the calculus on infinite-dimensional space; significant contributors such as S. Kusuoka, D. Stroock, J.-M. Bismut, Shinzo Watanabe, and I. Shigekawa then completed the foundations.

Malliavin calculus is named after Paul Malliavin whose ideas led to a proof that Hörmander's condition implies the existence and smoothness of a density for the solution of a stochastic differential equation; Hörmander's original proof was based on the theory of partial differential equations. The calculus has been applied to stochastic partial differential equations as well.

The calculus allows integration by parts with random variables; this operation is used in mathematical finance to compute the sensitivities of financial derivatives. The calculus has applications in, for example, stochastic filtering.

Overview and history

Malliavin introduced Malliavin calculus to provide a stochastic proof that Hörmander's condition implies the existence of a density for the solution of a stochastic differential equation; Hörmander's original proof was based on the theory of partial differential equations. His calculus enabled Malliavin to prove regularity bounds for the solution's density. The calculus has been applied to stochastic partial differential equations.

Invariance principle

The usual invariance principle for Lebesgue integration over the whole real line is that, for any real number ε and integrable function f, the following holds

$$\int_{-\infty}^{\infty} f(x)\,dx = \int_{-\infty}^{\infty} f(x+\varepsilon)\,dx \quad\text{and hence}\quad \int_{-\infty}^{\infty} f'(x)\,dx = 0.$$

This can be used to derive the integration by parts formula since, setting f = gh, it implies

$$0 = \int_{-\infty}^{\infty} f'(x)\,dx = \int_{-\infty}^{\infty} (g(x)h(x))'\,dx = \int_{-\infty}^{\infty} g(x)h'(x)\,dx + \int_{-\infty}^{\infty} g'(x)h(x)\,dx.$$
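For illustration, the following minimal Python sketch checks this identity numerically; the decaying functions $g$ and $h$ below are arbitrary illustrative choices, not taken from the source.

```python
import numpy as np

# Numerical check of the deterministic integration-by-parts identity
#   int g h' dx = - int g' h dx
# for functions decaying at infinity (illustrative choices of g and h).

x = np.linspace(-10.0, 10.0, 100_001)
g = np.exp(-x**2)                    # rapidly decaying factor
h = np.sin(x)
g_prime = -2.0 * x * np.exp(-x**2)   # derivative of g
h_prime = np.cos(x)

lhs = np.trapz(g * h_prime, x)
rhs = -np.trapz(g_prime * h, x)
print(f"int g h' dx ≈ {lhs:.6f},  -int g' h dx ≈ {rhs:.6f}")  # the two agree
```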

A similar idea can be applied in stochastic analysis for the differentiation along a Cameron–Martin–Girsanov direction. Indeed, let $h_s$ be a square-integrable predictable process and set

$$\varphi(t) = \int_0^t h_s\,ds.$$

If $X$ is a Wiener process, the Girsanov theorem then yields the following analogue of the invariance principle:

$$E\big(F(X+\varepsilon\varphi)\big) = E\left[F(X)\exp\left(\varepsilon\int_0^1 h_s\,dX_s - \tfrac{1}{2}\varepsilon^2\int_0^1 h_s^2\,ds\right)\right].$$

Differentiating with respect to ε on both sides and evaluating at ε=0, one obtains the following integration by parts formula:

$$E\big(\langle DF(X), \varphi\rangle\big) = E\left[F(X)\int_0^1 h_s\,dX_s\right].$$

Here, the left-hand side is the Malliavin derivative of the random variable $F$ in the direction $\varphi$ and the integral appearing on the right-hand side should be interpreted as an Itô integral.
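As a concrete check, the following Monte Carlo sketch in Python confirms the formula in its simplest instance; the test function $f$ and the choice $h\equiv 1$ are illustrative assumptions, under which the identity reduces to Stein's lemma.

```python
import numpy as np

# Monte Carlo check of the stochastic integration-by-parts formula
#   E[ <D F(X), phi> ] = E[ F(X) * int_0^1 h_s dX_s ]
# for the illustrative choice F(X) = f(X_1) with f smooth and h = 1, so that
# phi(t) = t, <DF(X), phi> = f'(X_1) and int_0^1 h_s dX_s = X_1.
# The identity then reduces to Stein's lemma: E[f'(X_1)] = E[X_1 f(X_1)].

rng = np.random.default_rng(0)

f = np.sin          # illustrative smooth functional of the endpoint X_1
f_prime = np.cos    # its derivative (the directional / Malliavin derivative)

X1 = rng.standard_normal(1_000_000)   # X_1 of a standard Wiener process on [0,1]

lhs = f_prime(X1).mean()              # E[ <D F(X), phi> ]
rhs = (f(X1) * X1).mean()             # E[ F(X) * int_0^1 h_s dX_s ]
print(f"lhs ≈ {lhs:.4f}, rhs ≈ {rhs:.4f}")   # both ≈ exp(-1/2) ≈ 0.6065
```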

Gaussian probability space

The toy model of Malliavin calculus is an irreducible Gaussian probability space $X = (\Omega, \mathcal{F}, P, \mathcal{H})$. This is a (complete) probability space $(\Omega, \mathcal{F}, P)$ together with a closed subspace $\mathcal{H} \subset L^2(\Omega, \mathcal{F}, P)$ such that all $h \in \mathcal{H}$ are mean-zero Gaussian variables and $\mathcal{F} = \sigma(\mathcal{H})$. If one chooses a basis for $\mathcal{H}$, then one calls $X$ a numerical model. On the other hand, for any separable Hilbert space $\mathcal{G}$ there exists a canonical irreducible Gaussian probability space $\operatorname{Seg}(\mathcal{G})$, named the Segal model, having $\mathcal{G}$ as its Gaussian subspace. Properties of a Gaussian probability space that do not depend on the particular choice of basis are called intrinsic, and those that do depend on the choice are called extrinsic.[1] We denote the countably infinite product of real spaces as $\mathbb{R}^{\mathbb{N}} = \prod_{i\in\mathbb{N}}\mathbb{R}$.

Let $\gamma$ be the canonical Gaussian measure on $\mathbb{R}^{\mathbb{N}}$. By transferring the Cameron–Martin theorem from $\mathbb{R}^{\mathbb{N}}$ into a numerical model $X$, the additive group of $\ell^2$ will define a quasi-automorphism group on $X$. A construction can be done as follows: choose an orthonormal basis in $\mathcal{H}$, let $\tau_\alpha$ denote the translation on $\mathbb{R}^{\mathbb{N}}$ by $\alpha \in \ell^2$, denote the map into the Cameron–Martin space by $j\colon \ell^2 \to \mathcal{H}$, and denote

  and  

we get a canonical representation of the additive group $\ell^2$ acting on the endomorphisms by defining

 

One can show that this action is extrinsic, meaning it does not depend on the choice of basis for $\mathcal{H}$; furthermore, it respects the additive group structure of $\ell^2$, and for its infinitesimal generator one has

 

where $I$ is the identity operator and the remaining term is the multiplication operator by the random variable on $\Omega$ associated to $j(\alpha)$ (acting on the endomorphisms).[2]
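The Cameron–Martin quasi-invariance that underlies this quasi-automorphism group can be illustrated numerically in a truncated numerical model. In the following Python sketch the functional $F$, the $\ell^2$ sequence $\alpha$, and the truncation dimension are illustrative choices; the check is the finite-dimensional density identity $E[F(x+\alpha)] = E[F(x)\exp(\langle\alpha,x\rangle - \tfrac{1}{2}\|\alpha\|^2)]$.

```python
import numpy as np

# Finite-dimensional sketch of Cameron-Martin quasi-invariance: under the
# canonical Gaussian measure on R^N (coordinates i.i.d. N(0,1)), translating
# by alpha in l^2 satisfies
#   E[F(x + alpha)] = E[F(x) * exp(<alpha, x> - 0.5 * ||alpha||^2)].
# F, alpha and the truncation dimension d are illustrative assumptions.

rng = np.random.default_rng(1)
d = 20
alpha = 1.0 / np.arange(1, d + 1) ** 2            # an l^2 sequence, truncated
x = rng.standard_normal(size=(200_000, d))        # samples from the Gaussian measure

def F(y):                                         # a bounded test functional
    return np.cos(y[:, 0] + 0.5 * y[:, 1])

lhs = F(x + alpha).mean()                         # translated functional
density = np.exp(x @ alpha - 0.5 * alpha @ alpha) # Cameron-Martin density
rhs = (F(x) * density).mean()
print(f"lhs ≈ {lhs:.4f}, rhs ≈ {rhs:.4f}")        # the two estimates agree
```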

Clark–Ocone formula

One of the most useful results from Malliavin calculus is the Clark–Ocone theorem, which allows the process in the martingale representation theorem to be identified explicitly. A simplified version of this theorem is as follows:

Consider the standard Wiener measure on the canonical space $C[0,1]$, equipped with its canonical filtration. For $F\colon C[0,1]\to\mathbb{R}$ satisfying $E(F(X)^2) < \infty$ which is Lipschitz and such that $F$ has a strong derivative kernel, in the sense that for $\varphi$ in $C[0,1]$

$$\lim_{\varepsilon\to 0}\frac{1}{\varepsilon}\big(F(X+\varepsilon\varphi) - F(X)\big) = \int_0^1 F'(X,dt)\,\varphi(t)\qquad \text{a.e. } X,$$

then

$$F(X) = E(F(X)) + \int_0^1 H_t\,dX_t,$$

where H is the previsible projection of F'(x, (t,1]) which may be viewed as the derivative of the function F with respect to a suitable parallel shift of the process X over the portion (t,1] of its domain.

This may be more concisely expressed by

$$F(X) = E(F(X)) + \int_0^1 E(D_t F \mid \mathcal{F}_t)\,dX_t.$$

Much of the work in the formal development of the Malliavin calculus involves extending this result to the largest possible class of functionals $F$ by replacing the derivative kernel used above by the "Malliavin derivative", denoted $D_t$ in the above statement of the result.
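As an illustration, the following Python sketch checks the representation pathwise for the simple functional $F(X) = X_1^2$, for which the previsible projection $E(D_t F \mid \mathcal{F}_t) = 2X_t$ is known in closed form; the time discretisation below is an illustrative approximation.

```python
import numpy as np

# Pathwise numerical sketch of the Clark-Ocone representation for the
# illustrative functional F(X) = X_1^2 on Wiener space.  Here D_t F = 2 X_1,
# its previsible projection is E[D_t F | F_t] = 2 X_t, and the formula reads
#   X_1^2 = E[X_1^2] + int_0^1 2 X_t dX_t = 1 + int_0^1 2 X_t dX_t.

rng = np.random.default_rng(2)
n_paths, n_steps = 5_000, 1_000
dt = 1.0 / n_steps
dX = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))  # Brownian increments
X = np.cumsum(dX, axis=1)                                   # discretised path
X_prev = X - dX                                             # left endpoints (previsible)

F = X[:, -1] ** 2                                  # F(X) = X_1^2
ito_term = np.sum(2.0 * X_prev * dX, axis=1)       # int_0^1 E[D_t F | F_t] dX_t
reconstruction = 1.0 + ito_term                    # E[F(X)] + stochastic integral

# The pathwise discretisation error shrinks as dt -> 0.
print("mean |F - reconstruction| ≈", np.abs(F - reconstruction).mean())
```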

Skorokhod integral

The Skorokhod integral operator, conventionally denoted $\delta$, is defined as the adjoint of the Malliavin derivative in the white noise case, when the Hilbert space is an $L^2$ space. Thus, for $u$ in the domain of the operator, which is a subset of $L^2([0,\infty)\times\Omega)$, and for $F$ in the domain of the Malliavin derivative, we require

$$E(\langle DF, u\rangle) = E(F\,\delta(u)),$$

where the inner product is that on $L^2[0,\infty)$, viz.

$$\langle f, g\rangle = \int_0^\infty f(s)g(s)\,ds.$$

The existence of this adjoint follows from the Riesz representation theorem for linear operators on Hilbert spaces.

It can be shown that if u is adapted then

$$\delta(u) = \int_0^\infty u_t\,dW_t,$$

where the integral is to be understood in the Itô sense. This provides a method of extending the Itô integral to non-adapted integrands.
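The following Monte Carlo sketch in Python illustrates both the duality and the adapted case, using the fact stated above that $\delta(u)$ is then the Itô integral; the integrand $u_t = X_t$ and the functional $F = X_1^2$ are illustrative choices for which both sides of the duality equal 1.

```python
import numpy as np

# Monte Carlo check of the duality E[ F * delta(u) ] = E[ <DF, u> ] for an
# adapted integrand.  Illustrative choices (not from the source): u_t = X_t,
# F = X_1^2, so that
#   delta(u) = int_0^1 X_t dX_t,  D_t F = 2 X_1,  <DF, u> = 2 X_1 int_0^1 X_t dt,
# and both expectations equal 1.

rng = np.random.default_rng(3)
n_paths, n_steps = 50_000, 200
dt = 1.0 / n_steps
dX = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))  # Brownian increments
X = np.cumsum(dX, axis=1)                                   # X_{t_i}
X_prev = X - dX                                             # left endpoints (adapted)

delta_u = np.sum(X_prev * dX, axis=1)               # Ito integral of u_t = X_t
F = X[:, -1] ** 2                                   # F = X_1^2
pairing = 2.0 * X[:, -1] * np.sum(X * dt, axis=1)   # <DF, u> = 2 X_1 int_0^1 X_t dt

print(f"E[F * delta(u)] ≈ {(F * delta_u).mean():.3f}")   # both close to 1
print(f"E[<DF, u>]      ≈ {pairing.mean():.3f}")
```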

Applications

The calculus allows integration by parts with random variables; this operation is used in mathematical finance to compute the sensitivities of financial derivatives. The calculus also has applications in, for example, stochastic filtering.
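A standard example is the Malliavin-weight (integration-by-parts) estimator of the Black–Scholes delta, which avoids differentiating the possibly non-smooth payoff; in the following Python sketch the model parameters are illustrative.

```python
import numpy as np
from math import erf, exp, log, sqrt

# Sketch of the classical Malliavin-weight estimator for the Black-Scholes
# delta: integrating by parts in E[e^{-rT} payoff(S_T)] gives
#   Delta = E[ e^{-rT} payoff(S_T) * W_T / (S_0 * sigma * T) ],
# with no derivative of the payoff needed.  Parameter values are illustrative.

rng = np.random.default_rng(4)
S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
n_paths = 1_000_000

W_T = rng.normal(0.0, sqrt(T), size=n_paths)
S_T = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * W_T)
payoff = np.maximum(S_T - K, 0.0)                        # European call payoff

weight = W_T / (S0 * sigma * T)                          # Malliavin weight
delta_mc = np.mean(exp(-r * T) * payoff * weight)        # Monte Carlo delta

d1 = (log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
delta_exact = 0.5 * (1.0 + erf(d1 / sqrt(2.0)))          # closed-form N(d1)

print(f"Malliavin-weight delta ≈ {delta_mc:.4f}, closed form N(d1) = {delta_exact:.4f}")
```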

References

1. Malliavin, Paul (1997). Stochastic Analysis. Grundlehren der mathematischen Wissenschaften. Berlin, Heidelberg: Springer. pp. 4–15. ISBN 3-540-57024-1.
2. Malliavin, Paul (1997). Stochastic Analysis. Grundlehren der mathematischen Wissenschaften. Berlin, Heidelberg: Springer. pp. 20–22. ISBN 3-540-57024-1.
