Exponential tilting

Monte Carlo distribution shifting technique

Exponential tilting (ET), also known as exponential twisting or exponential change of measure (ECM), is a distribution-shifting technique used in many parts of mathematics. The different exponential tiltings of a random variable X are known as the natural exponential family of X.

Exponential tilting is used in Monte Carlo estimation for rare-event simulation, and in rejection sampling and importance sampling in particular. In mathematical finance,[1] exponential tilting is also known as Esscher tilting (or the Esscher transform); there it is often combined with indirect Edgeworth approximation and used in contexts such as insurance futures pricing.[2]

The earliest formalization of exponential tilting is often attributed to Esscher,[3] and its use in importance sampling to David Siegmund.[4]

Overview

Given a random variable X with probability distribution \mathbb{P}, density f, and moment generating function (MGF) M_X(\theta) = \mathbb{E}[e^{\theta X}] < \infty, the exponentially tilted measure \mathbb{P}_\theta is defined as follows:

\mathbb{P}_\theta(X \in dx) = \frac{\mathbb{E}[e^{\theta X}\,\mathbb{I}[X \in dx]]}{M_X(\theta)} = e^{\theta x - \kappa(\theta)}\,\mathbb{P}(X \in dx),

where \kappa(\theta) is the cumulant generating function (CGF), defined as

\kappa(\theta) = \log \mathbb{E}[e^{\theta X}] = \log M_X(\theta).

We call

\mathbb{P}_\theta(X \in dx) = f_\theta(x)

the \theta-tilted density of X. It satisfies f_\theta(x) \propto e^{\theta x} f(x).

The exponential tilting of a random vector X has an analogous definition:

\mathbb{P}_\theta(X \in dx) = e^{\theta^T x - \kappa(\theta)}\,\mathbb{P}(X \in dx),

where \kappa(\theta) = \log \mathbb{E}[\exp\{\theta^T X\}].

Example

The exponentially tilted measure in many cases has the same parametric form as that of X. One-dimensional examples include the normal, exponential, binomial, and Poisson distributions.

For example, in the case of the normal distribution N(\mu, \sigma^2), the tilted density f_\theta(x) is the N(\mu + \theta\sigma^2, \sigma^2) density. The table below provides more examples of tilted densities.

Original distribution[5][6] → θ-tilted distribution
\mathrm{Gamma}(\alpha, \beta) → \mathrm{Gamma}(\alpha, \beta - \theta)
\mathrm{Binomial}(n, p) → \mathrm{Binomial}\left(n, \frac{pe^\theta}{1 - p + pe^\theta}\right)
\mathrm{Poisson}(\lambda) → \mathrm{Poisson}(\lambda e^\theta)
\mathrm{Exponential}(\lambda) → \mathrm{Exponential}(\lambda - \theta)
\mathcal{N}(\mu, \sigma^2) → \mathcal{N}(\mu + \theta\sigma^2, \sigma^2)
\mathcal{N}(\mu, \Sigma) → \mathcal{N}(\mu + \Sigma\theta, \Sigma)
\chi^2(\kappa) → \mathrm{Gamma}\left(\frac{\kappa}{2}, \frac{2}{1 - 2\theta}\right)
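
As a quick numerical sanity check of the normal row above (an illustrative sketch of ours, not part of the original text), one can reweight draws from N(\mu, \sigma^2) by e^{\theta x}: the self-normalized weighted mean estimates the mean of the tilted distribution, which should approach \mu + \theta\sigma^2.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, theta = 1.0, 2.0, 0.5

# Draw from the original N(mu, sigma^2) and reweight by e^{theta x}:
# the self-normalized weighted mean estimates E_theta[X].
x = rng.normal(mu, sigma, size=200_000)
w = np.exp(theta * x)
tilted_mean = np.sum(w * x) / np.sum(w)

print(tilted_mean)   # should be close to mu + theta * sigma^2 = 3.0
```
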

For some distributions, however, the exponentially tilted distribution does not belong to the same parametric family as f. An example is the Pareto distribution with f(x) = \alpha/(1+x)^{\alpha+1}, x > 0: here f_\theta(x) is well defined for \theta < 0 but is not a standard distribution. In such examples, random variable generation may not always be straightforward.[7]

In statistical mechanics, the energy of a system in equilibrium with a heat bath has the Boltzmann distribution \mathbb{P}(E \in dE) \propto e^{-\beta E}\,dE, where \beta is the inverse temperature. Exponential tilting then corresponds to changing the temperature: \mathbb{P}_\theta(E \in dE) \propto e^{-(\beta - \theta)E}\,dE.

Similarly, the energy and particle number of a system in equilibrium with a heat and particle bath have the grand canonical distribution \mathbb{P}((N, E) \in (dN, dE)) \propto e^{\beta\mu N - \beta E}\,dN\,dE, where \mu is the chemical potential. Exponential tilting then corresponds to changing both the temperature and the chemical potential.

Advantages

In many cases the tilted distribution belongs to the same parametric family as the original; this is true in particular when the original density belongs to an exponential family of distributions. This simplifies random variable generation during Monte Carlo simulation. Exponential tilting may still be useful when this is not the case, though normalization must be possible and additional sampling algorithms may be needed.

In addition, there exists a simple relationship between the original and tilted CGF:

\kappa_\theta(\eta) = \log(\mathbb{E}_\theta[e^{\eta X}]) = \kappa(\theta + \eta) - \kappa(\theta).

We can see this by observing that

F_\theta(x) = \int_{-\infty}^{x} \exp\{\theta y - \kappa(\theta)\} f(y)\,dy.

Thus,

\begin{aligned}
\kappa_\theta(\eta) &= \log \int e^{\eta x}\,dF_\theta(x) \\
&= \log \int e^{\eta x} e^{\theta x - \kappa(\theta)}\,dF(x) \\
&= \log \mathbb{E}[e^{(\eta+\theta)X - \kappa(\theta)}] \\
&= \log(e^{\kappa(\eta+\theta) - \kappa(\theta)}) \\
&= \kappa(\eta+\theta) - \kappa(\theta).
\end{aligned}

Clearly, this relationship allows for easy calculation of the CGF of the tilted distribution, and thus of the distribution's moments. Moreover, it results in a simple form of the likelihood ratio. Specifically,

\ell = \frac{d\mathbb{P}}{d\mathbb{P}_\theta} = \frac{f(x)}{f_\theta(x)} = e^{-\theta x + \kappa(\theta)}.

Properties

  • If \kappa(\eta) = \log \mathbb{E}[\exp(\eta X)] is the CGF of X, then the CGF of the \theta-tilted X is
\kappa_\theta(\eta) = \kappa(\theta + \eta) - \kappa(\theta).
This means that the i-th cumulant of the tilted X is \kappa^{(i)}(\theta). In particular, the expectation of the tilted distribution is
\mathbb{E}_\theta[X] = \tfrac{d}{d\eta}\kappa_\theta(\eta)\big|_{\eta=0} = \kappa'(\theta),
and the variance of the tilted distribution is
\mathrm{Var}_\theta[X] = \tfrac{d^2}{d\eta^2}\kappa_\theta(\eta)\big|_{\eta=0} = \kappa''(\theta).
  • Repeated tilting is additive: tilting first by \theta_1 and then by \theta_2 is the same as tilting once by \theta_1 + \theta_2.
  • If X is the sum of independent, but not necessarily identical, random variables X_1, X_2, \dots, then the \theta-tilted distribution of X is the distribution of the sum of X_1, X_2, \dots each \theta-tilted individually.
  • If \mu = \mathbb{E}[X], then \kappa(\theta) - \theta\mu is the Kullback–Leibler divergence
D_{\text{KL}}(P \parallel P_\theta) = \mathbb{E}\left[\log \tfrac{P}{P_\theta}\right]
between the original distribution P and the tilted distribution P_\theta of X.
  • Similarly, since \mathbb{E}_\theta[X] = \kappa'(\theta), the Kullback–Leibler divergence in the other direction is
D_{\text{KL}}(P_\theta \parallel P) = \mathbb{E}_\theta\left[\log \tfrac{P_\theta}{P}\right] = \theta\kappa'(\theta) - \kappa(\theta).
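
The Poisson case makes these properties easy to verify numerically. A minimal sketch (our own illustration, with helper names of our choosing) checks the closed form D_{\text{KL}}(P_\theta \parallel P) = \theta\kappa'(\theta) - \kappa(\theta) against a direct sum over the two probability mass functions:

```python
import math

lam, theta = 2.0, 0.7
kappa  = lambda t: lam * (math.exp(t) - 1.0)   # Poisson CGF: lam*(e^t - 1)
dkappa = lambda t: lam * math.exp(t)           # kappa'(t)

# The theta-tilted Poisson(lam) is Poisson(lam * e^theta); compute
# D_KL(P_theta || P) directly from the two pmfs and compare with
# theta*kappa'(theta) - kappa(theta).
lam_t = lam * math.exp(theta)
def pmf(k, l):
    return math.exp(-l) * l**k / math.factorial(k)

kl = sum(pmf(k, lam_t) * math.log(pmf(k, lam_t) / pmf(k, lam)) for k in range(60))
closed_form = theta * dkappa(theta) - kappa(theta)
print(kl, closed_form)   # the two values agree
```
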

Applications

Rare-event simulation

The exponential tilting of X, assuming it exists, supplies a family of distributions that can be used as proposal distributions for acceptance-rejection sampling, or as importance distributions for importance sampling. One common application is sampling from a distribution conditioned on a sub-region of the domain, i.e. X \mid X \in A. With an appropriate choice of \theta, sampling from \mathbb{P}_\theta can meaningfully reduce the required amount of sampling or the variance of an estimator.

Saddlepoint approximation

The saddlepoint approximation method is a density approximation technique, often applied to the distribution of sums and averages of independent, identically distributed random variables, that employs an Edgeworth series but generally performs better at extreme values. From the definition of the natural exponential family, it follows that

f_\theta(\bar{x}) = f(\bar{x}) \exp\{n(\theta\bar{x} - \kappa(\theta))\}.

Applying the Edgeworth expansion for f_\theta(\bar{x}), we have

f_\theta(\bar{x}) = \psi(z)\,(\mathrm{Var}[\bar{X}])^{-1/2}\left\{1 + \frac{\rho_3(\theta)h_3(z)}{6} + \frac{\rho_4(\theta)h_4(z)}{24} + \dots\right\},

where \psi(z) is the standard normal density evaluated at

z = \frac{\bar{x} - \kappa_{\bar{x}}'(\theta)}{\sqrt{\kappa_{\bar{x}}''(\theta)}},

the standardized cumulants are

\rho_n(\theta) = \frac{\kappa^{(n)}(\theta)}{\kappa''(\theta)^{n/2}},

and h_n are the Hermite polynomials.

When considering values of \bar{x} progressively farther from the center of the distribution, |z| \to \infty and the h_n(z) terms become unbounded. However, for each value of \bar{x} we can choose \theta such that

\kappa'(\theta) = \bar{x}.

This value of \theta is referred to as the saddle-point, and the above expansion is then always evaluated at the expectation of the tilted distribution. This choice of \theta leads to the final representation of the approximation, given by

f(\bar{x}) \approx \left(\frac{n}{2\pi\kappa''(\theta)}\right)^{1/2} \exp\{n(\kappa(\theta) - \theta\bar{x})\}.[8][9]
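
For a concrete check (our own sketch, not from the source, assuming X_i \sim \mathrm{Exponential}(1) so that \kappa(\theta) = -\log(1-\theta) and the saddle-point is \theta = 1 - 1/\bar{x}), the approximation can be compared with the exact Gamma density of the sample mean:

```python
import math

# Saddlepoint approximation to the density of the mean of n Exp(1) variables.
# CGF of Exp(1): kappa(t) = -log(1 - t); solving kappa'(t) = xbar gives
# t = 1 - 1/xbar, with kappa''(t) = xbar**2.
def saddlepoint_exp_mean(xbar, n):
    t = 1.0 - 1.0 / xbar
    kappa = -math.log(1.0 - t)
    return math.sqrt(n / (2 * math.pi * xbar**2)) * math.exp(n * (kappa - t * xbar))

def exact_exp_mean(xbar, n):
    # The mean of n Exp(1) variables is Gamma(n, scale=1/n).
    return n**n * xbar**(n - 1) * math.exp(-n * xbar) / math.gamma(n)

n = 10
for xbar in (0.5, 1.0, 2.0, 3.0):
    print(xbar, saddlepoint_exp_mean(xbar, n), exact_exp_mean(xbar, n))
```

For this Gamma target the unnormalized saddlepoint approximation matches the exact density up to a constant Stirling-series factor, so the ratio is close to 1 uniformly in \bar{x}.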

Rejection sampling

Using the tilted distribution \mathbb{P}_\theta as the proposal, the rejection sampling algorithm prescribes sampling from f_\theta(x) and accepting with probability

\frac{1}{c}\exp(-\theta x + \kappa(\theta)),

where

c = \sup_{x \in X} \frac{d\mathbb{P}}{d\mathbb{P}_\theta}(x).

That is, a uniformly distributed random variable p \sim \text{Unif}(0, 1) is generated, and the sample from f_\theta(x) is accepted if

p \leq \frac{1}{c}\exp(-\theta x + \kappa(\theta)).
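
A minimal sketch of this procedure (our own illustration; the parameter choices are arbitrary) targets \mathrm{Exponential}(\lambda) using the tilted proposal \mathrm{Exponential}(\lambda - \theta) with 0 < \theta < \lambda. Then \kappa(\theta) = \log(\lambda/(\lambda-\theta)), the supremum c = e^{\kappa(\theta)} is attained at x = 0, and the acceptance probability simplifies to e^{-\theta x}:

```python
import numpy as np

rng = np.random.default_rng(1)
lam, theta = 1.0, 0.5          # target Exp(1); tilt parameter 0 < theta < lam

# Tilted proposal: Exp(lam - theta).  Since c = e^{kappa(theta)}, the
# acceptance probability (1/c) e^{-theta x + kappa(theta)} reduces to e^{-theta x}.
n = 200_000
x = rng.exponential(1.0 / (lam - theta), size=n)   # draws from f_theta
u = rng.uniform(size=n)
accepted = x[u <= np.exp(-theta * x)]

print(accepted.mean())     # close to the target mean 1/lam = 1.0
print(len(accepted) / n)   # acceptance rate ~ e^{-kappa(theta)} = 0.5
```

This is purely illustrative (tilting toward a heavier-tailed proposal here is not efficient), but it shows the mechanics: the accepted samples follow the original Exp(1) law.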

Importance sampling

Applying the exponentially tilted distribution as the importance distribution yields the equation

\mathbb{E}[h(X)] = \mathbb{E}_\theta[\ell(X)h(X)],

where

\ell(X) = \frac{d\mathbb{P}}{d\mathbb{P}_\theta}

is the likelihood ratio. So, one samples from f_\theta and multiplies each evaluation of h by the likelihood ratio to obtain an unbiased estimate under the original distribution \mathbb{P}. The variance of the estimator is governed by

\mathrm{Var}_\theta(\ell(X)h(X)) = \mathbb{E}_\theta[(\ell(X)h(X))^2] - (\mathbb{E}[h(X)])^2.

Example

Assume independent and identically distributed \{X_i\} such that \kappa(\theta) < \infty. In order to estimate \mathbb{P}(X_1 + \cdots + X_n > c), we can employ importance sampling by taking

h(X) = \mathbb{I}\left(\sum_{i=1}^n X_i > c\right).

The constant c can be rewritten as na for some other constant a. Then,

\mathbb{P}\left(\sum_{i=1}^n X_i > na\right) = \mathbb{E}_{\theta_a}\left[\exp\left\{-\theta_a \sum_{i=1}^n X_i + n\kappa(\theta_a)\right\} \mathbb{I}\left(\sum_{i=1}^n X_i > na\right)\right],

where \theta_a denotes the value of \theta defined by the saddle-point equation

\kappa'(\theta_a) = a.
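
For standard normal increments, \kappa(\theta) = \theta^2/2 and the saddle-point equation gives \theta_a = a, so the estimator above can be sketched as follows (an illustrative example of ours, not from the source):

```python
import numpy as np

rng = np.random.default_rng(2)
n, a = 10, 1.0
theta_a = a            # for N(0,1), kappa'(theta) = theta, so theta_a = a

m = 100_000
# Sample under the tilted measure: each X_i ~ N(theta_a, 1).
s = rng.normal(theta_a, 1.0, size=(m, n)).sum(axis=1)
# Likelihood ratio exp{-theta_a * S + n * kappa(theta_a)} with kappa(t) = t^2/2.
weights = np.exp(-theta_a * s + n * theta_a**2 / 2.0)
estimate = np.mean(weights * (s > n * a))
print(estimate)   # P(S_n > 10) = P(Z > sqrt(10)), roughly 8e-4
```

Naive simulation would need millions of samples to see this event at all; under the tilt the event \{S_n > na\} has probability about one half, and the likelihood ratio corrects the bias.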

Stochastic processes

Given the tilting of a normal random variable, it is intuitive that the exponential tilting of X_t, a Brownian motion with drift \mu and variance \sigma^2, is a Brownian motion with drift \mu + \theta\sigma^2 and variance \sigma^2. Thus, any Brownian motion with drift under \mathbb{P} can be thought of as a Brownian motion without drift under \mathbb{P}_{\theta^*}. To observe this, consider the process X_t = B_t + \mu t. Then f(X_t) = f_{\theta^*}(X_t)\frac{d\mathbb{P}}{d\mathbb{P}_{\theta^*}} = f(B_t)\exp\{\mu B_T - \tfrac{1}{2}\mu^2 T\}. The likelihood ratio term, \exp\{\mu B_T - \tfrac{1}{2}\mu^2 T\}, is a martingale and commonly denoted M_T. Thus, a Brownian motion with drift process (as well as many other continuous processes adapted to the Brownian filtration) is a \mathbb{P}_{\theta^*}-martingale.[10][11]
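
This reweighting is easy to check by simulation (a sketch of ours; the step count and drift are arbitrary choices): expectations under the drifted Brownian motion at time T can be recovered from driftless paths weighted by M_T = \exp\{\mu B_T - \tfrac{1}{2}\mu^2 T\}.

```python
import numpy as np

rng = np.random.default_rng(3)
mu, T, steps, n_paths = 0.8, 1.0, 200, 100_000
dt = T / steps

# Simulate driftless Brownian paths and reweight by the exponential
# martingale M_T to recover expectations under the drifted law.
dB = rng.normal(0.0, np.sqrt(dt), size=(n_paths, steps))
B_T = dB.sum(axis=1)
M_T = np.exp(mu * B_T - 0.5 * mu**2 * T)

print(M_T.mean())          # ~1: M_T is a mean-one martingale
print((M_T * B_T).mean())  # ~mu*T: the drifted mean E[X_T], recovered by weighting
```
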

Stochastic Differential Equations

The above leads to the alternate representation of the stochastic differential equation dX(t) = \mu(t)\,dt + \sigma(t)\,dB(t): dX_\theta(t) = \mu_\theta(t)\,dt + \sigma(t)\,dB(t), where \mu_\theta(t) = \mu(t) + \theta\sigma(t). Girsanov's theorem gives the likelihood ratio

\frac{d\mathbb{P}}{d\mathbb{P}_\theta} = \exp\left\{-\int_0^T \frac{\mu_\theta(t) - \mu(t)}{\sigma^2(t)}\,dB(t) + \int_0^T \frac{(\mu_\theta(t) - \mu(t))^2}{2\sigma^2(t)}\,dt\right\}.

Therefore, Girsanov's theorem can be used to implement importance sampling for certain SDEs.

Tilting can also be useful for simulating a process X(t) via rejection sampling of the SDE dX(t) = \mu(X(t))\,dt + dB(t). We may focus on the SDE since we know that X(t) can be written as \int_0^t dX(s) + X(0). As previously stated, a Brownian motion with drift can be tilted to a Brownian motion without drift, so we choose \mathbb{P}_{\text{proposal}} = \mathbb{P}_{\theta^*}. The likelihood ratio is

\frac{d\mathbb{P}_{\theta^*}}{d\mathbb{P}}(dX(s) : 0 \leq s \leq t) = \exp\left\{\int_0^t \mu(X(s))\,dX(s) - \int_0^t \frac{\mu(X(s))^2}{2}\,ds\right\}.

This likelihood ratio is denoted M(t). To ensure it is a true likelihood ratio, it must be shown that \mathbb{E}[M(t)] = 1. Assuming this condition holds, it can be shown that f_{X(t)}(y) = f_{X(t)}^{\theta^*}(y)\,\mathbb{E}_{\theta^*}[M(t) \mid X(t) = y]. So, rejection sampling prescribes that one samples from a standard Brownian motion and accepts with probability

\frac{f_{X(t)}(y)}{f_{X(t)}^{\theta^*}(y)}\frac{1}{c} = \frac{1}{c}\mathbb{E}_{\theta^*}[M(t) \mid X(t) = y].

Choice of tilting parameter

Siegmund's algorithm

Assume i.i.d. X_i with a light-tailed distribution and \mathbb{E}[X] < 0. In order to estimate \psi(c) = \mathbb{P}(\tau(c) < \infty), where \tau(c) = \inf\{t : \sum_{i=1}^t X_i > c\}, when c is large and hence \psi(c) small, the algorithm uses exponential tilting to derive the importance distribution. The algorithm is used in many settings, such as sequential tests,[12] G/G/1 queue waiting times, and \psi(c) as the probability of ultimate ruin in ruin theory. In this context, it is logical to ensure that \mathbb{P}_\theta(\tau(c) < \infty) = 1. The criterion \theta > \theta_0, where \theta_0 is such that \kappa'(\theta_0) = 0, achieves this. Siegmund's algorithm uses \theta = \theta^*, if it exists, where \theta^* is the nonzero solution of \kappa(\theta^*) = 0. It has been shown that \theta^* is the only tilting parameter producing bounded relative error:

\limsup_{c \to \infty} \frac{\mathrm{Var}\,\mathbb{I}_{A(c)}}{\mathbb{P}(A(c))^2} < \infty.[13]
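
A sketch of Siegmund's algorithm for Gaussian increments (our own illustration, not from the source): for X_i \sim N(\mu, 1) with \mu < 0, \kappa(\theta) = \mu\theta + \theta^2/2 has the nonzero root \theta^* = -2\mu, and since \kappa(\theta^*) = 0 the estimator of \psi(c) collapses to \exp(-\theta^* S_\tau).

```python
import numpy as np

rng = np.random.default_rng(4)
mu, c = -0.5, 5.0        # increments X_i ~ N(mu, 1) with negative drift
theta_star = -2.0 * mu   # nonzero root of kappa(theta) = mu*theta + theta^2/2

# Under the theta*-tilted measure the increments are N(mu + theta*, 1) = N(0.5, 1),
# so the walk drifts upward and crosses c with probability one.  The importance
# sampling estimator of psi(c) = P(tau(c) < inf) is exp(-theta* * S_tau).
def one_run():
    s = 0.0
    while s <= c:
        s += rng.normal(mu + theta_star, 1.0)
    return np.exp(-theta_star * s)

estimates = [one_run() for _ in range(20_000)]
estimate = np.mean(estimates)
print(estimate)   # psi(c), bounded above by exp(-theta_star * c)
```

Each individual estimate lies in (0, e^{-\theta^* c}], which is the source of the bounded-relative-error property: the estimator never overshoots the target by more than a constant factor.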

Black-Box algorithms

We can only see the input and output of a black box, without knowing its structure; the algorithm uses only minimal information on that structure. When we generate random numbers, the output may not be within the same common parametric class, such as the normal or exponential distributions. An automated way may be used to perform ECM. Let X_1, X_2, \dots be i.i.d. random variables with distribution G; for simplicity we assume X \geq 0. Define \mathfrak{F}_n = \sigma(X_1, \dots, X_n, U_1, \dots, U_n), where U_1, U_2, \dots are independent Uniform(0, 1) variables. A randomized stopping time for X_1, X_2, \dots is then a stopping time with respect to the filtration \{\mathfrak{F}_n\}. Let further \mathfrak{G} be a class of distributions G on [0, \infty) with k_G = \int_0^\infty e^{\theta x} G(dx) < \infty, and define G_\theta by \frac{dG_\theta}{dG}(x) = \frac{e^{\theta x}}{k_G}. We define a black-box algorithm for ECM, for the given \theta and the given class \mathfrak{G} of distributions, as a pair of a randomized stopping time \tau and an \mathfrak{F}_\tau-measurable random variable Z such that Z is distributed according to G_\theta for any G \in \mathfrak{G}. Formally, we write this as \mathbb{P}_G(Z < x) = G_\theta(x) for all x. In other words, the rules of the game are that the algorithm may use simulated values from G and additional uniforms to produce a random variable from G_\theta.[14]

References

  1. ^ Gerber, H.U. & Shiu, E.S.W. (1994). "Option pricing by Esscher transforms". Transactions of the Society of Actuaries. 46: 99–191.
  2. ^ Cruz, Marcelo (2015). Fundamental Aspects of Operational Risk and Insurance Analytics. Wiley. pp. 784–796. ISBN 978-1-118-11839-9.
  3. ^ Butler, Ronald (2007). Saddlepoint Approximations with Applications. Cambridge University Press. p. 156. ISBN 9780521872508.
  4. ^ Siegmund, D. (1976). "Importance Sampling in the Monte Carlo Study of Sequential Tests". The Annals of Statistics. 4 (4): 673–684. doi:10.1214/aos/1176343541.
  5. ^ Asmussen, Søren & Glynn, Peter (2007). Stochastic Simulation. Springer. p. 130. ISBN 978-0-387-30679-7.
  6. ^ Fuh, Cheng-Der; Teng, Huei-Wen; Wang, Ren-Her (2013). "Efficient Importance Sampling for Rare Event Simulation with Applications".
  7. ^ Asmussen, Søren & Glynn, Peter (2007). Stochastic Simulation. Springer. pp. 164–167. ISBN 978-0-387-30679-7.
  8. ^ Butler, Ronald (2007). Saddlepoint Approximations with Applications. Cambridge University Press. pp. 156–157. ISBN 9780521872508.
  9. ^ Seeber, G.U.H. (1992). Advances in GLIM and Statistical Modelling. Springer. pp. 195–200. ISBN 978-0-387-97873-4.
  10. ^ Asmussen, Søren & Glynn, Peter (2007). Stochastic Simulation. Springer. p. 407. ISBN 978-0-387-30679-7.
  11. ^ Steele, J. Michael (2001). Stochastic Calculus and Financial Applications. Springer. pp. 213–229. ISBN 978-1-4419-2862-7.
  12. ^ Siegmund, D. (1985). Sequential Analysis. Springer-Verlag.
  13. ^ Asmussen, Søren & Glynn, Peter (2007). Stochastic Simulation. Springer. pp. 164–167. ISBN 978-0-387-30679-7.
  14. ^ Asmussen, Søren & Glynn, Peter (2007). Stochastic Simulation. Springer. pp. 416–420. ISBN 978-0-387-30679-7.