Hanson 2007 Book – Background Info on Jump Diffusions

Stochastic Jump Diffusions (Ch1-Ch3)

Overall I find the writing of this book to be sloppy and imprecise at times, and some crucial concepts could have been explained more carefully. This can be quite annoying if you are teaching yourself with it. While it is certainly much simpler than Oksendal-Sulem and avoids semimartingales altogether, I really cannot recommend teaching yourself with this book, so I decided to stop using it after reading the first 3 chapters. My summaries are below.

Ch1 Stochastic Jump and Diffusion Processes

Markov Process
A stochastic process $X(t)$ is a Markov process if its conditional probability satisfies: for all $x$ in the state space and all $\Delta t > 0$, we have $\Pr[X(t+\Delta t) \le x \mid X(\tau),\ \tau \le t] = \Pr[X(t+\Delta t) \le x \mid X(t)]$.

Wiener Process

the standard Wiener process $W(t)$ has:

  1. continuous paths: $t \mapsto W(t)$ is continuous

  2. independent increments: $W(t_2) - W(t_1)$ and $W(t_4) - W(t_3)$ are mutually independent for all non-overlapping time intervals $[t_1, t_2]$ and $[t_3, t_4]$

  3. $W(t)$ is a stationary process: the distro of $W(t + \Delta t) - W(t)$ is independent of $t$. Note that it is really difference stationary; one should say Brownian motion has “stationary increments” to be precise.

  4. $W(t)$ is Markov

  5. $W(t) \sim N(0, t)$, so the density of $W(t)$ is $\phi(w; t) = e^{-w^2/(2t)} / \sqrt{2\pi t}$

  6. with prob 1: $W(0) = 0$

So, if we think of Brownian increments over equal time steps, define $\Delta W_i = W(t_{i+1}) - W(t_i)$, where $t_i = i\,\Delta t$. These are iid with normal distro: $\Delta W_i \sim N(0, \Delta t)$.

The book then refers to $dW(t) = W(t + dt) - W(t)$ as the “differential process”, and when $dt > 0$, it has the same distro as $\Delta W$, which is normal with mean 0 and variance $dt$.
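As a quick sanity check (mine, not the book's), the properties above are easy to see numerically: increments over equal steps are iid $N(0, \Delta t)$, so cumulative sums give sample paths, and the terminal value $W(t)$ should look like $N(0, t)$. All the concrete numbers (`n`, `n_paths`, the seed) are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
t, n, n_paths = 1.0, 1000, 5000
dt = t / n
# iid Brownian increments dW_i ~ N(0, dt); cumulative sums give sample paths
dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n))
paths = dW.cumsum(axis=1)
terminal = paths[:, -1]   # samples of W(t)
# W(t) ~ N(0, t): sample mean should be near 0, sample variance near t = 1
```

With 5000 paths the sample mean and variance of `terminal` come out close to 0 and 1, consistent with property 5.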

Non-differentiability of sample paths: the difference quotient $\Delta W / \Delta t \sim N(0, 1/\Delta t)$ has variance blowing up as $\Delta t \to 0$, so sample paths are (almost surely) nowhere differentiable.

Poisson Processes

The standard Poisson process $P(t)$ with rate $\lambda$ has:

  1. unit jumps: if a jump occurs at $t_j$, then $P(t_j^+) - P(t_j^-) = 1$

  2. $P(t)$ is right-continuous

  3. independent increments: $P(t_2) - P(t_1)$ and $P(t_4) - P(t_3)$ are mutually independent for all non-overlapping time intervals $[t_1, t_2]$ and $[t_3, t_4]$

  4. $P(t)$ is a stationary process: the distro of $P(t + \Delta t) - P(t)$ is independent of $t$. Again, the terminology should be “stationary increments”.

  5. $P(t)$ is Markov

  6. $P(t)$ is Poisson distributed with mean and variance $\lambda t$: $p_k(\lambda t) = \Pr[P(t) = k] = e^{-\lambda t} (\lambda t)^k / k!$. Here $p_k$ denotes the probability of the Poisson RV being equal to $k$, not some parameter.

  7. with probability 1: $P(0) = 0$

  8. $P(t) - \lambda t$ is a martingale

Thus, as for BM, the increments $\Delta P_i = P(t_{i+1}) - P(t_i)$ are iid, and $\Delta P_i$ has the same discrete Poisson distribution as $P(\Delta t)$: $\Pr[\Delta P_i = k] = e^{-\lambda \Delta t} (\lambda \Delta t)^k / k!$.

As with BM, define $dP(t) = P(t + dt) - P(t)$, and this has the same discrete distro as $P(dt)$, i.e., $\Pr[dP(t) = k] = e^{-\lambda\,dt} (\lambda\,dt)^k / k!$.

If we are to simulate $P(t)$, we usually simulate the times between jumps, as we can show that the inter-jump time $T_{j+1} - T_j$ has an exponential distro: $\Pr[T_{j+1} - T_j > \Delta t] = e^{-\lambda \Delta t}$.
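This simulation recipe is a one-liner in practice: draw iid $\text{Exp}(\lambda)$ gaps, take cumulative sums to get the jump times, and count how many land in $[0, t]$. A minimal sketch (my parameter choices are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
lam, t_max = 2.0, 1000.0
# inter-jump times are iid Exponential(lam); cumulative sums give jump times T_j
gaps = rng.exponential(1.0 / lam, size=int(3 * lam * t_max))  # oversample so we pass t_max
jump_times = np.cumsum(gaps)
jump_times = jump_times[jump_times <= t_max]
# P(t_max) = number of jumps in [0, t_max]; E[P(t_max)] = lam * t_max
count = len(jump_times)
```

The gap sample mean should be near $1/\lambda = 0.5$, and `count` near $\lambda\,t_{\max} = 2000$.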

Poisson 0-1 Jump law
As $dt \to 0$, $dP(t) = 1$ with probability $\lambda\,dt$, otherwise no jumps. Other possibilities have probabilities that vanish quicker than these two. To be “precise”, the book states it as:

  1. $\Pr[dP(t) = 0] = 1 - \lambda\,dt + o(dt)$ as $dt \to 0$
  2. $\Pr[dP(t) = 1] = \lambda\,dt + o(dt)$ as $dt \to 0$
  3. $\Pr[dP(t) \ge 2] = o(dt)$ as $dt \to 0$

Then using this 0-1 jump law, we formally write $(dP)^2(t) = dP(t)$.
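The three statements of the 0-1 law are easy to check numerically from the Poisson pmf: with $m = \lambda\,dt$, the exact probabilities are $e^{-m}$, $m e^{-m}$, and the remainder, and the errors against $1 - m$, $m$, $0$ are all $O(m^2)$. A small sketch (my choice of $\lambda$ and $dt$):

```python
import math

lam, dt = 3.0, 1e-4
m = lam * dt                  # Poisson mean over [t, t + dt]
p0 = math.exp(-m)             # Pr[dP = 0] = 1 - lam*dt + o(dt)
p1 = m * math.exp(-m)         # Pr[dP = 1] = lam*dt + o(dt)
p_ge2 = 1.0 - p0 - p1         # Pr[dP >= 2] = o(dt), roughly m^2 / 2
```

All three deviations are below $m^2$, i.e., second order in $dt$.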

temporal/non-stationary Poisson process
time-dependent jump rate: $\lambda = \lambda(t)$.

given the rate process $\lambda(t)$, define $\Lambda(t) = \int_0^t \lambda(s)\,ds$, or in differential form, $d\Lambda(t) = \lambda(t)\,dt$.

The **temporal Poisson process** has the following analogous results:

  1. up to order $dt$, with prob $\lambda(t)\,dt$ we have $dP(t) = 1$, otherwise $dP(t) = 0$

  2. the inter-jump time is again exponentially distributed, in terms of the integrated rate: $\Pr[T_{j+1} - T_j > \Delta t \mid T_j = t] = e^{-(\Lambda(t + \Delta t) - \Lambda(t))}$
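One standard way to simulate a temporal Poisson process (not from the book; this is the Lewis–Shedler thinning method) is to generate candidate jumps from a homogeneous process at a dominating rate $\lambda_{\max} \ge \lambda(t)$ and accept each candidate at time $t$ with probability $\lambda(t)/\lambda_{\max}$. The rate function below is a made-up example:

```python
import numpy as np

def thinning(rate, rate_max, t_max, rng):
    """Jump times of a non-homogeneous Poisson process with rate(t) <= rate_max
    on [0, t_max], by thinning a homogeneous rate_max process."""
    times, t = [], 0.0
    while True:
        t += rng.exponential(1.0 / rate_max)    # candidate jump from the fast clock
        if t > t_max:
            return np.array(times)
        if rng.uniform() < rate(t) / rate_max:  # keep with prob rate(t)/rate_max
            times.append(t)

rng = np.random.default_rng(2)
lam = lambda t: 1.0 + np.sin(t) ** 2            # hypothetical rate, bounded by 2
jumps = thinning(lam, 2.0, 1000.0, rng)
# E[P(1000)] = Lambda(1000) = ∫ (1 + sin^2 s) ds ≈ 1500
```

The realized jump count should be close to $\Lambda(t_{\max}) \approx 1500$ here.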

Ch2 Stochastic Integration for Diffusions

The jump diffusion SDE with initial condition $X(t_0) = x_0$ has the form:

$$dX(t) = f(X(t), t)\,dt + g(X(t), t)\,dW(t) + h(X(t), t)\,dP(t).$$

This is a symbolic equation; it has no meaning until we specify the methods of integration for the 3 types of integrals: $\int f\,dt$, $\int g\,dW$, and $\int h\,dP$.

Riemann Integration

  1. use $f_i$ to denote $f(t_i)$ (and similarly for other functions evaluated on the grid)

  2. the partition of the interval $[t_0, t]$ is indexed by $i = 0, 1, \dots, n$, with $t_0 < t_1 < \dots < t_n = t$. A total of $n$ subintervals. Denote $\Delta t_i = t_{i+1} - t_i$ and $\max_i \Delta t_i$ as the mesh size

  3. on each subinterval, take an “approximation point” $\tau_i \in [t_i, t_{i+1}]$, where $i = 0, \dots, n-1$.

  4. Define (constructively) $\int_{t_0}^{t} f(s)\,ds = \lim_{n \to \infty} \sum_{i=0}^{n-1} f(\tau_i)\,\Delta t_i$, where the mesh size goes to 0 as $n \to \infty$

  5. Because BM is continuous with prob 1, the integral of $f(W(t), t)$ wrt $t$ can be defined via Riemann: $\int_{t_0}^{t} f(W(s), s)\,ds = \lim_{n \to \infty} \sum_{i=0}^{n-1} f(W(t_i), t_i)\,\Delta t_i$. Here we chose $\tau_i = t_i$ but any $\tau_i \in [t_i, t_{i+1}]$ is fine.

  6. Even for $X(t)$ as solutions to jump-diffusion SDEs, we can define $\int_{t_0}^{t} f(X(s), s)\,ds$ this way.

  7. The Stieltjes integral refers to a deterministic integration wrt the position on the path of $g$. Define it (constructively) as: $\int_{t_0}^{t} f(s)\,dg(s) = \lim_{n \to \infty} \sum_{i=0}^{n-1} f(\tau_i)\,[g(t_{i+1}) - g(t_i)]$. This makes sense if $f$ is continuous and $g$ is BV.
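The Stieltjes construction in item 7 is easy to try on a deterministic example where the answer is known in closed form: with $f(s) = s$ and $g(s) = s^2$ on $[0, 1]$, we have $\int_0^1 s\,d(s^2) = \int_0^1 2s^2\,ds = 2/3$ (my example, not the book's):

```python
import numpy as np

f = lambda s: s
g = lambda s: s ** 2        # continuous and of bounded variation on [0, 1]
n = 100000
t = np.linspace(0.0, 1.0, n + 1)
# left-endpoint Riemann-Stieltjes sum: sum_i f(t_i) * [g(t_{i+1}) - g(t_i)]
stieltjes = np.sum(f(t[:-1]) * np.diff(g(t)))
# exact value: ∫_0^1 s d(s^2) = 2/3
```

Because the integrator here is smooth, the choice of evaluation point does not matter in the limit, unlike the stochastic case that follows.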

Ito integration wrt $W(t)$

Start with trying to define $\int_0^t W(s)\,dW(s)$. You would expect it to mimic the deterministic case, where $\int_0^t w\,dw = w^2(t)/2$. But this turns out not to be the case.

So we go back to a discrete approximation first, and we use Ito’s choice of approximation point ($\tau_i = t_i$, the left endpoint) to preserve independent increments:

$$S_n = \sum_{i=0}^{n-1} W(t_i)\,\Delta W_i.$$

After some algebra, we can show that

$$S_n = \frac{1}{2} W^2(t) - \frac{1}{2} \sum_{i=0}^{n-1} (\Delta W_i)^2.$$

Then the book calculates the expectation of this expression, which is $\frac{t}{2} - \frac{t}{2} = 0$, and claims that this suggests a reasonable form of the stochastic integral to be $\int_0^t W(s)\,dW(s) = \frac{1}{2}(W^2(t) - t)$. I don’t see why it seems natural, but certainly the sum of the squares converges to $t$ in the L2 sense, and in the end the conjecture is correct.

convergence in mean square: The RV $X_n$ converges in mean square to RV $X$ if $\lim_{n \to \infty} E[(X_n - X)^2] = 0$, and we write it as $X_n \overset{ms}{\to} X$, or we use the notation $\overset{ims}{=}$, which stands for “Ito mean square equals”.

It would seem here that whenever this notation appears, to its right there should be a discrete approximation, for example, $\int_0^t W(s)\,dW(s) \overset{ims}{=} \lim_n \sum_{i=0}^{n-1} W(t_i)\,\Delta W_i$. However, later on this is used quite generously.

The book then proves the ms limit of the sum of squared Brownian increments is $t$, i.e., $\sum_{i=0}^{n-1} (\Delta W_i)^2 \overset{ms}{\to} t$, and using that, we can prove the ms limit of $S_n$ is indeed $\frac{1}{2}(W^2(t) - t)$.

This gives a rigorous meaning to the expression $\int_0^t W(s)\,dW(s) \overset{ims}{=} \frac{1}{2}(W^2(t) - t)$; now we move on to define the Ito integral for more general integrands.
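The forward-sum approximation is easy to watch converge numerically: on a fine grid, $\sum_i W(t_i)\,\Delta W_i$ should sit very close to $\frac{1}{2}(W^2(t) - t)$ for a single simulated path (grid size and seed are my arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(3)
t, n = 1.0, 200000
dW = rng.normal(0.0, np.sqrt(t / n), size=n)
W = np.concatenate([[0.0], np.cumsum(dW)])
# forward (Ito) sum: evaluate the integrand at the left endpoint t_i
ito_sum = np.sum(W[:-1] * dW)
closed_form = 0.5 * (W[-1] ** 2 - t)
```

The gap between `ito_sum` and `closed_form` is exactly $\frac{1}{2}(t - \sum_i (\Delta W_i)^2)$, which shrinks like $1/\sqrt{n}$.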

Ito mean square (ims) limit stochastic integral

For integrals of the form $\int_{t_0}^{t} g(W(s), s)\,dW(s)$, we first define its forward integration approximation as:

$$S_n = \sum_{i=0}^{n-1} g(W(t_i), t_i)\,\Delta W_i,$$

then, denote the ms limit of $S_n$ as $n \to \infty$ (with mesh size going to 0) by $S$.

If this limit exists, we define $\int_{t_0}^{t} g(W(s), s)\,dW(s)$ to be this limit $S$. Note that we would need to require $g$ to be bounded in the sense that $\int_{t_0}^{t} E[g^2(W(s), s)]\,ds$ is finite. Note also that in principle the integral can have other evaluations, depending on our choice of “$\tau_i$”.

The book then summarizes the results so far, by claiming:

  1. $\int_0^t W(s)\,dW(s) \overset{ims}{=} \frac{1}{2}(W^2(t) - t)$ (recall we said this notation will be used quite generously later. This is one instance)

The first expression is meant to say:

$$\lim_{n \to \infty} \sum_{i=0}^{n-1} W(t_i)\,\Delta W_i \overset{ms}{=} \frac{1}{2}(W^2(t) - t).$$

That is, sometimes we see an Ito integral to the left of $\overset{ims}{=}$, and to the right we have some term not involving $\lim$. The discrete approximation is then implicit – we approximate the Ito integral as the ms limit of a discrete forward approximation, which is then shown to equal the RHS. It’s hard for me to see the necessity of putting “ims” on top of the equality, but I guess the book emphasizes that the symbolic stochastic integral doesn’t have to be calculated in the Ito sense, using the forward difference. So we want to emphasize we are taking the Ito integral by writing “ims” on top.

It is indeed remarkable that you can take an arbitrary Brownian sample path, partition the time interval into fine subintervals, and then calculate the sum of the squares of the Brownian increments up to different terminal times. It turns out this sum, as a function of the terminal time $s$, resembles the linear function $s$. Note it is just one arbitrary sample path; we are not empirically verifying the ms convergence.
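This experiment takes a few lines: simulate one path's increments, take the running sum of their squares, and compare it against the line $s \mapsto s$ (grid size and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)
t, n = 1.0, 100000
dW = rng.normal(0.0, np.sqrt(t / n), size=n)
# running sum of squared increments, as a function of terminal time
qv = np.cumsum(dW ** 2)
grid = np.linspace(t / n, t, n)
max_dev = np.abs(qv - grid).max()   # worst distance from the line s -> s
```

Even for a single path, `max_dev` is tiny at this mesh, which is the "remarkable" picture described above.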

Compare this to the continuously differentiable case: the corresponding quadratic differential $(dx)^2(t)$ would be negligible relative to terms of order $dt$. If $x$ is C1, then $\Delta x_i \approx x'(t_i)\,\Delta t_i$, so

$$\sum_{i=0}^{n-1} (\Delta x_i)^2 \approx \sum_{i=0}^{n-1} (x'(t_i))^2\,\Delta t_i^2,$$

and after some algebra we can show that this converges to 0 as the mesh size goes to 0.

I find mathstackexchange has some nice discussion on $(dW)^2(t) = dt$. I’ll summarize it here.

  1. one can show $\sum_{i=0}^{n-1} (\Delta W_i)^2$ converges to $t$ in ms as the mesh size goes to 0

  2. this is based on the fact that $W^2(t) - t$ is a martingale

  3. It seems natural to DEFINE $\int_0^t (dW)^2(s) := \lim_{n} \sum_{i=0}^{n-1} (\Delta W_i)^2$

  4. thus, we have $\int_0^t (dW)^2(s) = t = \int_0^t ds$, ergo, the heuristic rule $(dW)^2(t) = dt$

The second expression of the book gives the rule in the form:

$(dW)^2(t) \overset{dt}{=} dt$. It reads “equal in $dt$-precision mean square”. I don’t see how this can be useful though.

fundamental theorem of Ito calculus

  1. if $g(t)$ is continuous, then $d\left(\int_{t_0}^{t} g(s)\,dW(s)\right) \overset{ims}{=} g(t)\,dW(t)$

  2. if $g(t)$ is C1, then $\int_{t_0}^{t} dg(s) \overset{ims}{=} g(t) - g(t_0)$

for exact derivatives, Ito stochastic integration and ordinary Riemann integration agree.

Now I have no idea what this is supposed to mean. First of all, the RHS of eq1 is not a well-defined random variable. Also, some books use “fundamental theorem of Ito calculus” to refer to Ito’s lemma.

But as we mentioned before, $\overset{ims}{=}$ denotes a discrete approximation, so for eq1 the discrete approx is $\int_{t_0}^{t} g(s)\,dW(s) \overset{ims}{=} \lim_n \sum_{i=0}^{n-1} g(t_i)\,\Delta W_i$.

Then, $d\left(\int_{t_0}^{t} g(s)\,dW(s)\right) = \int_{t_0}^{t+dt} g(s)\,dW(s) - \int_{t_0}^{t} g(s)\,dW(s) = \int_{t}^{t+dt} g(s)\,dW(s) \approx g(t)\,dW(t)$, where we used continuity of $g$.

For eq2, the formal derivation is then: $\int_{t_0}^{t} dg(s) \overset{ims}{=} \lim_n \sum_{i=0}^{n-1} [g(t_{i+1}) - g(t_i)]$.

Then we see that the intermediate terms cancel (the sum telescopes) so that $\sum_{i=0}^{n-1} [g(t_{i+1}) - g(t_i)] = g(t) - g(t_0)$.

In general, some assumptions are needed so that ms convergence works. The book calls it “i-PWCA”, aka Piece-Wise-Constant Approximations in the Ito Sense. This is consistent with the more standard/modern treatment of the Ito integral, where we use a sequence of simple functions to approximate the integrand in the L2 sense.

Then, it is shown that if we assume ms integrability, $\int_{t_0}^{t} E[g^2(W(s), s)]\,ds < \infty$, we have $E\left[\int_{t_0}^{t} g(W(s), s)\,dW(s)\right] = 0$. While it is kind of intuitive when we write the discrete approximation using forward differences, the rigorous proof is not trivial at all.

Ito isometry

$$E\left[\left(\int_{t_0}^{t} g(W(s), s)\,dW(s)\right)^2\right] = \int_{t_0}^{t} E\left[g^2(W(s), s)\right]\,ds$$

Finally, this section establishes some other properties of the Ito integral, such as linearity, additivity, and continuity of sample paths, and demonstrates heuristic rules such as $dW(t)\,dt = 0$.

Note that now we know what such a rule means: $dW(t)\,dt = 0$ is shorthand for $\int_{t_0}^{t} dW(s)\,ds \overset{ims}{=} 0$, and it means $\lim_n \sum_{i=0}^{n-1} \Delta W_i\,\Delta t_i \overset{ms}{=} 0$.

Stratonovich Integral: Recall Ito approximates the integrand using the left endpoint $\tau_i = t_i$. Now we do the discrete approximation for general $\tau_i = t_i + \lambda\,\Delta t_i$, $\lambda \in [0, 1]$:

$$S_n^{(\lambda)} = \sum_{i=0}^{n-1} W(\tau_i)\,\Delta W_i.$$

We would then manipulate this sum to get familiar terms like $\sum_i W(t_i)\,\Delta W_i$, and new terms like $\sum_i [W(\tau_i) - W(t_i)]\,\Delta W_i$, which can be shown to converge in ms to $\lambda t$. In the end, we have:

$$\int_0^t W(s)\,dW^{(\lambda)}(s) \overset{ms}{=} \frac{1}{2} W^2(t) + \left(\lambda - \frac{1}{2}\right) t.$$

Stratonovich sets $\lambda = 1/2$, giving $\frac{1}{2} W^2(t)$ as in ordinary calculus. Note that when $\lambda \ne 0$, the expectation of the integral in general is not 0.
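The $\left(\lambda - \frac{1}{2}\right) t$ shift is visible numerically. As a sketch (mine, not the book's), I use the common discrete stand-in for the midpoint rule that averages the endpoint *values* $\frac{1}{2}(W_i + W_{i+1})$; this sum telescopes exactly to $\frac{1}{2}W^2(t)$, and its gap from the Ito sum is $\frac{1}{2}\sum_i (\Delta W_i)^2 \to t/2$:

```python
import numpy as np

rng = np.random.default_rng(5)
t, n = 1.0, 200000
dW = rng.normal(0.0, np.sqrt(t / n), size=n)
W = np.concatenate([[0.0], np.cumsum(dW)])
ito = np.sum(W[:-1] * dW)                    # left-endpoint evaluation
strat = np.sum(0.5 * (W[:-1] + W[1:]) * dW)  # average-of-endpoints evaluation
# strat telescopes to W(t)^2 / 2 exactly; strat - ito -> t/2 = 0.5
```

So the two conventions differ by a deterministic drift, not just simulation noise.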

Ch3 Stochastic Integration for Jumps

A more modern/standard treatment of this topic is here. The author is Nicolas Privault and he has extensive notes online about stochastic analysis and math fin.

Definition: $\int_{t_0}^{t} h(s)\,dP(s) \overset{ims}{=} \lim_n \sum_{i=0}^{n-1} h(t_i)\,\Delta P_i$ (we require $\int_{t_0}^{t} E[h^2(s)]\,ds$ to be finite, and we need $h$ to satisfy the “i-PWCA” assumption, i.e., $h$ can be well approximated by simple functions)

Following the same “proof”, we can show the “fundamental thm of Poisson jump calculus”:

  1. if $h(t)$ is continuous, then $d\left(\int_{t_0}^{t} h(s)\,dP(s)\right) \overset{ims}{=} h(t)\,dP(t)$

  2. if $h(t)$ is C1, then $\int_{t_0}^{t} dh(s) \overset{ims}{=} h(t) - h(t_0)$

Again, I don’t know exactly what eq1 means. Its proof says:

1.1 $d\left(\int_{t_0}^{t} h(s)\,dP(s)\right) = \int_{t_0}^{t+dt} h(s)\,dP(s) - \int_{t_0}^{t} h(s)\,dP(s)$

1.2 $= \int_{t}^{t+dt} h(s)\,dP(s)$

1.3 $\approx h(t)\,dP(t)$

**Now that I think of it, perhaps the $h(t)$ in eq1 should be replaced with $h(t^-)$.**

Now, the book moves on to prove a special case:

$$\int_{0}^{t} P(s)\,dP(s) \overset{ims}{=} \frac{1}{2}\left[P^2(t) - P(t)\right].$$

It seems like a heuristic argument based on “dt-calculus” is provided first, then the ms limit is rigorously proven.

Then this is generalized to:

$$\int_{0}^{t} h(P(s))\,dP(s) \overset{ims}{=} \sum_{k=0}^{P(t)-1} h(k).$$

Note that if $P(t) = 2$, then the first integral evaluates to $1$. This is because we are using the forward approximation: at the time when the Poisson process jumps from 0 to 1, the contribution to the integral is $0 \cdot 1 = 0$, and when the Poisson process jumps from 1 to 2 (at some time before $t$), it contributes another term $1 \cdot 1 = 1$, and that’s it.

Also note that Nicolas Privault seems to disregard this pre-jump subtlety and just writes the integral against $dP$ directly. This seems to be the standard definition… elsewhere, for a deterministic function $h$, people write $\int_0^t h(s)\,dP(s) = \sum_{k=1}^{P(t)} h(T_k)$, where the $T_k$ are the jump times.
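The "forward sum picks up the pre-jump value" story can be checked directly: build $P(t)$ on a fine grid from simulated jump times, form the forward sum $\sum_i P(t_i)\,\Delta P_i$, and compare against $\frac{1}{2}[P^2(t) - P(t)]$. A sketch under my own parameter choices (with a grid fine enough that jumps land in distinct cells, the match is exact):

```python
import numpy as np

rng = np.random.default_rng(6)
lam, t_max, n = 5.0, 10.0, 200000
gaps = rng.exponential(1.0 / lam, size=200)        # oversampled inter-jump times
jump_times = np.cumsum(gaps)
jump_times = jump_times[jump_times <= t_max]
N = len(jump_times)                                # P(t_max)
grid = np.linspace(0.0, t_max, n + 1)
P = np.searchsorted(jump_times, grid, side="right")  # P(t) = # of jumps <= t
dP = np.diff(P)
# forward sum: each k-th jump contributes its pre-jump value k - 1
forward_sum = np.sum(P[:-1] * dP)
closed_form = 0.5 * (N ** 2 - N)                   # = sum_{k=1}^{N} (k - 1)
```

This is just the $P(t) = 2$ counting argument above, scaled up to $N$ jumps.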

Finally, we have the most general Poisson integral formula:

$$\int_{t_0}^{t} h(X(s), s)\,dP(s) \overset{ims}{=} \sum_{k=1}^{P(t)} h(X(T_k^-), T_k^-),$$

unfortunately I don’t know what $T_k^-$ is supposed to mean, and I don’t know why we have $T_k^-$ instead of $T_k$ as before. The book on p73 mentions that it is the “pre-jump time”, as if it is an actual time instant. I thought $X(T_k^-)$ might mean the left limit of $X$ at $T_k$, but the book also uses expressions like $T_k^-$ on its own, as if it really is a particular constant.

update: I was reading Seydel’s “Tools for Computational Finance” when it occurred to me that we do need to distinguish between pre- and post-jump values, since it is awkward to say that the jump size is 2 times the current state, for example, as the current state already incorporates the jump. So often, we say the jump size is 2 times the pre-jump value of the current state.

Specifically, on p61 of Seydel, the jump size is defined as $\Delta X(\tau) = X(\tau^+) - X(\tau^-)$, where:

  1. $\tau^+$ denotes the infinitesimal instant immediately after the jump

  2. $\tau^-$ denotes the infinitesimal instant immediately before the jump

Although I still feel like we could have just written $X(\tau^+)$ as $X(\tau)$, since the paths are right-continuous.

Then some heuristic rules are derived: $(dP)^2(t) = dP(t)$, $dP(t)\,dW(t) = 0$, and $dP(t)\,dt = 0$.

And finally, the isometry is proved. Letting $d\hat{P}(t) = dP(t) - \lambda\,dt$ denote the compensated (zero-mean) jump differential, we have:

$$E\left[\left(\int_{t_0}^{t} h(s)\,d\hat{P}(s)\right)^2\right] = \lambda \int_{t_0}^{t} E\left[h^2(s)\right]\,ds.$$
