Multivariable Itô Calculus
As is the case for ordinary calculus, there exists a multivariable version of Itô calculus involving more than one Brownian motion. It is relevant for modelling situations in which there are several independent sources of uncertainty, for example stochastic volatility, credit risk with stochastic interest and hazard rates, or multi-factor yield curve modelling.

The extension of the one-variable Itô calculus to several Brownian motions does not involve any really new ideas, once we have made clear what exactly is meant by a multi-dimensional Brownian motion. In fact, multi-dimensionality could conceivably be introduced in more than one way: either by looking at several stochastic processes, all parametrised by the same single time parameter $t$, which is what will be done here, or by replacing the single $t$ by a kind of 'multi-dimensional time' $(t_1, \ldots, t_n)$, which is what we will not do. The latter idea leads to the notion of Brownian sheets, which falls beyond the scope of this book (and which, at least to our knowledge, have to date not yet found applications in finance).
n-dimensional Brownian motion
We begin by defining multi-dimensional Brownian motion. A standard Brownian motion in $\mathbb{R}^n$, or a standard $n$-dimensional Brownian motion, is a stochastic process $(Z_t)_{t \geq 0}$ whose value at time $t$ is simply a vector of $n$ independent Brownian motions at $t$:
$$Z_t = (Z_{1,t}, \ldots, Z_{n,t}). \tag{6.1}$$
That is, each $Z_{j,t}$ is (the value at time $t$ of) a one-dimensional Brownian motion, and the different components $Z_{i,t}$, $Z_{j,t'}$ ($i \neq j$) are independent for all (possibly different) times $t, t' \geq 0$. We use $Z$ instead of $W$, since we want to reserve the latter for the more general case of correlated Brownian motion, which we introduce next. Let $\rho = (\rho_{ij})_{1 \leq i,j \leq n}$ be a (constant) positive symmetric matrix¹ satisfying
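As a quick illustration of definition (6.1), the following sketch simulates a standard $n$-dimensional Brownian motion on a time grid by drawing independent $N(0, \Delta t)$ increments for each component (the function name and parameters are illustrative, not from the text):

```python
import numpy as np

def standard_bm_paths(n, n_steps, T, rng=None):
    """Simulate a standard n-dimensional Brownian motion Z on [0, T].

    Returns an array of shape (n_steps + 1, n): row k is Z at time k*T/n_steps.
    Each column is an independent one-dimensional Brownian motion, so the
    components Z_{i,t} and Z_{j,t'} (i != j) are independent for all times.
    """
    rng = np.random.default_rng() if rng is None else rng
    dt = T / n_steps
    # Independent N(0, dt) increments for each of the n components.
    dZ = rng.normal(0.0, np.sqrt(dt), size=(n_steps, n))
    # Z starts at the zero vector; paths are cumulative sums of increments.
    Z = np.vstack([np.zeros((1, n)), np.cumsum(dZ, axis=0)])
    return Z
```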
¹ According to our notational conventions, we should have used a blackboard-boldface ρ, but for typographical reasons we simply use an ordinary one.
$$\rho_{ii} = 1 \quad \text{and} \quad -1 \leq \rho_{ij} \leq 1.$$
Written out in full, we have
$$\rho = \begin{pmatrix} 1 & \rho_{12} & \cdots & \rho_{1n} \\ \rho_{21} & 1 & \cdots & \rho_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ \rho_{n1} & \rho_{n2} & \cdots & 1 \end{pmatrix}, \qquad \rho_{ij} = \rho_{ji} \in [-1, 1]. \tag{6.2}$$
Positivity of the matrix means that
$$\sum_{i,j=1}^{n} \rho_{ij} x_i x_j \geq 0,$$
for all $x = (x_1, \ldots, x_n) \in \mathbb{R}^n$. Equivalently, it means that all eigenvalues of $\rho$ are non-negative. This implies, by standard linear algebra, that one can find an $n \times n$ matrix $H = (h_{ij})_{1 \leq i,j \leq n}$ such that
$$\rho = H H^T, \tag{6.3}$$
where $H^T$ means taking the transpose of $H$. We can even take $H$ to be upper (or lower) triangular in (6.3), in which case this is called a Cholesky decomposition of $\rho$: see Exercise 6.10 for an explicit example. If $Z_t = (Z_{1,t}, \ldots, Z_{n,t})$ is a standard $n$-dimensional Brownian motion as introduced above, we define a new vector-valued process $W_t = (W_{1,t}, \ldots, W_{n,t})$ by $W_t = H Z_t$, or, in terms of components,
$$W_{i,t} = \sum_{j=1}^{n} h_{ij} Z_{j,t}, \qquad i = 1, \ldots, n.$$
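The construction $W_t = H Z_t$ with $\rho = H H^T$ is easy to carry out numerically. The sketch below (the correlation matrix is an illustrative example, not from the text) computes a lower-triangular Cholesky factor and maps a simulated standard Brownian motion to a correlated one:

```python
import numpy as np

# An illustrative 3x3 correlation matrix rho: unit diagonal, symmetric,
# and positive definite, as required by (6.2) and the positivity condition.
rho = np.array([[1.0, 0.5, 0.2],
                [0.5, 1.0, 0.3],
                [0.2, 0.3, 1.0]])

# Lower-triangular Cholesky factor H with rho = H H^T, as in (6.3).
H = np.linalg.cholesky(rho)

# Simulate a standard 3-dimensional Brownian motion Z on [0, 1].
rng = np.random.default_rng(42)
n_steps, T = 1_000, 1.0
dt = T / n_steps
dZ = rng.normal(0.0, np.sqrt(dt), size=(n_steps, 3))
Z = np.cumsum(dZ, axis=0)

# Correlated Brownian motion: each row W[k] = H @ Z[k],
# i.e. W_{i,t} = sum_j h_{ij} Z_{j,t}.
W = Z @ H.T
```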
This new process has the following, easily verified, properties.

(i) $W_0 = 0$, the zero vector in $\mathbb{R}^n$.

(ii) If $s \leq t$, then $W_t - W_s$ is multivariate normal, with mean $0$ and variance–covariance matrix $(t - s) \cdot \rho$: $W_t - W_s \sim N(0, (t - s) \cdot \rho)$.

(iii) If $0 \leq r \leq s < t$, then the random variables $W_t - W_s$ and $W_r$ are independent, meaning that each component of the former is independent of each component of the latter.

(iv) The paths $t \to W_t$ are continuous with probability 1.

Indeed, the first property is obvious. As regards the second one, a linear combination of jointly normal random vectors is again jointly normal, and the covariances of the components can be computed as
$$E\big[(W_{i,t} - W_{i,s})(W_{j,t} - W_{j,s})\big] = \sum_{k,l=1}^{n} h_{ik} h_{jl}\, E\big[(Z_{k,t} - Z_{k,s})(Z_{l,t} - Z_{l,s})\big] = \sum_{k=1}^{n} h_{ik} h_{jk}\, E\big[(Z_{k,t} - Z_{k,s})^2\big]$$
$$= (t - s) \sum_{k=1}^{n} h_{ik} h_{jk} = (t - s)\,\big(H H^T\big)_{ij} = (t - s)\, \rho_{ij},$$
Mathematical Methods I, Autumn 2009
where we used that different components of $(Z_t)_{t \geq 0}$ are independent at all times (possibly different for the different components). Finally, the third and fourth...
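The covariance computation above can be checked by Monte Carlo: sampling increments $W_t - W_s = H(Z_t - Z_s)$ and comparing the sample covariance to $(t - s)\rho$. A minimal sketch, with an illustrative $2 \times 2$ correlation matrix:

```python
import numpy as np

# Monte Carlo check of property (ii): W_t - W_s ~ N(0, (t - s) * rho).
rho = np.array([[1.0, 0.5],
                [0.5, 1.0]])
H = np.linalg.cholesky(rho)

rng = np.random.default_rng(0)
s, t = 0.3, 1.0
n_samples = 200_000

# Sample the increment directly: Z_t - Z_s ~ N(0, (t - s) I), so
# W_t - W_s = H (Z_t - Z_s) has covariance (t - s) H H^T = (t - s) rho.
dZ = rng.normal(0.0, np.sqrt(t - s), size=(n_samples, 2))
dW = dZ @ H.T

sample_cov = np.cov(dW, rowvar=False)
print(sample_cov)  # should be close to (t - s) * rho
```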