Cheat Sheet: Mathematical Foundations for Finance

Some Basic Concepts and Results of Mathematical Foundations for Finance

Typology: Cheat Sheet, 2019/2020. Uploaded on 10/09/2020 by wualter.

Cheat Sheet: Mathematical Foundations for Finance
Steven Battilana, Autumn Semester 2017

1 Appendix: Some Basic Concepts and Results

1.1 Very Basic Things

Def.
(i) An empty product equals 1, i.e. $\prod_{j=1}^{0} s_j = 1$.
(ii) An empty sum equals 0, i.e. $\sum_{j=1}^{0} s_j = 0$.

Def. (geometric series)
(i) $s_n = a_0 \sum_{k=0}^{n} q^k \overset{q \neq 1}{=} a_0 \frac{q^{n+1}-1}{q-1} = a_0 \frac{1-q^{n+1}}{1-q}$
(ii) $s = \sum_{k=0}^{\infty} a_0 q^k \overset{|q|<1}{=} \frac{a_0}{1-q}$

Def. (conditional probabilities) $Q[C \cap D] = Q[C]\,Q[D \mid C]$

Def. (compact) Compactness is a property that generalises the notion of a subset of Euclidean space being closed (that is, containing all its limit points) and bounded (that is, having all its points lie within some fixed distance of each other). Examples include a closed interval such as $[0,1]$, a rectangle, or a finite set of points.

Def. ($P$-trivial) $\mathcal{F}_0$ is $P$-trivial iff $P[A] \in \{0,1\}$ for all $A \in \mathcal{F}_0$.

Useful rules
- $E[e^Z] = e^{\mu + \frac{1}{2}\sigma^2}$ for $Z \sim \mathcal{N}(\mu, \sigma^2)$.
- Let $W$ be a BM; then $\langle W \rangle_t = t$, hence $d\langle W \rangle_s = ds$.

Def. (Hilbert space) A Hilbert space is a vector space $H$ with an inner product $\langle f, g \rangle$ such that the norm defined by $\|f\| = \sqrt{\langle f, f \rangle}$ turns $H$ into a complete metric space. If the metric defined by the norm is not complete, then $H$ is instead known as an inner product space.

Def. (Cauchy-Schwarz)
- $|\langle u, w \rangle|^2 \le \langle u, u \rangle \cdot \langle w, w \rangle$
- $\sum_{j=1}^{N} u_j w_j \le \left(\sum_{j=1}^{N} u_j^2\right)^{1/2} \left(\sum_{j=1}^{N} w_j^2\right)^{1/2}$
- $\int_a^b u(\tau)w(\tau)\,d\tau \le \left(\int_a^b u^2(\tau)\,d\tau\right)^{1/2} \left(\int_a^b w^2(\tau)\,d\tau\right)^{1/2}$
- $|a(u,v)| \le (a(u,u))^{1/2}\,(a(v,v))^{1/2}$ for a symmetric positive semidefinite bilinear form $a$

Def. (Taylor)
- $f(x \pm h) = \sum_{j=0}^{\infty} \frac{(\pm h)^j}{j!} \frac{d^j f}{dx^j}(x)$
- $Tf(x; a) = \sum_{n=0}^{\infty} \frac{f^{(n)}(a)}{n!} (x-a)^n$

Rem. (uniform convergence on compacts in probability, 1) A mode of convergence on the space of processes which occurs often in the study of stochastic calculus is that of uniform convergence on compacts in probability, or ucp convergence for short.
First, a sequence of (non-random) functions $f_n : \mathbb{R}_+ \to \mathbb{R}$ converges uniformly on compacts to a limit $f$ if it converges uniformly on each bounded interval $[0,t]$, that is, $\sup_{s \le t} |f_n(s) - f(s)| \to 0$ as $n \to \infty$. If stochastic processes are used rather than deterministic functions, then convergence in probability can be used to arrive at the following definition.

Def. (uniform convergence on compacts in probability) A sequence of jointly measurable stochastic processes $X^n$ converges to the limit $X$ uniformly on compacts in probability if
$P\left[\sup_{s \le t} |X^n_s - X_s| > K\right] \to 0$ as $n \to \infty$, for all $t, K > 0$.

Rem. (uniform convergence on compacts in probability, 2) The notation $X^n \xrightarrow{ucp} X$ is sometimes used, and $X^n$ is said to converge ucp to $X$. Note that this definition does not make sense for arbitrary stochastic processes, as the supremum is over the uncountable index set $[0,t]$ and need not be measurable. However, for right- or left-continuous processes, the supremum can be restricted to the countable set of rational times, which is measurable.

Def. $\mathcal{M}^2_d([0,a]) := \{M \in \mathcal{M}^2([0,a]) \mid t \mapsto M_t(\omega)$ is RCLL for every $\omega \in \Omega\}$

Def.
(i) We denote by $\mathcal{H}^2$ the vector space of all semimartingales vanishing at 0 of the form $X = M + A$ with $M \in \mathcal{M}^2_d(0,\infty)$ and $A \in FV$ (finite variation) predictable with total variation $V^{(1)}_\infty(A) = \int_0^\infty |dA_s| \in L^2(P)$.
(ii) $\|X\|^2_{\mathcal{H}^2} = \|M\|^2_{\mathcal{M}^2} + \|V^{(1)}_\infty(A)\|^2_{L^2} = E\left[[M]_\infty + \left(\int_0^\infty |dA_s|\right)^2\right]$

Thm. (dominated convergence theorem for stochastic integrals) Suppose that $X$ is a semimartingale with decomposition $X = M + A$ as above, and let $G^n$, $n \in \mathbb{N}$, and $G$ be predictable processes. If $G^n_t(\omega) \to G_t(\omega)$ for any $t \ge 0$, almost surely, and if there exists a process $H$ that is integrable w.r.t. $X$ such that $|G^n| \le H$ for any $n \in \mathbb{N}$, then $G^n \cdot X \to G \cdot X$ ucp as $n \to \infty$. If, in addition to the assumptions above, $X$ is in $\mathcal{H}^2$ and $\|H\|_X < \infty$, then even $\|G^n \cdot X - G \cdot X\|_{\mathcal{H}^2} \to 0$ as $n \to \infty$.
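As a quick numerical sanity check of the elementary identities in 1.1, here is a sketch for the geometric series (the values $a_0 = 2$, $q = 1/2$ are arbitrary illustration choices, not part of the notes):

```python
# Check the geometric-series formulas: partial sum
# s_n = a0 * (1 - q^(n+1)) / (1 - q) and limit s = a0 / (1 - q) for |q| < 1.
a0, q, n = 2.0, 0.5, 20
partial = sum(a0 * q**k for k in range(n + 1))      # direct summation
closed_form = a0 * (1 - q**(n + 1)) / (1 - q)       # closed-form partial sum
limit = a0 / (1 - q)                                # infinite-series limit

assert abs(partial - closed_form) < 1e-12
# the tail is a0 * q^(n+1) / (1 - q), tiny for n = 20:
assert abs(partial - limit) < 1e-4
```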
Def. (Wiki: null set) In set theory, a null set $N \subset \mathbb{R}$ is a set that can be covered by a countable union of intervals of arbitrarily small total length. The notion of null set in set theory anticipates the development of Lebesgue measure, since a null set necessarily has measure zero. More generally, on a given measure space $M = (X, \Sigma, \mu)$, a null set is a set $S \subset X$ s.t. $\mu(S) = 0$.

Def. (power set) We denote by $2^\Omega$ the power set of $\Omega$; this is the family of all subsets of $\Omega$.

Def. (σ-field or σ-algebra) A σ-field or σ-algebra on $\Omega$ is a family $\mathcal{F}$ of subsets of $\Omega$ which contains $\Omega$ and which is closed under taking complements and countable unions, i.e.
- $A \in \mathcal{F} \Rightarrow A^c \in \mathcal{F}$
- $A_i \in \mathcal{F}$, $i \in \mathbb{N}$ $\Rightarrow$ $\bigcup_{i \in \mathbb{N}} A_i \in \mathcal{F}$

Rem. $\mathcal{F}$ is then also closed under countable intersections.

Def. (finite σ-field) A σ-field is called finite if it contains only finitely many sets.

Def. (measurable space) A pair $(\Omega, \mathcal{F})$ with $\Omega \neq \emptyset$ and $\mathcal{F}$ a σ-algebra on $\Omega$ is called a measurable space.

Rem. $X$ is measurable (or more precisely, Borel-measurable) if for every $B \in \mathcal{B}(\mathbb{R})$, we have $\{X \in B\} \in \mathcal{F}$.

Def. (indicator function) For any subset $A$ of $\Omega$, the indicator function $I_A$ is the function defined by $I_A(\omega) := 1$ if $\omega \in A$ and $I_A(\omega) := 0$ if $\omega \notin A$.

Def. Let $\Omega \neq \emptyset$ and $X : \Omega \to \mathbb{R}$ a function (or more generally to $\Omega'$). Then $\sigma(X)$ is the smallest σ-field $\mathcal{G}$ on $\Omega$ s.t. $X$ is measurable with respect to $\mathcal{G}$ and $\mathcal{B}(\mathbb{R})$ (or $\mathcal{G}$ and $\mathcal{F}'$, respectively). We call $\sigma(X)$ the σ-field generated by $X$.

Rem. We also consider a σ-field generated by a whole family of mappings; this is then the smallest σ-field that makes all the mappings in that family measurable.

Def. (probability measure, probability space) If $(\Omega, \mathcal{F})$ is a measurable space, a probability measure on $\mathcal{F}$ is a mapping $P : \mathcal{F} \to [0,1]$ s.t. $P[\Omega] = 1$ and $P$ is σ-additive, i.e.
$P\left[\bigcup_{i \in \mathbb{N}} A_i\right] = \sum_{i \in \mathbb{N}} P[A_i]$ for $A_i \in \mathcal{F}$, $i \in \mathbb{N}$, with $A_i \cap A_j = \emptyset$ for $i \neq j$.
The triple $(\Omega, \mathcal{F}, P)$ is then called a probability space.

Def.
($P$-almost surely) A statement holds $P$-almost surely or $P$-a.s. if the set $A := \{\omega \mid \text{the statement does not hold}\}$ is a $P$-nullset, i.e. has $P[A] = 0$. We sometimes use instead the formulation that a statement holds for $P$-almost all $\omega$.

Notation. E.g., $X \ge Y$ $P$-a.s. means that $P[X < Y] = 0$, or, equivalently, $P[X \ge Y] = 1$, where $P[X \ge Y] := P[\{X \ge Y\}] := P[\{\omega \in \Omega \mid X(\omega) \ge Y(\omega)\}]$.

Def. (random variable) Let $(\Omega, \mathcal{F}, P)$ be a probability space and $X : \Omega \to \mathbb{R}$ a measurable function. We also say that $X$ is a (real-valued) random variable. If $Y$ is another random variable, we call $X$ and $Y$ equivalent if $X = Y$ $P$-a.s.

Def. ($p$-integrable) We denote by $L^0$ or $L^0(\mathcal{F})$ the family of all equivalence classes of random variables on $(\Omega, \mathcal{F}, P)$. For $0 < p < \infty$, we denote by $L^p(P)$ the family of all equivalence classes of random variables $X$ which are $p$-integrable in the sense that $E[|X|^p] < \infty$, and we then write $X \in L^p(P)$ or $X \in L^p$ for short. Finally, $L^\infty$ is the family of all equivalence classes of random variables that are bounded by a constant $c$ (where the constant can depend on the random variable).

Def. (atom) If $(\Omega, \mathcal{F}, P)$ is a probability space, then an atom of $\mathcal{F}$ is a set $A \in \mathcal{F}$ with the properties that $P[A] > 0$ and that if $B \subseteq A$ is also in $\mathcal{F}$, then either $P[B] = 0$ or $P[B] = P[A]$. Intuitively, atoms are the 'smallest indivisible sets' in a σ-field. Atoms are pairwise disjoint up to $P$-nullsets.

Def. (atomless) The space $(\Omega, \mathcal{F}, P)$ is called atomless if $\mathcal{F}$ contains no atoms; this can only happen if $\mathcal{F}$ is infinite. Finite σ-fields can be very conveniently described via their atoms, because every set in $\mathcal{F}$ is then a union of atoms.

Fatou's Lemma. If $(f_n)$ is a sequence of nonnegative measurable functions, then $\int \liminf_{n\to\infty} f_n \, d\mu \le \liminf_{n\to\infty} \int f_n \, d\mu$. An example of a sequence of functions for which the inequality becomes strict is $f_n(x) = 0$ for $x \in [-n, n]$ and $f_n(x) = 1$ otherwise: the pointwise liminf is 0, while each integral is infinite.

1.2 Conditional expectations: A survival kit

Rem.
Let $(\Omega, \mathcal{F}, P)$ be a probability space and $U$ a real-valued random variable, i.e. an $\mathcal{F}$-measurable mapping $U : \Omega \to \mathbb{R}$. Let $\mathcal{G} \subseteq \mathcal{F}$ be a fixed sub-σ-field of $\mathcal{F}$; the intuitive interpretation is that $\mathcal{G}$ gives us some partial information. The goal is then to find a prediction for $U$ on the basis of the information conveyed by $\mathcal{G}$, or a best estimate for $U$ that uses only information from $\mathcal{G}$.

Def. (conditional expectation of $U$ given $\mathcal{G}$) A conditional expectation of $U$ given $\mathcal{G}$ is a real-valued random variable $Y$ with the following two properties:
(i) $Y$ is $\mathcal{G}$-measurable;
(ii) $E[U I_A] = E[Y I_A]$ for all $A \in \mathcal{G}$.
$Y$ is then called a version of the conditional expectation and is denoted by $Y = E[U|\mathcal{G}]$.

Thm. 2.1 Let $U$ be an integrable random variable, i.e. $U \in L^1(P)$. Then:
(i) There exists a conditional expectation $E[U|\mathcal{G}]$, and $E[U|\mathcal{G}]$ is again integrable.
(ii) $E[U|\mathcal{G}]$ is unique up to $P$-nullsets: if $Y, Y'$ are random variables satisfying the above definition, then $Y' = Y$ $P$-a.s.

Lem. (properties and computation rules) Next, we list properties of and computation rules for conditional expectations. Let $U, U'$ be integrable random variables s.t. $E[U|\mathcal{G}]$ and $E[U'|\mathcal{G}]$ exist. We denote by $b\mathcal{G}$ the set of all bounded $\mathcal{G}$-measurable random variables. Then we have:
(i) $E[UZ] = E[E[U|\mathcal{G}]Z]$ for all $Z \in b\mathcal{G}$
(ii) Linearity: $E[aU + bU'|\mathcal{G}] = aE[U|\mathcal{G}] + bE[U'|\mathcal{G}]$ $P$-a.s., for all $a, b \in \mathbb{R}$
(iii) Monotonicity: if $U \ge U'$ $P$-a.s., then $E[U|\mathcal{G}] \ge E[U'|\mathcal{G}]$ $P$-a.s.
(iv) Projectivity: $E[U|\mathcal{H}] = E[E[U|\mathcal{G}]|\mathcal{H}]$ $P$-a.s., for every σ-field $\mathcal{H} \subseteq \mathcal{G}$
(v) $E[U|\mathcal{G}] = U$ $P$-a.s. if $U$ is $\mathcal{G}$-measurable
(vi) $E[E[U|\mathcal{G}]] = E[U]$
(vii) $E[ZU|\mathcal{G}] = E[Z E[U|\mathcal{G}]|\mathcal{G}] = Z E[U|\mathcal{G}]$ $P$-a.s., for all $Z \in b\mathcal{G}$
(viii) $E[U|\mathcal{G}] = E[U]$ $P$-a.s. for $U$ independent of $\mathcal{G}$

Rem.
(i) Instead of integrability of $U$, one could also assume that $U \ge 0$; then analogous statements are true.
(ii) More generally, (i) and (vii) hold as soon as $U$ and $ZU$ are both integrable or both nonnegative.
(iii) If $U$ is $\mathbb{R}^d$-valued, one simply does everything component by component to obtain analogous results.
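On a finite probability space, the defining property (ii) above can be verified directly: when $\mathcal{G}$ is generated by a partition, $E[U|\mathcal{G}]$ is constant on each atom $B$ and equals $E[U I_B]/P[B]$. A minimal sketch with made-up numbers (not from the notes):

```python
# Conditional expectation on a finite probability space.
omega = [0, 1, 2, 3]                      # four outcomes
P = {0: 0.1, 1: 0.2, 2: 0.3, 3: 0.4}      # probability of each outcome
U = {0: 5.0, 1: 1.0, 2: 2.0, 3: 8.0}      # a random variable
partition = [{0, 1}, {2, 3}]              # atoms generating G

Y = {}                                    # Y = E[U|G]
for B in partition:
    pB = sum(P[w] for w in B)
    mean = sum(U[w] * P[w] for w in B) / pB   # average of U on the atom B
    for w in B:
        Y[w] = mean

# Defining property (ii): E[U I_A] = E[Y I_A] for A in G,
# checked on the generating atoms; property (vi) follows with A = Omega.
for B in partition:
    lhs = sum(U[w] * P[w] for w in B)
    rhs = sum(Y[w] * P[w] for w in B)
    assert abs(lhs - rhs) < 1e-12
```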
Lem. 2.2 Let $U, V$ be random variables s.t. $U$ is $\mathcal{G}$-measurable and $V$ is independent of $\mathcal{G}$. For every measurable function $F \ge 0$ on $\mathbb{R}^2$, we then have
$E[F(U,V)|\mathcal{G}] = E[F(u,V)]\big|_{u=U} =: f(U)$.
Intuitively, one can compute the conditional expectation $E[F(U,V)|\mathcal{G}]$ by fixing the known value $U$ and taking the expectation over the independent quantity $V$.

Thm. 2.3 Suppose $(U_n)_{n \in \mathbb{N}}$ is a sequence of random variables.
(i) If $U_n \ge X$ $P$-a.s. for all $n$ and some integrable random variable $X$, then
$E[\liminf_{n\to\infty} U_n | \mathcal{G}] \le \liminf_{n\to\infty} E[U_n|\mathcal{G}]$ $P$-a.s.
(ii) If $(U_n)$ converges to some random variable $U$ $P$-a.s. and if $|U_n| \le X$ $P$-a.s. for all $n$ and some integrable random variable $X$, then
$E[\lim_{n\to\infty} U_n|\mathcal{G}] = E[U|\mathcal{G}] = \lim_{n\to\infty} E[U_n|\mathcal{G}]$ $P$-a.s.

1.3 Stochastic processes and functions

Def. (stochastic process) A (real-valued) stochastic process with index set $T$ is a family of random variables $X_t$, $t \in T$, which are all defined on the same probability space $(\Omega, \mathcal{F}, P)$. We often write $X = (X_t)_{t \in T}$.

Def. (increment of a stochastic process) For any stochastic process $X = (X_k)_{k=0,1,\dots,T}$, we denote the increment from $k-1$ to $k$ of $X$ by $\Delta X_k := X_k - X_{k-1}$.

Rem. A stochastic process can be viewed as a function depending on two parameters, namely $\omega \in \Omega$ and $t \in T$.

Def. (trajectory) If we fix $t \in T$, then $\omega \mapsto X_t(\omega)$ is simply a random variable. If we fix instead $\omega \in \Omega$, then $t \mapsto X_t(\omega)$ can be viewed as a function $T \to \mathbb{R}$, and we often call this the path or the trajectory of the process corresponding to $\omega$.

Def. (continuous) A stochastic process is continuous if all or $P$-almost all of its trajectories are continuous functions.

Def. (RCLL) A stochastic process is RCLL if all or $P$-almost all of its trajectories are right-continuous (RC) functions admitting left limits (LL).

Def. (Wiki: signed measure) Given a measurable space $(X, \Sigma)$, i.e. a set $X$ with a σ-algebra on it, a signed measure is a function $\mu : \Sigma \to \mathbb{R}$ s.t. $\mu(\emptyset) = 0$ and $\mu$ is σ-additive, i.e.
it satisfies $\mu\left(\bigcup_{n=1}^{\infty} A_n\right) = \sum_{n=1}^{\infty} \mu(A_n)$, where the series on the right must converge absolutely, for any sequence $A_1, A_2, \dots$ of disjoint sets in $\Sigma$.

Def. (Wiki: total variation in measure theory) Consider a signed measure $\mu$ on a measurable space $(X, \Sigma)$; then it is possible to define two set functions, the upper variation $\overline{W}(\mu, \cdot)$ and the lower variation $\underline{W}(\mu, \cdot)$, respectively

…is the discounted gains process.

Cor. 3.2 For any martingale $X$ and any stopping time $\tau$, the stopped process $X^\tau$ is again a martingale. In particular, $E_Q[X_{k \wedge \tau}] = E_Q[X_0]$ for all $k$.

Interpretation. A martingale describes a fair game in the sense that one cannot predict where it goes next.
(i) Cor. 3.2 says that one cannot change this fundamental character by cleverly stopping the game.
(ii) Thm. 3.1 says that as long as one can only use information from the past, not even complicated clever betting will help.

Thm. 3.3 Suppose that $X$ is an $\mathbb{R}^d$-valued local $Q$-martingale null at 0 and $\vartheta$ is an $\mathbb{R}^d$-valued predictable process. If the stochastic integral process $\vartheta \cdot X$ is uniformly bounded below (i.e. $\vartheta \cdot X \ge -b$ $Q$-a.s. for all $k$, for some constant $b \ge 0$), then $\vartheta \cdot X$ is a $Q$-martingale.

2.4 An example: The multinomial model

Def. (multiplicative model) The multiplicative model with i.i.d. returns is given by
$\frac{\tilde S^0_k}{\tilde S^0_{k-1}} = 1 + r > 0$ for all $k$, $\qquad \frac{\tilde S^1_k}{\tilde S^1_{k-1}} = Y_k$ for all $k$,
where $\tilde S^0_0 = 1$, $\tilde S^1_0 = S^1_0 > 0$ is a constant, and $Y_1, \dots, Y_T$ are i.i.d. and take the finitely many values $1+y_1, \dots, 1+y_m$ with respective probabilities $p_1, \dots, p_m$. We assume that all the probabilities $p_j$ are $> 0$ and that $y_m > y_{m-1} > \dots > y_1 > -1$. This also ensures that $\tilde S^1$ remains strictly positive.

Rem. Intuition suggests that for a reasonable model, the sure factor $1+r$ should lie between the minimal and maximal values $1+y_1$ and $1+y_m$ of the (uncertain) random factor.

Def. (canonical model) The simplest and in fact canonical model for this setup is a path space.
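The path-space construction that follows ($\Omega = \{1,\dots,m\}^T$ with product weights) can be sketched in code; $m$, $T$ and the $p_j$ below are made-up illustration values:

```python
# Canonical path space: Omega = {1,...,m}^T, P[{omega}] = p_{x_1} * ... * p_{x_T}.
from itertools import product

m, T = 3, 4
p = {1: 0.2, 2: 0.5, 3: 0.3}              # one-step probabilities p_j, sum to 1

prob = {}
for omega in product(range(1, m + 1), repeat=T):
    weight = 1.0
    for x in omega:
        weight *= p[x]                    # product of one-step probabilities
    prob[omega] = weight

assert len(prob) == m**T                  # m^T trajectories
assert abs(sum(prob.values()) - 1.0) < 1e-12

# An atom of F_k collects all trajectories with a fixed prefix of length k;
# its probability is the product over that prefix.
k, prefix = 2, (1, 3)
atom_prob = sum(w for om, w in prob.items() if om[:k] == prefix)
assert abs(atom_prob - p[1] * p[3]) < 1e-12
```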
Let $\Omega = \{1,\dots,m\}^T = \{\omega = (x_1,\dots,x_T) \mid x_k \in \{1,\dots,m\}$ for $k = 1,\dots,T\}$ be the set of all sequences of length $T$ formed by elements of $\{1,\dots,m\}$. Take $\mathcal{F} = 2^\Omega$, the family of all subsets of $\Omega$, and define $P$ by setting
$P[\{\omega\}] = p_{x_1} p_{x_2} \cdots p_{x_T} = \prod_{k=1}^{T} p_{x_k}$.
Finally, define $Y_1, \dots, Y_T$ by
$Y_k(\omega) := 1 + y_{x_k}$    (2)
so that $Y_k(\omega) = 1 + y_j$ iff $x_k = j$. We take as filtration the one generated by $\tilde S^1$ (or, equivalently, by $Y$), so that $\mathcal{F}_k = \sigma(Y_1, \dots, Y_k)$, $k = 0, 1, \dots, T$.

Def. (atom) A set $A \subseteq \Omega$ is an atom of $\mathcal{F}_k$ iff there exists a sequence $(\bar x_1, \dots, \bar x_k)$ of length $k$ with elements $\bar x_i \in \{1,\dots,m\}$ s.t. $A$ consists of all those $\omega \in \Omega$ that start with the substring $(\bar x_1, \dots, \bar x_k)$, i.e.
$A = A_{\bar x_1, \dots, \bar x_k} := \{\omega = (x_1, \dots, x_T) \in \{1,\dots,m\}^T \mid x_i = \bar x_i$ for $i = 1, \dots, k\}$.

Consequences
(i) Each $\mathcal{F}_k$ is parametrised by substrings of length $k$ and therefore contains precisely $m^k$ atoms.
(ii) When going from time $k$ to time $k+1$, each atom $A = A_{x_1,\dots,x_k}$ of $\mathcal{F}_k$ splits into precisely $m$ subsets $A_1 = A_{x_1,\dots,x_k,1}, \dots, A_m = A_{x_1,\dots,x_k,m}$ that are atoms of $\mathcal{F}_{k+1}$.
(iii) The atoms of $\mathcal{F}_k$ are pairwise disjoint and their union is $\Omega$. Finally, each set $B \in \mathcal{F}_k$ is a union of atoms of $\mathcal{F}_k$; so the family $\mathcal{F}_k$ of events observable up to time $k$ consists of $2^{m^k}$ sets.

Def. (one-step transition probabilities) For any atom $A = A_{x_1,\dots,x_k}$ of $\mathcal{F}_k$, we then look at its $m$ successor atoms $A_1 = A_{x_1,\dots,x_k,1}, \dots, A_m = A_{x_1,\dots,x_k,m}$ of $\mathcal{F}_{k+1}$, and we define the one-step transition probabilities for $Q$ at the node corresponding to $A$ by the conditional probabilities
$Q[A_j | A] = \frac{Q[A_j]}{Q[A]}$, for $j = 1, \dots, m$.
Because $A$ is the disjoint union of $A_1, \dots, A_m$, we have $0 \le Q[A_j|A] \le 1$ for $j = 1, \dots, m$ and $\sum_{j=1}^{m} Q[A_j|A] = 1$.

Rem. This gives a decomposition or factorisation of $Q$ in such a way that for every trajectory $\omega \in \Omega$, its probability $Q[\{\omega\}]$ is the product of the successive one-step transition probabilities along $\omega$.

Rem. We can describe $Q$ equivalently either via its global weights $Q[\{\omega\}]$ or via its local transition behaviour.

Def.
(independent growth rates) The (coordinate) variables $Y_1, \dots, Y_T$ from (2) are independent under $Q$ iff for each $k$, the one-step transition probabilities are the same for each node at time $k$, but they can still differ across dates $k$.

Def. $Y_1, \dots, Y_T$ are i.i.d. under $Q$ iff at each node throughout the tree, the one-step transition probabilities are the same.

Rem. Probability measures with this particular structure can therefore be described by $m-1$ parameters; recall that the $m$ one-step transition probabilities at any given node must sum to 1, which eliminates one degree of freedom.

2.5 Properties of the market

Characterisation of financial markets via EMMs (equivalent martingale measures). The description of a financial market model via EMMs can be summarised as follows:
- Existence of an EMM $\iff$ the market is arbitrage-free, i.e. $\mathbb{P}_e(S) \neq \emptyset$, by the 1st FTAP.
- Uniqueness of the EMM $\iff$ the market is complete, i.e. $\#(\mathbb{P}_e(S)) = 1$, by the 2nd FTAP.

2.5.1 Arbitrage

1st Fundamental Theorem of Asset Pricing (FTAP)

Thm. 2.1 (Dalang/Morton/Willinger) Consider a financial market model in finite discrete time. Then $S$ is arbitrage-free iff there exists an EMM for $S$, i.e.
(NA) $\iff$ $\mathbb{P}_e(S) \neq \emptyset$.
- In other words: if there exists a probability measure $Q \approx P$ on $\mathcal{F}_T$ s.t. $S$ is a $Q$-martingale, then $S$ is arbitrage-free.
- Limitations: the most important of the underlying assumptions are frictionless markets and small investors; if one tries to relax these to allow for more realism, the theory even in finite discrete time becomes considerably more complicated and partly does not even exist yet.

Cor. 2.2 The multinomial model with parameters $y_1 < \dots < y_m$ and $r$ is arbitrage-free iff $y_1 < r < y_m$.

Cor. 2.3 The binomial model with parameters $u > d$ and $r$ is arbitrage-free iff $d < r < u$. In that case, the EMM $Q^*$ for $\tilde S^1/\tilde S^0$ is unique (on $\mathcal{F}_T$) and is given as in Cor. 1.4.

Arbitrage opportunity

Def.
(arbitrage opportunity, arbitrage-free)
(i) An arbitrage opportunity is an admissible self-financing strategy $\varphi \mathrel{\widehat{=}} (0, \vartheta)$ with zero initial wealth, with $V_T(\varphi) \ge 0$ $P$-a.s. and with $P[V_T(\varphi) > 0] > 0$.
(ii) The financial market $(\Omega, \mathcal{F}, \mathbb{F}, P, S^0, S)$, or shortly $S$, is called arbitrage-free if there exist no arbitrage opportunities.
(iii) Sometimes one also says that $S$ satisfies (NA).

Def.
(i) (NA+) forbids producing something out of nothing with 0-admissible self-financing strategies.
(ii) (NA′) forbids the same for all (not necessarily admissible) self-financing strategies.
(iii) We then clearly have the implications (NA′) $\Rightarrow$ (NA) $\Rightarrow$ (NA+). Note that for finite discrete time, the three concepts are all equivalent.

Prop. 1.1 For a financial market in finite discrete time, the following statements are equivalent:
(i) $S$ is arbitrage-free.
(ii) There exists no self-financing strategy $\varphi \mathrel{\widehat{=}} (0, \vartheta)$ with zero initial wealth satisfying $V_T(\varphi) \ge 0$ $P$-a.s. and $P[V_T(\varphi) > 0] > 0$. In other words, $S$ satisfies (NA′).
(iii) For every (not necessarily admissible) self-financing strategy $\varphi$ with $V_0(\varphi) = 0$ $P$-a.s. and $V_T(\varphi) \ge 0$ $P$-a.s., we have $V_T(\varphi) = 0$ $P$-a.s.
(iv) For the space $\mathcal{G}' := \{G_T(\vartheta) \mid \vartheta$ is $\mathbb{R}^d$-valued and predictable$\}$ of all final wealths that one can generate from zero initial wealth through self-financing trading, we have $\mathcal{G}' \cap L^0_+(\mathcal{F}_T) = \{0\}$, where $L^0_+(\mathcal{F}_T)$ denotes the space of all nonnegative $\mathcal{F}_T$-measurable random variables (for finite $\Omega$ with $n$ atoms, $L^0(\mathcal{F}_T) \cong \mathbb{R}^n$ and $L^0_+(\mathcal{F}_T) \cong \mathbb{R}^n_+$).

Rem.
(i) The two sets $L^0_+(\mathcal{F}_T)$ and $\mathcal{G}'$ can be separated by a hyperplane, and the normal vector defining that hyperplane then yields (after suitable normalisation) the (density of the) desired EMM.
(ii) The existence of an EMM thus follows from the existence of a separating hyperplane between two sets.
(iii) The set $\mathbb{P}_e(S)$ is convex; it is either empty, or contains exactly one element, or contains infinitely (uncountably) many elements.
Interpretation: Absence of arbitrage is a natural economic/financial requirement for a reasonable model of a financial market, since there cannot exist "money pumps" (at least not for long).

Def. (equivalent) Two probability measures $Q$ and $P$ on $(\Omega, \mathcal{F})$ are equivalent (on $\mathcal{F}$), written as $Q \approx P$ on $\mathcal{F}$, if they have the same nullsets (in $\mathcal{F}$), i.e. if for each set $A$ (in $\mathcal{F}$) we have $P[A] = 0$ iff $Q[A] = 0$.

Lem. 1.2 If there exists a probability measure $Q \approx P$ on $\mathcal{F}_T$ s.t. $S$ is a $Q$-martingale, then $S$ is arbitrage-free.

Cor. 1.3 In the multinomial model with parameters $y_1 < \dots < y_m$ and $r$, there exists a probability measure $Q \approx P$ s.t. $\tilde S^1/\tilde S^0$ is a $Q$-martingale iff $y_1 < r < y_m$.

Cor. 1.4 In the binomial model with parameters $u > d$ and $r$, there exists a probability measure $Q \approx P$ s.t. $\tilde S^1/\tilde S^0$ is a $Q$-martingale iff $u > r > d$. In that case, $Q$ is unique (on $\mathcal{F}$) and characterised by the property that $Y_1, \dots, Y_T$ are i.i.d. under $Q$ with parameter
$Q[Y_k = 1+u] = q^* = \frac{r-d}{u-d} = 1 - Q[Y_k = 1+d]$.

2.5.2 Completeness

Def. (completeness of the market)
- A financial market model (in finite discrete time) is called complete if every payoff $H \in L^0_+(\mathcal{F}_T)$ is attainable.
- Otherwise it is called incomplete.

Thm. 2.1 (Valuation and hedging in complete markets) Consider a financial market model in finite discrete time and suppose that $\mathcal{F}_0$ is trivial and $S$ is arbitrage-free and complete. Then for every payoff $H \in L^0_+(\mathcal{F}_T)$, there is a unique price process $V^H = (V^H_k)_{k=0,1,\dots,T}$ which admits no arbitrage. It is given by
$V^H_k = E_Q[H|\mathcal{F}_k] = V_k(V_0, \vartheta)$, for $k = 0, 1, \dots, T$,
for any EMM $Q$ for $S$ and any replicating strategy $\varphi \mathrel{\widehat{=}} (V_0, \vartheta)$ for $H$.

2nd Fundamental Theorem of Asset Pricing (FTAP)

Thm. 2.2
- Consider a financial market model in finite discrete time and assume that $S$ is arbitrage-free, $\mathcal{F}_0$ is trivial and $\mathcal{F}_T = \mathcal{F}$.
- Then $S$ is complete iff there is a unique EMM for $S$:
(NA) + completeness $\iff$ $\#(\mathbb{P}_e(S)) = 1$, i.e. $\mathbb{P}_e(S)$ is a singleton.
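Cor. 1.4 is easy to verify numerically: for any parameters with $d < r < u$ (the values below are made-up), the weight $q^* = (r-d)/(u-d)$ lies in $(0,1)$ and makes the expected one-step growth factor equal to $1+r$, which is exactly the martingale property of $\tilde S^1/\tilde S^0$. A minimal sketch:

```python
# EMM weight in the binomial model and the one-step martingale check.
u, d, r = 0.2, -0.1, 0.05
q_star = (r - d) / (u - d)
assert 0 < q_star < 1                     # equivalence requires d < r < u

# Under Q*, E[Y_k] = q*(1+u) + (1-q*)(1+d) must equal 1 + r, so that
# the discounted price S~1/S~0 has constant conditional expectation.
expected_growth = q_star * (1 + u) + (1 - q_star) * (1 + d)
assert abs(expected_growth - (1 + r)) < 1e-12
```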
Remark:
- If a financial market in finite discrete time is complete, then $\mathcal{F}_T$ is finite (i.e. completeness is quite restrictive).
- Completeness is only an assertion about $\mathcal{F}_T$-measurable quantities.

Example: The binomial model. Recall that this model is described by parameters $p \in (0,1)$ and $u > r > d > -1$; then we have $\tilde S^0_k = (1+r)^k$ and $\tilde S^1_k = S^1_0 \prod_{j=1}^{k} Y_j$ with $S^1_0 > 0$ and $Y_1, \dots, Y_T$ i.i.d. under $P$, taking the values $1+u$ or $1+d$ with probability $p$ or $1-p$, respectively. The filtration $\mathbb{F}$ is generated by $\tilde S = (\tilde S^0, \tilde S^1)$, or equivalently by $\tilde S^1$ or by $Y$. Note that $\mathcal{F}_0$ is then trivial because $\tilde S^0_0 = 1$ and $\tilde S^1_0 = S^1_0$ is a constant. We also take $\mathcal{F} = \mathcal{F}_T$.

Rem. We already know from Cor. 2.3 that this model is arbitrage-free and has a unique EMM for $S^1 = \tilde S^1/\tilde S^0$. Hence the market is complete by Thm. 2.2, and so every $H \in L^0_+(\mathcal{F}_T)$ is attainable, with a price process given by $V^H_k = E_{Q^*}[H|\mathcal{F}_k]$ for $k = 0, 1, \dots, T$, where $Q^*$ is the unique EMM for $S^1$. We also recall from Cor. 2.3 that the $Y_j$ are under $Q^*$ again i.i.d., but with $Q^*[Y_1 = 1+u] = q^* := \frac{r-d}{u-d} \in (0,1)$.

All the above quantities $S^1, H, V^H$ are discounted with $\tilde S^0$, i.e. expressed in units of asset 0. The undiscounted quantities are the stock price $\tilde S^1 = S^1 \tilde S^0$, the payoff $\tilde H := H \tilde S^0_T$ and its price process $\tilde V^{\tilde H}$ with $\tilde V^{\tilde H}_k := V^H_k \tilde S^0_k$ for $k = 0, 1, \dots, T$.

Cor. 3.1 In the binomial model, the undiscounted arbitrage-free price process of any undiscounted payoff $\tilde H \in L^0_+(\mathcal{F}_T)$ is given by
$\tilde V^{\tilde H}_k = \tilde S^0_k \, E_{Q^*}\!\left[\frac{\tilde H}{\tilde S^0_T} \,\middle|\, \mathcal{F}_k\right] = E_{Q^*}\!\left[\tilde H \frac{\tilde S^0_k}{\tilde S^0_T} \,\middle|\, \mathcal{F}_k\right] = \frac{\tilde S^0_k}{\tilde S^0_T} E_{Q^*}[\tilde H|\mathcal{F}_k]$, for $k = 0, 1, \dots, T$.

2.6 Pricing of contingent claims H

Def. (general European option) A general European option, payoff or contingent claim is a random variable $H \in L^0_+(\mathcal{F}_T)$.

Interpretation. The interpretation is that $H$ describes the net payoff (in units of asset 0) that the owner of this instrument obtains at time $T$; so having $H \ge 0$ is natural and also avoids integrability issues.
Since $H$ is $\mathcal{F}_T$-measurable, the payoff can depend on the entire information up to time $T$; "European" means that the time of the payoff is fixed at the end $T$.

Def. (European call option, net payoff)
(i) A European call option on asset $i$, with maturity $T$ and strike $K$, gives its owner the right, but not the obligation, to buy at time $T$ one unit of asset $i$ for the price $K$, irrespective of what the actual asset price $S^i_T$ then is.
(ii) In monetary terms, any rational person will make use of (exercise) that right iff $S^i_T(\omega) > K$; the net payoff is then $S^i_T(\omega) - K$, or more formally
$H(\omega) = \max(0, S^i_T(\omega) - K) = (S^i_T(\omega) - K)^+$.
As a random variable, this is clearly nonnegative and $\mathcal{F}_T$-measurable since $S^i$ is adapted.

Example (payoffs)
(i) If we want to bet on a reasonably stable asset price evolution, we might be interested in a payoff of the form $H = I_B$ with
$B = \left\{ a \le \min_{i=1,\dots,d} \min_{k=0,1,\dots,T} S^i_k \le \max_{i=1,\dots,d} \max_{k=0,1,\dots,T} S^i_k \le b \right\}$.
This option pays one unit of money at time $T$ iff all stocks remain between the levels $a$ and $b$ up to time $T$.
(ii) A payoff of the form
$H = I_A \, g\!\left(\frac{1}{T} \sum_{k=1}^{T} S^i_k\right)$, $A \in \mathcal{F}_T$, $g \ge 0$,
gives a payoff which depends on the average price (over time) of asset $i$, but which is only due in case a certain event $A$ occurs.

Def. (attainability)
- A payoff $H \in L^0_+(\mathcal{F}_T)$ is called attainable if there exists an admissible self-financing strategy $\varphi \mathrel{\widehat{=}} (V_0, \vartheta)$ with $V_T(\varphi) = H$ $P$-a.s.
- The strategy $\varphi$ is then said to replicate $H$ and is called a replicating strategy for $H$.

Thm. 1.1 (Arbitrage-free valuation of attainable payoffs)
- Consider a financial market in finite discrete time and suppose that $S$ is arbitrage-free and complete and $\mathcal{F}_0$ is trivial.
- Then every payoff $H \in L^0_+(\mathcal{F}_T)$ has a unique price process $V^H = (V^H_k)_{k=0,1,\dots,T}$ which admits no arbitrage.
- $V^H$ is given by $V^H_k = E_Q[H|\mathcal{F}_k] = V_k(V_0, \vartheta)$ for k = 0, 1, . . .
, T, for any EMM $Q$ for $S$ and for any replicating strategy $\varphi \mathrel{\widehat{=}} (V_0, \vartheta)$ for $H$.
- Rem.: Because it involves no preferences, but only the assumption of absence of arbitrage, the valuation from this theorem is often also called risk-neutral valuation, and an EMM $Q$ for $S$ is called a risk-neutral measure.

Thm. 1.2 (Characterisation of attainable payoffs)
- Consider a financial market in finite discrete time and suppose that $S$ is arbitrage-free and $\mathcal{F}_0$ is trivial. For any payoff $H \in L^0_+(\mathcal{F}_T)$, the following are equivalent:
(i) $H$ is attainable.
(ii) $\sup_{Q \in \mathbb{P}_e(S)} E_Q[H] < \infty$ is attained in some $Q^* \in \mathbb{P}_e(S)$, i.e. the supremum is finite and a maximum. In other words, we have $\sup_{Q \in \mathbb{P}_e(S)} E_Q[H] = E_{Q^*}[H] < \infty$ for some $Q^* \in \mathbb{P}_e(S)$.
(iii) The mapping $\mathbb{P}_e(S) \to \mathbb{R}$, $Q \mapsto E_Q[H]$ is constant, i.e. $H$ has the same, finite expectation under all EMMs $Q$ for $S$.
- Remark: Note that not all of these relationships necessarily hold for financial markets in infinite discrete time or continuous time. "(ii) $\Rightarrow$ (iii)" in general only holds if $H$ is bounded.

Approach to valuing and hedging payoffs. For a given payoff $H$ in a financial market in finite discrete time (with $\mathcal{F}_0$ trivial):
(i) Check if $S$ is arbitrage-free by finding at least one EMM $Q$ for $S$.
(ii) Find all EMMs $Q$ for $S$.
(iii) Compute $E_Q[H]$ for all EMMs $Q$ for $S$ and determine the supremum of $E_Q[H]$ over $Q$.
(iv) If the supremum is finite and a maximum, i.e. attained in some $Q^* \in \mathbb{P}_e(S)$, then $H$ is attainable and its price process can be computed as $V^H_k = E_Q[H|\mathcal{F}_k]$, for any $Q \in \mathbb{P}_e(S)$. If the supremum is not attained (or, equivalently for finite discrete time, there is a pair of EMMs $Q_1, Q_2$ with $E_{Q_1}[H] \neq E_{Q_2}[H]$), then $H$ is not attainable.

Invariance of the risk-neutral pricing method under a change of numéraire
- The risk-neutral pricing method is invariant under a change of numéraire, i.e. all assets can be priced by a risk-neutral method independent of the chosen asset used for discounting.
- Denote by $Q^{**}$ the EMM for $\hat S^0 := \tilde S^0/\tilde S^1$.
Denote by $Q^*$ the EMM for $S^1 = \tilde S^1/\tilde S^0$.
- Then it holds for a financial market $(\tilde S^0, \tilde S^1)$ and an undiscounted payoff $\tilde H \in L^0_+(\mathcal{F}_T)$ that:
$\tilde S^0_k \, E_{Q^*}\!\left[\frac{\tilde H}{\tilde S^0_T} \,\middle|\, \mathcal{F}_k\right] = \tilde S^1_k \, E_{Q^{**}}\!\left[\frac{\tilde H}{\tilde S^1_T} \,\middle|\, \mathcal{F}_k\right]$

EMMs in submarkets
- If a market $(S^0, S^1, \dots, S^k)$ satisfies (NA), i.e. there exists an EMM $Q$, then this EMM $Q$ is also an EMM for all submarkets (e.g. for $(S^k, S^i)$, $k \neq i$; for $(S^k, S^i, S^j)$, $k \neq i \neq j$; etc.).
- If there exists an EMM $Q_j$ for a submarket $(S^0, S^j)$ which is not also an EMM for another submarket $(S^0, S^k)$, $j \neq k$, then the whole market $(S^0, S^1, \dots, S^k)$ does not satisfy (NA), i.e. it admits arbitrage.

2.7 Multiplicative model
- Suppose that we start with the random variables $r_1, \dots, r_T$ and $Y_1, \dots, Y_T$.
- Define the bank account / riskless asset by
$\tilde S^0_k := \prod_{j=1}^{k} (1+r_j)$, $\quad \frac{\tilde S^0_k}{\tilde S^0_{k-1}} = 1 + r_k$, $\quad \tilde S^0_0 = 1$.
Remarks:
  – $\tilde S^0_k$ is $\mathcal{F}_{k-1}$-measurable (i.e. $\tilde S^0$ is predictable).
  – $r_k$ denotes the rate for $(k-1, k]$.
- Define the stock / risky asset by
$\tilde S^1_k := S^1_0 \prod_{j=1}^{k} Y_j$, $\quad \frac{\tilde S^1_k}{\tilde S^1_{k-1}} = Y_k$, $\quad \tilde S^1_0$ a positive constant.
Remarks:
  – $\tilde S^1_k$ is $\mathcal{F}_k$-measurable (i.e. $\tilde S^1$ is adapted).
  – $Y_k$ denotes the growth factor for $(k-1, k]$.
  – The rate of return $R_k$ is given by $Y_k = 1 + R_k$.

2.7.1 Cox-Ross-Rubinstein (CRR) binomial model

Assumptions
- Bank account / riskless asset: suppose all the $r_k$ are constant with value $r > -1$. This means that we have the same nonrandom interest rate over each period. Then the bank account evolves as $\tilde S^0_k = (1+r)^k$ for $k = 0, 1, \dots, T$.
- Stock / risky asset: suppose that $Y_1, \dots, Y_T$ are independent and take only the two values $1+u$ with probability $p$ and $1+d$ with probability $1-p$ (i.e. all $Y_k$ are i.i.d.). Then the stock price at each step moves either up (by a factor $1+u$) or down (by a factor $1+d$), i.e.
$\frac{\tilde S^1_k}{\tilde S^1_{k-1}} = Y_k = 1+u$ with probability $p$, $\; = 1+d$ with probability $1-p$.

Martingale property. The discounted stock price $\tilde S^1/\tilde S^0$ is a $P$-martingale iff $r = pu + (1-p)d$.

EMM
- In the binomial model, there exists a probability measure $Q \approx P$ s.t. $\tilde S^1/\tilde S^0$ is a $Q$-martingale iff $u > r > d$.
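Combining the CRR assumptions with risk-neutral valuation $V^H_k = E_{Q^*}[H|\mathcal{F}_k]$, a European call can be priced by backward induction on the recombining tree. The parameters below are made-up illustration values, and the code is a sketch, not the notes' own implementation:

```python
# CRR pricing of a European call H~ = (S~1_T - K)^+ by backward induction.
u, d, r = 0.1, -0.1, 0.02
S0, K, T = 100.0, 100.0, 3
q = (r - d) / (u - d)                     # EMM weight q* = (r-d)/(u-d)

# terminal undiscounted stock prices after j up-moves (recombining tree)
prices = [S0 * (1 + u)**j * (1 + d)**(T - j) for j in range(T + 1)]
values = [max(0.0, s - K) for s in prices]

# one step back: undiscounted price V~_{k-1} = E_Q*[V~_k | F_{k-1}] / (1 + r)
for _ in range(T):
    values = [(q * values[j + 1] + (1 - q) * values[j]) / (1 + r)
              for j in range(len(values) - 1)]

price_at_0 = values[0]                    # the call price at time 0
```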
Cases of stopping times
- Define the stopping time $\tau_a$ for $a \in \mathbb{R}$, $a > 0$, as $\tau_a := \inf\{t \ge 0 \mid W_t > a\}$. Then it holds that:
  – $\tau_{a_1} \le \tau_{a_2}$ $P$-a.s. for $a_1 < a_2$.
  – $P[\tau_a < \infty] = 1$.
  – $W_{\tau_a} = a$ $P$-a.s.
  – $E[W_{\tau_{a_2}} | \mathcal{F}_{\tau_{a_1}}] \neq W_{\tau_{a_1}}$ $P$-a.s., i.e. the stopping theorem fails for $\tau = \tau_{a_2}$ and $\sigma = \tau_{a_1}$.
  – $\lim_{n\to\infty} \tau_n(\omega) = \infty$ $P$-a.s.
- Define the stopping time $\rho_a$ for $a \in \mathbb{R}$, $a > 0$, as $\rho_a := \sup\{t \ge 0 \mid W_t > a\}$. Then it follows that $\rho_a = \infty$ with probability 1 under $P$.

Def. (stopping time) Again exactly like in discrete time, a stopping time w.r.t. $\mathbb{F}$ is a mapping $\tau : \Omega \to [0,\infty]$ s.t. $\{\tau \le t\} \in \mathcal{F}_t$ for all $t \ge 0$.

Def. (events observable up to time σ) We define for a stopping time $\sigma$ the σ-field of events observable up to time $\sigma$ as $\mathcal{F}_\sigma := \{A \in \mathcal{F} \mid A \cap \{\sigma \le t\} \in \mathcal{F}_t$ for all $t \ge 0\}$. (One must and can check that $\mathcal{F}_\sigma$ is a σ-field, and one has $\mathcal{F}_\sigma \subseteq \mathcal{F}_\tau$ for $\sigma \le \tau$.)

Def. We also need to define $M_\tau$, the value of $M$ at the stopping time $\tau$, by $(M_\tau)(\omega) := M_{\tau(\omega)}(\omega)$. Note that this implicitly assumes that we have a random variable $M_\infty$, because $\tau$ can take the value $+\infty$.

Def. (hitting times) One useful application of Prop. 2.2 is the computation of the Laplace transforms of certain hitting times. More precisely, let $W = (W_t)_{t \ge 0}$ be a BM and define for $a > 0$, $b > 0$ the stopping times $\tau_a := \inf\{t \ge 0 \mid W_t > a\}$ and $\sigma_{a,b} := \inf\{t \ge 0 \mid W_t > a + bt\}$.

3.3 Density processes / Girsanov's theorem

Density in discrete time
- Assume $(\Omega, \mathcal{F})$ and a filtration $\mathbb{F} = (\mathcal{F}_k)_{k=0,1,\dots,T}$ in finite discrete time. On $(\Omega, \mathcal{F})$, we have two probability measures $Q$ and $P$, and we assume $Q \approx P$.
- Radon-Nikodym theorem: there exists a density $D := \frac{dQ}{dP}$. This is a random variable $D > 0$ $P$-a.s. s.t. for all $A \in \mathcal{F}$ and for all random variables $Y \ge 0$ it holds that
$Q[A] = E_P[D I_A] \iff E_Q[Y] = E_P[YD]$.
- This can also be written as $\int_\Omega Y \, dQ = \int_\Omega YD \, dP$. This formula tells us how to compute $Q$-expectations in terms of $P$-expectations and vice versa.

Density process in discrete time
- Assume the same setting as before.
The density process $Z$ of $Q$ w.r.t. $P$, also called the $P$-martingale $Z$, is defined as
$Z_k := E_P[D|\mathcal{F}_k] = E_P\!\left[\frac{dQ}{dP} \,\middle|\, \mathcal{F}_k\right]$ for $k = 0, 1, \dots, T$.
- Then for every $\mathcal{F}_k$-measurable random variable $Y \ge 0$ or $Y \in L^1(Q)$, it holds that $E_Q[Y] = E_P[Y Z_k]$, and for every $k \in \{0, 1, \dots, T\}$ and any $A \in \mathcal{F}_k$, it holds that $Q[A] = E_P[Z_k I_A]$.
- Properties:
  – $Z_k$ is a random variable and $Z_k > 0$ $P$-a.s.
  – A process $N = (N_k)_{k=0,1,\dots,T}$ which is adapted to $\mathbb{F}$ is a $Q$-martingale iff the product $ZN$ is a $P$-martingale. (This tells us how martingale properties under $P$ and $Q$ are related to each other.)
- Bayes formula: if $j \le k$ and $U_k$ is $\mathcal{F}_k$-measurable and either $\ge 0$ or in $L^1(Q)$, then
$E_Q[U_k|\mathcal{F}_j] = \frac{1}{Z_j} E_P[Z_k U_k|\mathcal{F}_j]$ $Q$-a.s.
This tells us how conditional expectations under $Q$ and $P$ are related to each other.

Lem. 3.1
(i) For every $k \in \{0, 1, \dots, T\}$ and any $A \in \mathcal{F}_k$ or any $\mathcal{F}_k$-measurable random variable $Y \ge 0$ or $Y \in L^1(Q)$, we have $Q[A] = E_P[Z_k I_A]$ and $E_Q[Y] = E_P[Z_k Y]$, respectively. (This means that $Z_k$ is the density of $Q$ w.r.t. $P$ on $\mathcal{F}_k$.)
(ii) If $j \le k$ and $U_k$ is $\mathcal{F}_k$-measurable and either $\ge 0$ or in $L^1(Q)$, then we have the Bayes formula $E_Q[U_k|\mathcal{F}_j] = \frac{1}{Z_j} E_P[Z_k U_k|\mathcal{F}_j]$ $Q$-a.s. (This tells us how conditional expectations under $Q$ and $P$ are related to each other.) Written in terms of $D_k$, the Bayes formula for $j = k-1$ becomes $E_Q[U_k|\mathcal{F}_{k-1}] = E_P[D_k U_k|\mathcal{F}_{k-1}]$. This shows that the ratios $D_k$ play the role of "one-step conditional densities" of $Q$ with respect to $P$.
(iii) A process $N = (N_k)_{k=0,1,\dots,T}$ which is adapted to $\mathbb{F}$ is a $Q$-martingale iff the product $ZN$ is a $P$-martingale. (This tells us how martingale properties under $P$ and $Q$ are related to each other.)

Proof of Lem. 3.1
(i) $E_Q[Y] = E_P[YD] = E_P[E_P[YD|\mathcal{F}_k]] = E_P[Y E_P[D|\mathcal{F}_k]] = E_P[Y Z_k]$, where the third equality pulls the $\mathcal{F}_k$-measurable $Y$ out of the conditional expectation. $\square$
For the martingale equivalence (iii), using the Bayes formula (ii): if $N$ is a $Q$-martingale, then $N_j = E_Q[N_k|\mathcal{F}_j] = \frac{1}{Z_j} E_P[N_k Z_k|\mathcal{F}_j]$, so $E_P[N_k Z_k|\mathcal{F}_j] = N_j Z_j$, i.e. $NZ$ is a $P$-martingale. Conversely, if $NZ$ is a $P$-martingale, then $E_Q[N_k|\mathcal{F}_j] = \frac{1}{Z_j} E_P[N_k Z_k|\mathcal{F}_j] = \frac{1}{Z_j} N_j Z_j = N_j$, i.e. $N$ is a $Q$-martingale. $\square$

Def.
Dk := Zk/Zk−1, for k = 1, ..., T.
The process D is adapted, strictly positive and satisfies by its definition EP [Dk | Fk−1] = 1, because Z is a P -martingale.
Rem.
Again because Z is a martingale and by Lem. 3.1,
EP [Z0] = EP [ZT ] = EP [ZT IΩ] = Q[Ω] = 1 (by Lem. 3.1 (i)),
and we can recover Z from Z0 and D via
Zk = Z0 ∏kj=1 Dj , for k = 0, 1, ..., T.
Rem.
To construct an equivalent martingale measure for a given process S, all we need are an F0-measurable random variable Z0 > 0 P -a.s. with EP [Z0] = 1 and an adapted strictly positive process D = (Dk)k=1,...,T satisfying EP [Dk | Fk−1] = 1 for all k, and in addition
EP [Dk(Sk − Sk−1) | Fk−1] = 0 for all k.
Def. (i.i.d. returns)
S̃1k = S10 ∏kj=1 Yj , S̃0k = (1 + r)k,
where Y1, ..., YT are > 0 and i.i.d. under P .
Rem. (construction of an EMM Q)
Choose Dk, like Yk, independent of Fk−1. Then we must have Dk = gk(Yk) for some measurable function gk, and we have to choose gk in such a way that we get
1 = EP [Dk | Fk−1] = EP [gk(Yk)]
and
1 + r = EP [DkYk | Fk−1] = EP [Ykgk(Yk)].
(Note that these calculations both exploit the P -independence of Yk from Fk−1.) If this choice is possible, we can then choose all the gk ≡ g1, because the Yk are (assumed) i.i.d. under P . To ensure that Dk > 0, we can impose gk > 0.
Density process in continuous time
 Suppose we have P and a filtration F = (Ft)t≥0. Fix T ∈ (0,∞) and assume only that Q ≈ P on FT .
 If we have this for every T < ∞, we call Q and P locally equivalent and write Q loc≈ P . For an infinite horizon, this is usually strictly weaker than Q ≈ P .
Def. (density process)
Let us for simplicity fix T ∈ (0,∞) and suppose that Q ≈ P on FT . Denote by
Zt := EP [dQ|FT /dP |FT | Ft] for 0 ≤ t ≤ T
the density process of Q w.r.t. P on [0, T ], choosing an RCLL version of this P -martingale on [0, T ].
Rem.
Since Q ≈ P on FT , we have Z > 0 on [0, T ], and because Z is a P -(super)martingale, this implies that also Z− > 0 on [0, T ], by the so-called minimum principle for supermartingales.
Lem. 2.1
Suppose that Q ≈ P on FT . Then
(i) For s ≤ t ≤ T and every Ut which is Ft-measurable and either ≥ 0 or in L1(Q), we have the Bayes formula
EQ[Ut | Fs] = (1/Zs) EP [ZtUt | Fs] Q-a.s.
(ii) An adapted process Y = (Yt)0≤t≤T is a (local) Q-martingale iff the product ZY is a (local) P -martingale.
Rem.
 If Q loc≈ P , we can use Lem. 2.1 for any T < ∞ and hence obtain a statement for processes Y = (Yt)t≥0 on [0,∞).
 One consequence of part (ii) of Lem. 2.1 is also that 1/Z is a Q-martingale, on [0, T ] if Q ≈ P on FT , or even on [0,∞) if Q loc≈ P .
Rem.
Suppose that Q loc≈ P with density process Z; then Q loc≈ P implies that Z is a local martingale.
Thm. 2.2 (Girsanov)
(i) Suppose that Q loc≈ P with density process Z. If M is a local P -martingale null at 0, then
M̃ := M − ∫ (1/Z) d[Z,M ]
is a local Q-martingale null at 0.
(ii) In particular, every P -semimartingale is also a Q-semimartingale (and vice-versa, by symmetry).
Def. (product rule)
• ZM = ∫ Z− dM + ∫ M− dZ + [Z,M ]
• d(ZM) = Z− dM + M− dZ + d[Z,M ]
Lem.
If 〈M〉 and 〈Z〉 exist, then 〈M,Z〉 is also of finite variation.
Rem.
When [Z,M ] is of finite variation, the following holds:
[1/Z, [Z,M ]] = ∑ ∆(1/Z) ∆[Z,M ] = ∫ ∆(1/Z) d[Z,M ].
Def. (alternative stochastic Itô integral definition)
For all V "that are nice enough",
〈∫ v dW, V 〉 = ∫ v d〈W,V 〉.
Thm. 2.3 (Girsanov (continuous version))
 Suppose that Q loc≈ P with continuous density process Z. Write Z = Z0E(L). If M is a local P -martingale null at 0, then
M̃ := M − [L,M ] = M − 〈L,M〉
is a local Q-martingale null at 0.
 Moreover, if W is a P -BM, then W̃ is a Q-BM.
 In particular, if L = ∫ ν dW for some ν ∈ L2loc(W ), then
W̃ = W − 〈∫ ν dW,W 〉 = W − ∫ νs ds,
so that the P -BM W = W̃ + ∫ νs ds becomes under Q a BM with (instantaneous) drift ν.

If we have a closer look at W ∗ defined in the Black-Scholes chapter, we see that W ∗ is a Brownian motion under the probability measure Q∗ given by
dQ∗/dP := E(−∫ λdW )T = exp(−λWT − ½λ2T ),
whose density process w.r.t. P is
Z∗t = E(−∫ λdW )t = exp(−λWt − ½λ2t) for 0 ≤ t ≤ T.
4 Stochastic Integration and Calculus
4.1 Brownian motion
Brownian motion
Preliminary
Throughout this chapter, we work on a probability space (Ω,F , P ). In particular, Ω cannot be finite or countable. A filtration F = (Ft) in continuous time is, like in discrete time, a family of σ-fields Ft ⊆ F with Fs ⊆ Ft for s ≤ t. The time parameter runs either through t ∈ [0, T ] with fixed time horizon T ∈ (0,∞) or through t ∈ [0,∞). In the latter case, we define
F∞ := ∨t≥0 Ft := σ(∪t≥0 Ft).
For technical reasons, we should also assume that F satisfies the so-called usual conditions of being right-continuous (RC) and P -complete.
Def. (Brownian motion)
A Brownian motion w.r.t. P and a filtration F = (Ft)t≥0 is a (real-valued) stochastic process W = (Wt)t≥0 which satisfies the following properties:
(BM0) null at zero
W is adapted to F and null at 0 (i.e. W0 ≡ 0, P -a.s.).
(BM1) independent and stationary increments
For s ≤ t, the increment Wt −Ws is independent (under P ) of Fs and satisfies under P : Wt −Ws ∼ N (0, t− s).
(BM2) continuous sample paths
W has continuous trajectories, i.e. for P -a.a. ω ∈ Ω, the function t 7→ Wt(ω) on [0,∞) is continuous.
Remarks:
 Brownian motion in Rm is simply an adapted Rm-valued stochastic process null at 0 and with the increment Wt −Ws having the normal distribution N (0, (t− s)Im×m), where Im×m denotes the identity matrix.
 Script version: Brownian motion in Rm is simply an adapted Rm-valued stochastic process null at 0 with (BM2) and s.t.
(BM1) holds with N (0, t− s) replaced by N (0, (t− s)Im×m), where Im×m denotes the m×m identity matrix.
Def. (alternative definition of BM without any filtration)
There is also a definition of BM without any filtration F. This is a (real-valued) stochastic process W = (Wt)t≥0 which starts at 0, satisfies (BM2) and instead of (BM1) the following property:
(BM1′) For any n ∈ N and any times 0 = t0 < t1 < ... < tn < ∞, the increments Wti −Wti−1 , i = 1, ..., n, are independent (under P ) and we have (under P ) that Wti −Wti−1 ∼ N (0, ti − ti−1), or N (0, (ti − ti−1)Im×m) if W is Rm-valued.
Instead of (BM1′), one also says (in words) that W has independent stationary increments with a (specific) normal distribution.
Transformations of BM
Prop. 1.1
Suppose W = (Wt)t≥0 is a BM. Then:
(i) W 1 := −W is a BM.
(ii) Restarting at a fixed time T : W 2t := WT+t −WT for t ≥ 0 is a BM for any T ∈ (0,∞).
(iii) Rescaling in space and time: W 3t := cW t/c2 for t ≥ 0 is a BM for any c ∈ R, c ≠ 0.
(iv) Time-reversal: W 4t := WT−t −WT for 0 ≤ t ≤ T is a BM on [0, T ] for any T ∈ (0,∞).
(v) Inversion of small and large times: W 5t := tW1/t for t > 0 and W 50 := 0 is a BM.
(vi) W 6t := (Wt)2 − t = 2 ∫ t0 Ws dWs, t ≥ 0, is a martingale.
(vii) W 7t := exp(αWt − ½α2t) for t ≥ 0 is a martingale, for any α ∈ R.
Note that we always use here the definition of BM without an exogenous filtration.
Laws on BM
The next result gives some information about how trajectories of BM behave.
Prop. 1.2
Suppose W = (Wt)t≥0 is a BM. Then:
(i) Law of large numbers:
limt→∞ Wt/t = 0, P -a.s.,
i.e. BM grows asymptotically less than linearly (as t→∞).
(ii) (Global) law of the iterated logarithm (LIL): With ψglob(t) := √(2t log(log t)), it holds that:
lim supt→∞ Wt/ψglob(t) = +1, P -a.s.,
lim inft→∞ Wt/ψglob(t) = −1, P -a.s.,
i.e. for P -a.a. ω, the function t 7→ Wt(ω) for t→∞ oscillates precisely between t 7→ ±ψglob(t).
(iii) (Local) law of the iterated logarithm (LIL): With ψloc(h) := √(2h log(log(1/h))), it holds for t ≥ 0 that:
lim suph↘0 (Wt+h −Wt)/ψloc(h) = +1, P -a.s.,
lim infh↘0 (Wt+h −Wt)/ψloc(h) = −1, P -a.s.,
i.e. for P -a.a. ω, to the right of t, the trajectory u 7→ Wu(ω) around the level Wt(ω) oscillates precisely between h 7→ ±ψloc(h).
Prop. 1.3
Suppose W = (Wt)t≥0 is a BM. Then for P -a.a. ω ∈ Ω, the function t 7→ Wt(ω) from [0,∞) to R is continuous, but nowhere differentiable.
Def. (partition)
Call a partition of [0,∞) any set Π ⊆ [0,∞) of time points with 0 ∈ Π and Π ∩ [0, T ] finite for all T ∈ [0,∞). This implies that Π is at most countable and can be ordered increasingly as
Π = {0 = t0 < t1 < ... < tm < ... < ∞}.
Def. (mesh size, sum)
The mesh size of Π is then defined as
|Π| := sup{ti − ti−1 | ti−1, ti ∈ Π},
i.e. the size of the biggest time-step in Π. For any partition Π of [0,∞),
QΠT := ∑ti∈Π (Wti∧T −Wti−1∧T )2
is then the sum up to time T of the squared increments of BM along Π. We expect, at least for |Π| very small so that time points are close together, that (Wti∧T −Wti−1∧T )2 ≈ ti ∧ T − ti−1 ∧ T and hence
QΠT ≈ ∑ti∈Π (ti ∧ T − ti−1 ∧ T ) = T for |Π| small.
Thm. 1.4 (quadratic variation)
Suppose W = (Wt)t≥0 is a BM. For any sequence (Πn)n∈N of partitions of [0,∞) which is refining (i.e. Πn ⊆ Πn+1 ∀n) and satisfies limn→∞ |Πn| = 0, we then have
P [limn→∞ QΠnt = t, for every t ≥ 0] = 1.
 For every H ∈ bE, the stochastic integral process H ·M = ∫ HdM is then also a square-integrable martingale,
• [H ·M ]t = [∫ HdM ]t = ∫ t0 H2s d[M ]s,
and we have the isometry property
• E[(H ·M∞)2] = E[(∫∞0 HsdMs)2] = E[(∑n−1i=0 hi(Mti+1 −Mti ))2] (∗)= E[∑n−1i=0 h2i ([M ]ti+1 − [M ]ti )] = E[∫∞0 H2s d[M ]s].
Note that the last d[M ]-integral can be defined ω by ω, since t 7→ [M ]t(ω) is increasing and hence of finite variation.

But of course it is here also just a finite sum, because H has such a simple form.
 (∗): Below we show the reasoning why this holds. From Thm. 1.1 we know that (M2t − [M ]t)t≥0 is a local martingale (here in Lem. 1.2 it is even a martingale)
⇒ E[M2ti+1 − [M ]ti+1 | Fti ] = E[M2ti+1 | Fti ]− E[[M ]ti+1 | Fti ] = M2ti − [M ]ti (martingale property)
⇐⇒ E[M2ti+1 −M2ti | Fti ]− E[[M ]ti+1 − [M ]ti | Fti ] = 0 (since M2ti , [M ]ti are Fti -measurable)
⇐⇒ E[M2ti+1 −M2ti | Fti ] = E[[M ]ti+1 − [M ]ti | Fti ].
Rem.
(i) The argument in the proof of Lem. 1.2 actually shows that the process (H ·M)2 − ∫ H2d[M ] is a martingale.
(ii) It is "not very difficult" to argue that ∆(∫ H2d[M ]) = (∆(H ·M))2 for H ∈ bE, by exploiting that H is piecewise constant and ∆[M ] = (∆M)2.
(iii) In view of Thm. 1.1 and the uniqueness there, the combination of these two properties can also be formulated as saying that
[H ·M ] = [∫ HdM ] = ∫ H2d[M ], for H ∈ bE.
(iv) This is the proof for (ii):
∆(∫ H2r d[M ]r)t = ∫ t0 H2r d[M ]r − lims↗t ∫ s0 H2r d[M ]r
= ∑n−1i=0 h2i ([M ]ti+1∧t − [M ]ti∧t)− lims↗t ∑n−1i=0 h2i ([M ]ti+1∧s − [M ]ti∧s)
= h2i ([M ]t − lims↗t [M ]s) (for ti ≤ s < t ≤ ti+1)
= h2i ∆[M ]t = h2i (∆Mt)2 (using ∆[M ] = (∆M)2)
= (hi∆Mt)2 = (hi(Mt − lims↗t Ms))2
= (∑n−1i=0 hi(Mti+1∧t −Mti∧t)− lims↗t ∑n−1i=0 hi(Mti+1∧s −Mti∧s))2 (for ti ≤ s < t ≤ ti+1)
= (H ·Mt − lims↗t H ·Ms)2 = (∆(H ·M)t)2 □
Def. (product space)
Ω̄ := Ω× (0,∞).
Def. (predictable σ-field)
(i) We define the predictable σ-field P on Ω̄ as the σ-field generated by all adapted left-continuous processes.
(ii) We call a stochastic process H = (Ht)t>0 predictable if it is P-measurable when viewed as a mapping H : Ω̄→ R.
Note, as a consequence, every H ∈ bE is then predictable since it is adapted and left-continuous.
Def.
We define the (possibly infinite) measure PM := P ⊗ [M ] on (Ω̄,P) by setting
EM [Y ] := E[∫∞0 Ys(ω) d[M ]s(ω)], for Y ≥ 0 predictable;
the inner integral is defined ω-wise as a Lebesgue-Stieltjes integral because t 7→ [M ]t(ω) is increasing, null at 0 and RCLL and so can be viewed as the distribution function of a (possibly infinite) ω-dependent measure on (0,∞).
Def.
We introduce the space
L2(M) := L2(M,P ) := L2(Ω̄,P, PM ) = {all (equivalence classes of) predictable H = (Ht)t>0 s.t. ‖H‖L2(M) := (EM [H2])1/2 = (E[∫∞0 H2s d[M ]s])1/2 < ∞}.
(As usual, by taking equivalence classes, we identify H and H′ if they agree PM -a.e., i.e. PM -almost everywhere, on Ω̄.)
Def.
We define the space M20 as the space of all RCLL martingales N = (Nt)t≥0 null at 0 which satisfy supt≥0 E[N2t ] < ∞.
Lem. 1.2 (restated with the above notations)
For a fixed square-integrable martingale M , the mapping H 7→ H ·M is linear and goes from bE to the space M20 of all RCLL martingales N = (Nt)t≥0 null at 0 which satisfy supt≥0 E[N2t ] < ∞.
Lem. (Doob's inequality)
The last assertion is true because each H ·M remains constant after some tn given by H ∈ bE, and because Doob's inequality gives for any martingale N and any t ≥ 0 that
E[sup0≤s≤t |Ns|2] ≤ 4E[|Nt|2].
Rem.
(i) Now the martingale convergence theorem implies that each N ∈ M20 admits a limit N∞ = limt→∞ Nt P -a.s., and we have N∞ ∈ L2 by Fatou's lemma, and the process (Nt)0≤t≤∞ defined up to ∞, i.e. on the closed interval [0,∞], is still a martingale.
(ii) Doob's maximal inequality implies that two martingales N and N ′ which have the same final value, i.e. N∞ = N ′∞ P -a.s., must coincide. Therefore we can identify N ∈ M20 with its limit N∞ ∈ L2(F∞, P ), and so M20 becomes a Hilbert space with the norm
‖N‖M20 = ‖N∞‖L2 = (E[N2∞])1/2
and the scalar product
(N,N ′)M20 = (N∞, N ′∞)L2 = E[N∞N ′∞].
Cor.
Because of the above remark, the mapping H 7→ H ·M from bE to M20 is linear and an isometry because of Lem.
1.2 says that for H ∈ bE,
‖H ·M‖M20 = (E[(H ·M∞)2])1/2 = (E[∫∞0 H2s d[M ]s])1/2 = ‖H‖L2(M).
Rem.
By general principles, this mapping can therefore be uniquely extended to the closure of bE in L2(M). In other words, we can define
This can be done by using the pro- perties of stachastic integrals.  Of course, H ·M is then in M20,loc. Rem. If M is Rd-valued with components M i that are all in M20,loc, one can also define the so-called vector stochastic integral H · M for Rd-valued predictable processes in a suitable space L2loc(M); the re- sult is then a real-valued process. However, one warning is indicated: L2loc(M) is not obtained by just asking that each component H i should be in L2loc(M i) and then setting H · M = ∑ i Hi · M i. In fact, it can happen that H ·M is well defined whereas the individual Hi ·M i are not. So the intuition for the multidimensional case is that∫ HdM = ∫ ∑ i HidM i 6= ∑ i ∫ HidM i. Def. (continuous local martingale, locally bounded) (i) M is a continuous local martingale null at 0, briefly written as M ∈Mc0,loc. This includes in particular the case of a Brownian motion W . (ii) Then M is in M20,loc because it is even locally bounded : For the stopping times τn := inf{t ≥ 0 | |Mt| > n} ↗ is∞ P -a.s., We have by continuity that |Mτn | ≤ n for each n, because |Mτnt | = |Mt∧τn | = { |Mt| ≤ n, t ≤ τn |Mτn | = n, t > τn. (iii) The set L2loc(M) of nice integrands for M can here be explicitly described as L2loc = {all predictable processes H = (Ht)t>0 s.t.∫ t 0 H2sd〈M〉s <∞, P -a.s. ∀t ≥ 0 } . Finally, the resulting stochastic integral H ·M = ∫ HdM is then, also a continuous local martingale, and of course null at 0. Properties  (Local) Martingale properties – If M is a local martingale and H ∈ L2loc(M), then ∫ HdM is a local martingale in M20,loc. If H ∈ L 2(M), then ∫ HdM is even a martingale in M20. – If M is a local martingale and H is predictable and locally bounded (∗), then ∫ HdM is a local martingale. (∗) : (which means that there are stopping times τn ↗ ∞ P - a.s. s.t. HIK0,τnK is bounded by a constant cn, say, for each n ∈ N) – If M is a martingale inM20 and H is predictable and bounded, then ∫ HdM is again a martingale in M20. 
– Warning: If M is a martingale and H is predictable and bounded, then ∫ HdM need not be a martingale; this is in striking contrast to the situation in discrete time.
 Linearity
If M is a local martingale and H,H′ are in L2loc(M) and a, b ∈ R, then also aH + bH′ is in L2loc(M) and
(aH + bH′) ·M = (aH) ·M + (bH′) ·M = a(H ·M) + b(H′ ·M).
 Associativity
If M is a local martingale and H ∈ L2loc(M), then we already know that H ·M is again a local martingale. Then a predictable process K is in L2loc(H ·M) iff KH is in L2loc(M), and then
K · (H ·M) = (KH) ·M, i.e. ∫ Kd(∫ HdM) = ∫ KHdM.
 Behaviour under stopping
– Suppose that M is a local martingale, H ∈ L2loc(M) and τ is a stopping time. Then Mτ is a local martingale by the stopping theorem, H is in L2loc(Mτ ), HIK0,τK is in L2loc(M), and we have
(H ·M)τ = H · (Mτ ) = (HIK0,τK) ·M = (HIK0,τK) · (Mτ ).
– In words: A stopped stochastic integral is computed by either first stopping the integrator and then integrating, or setting the integrand equal to 0 after the stopping time and then integrating, or combining the two.
 Quadratic variation and covariation
– Suppose that M,N are local martingales, H ∈ L2loc(M) and K ∈ L2loc(N). Then
[∫ HdM,N ] = ∫ Hd[M,N ]
and
[∫ HdM, ∫ KdN ] = ∫ HKd[M,N ].
– The covariation process of two stochastic integrals is obtained by integrating the product of the integrands w.r.t. the covariation process of the integrators.
– In particular, [∫ HdM ] = ∫ H2d[M ]. (We have seen this already for H ∈ bE in the remark after Lem. 1.2.)
 Jumps
Suppose M is a local martingale and H ∈ L2loc(M). Then we already know that H ·M is in M20,loc and therefore RCLL. Its jumps are given by
∆(∫ HdM)t = Ht∆Mt, for t > 0,
where ∆Yt := Yt − Yt− again denotes the jump at time t of a process Y with trajectories which are RCLL.
4.5 Extension to semimartingales
Def.
(semimartingale, special semimartingale)
(i) A semimartingale is a stochastic process X = (Xt)t≥0 that can be decomposed as X = X0 +M +A, where M is a local martingale null at 0 and A is an adapted process null at 0 and having RCLL trajectories of finite variation.
(ii) A semimartingale X is called special if there is such a decomposition where A is in addition predictable.
Rem. (canonical decomposition, continuous semimartingale, optional quadratic variation)
(i) If X is a special semimartingale, the decomposition with A predictable is unique and called the canonical decomposition. The uniqueness result uses that any local martingale which is predictable and of finite variation must be constant.
(ii) If X is a continuous semimartingale, both M and A can be chosen continuous as well. Therefore X is special because A is then predictable, since A is adapted and continuous.
(iii) If X is a semimartingale, then we define its optional quadratic variation or square bracket process [X] = ([X]t)t≥0 via
[X] := [M ] + 2[M,A] + [A] = [M ] + 2∑∆M∆A+ ∑(∆A)2.
One can show that this is well defined and does not depend on the chosen decomposition of X. Moreover, [X] can also be obtained as a quadratic variation similarly as in Thm. 1.1. However, X2 − [X] is no longer a local martingale, but only a semimartingale in general.
Def. (stochastic integral for semimartingales)
If X is a semimartingale, we can define a stochastic integral H ·X = ∫ HdX at least for any process H which is predictable and locally bounded. We simply set
H ·X := H ·M +H ·A,
where H ·M is as in the previous section and H ·A is defined ω-wise as a Lebesgue-Stieltjes integral.
Properties
Rem.
 The resulting stochastic integral then has all the properties from the previous section except those that rest in an essential way on the (local) martingale property.
 The isometry property for example is of course lost.
 We still have, for H predictable and locally bounded:
– H ·X is a semimartingale.
– If X is special with canonical decomposition X = X0 +M +A, then H ·X is also special, with canonical decomposition H ·X = H ·M +H ·A. (This uses the non-obvious fact that if A is predictable and of finite variation and H is predictable and locally bounded, the pathwise defined integral H ·A can be chosen to be predictable again.)
– Linearity: same formula as before.
– Associativity: same formula as before.
– Behaviour under stopping: same formula as before.
– Quadratic variation and covariation: same formula as before.
– Jumps: same formula as before.
– If X is continuous, then so is H ·X; this is clear from ∆(H ·X) = H∆X = 0.
Thm. (sort of dominated convergence theorem, continuity property)
(i) If Hn, n ∈ N, are predictable processes with Hn → 0 pointwise on Ω̄ and |Hn| ≤ |H| for some locally bounded H, then Hn ·X → 0 uniformly on compacts in probability, which means that
sup0≤s≤t |(Hn ·X)s| → 0 in probability as n→∞, ∀t ≥ 0.
(ii) This can also be viewed as a continuity property of the stochastic integral operator H 7→ H ·X, since (pointwise and locally bounded) convergence of (Hn) implies convergence of (Hn ·X), in the sense of the above formula.
Rem. (further properties)
(i) If X is a semimartingale and f is a C2-function, then f(X) is again a semimartingale.
(ii) If X is a semimartingale w.r.t. P and R is a probability measure equivalent to P , then X is a semimartingale w.r.t. R. This will follow from Girsanov's theorem, which even gives a decomposition of X under R.
(iii) If X is any adapted process with RC trajectories, we can always define the (elementary) stochastic integral H ·X for processes H ∈ bE. If X is s.t. this mapping on bE also has the continuity property from the above Thm. for any sequence (Hn)n∈N ∈ bE converging pointwise to 0 and with |Hn| ≤ 1 for all n, then X must in fact be a semimartingale.
Rem.
The above result implies that if we start with any model where S is not a semimartingale, there will be arbitrage of some kind.
Lem.
The family of semimartingales is invariant under a transformation by a C2-function, i.e. f(X) is a semimartingale whenever X is a semimartingale and f ∈ C2.
4.6 Stochastic calculus
Good to know
Def. (weak convergence of probability measures)
Suppose (µj ) is a sequence of measures on R. By the definition of weak convergence of measures, µj converges weakly to µ means that for any bounded continuous function f , there holds
∫R f dµj −→ ∫R f dµ as j →∞.
Thm. (Wiki: Lebesgue's Dominated Convergence Theorem)
Let {fn} be a sequence of real-valued measurable functions on a measure space (S,Σ, µ). Suppose that the sequence converges pointwise to a function f and is dominated by some integrable function g in the sense that |fn(x)| ≤ g(x) for all n in the index set of the sequence and all x ∈ S. Then f is integrable and
limn→∞ ∫S |fn − f |dµ = 0,
which also implies
limn→∞ ∫S fndµ = ∫S fdµ.
Rem.
The statement "g is integrable" is meant in the sense of Lebesgue, i.e. ∫S |g|dµ < ∞.
Lem.
One can use that any continuous local martingale of finite variation is constant.
Throughout this chapter
We work on a probability space (Ω,F , P ) with a filtration F = (Ft) satisfying the usual conditions of right-continuity and P -completeness. For all local martingales, we then can and tacitly do choose a version with RCLL trajectories. For the time parameter t, we have either t ∈ [0, T ] with a fixed time horizon T ∈ (0,∞) or t ≥ 0. In the latter case, we set
F∞ := ∨t≥0 Ft := σ(∪t≥0 Ft).
General properties/results
 Any continuous, adapted process H is also predictable and locally bounded. It furthermore holds for any predictable, locally bounded process H that H ∈ L2loc(W ).
 Let f : R→ R be an arbitrary continuous convex function. Then the process (f(Wt))t≥0 is integrable and is a (P,F)-submartingale.
Given a (P,F)-martingale (Mt)t≥0 and a measurable function g : R+ → R, the process (Mt + g(t))t≥0 is a:
– (P,F)-supermartingale iff g is decreasing;
– (P,F)-submartingale iff g is increasing.
 A continuous local martingale of finite variation is identically constant (and hence vanishes if it is null at 0).
 For a function f : R→ R in C1, the stochastic integral ∫ ·0 f ′(Ws)dWs is a continuous local martingale. Furthermore, for f ∈ C2 it holds that f(W ) is a continuous local martingale iff ∫ ·0 f ′′(Ws)ds = 0.
 If a predictable process H = (Ht)t≥0 satisfies
E[∫ T0 H2s ds] < ∞, ∀T ≥ 0,
then ∫ HdW is a square-integrable martingale.
 If f : R→ R is bounded and continuous, then the stochastic integral ∫ f(W )dW is a square-integrable martingale.
 If a process H = (Ht)t≥0 is predictable and the map s 7→ E[H2s ] is continuous, then the stochastic integral ∫ HdW is a square-integrable martingale.
 If f : R→ R is polynomial, then the stochastic integral ∫ f(W )dW is a square-integrable martingale.
5 Black-Scholes Formula
5.1 Black-Scholes (BS) model
Rem.
The Black-Scholes model or Samuelson model is the continuous-time analogue of the Cox-Ross-Rubinstein binomial model we have seen at length in earlier chapters.
Def.
Throughout this chapter we will use the following setting: a fixed time horizon T ∈ (0,∞) and a probability space (Ω,F , P ) on which there is a Brownian motion W = (Wt)0≤t≤T . We take as filtration F = (Ft)0≤t≤T the one generated by W and augmented by the P -nullsets of F0T := σ(Ws; s ≤ T ), s.t. F = FW satisfies the usual conditions under P .
Def. (undiscounted financial market model)
The financial market model has two basic traded assets: a bank account with constant continuously compounded interest rate r ∈ R, and a risky asset (usually called stock) having two parameters µ ∈ R and σ > 0. Undiscounted prices are given by
S̃0t = ert,
S̃1t = S10 exp(σWt + (µ− ½σ2)t)
with a constant S10 > 0.
Cor.
Applying Itô's formula to the above equations yields
dS̃0t = S̃0t rdt,
dS̃1t = S̃1t µdt+ S̃1t σdWt,
which can be rewritten as
dS̃0t /S̃0t = rdt, dS̃1t /S̃1t = µdt+ σdWt.
This means that the bank account has a relative price change (S̃0t+dt − S̃0t )/S̃0t of rdt over a short time period (t, t+ dt]; so r is the growth rate of the bank account. In the same way, the relative price change of the stock has a part µdt giving a growth at rate µ, and a second part σdWt "with mean 0 and variance σ2dt" that causes random fluctuations. We call µ the drift (rate) and σ the (instantaneous) volatility of S̃1.
Def. (discounted financial market model)
We pass to quantities discounted with S̃0; so we have S0 = S̃0/S̃0 ≡ 1, and S1 = S̃1/S̃0 is by the undiscounted financial market model given by
S1t = S10 exp(σWt + (µ− r − ½σ2)t).
We obtain via Itô's formula that S1 solves the SDE
dS1t = S1t ((µ− r)dt+ σdWt).
For later use, we observe that this gives d〈S1〉t = (S1t )2σ2dt for the quadratic variation of S1, since 〈W 〉t = t.
Rem.
As in discrete time, we should like to have an equivalent martingale measure for the discounted stock price process S1. To get an idea how to find this, we rewrite
dS1t = S1t ((µ− r)dt+ σdWt)
⇔ dS1t = S1t σ(dWt + ((µ− r)/σ)dt) = S1t σdW ∗t ,
with W ∗ = (W ∗t )0≤t≤T defined by
W ∗t := Wt + ((µ− r)/σ)t = Wt + ∫ t0 λds for 0 ≤ t ≤ T.
Def. (market price of risk or Sharpe ratio)
The quantity λ := (µ− r)/σ is often called the instantaneous market price of risk or infinitesimal Sharpe ratio of S1.
Rem.
(mean portfolio return − risk-free rate)/(standard deviation of portfolio return) = Sharpe ratio.
Rem.
By looking at Girsanov's theorem, we see that W ∗ is a Brownian motion under the probability measure Q∗ given by
dQ∗/dP := E(−∫ λdW )T = exp(−λWT − ½λ2T ),
whose density process w.r.t. P is
Z∗t = E(−∫ λdW )t = exp(−λWt − ½λ2t) for 0 ≤ t ≤ T.
Rem.
By dS1t = S1t σ(dWt + ((µ− r)/σ)dt) = S1t σdW ∗t , the stochastic integral process
S1t = S10 + ∫ t0 S1uσdW ∗u
is then a continuous local Q∗-martingale like W ∗; it is even a Q∗-martingale since we get
S1t = S10 exp(σWt + (µ− r − ½σ2)t) ⇔ S1t = S10 exp(σW ∗t − ½σ2t)
by Itô's formula, and so we can use Proposition 2.2 under Q∗.
Rem. (unique equivalent martingale measure)
In the script on pages 118 and 119 we have shown that in the Black-Scholes model, there is a unique equivalent martingale measure, which is given explicitly by Q∗. So we expect that the Black-Scholes model is not only "arbitrage-free", but also "complete" in a suitable sense.
Def.
(i) Take any H ∈ L0+(FT ) and view H as a random payoff (in discounted units) due at time T . Recall that F is generated by W and that W ∗t = Wt + λt, 0 ≤ t ≤ T , is a Q∗-Brownian motion.
(ii) Because λ is deterministic, W and W ∗ generate the same filtration, and so we can also apply Itô's representation theorem with Q∗ and W ∗ instead of P and W . So if H is also in L1(Q∗), the Q∗-martingale V ∗t := EQ∗ [H|Ft], 0 ≤ t ≤ T , can be represented as
V ∗t = EQ∗ [H] + ∫ t0 ψHs dW ∗s for 0 ≤ t ≤ T,
with some unique ψH ∈ L2loc(W ∗) s.t. ∫ ψHdW ∗ is a Q∗-martingale.
Rem. (trading strategy, self-financing)
If we define for 0 ≤ t ≤ T
ϑHt := ψHt /(S1t σ), ηHt := V ∗t − ϑHt S1t
(which are both predictable because ψH is), then we can interpret ϕH = (ϑH , ηH ) as a trading strategy whose discounted value process is given by Vt(ϕH ) = ϑHt S1t + ηHt S0t = V ∗t for 0 ≤ t ≤ T , and which is self-financing in the (usual) sense that
Vt(ϕH ) = V ∗t = V ∗0 + ∫ t0 ψHu dW ∗u = V0(ϕH ) + ∫ t0 ϑHu dS1u, 0 ≤ t ≤ T.
Moreover, VT (ϕH ) = V ∗T = H a.s. shows that the strategy ϕH replicates H, and
∫ ϑHdS1 = V (ϕH )− V0(ϕH ) = V ∗ − EQ∗ [H] ≥ −EQ∗ [H]
(because V ∗ ≥ 0, since H ≥ 0) shows that ϑH is admissible (for S1) in the usual sense.
Rem.
 In summary, every H ∈ L1+(FT , Q∗) is attainable in the sense that it can be replicated by a dynamic strategy trading in the stock and the bank account in such a way that the strategy is self-financing and admissible, and its value process is a Q∗-martingale.
 In that sense, we can say that the Black-Scholes model is complete.
 By the same arguments as in discrete time, we then also obtain the arbitrage-free value at time t of any payoff H ∈ L1+(FT , Q∗) as its conditional expectation V Ht = V ∗t = EQ∗ [H|Ft] under the unique equivalent martingale measure Q∗ for S1.
 This is in perfect parallel to the results we have seen for the CRR binomial model.
Rem.
(i) All the above computations and results are in discounted units.
(ii) Itô's representation theorem gives the existence of a strategy, but does not tell us how it looks.
(iii) The SDE dS1t = S1t ((µ− r)dt+ σdWt) for discounted prices is
dS1t /S1t = (µ− r)dt+ σdWt,
and this is rather restrictive since µ, r, σ are all constant. An obvious extension is to allow the coefficients µ, r, σ to be (suitably integrable) predictable processes, or possibly functionals of S or S̃. This brings up several issues which are listed in the script on page 121.
BS model (undiscounted, historical measure P)
S̃0t = ert, S̃1t = S̃10 exp(σWt + (µ− ½σ2)t)
dS̃0t /S̃0t = rdt, dS̃1t /S̃1t = µdt+ σdWt
BS model (discounted, historical measure P)
S0t = 1, S1t = S10 exp(σWt + (µ− r − ½σ2)t)
dS1t /S1t = (µ− r)dt+ σdWt
BS model (discounted, risk-neutral measure Q∗)
dS1t = S1t σ(dWt + ((µ− r)/σ)dt) = S1t σdW ∗t
S1t = S10 + ∫ t0 S1uσdW ∗u = S10 exp(σW ∗t − ½σ2t),
where W ∗t := Wt + ((µ− r)/σ)t = Wt + ∫ t0 λds.
Market price of risk
 The market price of risk or infinitesimal Sharpe ratio of S1 is defined as
λ = (µ− r)/σ.
5.2 Markovian payoffs and PDEs
Rem. (martingale approach, PDE approach)
 The presentation in the previous subsection is often called the martingale approach to valuing options, for obvious reasons.
 If one has more structure for the payoff H, an alternative method uses partial differential equations (PDEs) and is therefore called the PDE approach.
Def. (used throughout this subsection)
Suppose the (discounted) payoff is of the form H = h(S^1_T) for some measurable function h ≥ 0 on R_+. We also suppose that H is in L^1(Q^*); for a European call, H = (S̃^1_T − K̃)^+ / S̃^0_T = (S^1_T − K̃e^{−rT})^+.
Rem. (value process)
We start with the value process. Since V^*_t = E_{Q^*}[H | F_t] = E_{Q^*}[h(S^1_T) | F_t], we look at S^1_t = S^1_0 exp(σW^*_t − ½σ²t) and write
S^1_T = S^1_t (S^1_T / S^1_t) = S^1_t exp(σ(W^*_T − W^*_t) − ½σ²(T − t)).
In the last expression, the first factor S^1_t is obviously F_t-measurable. Moreover, W^* is a Q^*-Brownian motion w.r.t. F, so in the second factor, W^*_T − W^*_t is under Q^* independent of F_t and has an N(0, T − t)-distribution.
Def. Therefore we get
V^*_t = E_{Q^*}[h(S^1_T) | F_t] = v(t, S^1_t)
with the function v(t, x) given by
v(t, x) = E_{Q^*}[h(x exp(σ(W^*_T − W^*_t) − ½σ²(T − t)))]
= E_{Q^*}[h(x e^{σ√(T−t) Y − ½σ²(T−t)})]
= ∫_{−∞}^{∞} h(x e^{σ√(T−t) y − ½σ²(T−t)}) (1/√(2π)) e^{−½y²} dy,
where Y ∼ N(0, 1) under Q^*.
Rem. This already gives a fairly precise structural description of V^*_t as a function of t and S^1_t, instead of a general F_t-measurable random variable.
Rem. (strategy)
As explained in the script on page 123,
dV^*_t = v_x(t, S^1_t) dS^1_t + (v_t(t, S^1_t) + ½ v_xx(t, S^1_t) σ²(S^1_t)²) dt
and
V_t(ϕ^H) = V^*_t = V_0(ϕ^H) + ∫_0^t ϑ^H_u dS^1_u
yield v_x(t, S^1_t) dS^1_t = dV^*_t = ϑ^H_t dS^1_t, so that we obtain the strategy explicitly as
ϑ^H_t = (∂v/∂x)(t, S^1_t),
i.e. as the spatial derivative of v, evaluated along the trajectories of S^1.
Def. (discounted PDE)
In fact, the vanishing of the dt-term means that the function v_t(t, x) + ½ v_xx(t, x) σ²x² must vanish along the trajectories of the space-time process (t, S^1_t)_{0<t<T}.
Steven Battilana p 21 / 23 Mathematical Foundations for Finance
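The integral for v(t, x) can be evaluated directly by quadrature against the N(0, 1) density. A minimal sketch with illustrative parameters (sigma, tau, K and the truncation range are my own choices), taking h to be the discounted call payoff h(s) = (s − K)^+:

```python
import math
import numpy as np

# Quadrature for v(t, x) = E_{Q*}[ h(x e^{sigma sqrt(T-t) Y - sigma^2 (T-t)/2}) ],
# Y ~ N(0,1), with h(s) = (s - K)^+ the discounted call payoff.
sigma, tau, K = 0.2, 1.0, 0.9   # tau = T - t; illustrative values

def v(x, n=4001):
    ys = np.linspace(-8.0, 8.0, n)   # truncate the Gaussian integral to [-8, 8]
    dy = ys[1] - ys[0]
    payoff = np.maximum(
        x * np.exp(sigma * math.sqrt(tau) * ys - 0.5 * sigma**2 * tau) - K, 0.0)
    density = np.exp(-0.5 * ys**2) / math.sqrt(2.0 * math.pi)
    return float(np.sum(payoff * density) * dy)

print(v(1.0))  # the discounted call value at x = 1.0
```

This works for any measurable payoff function h, which is the point of the structural description above.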
But each S^1_t is, by S^1_t = S^1_0 exp(σW^*_t − ½σ²t), lognormally distributed and hence has all of (0, ∞) in its support. So the support of the space-time process contains (0, T) × (0, ∞), and v(t, x) must satisfy the (linear, second-order) PDE
0 = ∂v/∂t + ½σ²x² ∂²v/∂x²  on (0, T) × (0, ∞).
Moreover, the definition of v via V^*_t = E_{Q^*}[h(S^1_T) | F_t] = v(t, S^1_t) gives the boundary condition
v(T, ·) = h(·) on (0, ∞),
because v(T, S^1_T) = V^*_T = H = h(S^1_T) and the support of the distribution of S^1_T contains (0, ∞).
Rem. So if we cannot compute the integral
∫_{−∞}^{∞} h(x e^{σ√(T−t) y − ½σ²(T−t)}) (1/√(2π)) e^{−½y²} dy
explicitly, we can at least obtain v(t, x) numerically by solving the above PDE.
Def. (undiscounted PDE)
If the undiscounted payoff is H̃ = h̃(S̃^1_T) and the undiscounted value at time t is ṽ(t, S̃^1_t), we have the relations
h̃(S̃^1_T) = h̃(e^{rT} S^1_T) = H̃ = e^{rT} H = e^{rT} h(S^1_T)
and ṽ(t, x̃) = e^{rt} v(t, x̃e^{−rt}).
For the function ṽ, we then obtain from the discounted PDE 0 = ∂v/∂t + ½σ²x² ∂²v/∂x² on (0, T) × (0, ∞) the PDE
0 = ∂ṽ/∂t + r x̃ ∂ṽ/∂x̃ + ½σ²x̃² ∂²ṽ/∂x̃² − rṽ
with the boundary condition ṽ(T, ·) = h̃(·).
5.3 Black-Scholes PDE
Black-Scholes PDE
0 = ∂ṽ/∂t + r x̃ ∂ṽ/∂x̃ + ½σ²x̃² ∂²ṽ/∂x̃² − rṽ,  ṽ(T, ·) = h̃(·)
5.4 Black-Scholes formula for option pricing
Martingale pricing approach
 The discounted arbitrage-free value at time t of any discounted payoff H ∈ L^1_+(F_T, Q^*), H = H(S̃^0_T, S̃^1_T), is given by
V^*_t = E_{Q^*}[H | F_t] =: v(t, S^1_t)
 The discounted payoff H can then be hedged via the replicating strategy (V_0, ϑ) such that
V_0 + ∫_0^T ϑ_u dS^1_u = H(S̃^0_T, S̃^1_T).
Using Itô's representation theorem, the replicating strategy can be expressed via
V^*_t = v(t, S^1_t) = V_0 + ∫_0^t ϑ_s dS^1_s + cont. FV process (which "usually" vanishes),
V_0 = v(0, S^1_0),  ϑ_t = (∂v/∂x)(t, S^1_t)
Rem. (European call option)
In the special case of a European call option, the value process and the corresponding strategy can be computed explicitly, and this has found widespread use in industry.
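The discounted PDE v_t + ½σ²x²v_xx = 0, v(T, ·) = h, can be solved numerically, e.g. by an explicit finite-difference scheme stepping backward from T. A minimal sketch for the discounted call payoff; the grid sizes, domain truncation, and stability factor are ad-hoc illustrative choices, not production settings:

```python
import numpy as np

# Explicit finite differences for  v_t + (1/2) sigma^2 x^2 v_xx = 0,  v(T,.) = h,
# stepped backward in time from T to 0 on a truncated spatial grid.
sigma, T, K = 0.2, 1.0, 0.9
x = np.linspace(0.0, 4.0, 401)                 # spatial grid, dx = 0.01
dx = x[1] - x[0]
dt = 0.4 * dx**2 / (sigma**2 * x[-1]**2)       # respects the explicit-scheme stability bound
v = np.maximum(x - K, 0.0)                     # terminal condition v(T,.) = h
t = T
while t > 0.0:
    step = min(dt, t)
    vxx = np.zeros_like(v)
    vxx[1:-1] = (v[2:] - 2.0 * v[1:-1] + v[:-2]) / dx**2
    # backward step: v(t - dt) = v(t) + dt * (1/2) sigma^2 x^2 v_xx
    v = v + step * 0.5 * sigma**2 * x**2 * vxx
    t -= step

print(v[100])  # approximates v(0, x = 1.0)
```

The boundary values are left fixed: at x = 0 the diffusion coefficient vanishes, and at the right edge the call value is essentially linear, so v_xx ≈ 0 there.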
Def. Suppose the undiscounted strike price is K̃, so that the undiscounted payoff is H̃ = (S̃^1_T − K̃)^+. Then
H = H̃ / S̃^0_T = (S^1_T − K̃e^{−rT})^+ =: (S^1_T − K)^+.
Rem. We obtain from
v(t, x) = E_{Q^*}[h(x exp(σ(W^*_T − W^*_t) − ½σ²(T − t)))] = E_{Q^*}[h(x e^{σ√(T−t) Y − ½σ²(T−t)})]
that the discounted value of H at time t is
V^*_t = E_{Q^*}[(x e^{σ√(T−t) Y − ½σ²(T−t)} − K)^+] |_{x = S^1_t}.
Because Y ∼ N(0, 1) under Q^*, an elementary computation yields for x > 0, a > 0 and b ≥ 0 that
E_{Q^*}[(x e^{aY − ½a²} − b)^+] = x Φ((log(x/b) + ½a²)/a) − b Φ((log(x/b) − ½a²)/a),
where
Φ(y) = Q^*[Y ≤ y] = ∫_{−∞}^y (1/√(2π)) e^{−½z²} dz
is the cumulative distribution function (CDF) of the standard normal distribution N(0, 1).
Def. (Black-Scholes formula)
Plugging x = S^1_t, a = σ√(T − t), b = K into the above formula and then passing to undiscounted quantities therefore yields the famous Black-Scholes formula in the form
Ṽ^H̃_t = ṽ(t, S̃^1_t) = S̃^1_t Φ(d_1) − K̃ e^{−r(T−t)} Φ(d_2)
with (using log(x/b) = log(S̃^1_t/K̃) + r(T − t))
d_{1,2} = (log(S̃^1_t / K̃) + (r ± ½σ²)(T − t)) / (σ√(T − t)).
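The formula translates directly into code. A minimal sketch, with Φ expressed through the error function; the function names and the parameter values in the usage line are my own illustrative choices:

```python
import math

def norm_cdf(y):
    """Phi(y), the standard normal CDF, via the error function."""
    return 0.5 * (1.0 + math.erf(y / math.sqrt(2.0)))

def bs_call(S, K, r, sigma, tau):
    """Undiscounted Black-Scholes value of a European call; tau = T - t."""
    a = sigma * math.sqrt(tau)                  # a = sigma * sqrt(T - t)
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * tau) / a
    d2 = d1 - a
    return S * norm_cdf(d1) - K * math.exp(-r * tau) * norm_cdf(d2)

print(bs_call(100.0, 100.0, 0.05, 0.2, 1.0))    # a classic benchmark, about 10.45
```

Note that the call value always exceeds its lower arbitrage bound (S̃ − K̃e^{−r(T−t)})^+ before maturity.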