University of Illinois at Urbana-Champaign
Department of Mathematics
ASRM 409 Stochastic Processes for Finance and Insurance
Spring 2019

Chapter 4 Discrete-Time Markov Chains (Part I)
by Alfred Chong

Learning Objectives:

4.1 Discrete-Time Markov Chains and Transition Probabilities: Markov chain, discrete-time, countable state space, states of system, not realized before time $n$, realized at time $n$, Markov property, given present state drop past history, one-step transition probability, one-step transition probability matrix, $k$-step transition probability, $k$-step transition probability matrix, Chapman-Kolmogorov equation, intermediate states, adding up all, unconditional distribution of states, initial distribution, homogeneous, independent of time, number of time steps ahead, repeatedly, $k$-th power.

4.2 Classification of States: state-transition graph, accessible, communicate, communication relation, reflexive, symmetric, transitive, equivalence relation, partition, disjoint equivalence classes, communication classes, irreducible, one and only one class, reducible, decomposition, smaller sub-graphs, closed, never escape, absorbing state, period, aperiodic, primitive, recurrent, number of times the system passes a state, transient.

4.3 More on Recurrence and Transience: first passage time, first return probability, $r$-th passage time, $r$-th return probability, strong Markov property, stopping times, mean time spent in transient states, class properties, finite states, recurrent, closed, transient, non-closed, singleton class, first-step analysis, submatrix, rows and columns for transient states.

Further Exercises: S 2017 October Q13, S 2016 October Q14, S 2016 May Q9, LC 2015 October Q10, LC 2014 April Q11, 3L 2013 May Q7, 3L 2013 May Q8, 3L 2012 November Q8, 3L 2012 November Q16, 3L 2012 May Q8, 3L 2011 May Q8.

Further Readings: Lo Chapter 3, Ross Chapter 4.

4.1 Discrete-Time Markov Chains and Transition Probabilities

4.1.1 Recall that a stochastic process $X$ is a mapping from $\Omega \times T$ into $S$, where $\Omega$ is a sample space, $T$ is a time index set, and $S$ is a state space. For each $\omega \in \Omega$, $\{X_t(\omega)\}_{t \in T}$ is a sample path of the process $X$. For each $t \in T$, $\{X_t(\omega)\}_{\omega \in \Omega}$ is a random variable. By omitting the sample argument $\omega \in \Omega$, the process $X$ can be regarded as a family of random variables $X = \{X_t\}_{t \in T}$.

4.1.2 A Markov chain is another important example of a stochastic process. In this chapter, the time index set is $T = \{0, 1, 2, \dots\}$, and the state space $S$ is a countable set, which could be finite. For each $n = 0, 1, \dots$, the random variable $X_n$ represents the state of a system at time $n$.

4.1.3 A discrete-time Markov chain $X = \{X_n\}_{n=0}^{\infty}$ defined on a countable state space $S$ is a stochastic process such that, for any $n = 0, 1, \dots$ and any $i_0, i_1, \dots, i_{n-1}, i, j \in S$,
$$P(X_{n+1} = j \mid X_n = i, X_{n-1} = i_{n-1}, \dots, X_0 = i_0) = P(X_{n+1} = j \mid X_n = i). \tag{$*$}$$

4.1.4 The condition ($*$) for a stochastic process $X$ to be a Markov chain is called the Markov property. It is equivalent to the following: for any $n = 0, 1, \dots$, any $k = 1, 2, \dots$, and any $i_0, i_1, \dots, i_{n-1}, i, j_1, \dots, j_k \in S$,
$$P(X_{n+k} = j_k, \dots, X_{n+1} = j_1 \mid X_n = i, X_{n-1} = i_{n-1}, \dots, X_0 = i_0) = P(X_{n+k} = j_k, \dots, X_{n+1} = j_1 \mid X_n = i). \tag{$\dagger$}$$

4.1.5 Essentially, the Markov property ($*$), or equivalently ($\dagger$), entails that the conditional distribution of the future states $X_{n+k}, \dots, X_{n+1}$ of the system depends on its past states $X_n, \dots, X_0$ only through the present state $X_n$.
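To see the Markov property operationally: a sample path can be generated one step at a time, with each move drawn from the row of the transition matrix indexed by the current state alone. Below is a minimal simulation sketch; the function name is an illustrative choice, and the 3-state matrix is borrowed from Example 1 further down.

```python
import numpy as np

def simulate_chain(P, mu0, n_steps, rng=None):
    """Simulate a sample path of a discrete-time Markov chain.

    P   : (m, m) one-step transition matrix, rows summing to 1.
    mu0 : initial distribution over the m states.
    """
    rng = rng or np.random.default_rng(0)
    m = P.shape[0]
    path = [rng.choice(m, p=mu0)]                      # draw X_0 from the initial distribution
    for _ in range(n_steps):
        path.append(rng.choice(m, p=P[path[-1]]))      # the next state depends only on the current one
    return path

P = np.array([[0.0, 0.85, 0.15],
              [0.2, 0.80, 0.00],
              [0.0, 0.70, 0.30]])
print(simulate_chain(P, np.array([1.0, 0.0, 0.0]), 10))
```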
4.1.6 In the Markov property ($*$), the conditional probability $P(X_{n+1} = j \mid X_n = i)$, for any $n = 0, 1, \dots$ and $i, j \in S$, specifies the probability that the system will enter state $j$ at time $n+1$ when it is in state $i$ at time $n$. Therefore, for each $n = 0, 1, \dots$ and $i, j \in S$,
$$p_{n,ij} = P(X_{n+1} = j \mid X_n = i)$$
is called a one-step transition probability from state $i$ to state $j$ at time $n$. These one-step transition probabilities at time $n$ can be compactly represented by the one-step transition probability matrix $P_n = (p_{n,ij})_{i,j \in S}$.

In particular, for a homogeneous chain, the $k$-step transition probability from $i$ to $j$ is obtained, via the Chapman-Kolmogorov equation, by adding up over all intermediate states: for any $i, j \in S$,
$$p^{(k)}_{ij} = \sum_{r_{k-1} \in S} \cdots \sum_{r_2 \in S} \sum_{r_1 \in S} p_{i r_1} p_{r_1 r_2} \cdots p_{r_{k-1} j}.$$
Therefore, for any $n = 0, 1, \dots$ and $k = 1, 2, \dots$, the unconditional distributions $\mu_n$ of the states satisfy
$$\mu_{n+k}^T = \mu_n^T P^{(k)} = \mu_n^T P^k.$$

4.1.13 In the remainder of this chapter, unless otherwise specified, the Markov chain $X$ is homogeneous.

Example 1 [CAS Exam MAS-I 2018 October Q9]: You are given the following information about a homogeneous Markov chain:
• There are three states to answering a trivia question: Skip (State 0), Correct (State 1), and Wrong (State 2).
• $P = \begin{bmatrix} 0 & 0.85 & 0.15 \\ 0.20 & 0.80 & 0 \\ 0 & 0.70 & 0.30 \end{bmatrix}$.
• The candidate skipped the previous question.
Calculate the probability of the candidate correctly answering at least one of the two subsequent questions.

Solution: Let $X_n$ be the state of the system at time $n$. Then $\{X_n\}_{n \ge 0}$ is a homogeneous Markov chain with the transition matrix $P$ given in the problem. We are given that the initial state is State 0, i.e., $\mu_0 = (1, 0, 0)^T$. Hence
$$P(X_1 = 1 \text{ or } X_2 = 1 \mid X_0 = 0) = P(X_1 = 1 \mid X_0 = 0) + P(X_2 = 1 \mid X_0 = 0) - P(X_1 = 1, X_2 = 1 \mid X_0 = 0)$$
$$= p_{01} + p^{(2)}_{01} - p_{01} p_{11} = p_{01} + p_{01} p_{11} + p_{02} p_{21} - p_{01} p_{11} = p_{01} + p_{02} p_{21} = 0.85 + 0.15 \times 0.70 = 0.955.$$

Exercise 1 [CAS Exam S 2017 May Q11]: You are given:
• Loan borrowers move among three states at the end of each month: State 1: Current; State 2: Delinquent; State 3: Default.
• 75% of Current borrowers remain Current; 25% move to Delinquent.
• 60% of Delinquent borrowers remain Delinquent; 20% move to Current; 20% move to Default.
• Once in Default, the loan is written off, so the borrower remains in Default forever.
Calculate the probability that a Current borrower at the beginning of a calendar quarter will not be in Default by the end of the same quarter.

Example 2 [CAS Exam LC 2015 October Q9]: You are given the following information:
• Actuarial exam takers transition through the examination process according to a homogeneous Markov chain with the following four states: State 1 = Actuarial student; State 2 = ACAS; State 3 = FCAS; State 4 = Change careers.
• $Q = \begin{bmatrix} 0.7 & 0.2 & 0.0 & 0.1 \\ 0.0 & 0.8 & 0.2 & 0.0 \\ 0.0 & 0.0 & 1.0 & 0.0 \\ 0.0 & 0.0 & 0.0 & 1.0 \end{bmatrix}$.
• Bob is an actuarial student and Judy is an ACAS.
• Bob and Judy transition independently of one another.
Calculate the probability that either Bob or Judy will be an FCAS after three transitions.

Solution: Let $X_n$ be the state of Bob after $n$ transitions, and let $Y_n$ be the state of Judy after $n$ transitions. Then $\{X_n\}_{n \ge 0}$ and $\{Y_n\}_{n \ge 0}$ are independent homogeneous Markov chains with common transition matrix $Q$ given in the problem.
$$P(X_3 = 3 \text{ or } Y_3 = 3 \mid X_0 = 1, Y_0 = 2) = P(X_3 = 3 \mid X_0 = 1) + P(Y_3 = 3 \mid Y_0 = 2) - P(X_3 = 3 \mid X_0 = 1) \cdot P(Y_3 = 3 \mid Y_0 = 2).$$
Since $P(X_3 = 3 \mid X_0 = 1)$ is the 3rd component of $(1, 0, 0, 0)\,Q^3$, which equals $0.1$, and $P(Y_3 = 3 \mid Y_0 = 2)$ is the 3rd component of $(0, 1, 0, 0)\,Q^3$, which equals $0.488$, we get
$$P(X_3 = 3 \text{ or } Y_3 = 3 \mid X_0 = 1, Y_0 = 2) = 0.1 + 0.488 - 0.1 \times 0.488 = 0.5392.$$
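Example 2's three-step probabilities are easy to check numerically; here is a quick sketch using numpy's matrix power, with the states re-indexed from 0.

```python
import numpy as np

# Check of Example 2: three-step transition probabilities via Q^3.
Q = np.array([[0.7, 0.2, 0.0, 0.1],
              [0.0, 0.8, 0.2, 0.0],
              [0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0]])
Q3 = np.linalg.matrix_power(Q, 3)
p_bob  = Q3[0, 2]   # P(X_3 = FCAS | X_0 = student)
p_judy = Q3[1, 2]   # P(Y_3 = FCAS | Y_0 = ACAS)
print(p_bob, p_judy, p_bob + p_judy - p_bob * p_judy)   # 0.1, 0.488, 0.5392
```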
Exercise 2 [CAS Exam 3L 2013 October Q8]: Assume workers transition through the labor force independently, with the transitions following a homogeneous Markov chain on three states:
• Employed full-time
• Employed part-time
• Retired
The transition matrix is
$$\begin{bmatrix} 0.7 & 0.2 & 0.1 \\ 0.0 & 0.6 & 0.4 \\ 0.0 & 0.0 & 1.0 \end{bmatrix}.$$
• Worker A is currently employed full-time.
• Worker B is currently employed part-time.
Calculate the probability that at least one of the workers will be employed part-time after two transitions.

Example 3 [CAS Exam S 2017 May Q12]: You are given the following information about a homogeneous Markov chain:
• There are three daily wildfire risk states: Green (State 0), Yellow (State 1), and Red (State 2).
• Transition between states occurs at the end of each day.
• The daily transition matrix is $P = \begin{bmatrix} 0.82 & m & n \\ 0.61 & 0.28 & 0.11 \\ 0.40 & 0.31 & 0.29 \end{bmatrix}$.
• The wildfire risk was Yellow on Wednesday.
• Today is Thursday and the wildfire risk is Green.
• The probability that the wildfire risk will be Red on Saturday is 0.07.
Calculate the absolute difference between $m$ and $n$.

Solution: The sum of each row of a transition matrix must be 1, and so we have
$$m + n = 0.18. \tag{1}$$

Example: Transitions between states occur at the end of each year, according to the following transition probability matrices:
$$P_0 = \begin{bmatrix} 0.85 & 0.10 & 0.05 \\ 0 & 0 & 1.00 \\ 0 & 0 & 1.00 \end{bmatrix}; \qquad P_n = \begin{bmatrix} 0.90 & 0.07 & 0.03 \\ 0 & 0 & 1.00 \\ 0 & 0 & 1.00 \end{bmatrix}, \quad \text{for } n \ge 1.$$
• At time $t = 0$, the insurer has 10,000 policies in State 0.
• The insurer does not write new policies.
Calculate the expected number of policies canceled by the insured during the next two years.

Solution: Let $X_n$ be the state at time $n$ of an insurance policy that is in State 0 at time $n = 0$. Then
$$P(X_1 = 1 \text{ or } X_2 = 1 \mid X_0 = 0) = P(X_1 = 1 \mid X_0 = 0) + P(X_1 = 0, X_2 = 1 \mid X_0 = 0)$$
$$= P(X_1 = 1 \mid X_0 = 0) + P(X_1 = 0 \mid X_0 = 0)\,P(X_2 = 1 \mid X_1 = 0) = p_{0,01} + p_{0,00}\,p_{1,01} = 0.1 + 0.85 \times 0.07 = 0.1595.$$
Thus the expected number of policies canceled by the insured during the next two years is $10{,}000 \times 0.1595 = 1595$.

Exercise 5 [CAS Exam 3L 2012 May Q16]: You are given the following information:
• An entity can be in any of three states: State 1, State 2, or State 3.
• Transitions and cash flows occur at the end of each period.
• The cash flow when moving from State 1 to State 2 is 0.8.
• There are no other costs associated with transitions.
• There is a cash flow of $-0.25$ if the entity does not change states at the end of a period.
• $i = 10\%$.
• The following matrices show the probabilities of moving between states at times $t = 1$ and $t = 2$:
$$Q_1 = \begin{bmatrix} 0.5 & 0.4 & 0.1 \\ 0.3 & 0.2 & 0.5 \\ 0.1 & 0.5 & 0.4 \end{bmatrix}, \qquad Q_2 = \begin{bmatrix} 0.3 & 0.5 & 0.2 \\ 0.1 & 0.3 & 0.6 \\ 0.1 & 0.2 & 0.7 \end{bmatrix}.$$
Calculate the expected present value of the cash flows for the entity starting in State 1 at $t = 1$ over the next two periods.

4.2 Classification of States

In this section, we assume that $\{X_n\}_{n=0}^{\infty}$ is a homogeneous Markov chain with state space $S$ and transition matrix $P$. For any $i \in S$, we will use $P_i(\cdot)$ to stand for the conditional probability $P(\cdot \mid X_0 = i)$; for example, $P_i(X_1 = j)$ stands for $P(X_1 = j \mid X_0 = i)$. The Markov property can then be written as: for any $n = 0, 1, \dots$ and any $i_0, i_1, \dots, i_{n-1}, i, j \in S$,
$$P(X_{n+1} = j \mid X_n = i, X_{n-1} = i_{n-1}, \dots, X_0 = i_0) = P_i(X_1 = j).$$

4.2.1 For any states $i, j \in S$, the state $j$ is called accessible from the state $i$, denoted $i \to j$, if there exists a $k = 0, 1, \dots$ (which may depend on $i$ and $j$) such that $p^{(k)}_{ij} > 0$. By definition, any state $i \in S$ is always accessible from itself, since $p^{(0)}_{ii} = 1$.

4.2.2 For any states $i, j \in S$, we say that $i$ and $j$ communicate with each other, denoted $i \leftrightarrow j$, if $i \to j$ and $j \to i$.
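For small chains, accessibility and communication can be computed mechanically: $j$ is accessible from $i$ exactly when some power $p^{(k)}_{ij}$ with $k \le |S| - 1$ is positive, which the boolean reachability matrix $(I + P)^{|S|-1} > 0$ captures. A sketch (the function name is an illustrative choice):

```python
import numpy as np

def communicating_classes(P, tol=1e-12):
    """Partition the states of a finite chain into communication classes.

    j is accessible from i iff (I + P)^(m-1) has a positive (i, j) entry,
    since that matrix accumulates every path of length < m.
    """
    m = P.shape[0]
    reach = np.linalg.matrix_power(np.eye(m) + P, m - 1) > tol
    comm = reach & reach.T                 # i <-> j iff each is accessible from the other
    classes, seen = [], set()
    for i in range(m):
        if i not in seen:
            cls = sorted(j for j in range(m) if comm[i, j])
            classes.append(cls)
            seen |= set(cls)
    return classes

P = np.array([[0.7, 0.2, 0.1],
              [0.0, 0.6, 0.4],
              [0.0, 0.0, 1.0]])
print(communicating_classes(P))            # [[0], [1], [2]] for Exercise 2's matrix
```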
4.2.3 Since the communication relation $\leftrightarrow$ defined on the state space $S$ is reflexive ($i \leftrightarrow i$), symmetric ($i \leftrightarrow j \Rightarrow j \leftrightarrow i$), and transitive ($i \leftrightarrow j,\ j \leftrightarrow k \Rightarrow i \leftrightarrow k$), the relation $\leftrightarrow$ is an equivalence relation on the state space $S$. Hence, the equivalence relation $\leftrightarrow$ provides a partition of the state space $S$ into disjoint equivalence classes, which are called communication classes.

4.2.4 The Markov chain $X$ is called irreducible if there exists one and only one communication class for $X$; that is, a Markov chain is irreducible if every state communicates with every other state.

4.2.5 A subset $C \subseteq S$ is said to be closed if, for any $i \in C$, $\sum_{j \in C} p_{ij} = 1$. Essentially, once the system enters any state in this subset $C$, it can never escape from $C$.

4.2.6 Each communication class of the Markov chain $X$ is either closed or non-closed. If $X$ is a Markov chain with state space $S$ and transition matrix $P = (p_{ij})_{i,j \in S}$, and if $C \subseteq S$ is closed, then $(p_{ij})_{i,j \in C}$ is itself a transition matrix.

4.2.7 Suppose $X$ is a Markov chain with state space $S$, $i \in S$, and $C = \{i\}$ is closed for $X$; then the state $i$ is called an absorbing state for the Markov chain $X$.

4.2.8 If $i \in S$ is a state such that $p^{(n)}_{ii} > 0$ for some $n \ge 1$, we define its period to be
$$d_i = \gcd\{k = 1, 2, \dots : p^{(k)}_{ii} > 0\}.$$

4.2.9 If $i, j \in S$ communicate with each other, then $d_i = d_j$; that is, period is a class property. Here is a proof. Let $n_1$ and $n_2$ be such that $p^{(n_1)}_{ij} > 0$ and $p^{(n_2)}_{ji} > 0$. Then $p^{(n_1+n_2)}_{ii} \ge p^{(n_1)}_{ij} p^{(n_2)}_{ji} > 0$, so $d_i$ is a divisor of $n_1 + n_2$. If $p^{(n)}_{jj} > 0$, then $p^{(n_1+n+n_2)}_{ii} \ge p^{(n_1)}_{ij} p^{(n)}_{jj} p^{(n_2)}_{ji} > 0$, and consequently $d_i$ is a divisor of $n_1 + n_2 + n$. Since $d_i$ is a divisor of $n_1 + n_2$, it must be a divisor of $n$. Hence $d_i$ is a divisor of every member of the set $\{n \ge 1 : p^{(n)}_{jj} > 0\}$, which implies that $d_i \le d_j$, since $d_j$ is the greatest common divisor of this set. Similarly, $d_j \le d_i$, and so we must have $d_i = d_j$.

4.2.10 A state $i \in S$ is called aperiodic if $d_i = 1$. The Markov chain $X$ is aperiodic if all of its states are aperiodic.

4.2.11 If a Markov chain $X$ is irreducible with at least one aperiodic state, then the Markov chain $X$ is aperiodic.

4.2.12 If a finite state Markov chain $X$ is irreducible and aperiodic, then there exists a positive integer $N$ such that, for any $i \in S$, $p^{(k)}_{ii} > 0$ for all $k \ge N$. Here is a proof, using the following elementary result: if $I$ is a set of positive integers satisfying (i) $\gcd(I) = 1$ and (ii) $m, n \in I$ implies $m + n \in I$, then there is a positive integer $N$ such that $n \in I$ for all $n \ge N$. For each $i \in S$, the set $I_i = \{n \ge 1 : p^{(n)}_{ii} > 0\}$ satisfies (i) and (ii) above, so there exists a positive integer $N_i$ such that $n \in I_i$ for all $n \ge N_i$. We can simply take $N = \max\{N_i : i \in S\}$.

4.2.13 If a finite state Markov chain $X$ is irreducible and aperiodic, then the one-step transition probability matrix is primitive, i.e., there exists a $k = 1, 2, \dots$ (independent of the states) such that, for any $i, j \in S$, $p^{(k)}_{ij} > 0$. Here is a proof. For any $i \ne j \in S$, since $i$ and $j$ communicate with each other, there exists a positive integer $n_{ij}$ such that $p^{(n_{ij})}_{ij} > 0$. Let $N$ be a positive integer such that $p^{(k)}_{ii} > 0$ for all $i \in S$ and all $k \ge N$. We can simply take $k = N + \sum_{i \ne j \in S} n_{ij}$.
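The period $d_i$ of 4.2.8 can be approximated numerically by collecting the return times $k$ with $p^{(k)}_{ii} > 0$ up to a truncation horizon and taking their gcd. A sketch under that truncation assumption (for a finite chain, a horizon of a few multiples of $|S|$ is enough in practice):

```python
import numpy as np
from math import gcd
from functools import reduce

def periods(P, K=50, tol=1e-12):
    """Approximate the period of each state as gcd{ k <= K : p^(k)_ii > 0 }."""
    m = P.shape[0]
    return_times = [[] for _ in range(m)]
    Pk = np.eye(m)
    for k in range(1, K + 1):
        Pk = Pk @ P                        # Pk now holds the k-step probabilities
        for i in range(m):
            if Pk[i, i] > tol:
                return_times[i].append(k)
    return [reduce(gcd, ks) if ks else None for ks in return_times]
```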
4.2.14 Here is a proof of the elementary number-theoretic fact used in 4.2.12. We first prove that $I$ contains two consecutive integers. Otherwise, there is an integer $k \ge 2$ and an $n_1 \in I$ such that $n_1 + k \in I$ and any two distinct integers in $I$ differ by at least $k$. It follows from property (i) that there is an $n \in I$ such that $k$ is not a divisor of $n$. We can write $n = mk + r$, where $m$ is a non-negative integer and $0 < r < k$. It follows from property (ii) that both $(m+1)(n_1 + k)$ and $(m+1)n_1 + n$ belong to $I$; their difference is $(m+1)k - n = k - r \in (0, k)$, contradicting the assumption that any two distinct integers in $I$ differ by at least $k$. Hence $I$ contains two consecutive integers, say $n_2$ and $n_2 + 1$. For any $n \ge n_2^2$, write $n - n_2^2 = q n_2 + s$ with $q \ge 0$ and $0 \le s < n_2$; then $n = s(n_2 + 1) + (n_2 + q - s)n_2 \in I$ by property (ii). Thus we can take $N = n_2^2$.

Example: You are given the following Markov chain transition probability matrix:
$$P = \begin{bmatrix} 0.7 & 0.2 & 0.1 & 0.0 & 0.0 \\ 0.4 & 0.4 & 0.2 & 0.0 & 0.0 \\ 0.5 & 0.5 & 0.0 & 0.0 & 0.0 \\ 0.4 & 0.2 & 0.1 & 0.3 & 0.0 \\ 0.0 & 0.0 & 0.0 & 0.0 & 1.0 \end{bmatrix}$$
Identify its communication classes, classify them as closed or non-closed, and classify the states as recurrent or transient.

Solution: The communication classes are $\{1, 2, 3\}$, $\{4\}$, and $\{5\}$. $\{1, 2, 3\}$ and $\{5\}$ are closed; $\{4\}$ is non-closed. $\{1, 2, 3\}$ and $\{5\}$ are recurrent classes; $\{4\}$ is a transient class.

Exercise 7 [Continuation of Exercise 6]: You are given the following Markov chain transition probability matrix:
$$P = \begin{bmatrix} 0.4 & 0.0 & 0.0 & 0.0 & 0.6 \\ 0.0 & 0.6 & 0.0 & 0.4 & 0.0 \\ 0.0 & 0.0 & 1.0 & 0.0 & 0.0 \\ 0.0 & 0.2 & 0.1 & 0.3 & 0.4 \\ 0.5 & 0.0 & 0.0 & 0.0 & 0.5 \end{bmatrix}$$
Classify the states as recurrent or transient.

4.3 More on Recurrence and Transience

In this section, we assume that $\{X_n\}_{n=0}^{\infty}$ is a homogeneous Markov chain with state space $S$ and transition matrix $P$. For any $i \in S$, we will use $P_i(\cdot)$ to stand for the conditional probability $P(\cdot \mid X_0 = i)$.

4.3.1 For any $i \in S$, let $T_i$ be the random variable defined by
$$T_i = \inf\{n \ge 1 : X_n = i\}.$$
This random variable $T_i$ is called the first passage time for the state $i$. The probability $f_i = P_i(T_i < \infty)\ (= P(T_i < \infty \mid X_0 = i))$ defines the first return probability to the state $i$.

4.3.2 Recall that, for any $i \in S$, $N_i = \sum_{n=0}^{\infty} 1_{\{X_n = i\}}$ is the number of times that $X$ spends in state $i$. It is obvious that $P_i(N_i \ge 2) = P_i(T_i < \infty) = f_i$. Let $m$ and $n$ be positive integers. By the strong Markov property, the probability that $X$, starting from $i$, first returns to $i$ at time $m$ and next comes back to $i$ another $n$ units of time later is $P_i(T_i = m)\,P_i(T_i = n)$. Thus
$$P_i(N_i \ge 3) = \sum_{m=1}^{\infty}\sum_{n=1}^{\infty} P_i(T_i = m)\,P_i(T_i = n) = \left(\sum_{m=1}^{\infty} P_i(T_i = m)\right)\left(\sum_{n=1}^{\infty} P_i(T_i = n)\right) = f_i^2.$$
Recursively, we have $P_i(N_i \ge r) = f_i^{\,r-1}$. Hence $P_i(N_i = r) = f_i^{\,r-1}(1 - f_i)$.

4.3.3 Due to 4.3.2, the state $i \in S$ being recurrent or transient can be characterized by the return probability: state $i$ is recurrent $\Leftrightarrow f_i = 1$; state $i$ is transient $\Leftrightarrow f_i < 1$. Indeed,
$$P_i(N_i < \infty) = \sum_{r=1}^{\infty} P_i(N_i = r) = \sum_{r=1}^{\infty} f_i^{\,r-1}(1 - f_i).$$
Hence, the state $i$ is transient if and only if $\sum_{r=1}^{\infty} f_i^{\,r-1}(1 - f_i) = 1$, which is equivalent to $f_i < 1$. Essentially, to check whether the system will visit the state $i$ infinitely often, one only has to check whether the system will almost surely return to the state $i$ once.

4.3.4 Due to 4.3.3, the state $i \in S$ being recurrent or transient can be further characterized in terms of the $k$-step transition probabilities: state $i$ is recurrent $\Leftrightarrow \sum_{k=0}^{\infty} p^{(k)}_{ii} = \infty$; state $i$ is transient $\Leftrightarrow \sum_{k=0}^{\infty} p^{(k)}_{ii} < \infty$. Indeed,
$$\sum_{k=0}^{\infty} p^{(k)}_{ii} = \sum_{k=0}^{\infty} P_i(X_k = i) = \sum_{k=0}^{\infty} E\left[1_{\{X_k = i\}} \mid X_0 = i\right] = E\left[\sum_{k=0}^{\infty} 1_{\{X_k = i\}} \,\Big|\, X_0 = i\right] = E[N_i \mid X_0 = i] = \sum_{r=1}^{\infty} P_i(N_i \ge r) = \sum_{r=1}^{\infty} f_i^{\,r-1}.$$
Hence, the state $i$ is transient if and only if $f_i < 1$, which is equivalent to $\sum_{k=0}^{\infty} p^{(k)}_{ii} = \sum_{r=1}^{\infty} f_i^{\,r-1} = \frac{1}{1 - f_i} < \infty$. Essentially, to check whether the system will visit the state $i$ infinitely often, one only has to check whether the mean time spent in the state $i$ is infinite. Consequently, the mean time spent in the state $i$, for the system starting at state $i$, is finite only for transient states $i$:
$$s_i = E[N_i \mid X_0 = i] = \sum_{k=0}^{\infty} p^{(k)}_{ii} = \sum_{r=1}^{\infty} f_i^{\,r-1} = \frac{1}{1 - f_i}.$$
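Combining the communication classes with a closedness test gives a mechanical recurrent/transient classification for finite chains, using the fact (4.3.6 below) that a class of a finite chain is recurrent exactly when it is closed. A sketch:

```python
import numpy as np

def classify_states(P, tol=1e-12):
    """Classify each communicating class of a finite chain as recurrent or transient."""
    m = P.shape[0]
    reach = np.linalg.matrix_power(np.eye(m) + P, m - 1) > tol
    comm = reach & reach.T
    seen, result = set(), []
    for i in range(m):
        if i in seen:
            continue
        cls = [j for j in range(m) if comm[i, j]]
        seen |= set(cls)
        # A class is closed iff each of its rows puts no mass outside the class.
        closed = all(abs(P[a, cls].sum() - 1.0) < 1e-9 for a in cls)
        result.append((cls, "recurrent" if closed else "transient"))
    return result

P = np.array([[0.4, 0.0, 0.0, 0.0, 0.6],
              [0.0, 0.6, 0.0, 0.4, 0.0],
              [0.0, 0.0, 1.0, 0.0, 0.0],
              [0.0, 0.2, 0.1, 0.3, 0.4],
              [0.5, 0.0, 0.0, 0.0, 0.5]])
# Exercise 7's matrix, 0-indexed:
print(classify_states(P))  # [([0, 4], 'recurrent'), ([1, 3], 'transient'), ([2], 'recurrent')]
```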
4.3.5 Recurrence and transience are class properties, in the sense that, if $C \subseteq S$ is a communicating class, then either all of the states in $C$ are recurrent or all of them are transient. Here is a proof. Suppose $i, j \in C$ and $i$ is transient. There exist $n, m \ge 0$ with $p^{(n)}_{ij} > 0$ and $p^{(m)}_{ji} > 0$. For all $r \ge 0$, $p^{(n+r+m)}_{ii} \ge p^{(n)}_{ij}\, p^{(r)}_{jj}\, p^{(m)}_{ji}$, and so
$$\sum_{r=0}^{\infty} p^{(r)}_{jj} \le \frac{1}{p^{(n)}_{ij}\, p^{(m)}_{ji}} \sum_{r=0}^{\infty} p^{(n+r+m)}_{ii} < \infty$$
by 4.3.4. Hence $j$ is also transient.

4.3.6 Any recurrent communicating class is a closed class. On the other hand, any closed communicating class with a finite number of states is recurrent. Hence, for a finite state Markov chain $X$, a communicating class is recurrent if and only if it is closed, and transient if and only if it is non-closed.

4.3.7 For a finite state Markov chain $X$, suppose that a singleton communicating class $C = \{i\}$, for some $i \in S$, is non-closed. By 4.3.6, the state $i$ is transient, and, by 4.3.4, the mean time spent by the system in this transient state $i \in C$ is
$$s_i = \sum_{k=0}^{\infty} p^{(k)}_{ii} = 1 + p_{ii} + p_{ii}^2 + \cdots = \frac{1}{1 - p_{ii}}.$$
For instance, for the non-closed singleton class $\{4\}$ in the classification example of Section 4.2, we are looking for $s_4$: since $p^{(k)}_{44} = 0.3^k$, we have
$$s_4 = \sum_{k=0}^{\infty} p^{(k)}_{44} = \frac{1}{1 - 0.3} = \frac{10}{7}.$$

Exercise 8: You are given the following Markov chain transition probability matrix:
$$P = \begin{bmatrix} 0.5 & 0.5 & 0.0 & 0.0 & 0.0 \\ 0.5 & 0.5 & 0.0 & 0.0 & 0.0 \\ 0.0 & 0.0 & 0.5 & 0.5 & 0.0 \\ 0.0 & 0.0 & 0.5 & 0.5 & 0.0 \\ 0.1 & 0.1 & 0.0 & 0.0 & 0.8 \end{bmatrix}$$
Calculate the mean time spent by the system in each transient state, for the system starting at each transient state.

Example 9 [Revisiting of Exercise 7]: You are given the following Markov chain transition probability matrix:
$$P = \begin{bmatrix} 0.4 & 0.0 & 0.0 & 0.0 & 0.6 \\ 0.0 & 0.6 & 0.0 & 0.4 & 0.0 \\ 0.0 & 0.0 & 1.0 & 0.0 & 0.0 \\ 0.0 & 0.2 & 0.1 & 0.3 & 0.4 \\ 0.5 & 0.0 & 0.0 & 0.0 & 0.5 \end{bmatrix}$$
Calculate the mean time spent by the system in each transient state, for the system starting at each transient state.

Solution: We know that the communication classes are $\{1, 5\}$, $\{2, 4\}$, and $\{3\}$. $\{1, 5\}$ and $\{3\}$ are closed and so are recurrent; $\{2, 4\}$ is non-closed and so is transient. We are looking for $s_{22}, s_{24}, s_{42}, s_{44}$. Let
$$S = \begin{pmatrix} s_{22} & s_{24} \\ s_{42} & s_{44} \end{pmatrix} \quad\text{and}\quad P_T = \begin{pmatrix} 0.6 & 0.4 \\ 0.2 & 0.3 \end{pmatrix}.$$
Then, by 4.3.9,
$$S = (I - P_T)^{-1} = \begin{pmatrix} 3.5 & 2 \\ 1 & 2 \end{pmatrix}.$$

Exercise 9: You are given the following Markov chain transition probability matrix:
$$P = \begin{bmatrix} 0.2 & 0.8 & 0.0 & 0.0 & 0.0 \\ 0.5 & 0.5 & 0.0 & 0.0 & 0.0 \\ 0.2 & 0.3 & 0.0 & 0.3 & 0.2 \\ 0.0 & 0.0 & 0.0 & 1.0 & 0.0 \\ 0.0 & 0.0 & 0.4 & 0.0 & 0.6 \end{bmatrix}$$
Calculate the mean time spent by the system in each transient state, for the system starting at each transient state.

Example 10 [CAS Exam S 2016 October Q11]: A firm classifies its workers into one of four categories based on their employment status: 1. Full-Time, 2. Part-Time, 3. On-Leave, 4. Retired. Workers independently transition between categories at the end of each year according to a Markov chain with the following transition probability matrix:
$$P = \begin{bmatrix} 0.6 & 0.1 & 0.1 & 0.2 \\ 0.4 & 0.4 & 0.1 & 0.1 \\ 0.1 & 0.1 & 0.7 & 0.1 \\ 0.0 & 0.0 & 0.0 & 1.0 \end{bmatrix}$$
Calculate the probability a current full-time employee will ever go on leave.

Solution: The communication classes are $\{1, 2, 3\}$ and $\{4\}$. $\{4\}$ is closed and thus recurrent; $\{1, 2, 3\}$ is non-closed and thus transient. Let
$$S = \begin{pmatrix} s_{11} & s_{12} & s_{13} \\ s_{21} & s_{22} & s_{23} \\ s_{31} & s_{32} & s_{33} \end{pmatrix} \quad\text{and}\quad P_T = \begin{pmatrix} 0.6 & 0.1 & 0.1 \\ 0.4 & 0.4 & 0.1 \\ 0.1 & 0.1 & 0.7 \end{pmatrix}.$$
Then
$$S = (I - P_T)^{-1} = \frac{2}{9}\begin{pmatrix} 17 & 4 & 7 \\ 13 & 11 & 8 \\ 10 & 5 & 20 \end{pmatrix}.$$
Thus $f_{13} = \dfrac{s_{13}}{s_{33}} = \dfrac{7}{20}$.
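Example 10 is easy to check numerically with the fundamental-matrix computation $S = (I - P_T)^{-1}$:

```python
import numpy as np

# Check of Example 10: mean visits to transient states via S = (I - P_T)^{-1},
# and the hitting probability f_13 = s_13 / s_33.
P_T = np.array([[0.6, 0.1, 0.1],
                [0.4, 0.4, 0.1],
                [0.1, 0.1, 0.7]])   # rows/columns of the transient states 1, 2, 3
S = np.linalg.inv(np.eye(3) - P_T)
print(S * 9 / 2)            # [[17, 4, 7], [13, 11, 8], [10, 5, 20]] up to rounding
print(S[0, 2] / S[2, 2])    # 7/20 = 0.35, probability a full-time worker ever goes on leave
```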
Exercise 10 [Continuation of Exercise 9]: You are given the following Markov chain transition probability matrix:
$$P = \begin{bmatrix} 0.2 & 0.8 & 0.0 & 0.0 & 0.0 \\ 0.5 & 0.5 & 0.0 & 0.0 & 0.0 \\ 0.2 & 0.3 & 0.0 & 0.3 & 0.2 \\ 0.0 & 0.0 & 0.0 & 1.0 & 0.0 \\ 0.0 & 0.0 & 0.4 & 0.0 & 0.6 \end{bmatrix}$$
Calculate the probability that the system will ever transit to one of the transient states starting from another transient state.

4.4 Random Walks

Let $\{\xi_n\}_{n=1}^{\infty}$ be independent and identically distributed random variables with $P(\xi_1 = 1) = p \in (0, 1)$ and $P(\xi_1 = -1) = 1 - p$, and let $X_n = X_0 + \sum_{k=1}^{n} \xi_k$ be the one-dimensional random walk on $\mathbb{Z}$. Moreover, since the random variables $\xi_n$ are identically distributed, the transition probabilities are independent of the time $n = 0, 1, \dots$; that is, the random walk is a homogeneous Markov chain.

4.4.4 Since $p \in (0, 1)$, the one-dimensional random walk $X$ on $\mathbb{Z}$ is irreducible, since, for any states $i, j \in \mathbb{Z}$, $i \leftrightarrow j$. By 4.3.5, either all of the states in $\mathbb{Z}$ are recurrent or all are transient. However, although the communicating class $\mathbb{Z}$ is closed, 4.2.18 cannot be applied to conclude that all states in $\mathbb{Z}$ are recurrent, since the one-dimensional random walk on $\mathbb{Z}$ does not have a finite state space.

4.4.5 By 4.3.5, to study the recurrence or transience of the one-dimensional random walk on $\mathbb{Z}$, it suffices to consider any one of the states, say $0 \in \mathbb{Z}$. Notice that, for any $k = 1, 2, \dots$,
$$p^{(2k-1)}_{00} = 0 \quad\text{while}\quad p^{(2k)}_{00} = \binom{2k}{k} p^k (1-p)^k > 0.$$
Hence, the period is $d_0 = 2$. By Stirling's formula, $k! \sim \sqrt{2\pi k}\,(k/e)^k$ (here $a_k \sim b_k$ means that $\lim_{k\to\infty} a_k/b_k = 1$), we have
$$p^{(2k)}_{00} = \binom{2k}{k} p^k (1-p)^k \sim \frac{(4p(1-p))^k}{\sqrt{\pi k}}.$$
In other words, for any $\varepsilon \in (0, 1)$, there exists a positive integer $K$ such that, for any $k \ge K$,
$$(1-\varepsilon)\frac{(4p(1-p))^k}{\sqrt{\pi k}} < p^{(2k)}_{00} < (1+\varepsilon)\frac{(4p(1-p))^k}{\sqrt{\pi k}}.$$
When $p = \tfrac12$, i.e., the one-dimensional random walk on $\mathbb{Z}$ is symmetric, $4p(1-p) = 1$, and
$$\sum_{k=0}^{\infty} p^{(k)}_{00} = \sum_{k=0}^{\infty} p^{(2k)}_{00} \ge \sum_{k=K}^{\infty} p^{(2k)}_{00} > (1-\varepsilon)\frac{1}{\sqrt{\pi}}\sum_{k=K}^{\infty} \frac{1}{\sqrt{k}} = \infty.$$
By 4.3.4, for the symmetric one-dimensional random walk on $\mathbb{Z}$, the state $0 \in \mathbb{Z}$ is recurrent, and hence all states in $\mathbb{Z}$ are recurrent. When $p \ne \tfrac12$, i.e., the one-dimensional random walk on $\mathbb{Z}$ is asymmetric, $4p(1-p) < 1$, and
$$\sum_{k=0}^{\infty} p^{(k)}_{00} = \sum_{k=0}^{\infty} p^{(2k)}_{00} = \sum_{k=0}^{K-1} p^{(2k)}_{00} + \sum_{k=K}^{\infty} p^{(2k)}_{00} < K + (1+\varepsilon)\frac{1}{\sqrt{\pi}}\sum_{k=K}^{\infty} \frac{(4p(1-p))^k}{\sqrt{k}} < \infty.$$
By 4.3.4, for the asymmetric one-dimensional random walk on $\mathbb{Z}$, the state $0 \in \mathbb{Z}$ is transient, and hence all states in $\mathbb{Z}$ are transient.

4.4.6 For a symmetric two-dimensional random walk on $\mathbb{Z}^2$, all states are recurrent; yet for a symmetric three-dimensional random walk on $\mathbb{Z}^3$, all states are transient. This corresponds to one of the famous quotes in mathematics, by Shizuo Kakutani: "A drunk man will find his way home, but a drunk bird may get lost forever."

4.4.7 Random walk models are not as naive as they might sound. The following section applies a random walk model to the gambler's ruin problem. Another crucial application of random walk models is the discrete-time binomial tree option pricing model, which students have learned or will learn in ASRM410/ASRM510/SOA IFM.
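The recurrence/transience dichotomy of 4.4.5 can also be seen by simulation: the fraction of sample paths that return to 0 within a long horizon creeps toward 1 in the symmetric case, and stalls below 1 in the asymmetric case (the return probability works out to $2\min(p, 1-p)$). A Monte Carlo sketch; the horizon, path count, and seed are arbitrary choices:

```python
import numpy as np

def return_fraction(p, horizon=2_000, n_paths=2_000, seed=1):
    """Fraction of walks started at 0 that revisit 0 within `horizon` steps."""
    rng = np.random.default_rng(seed)
    steps = rng.choice([1, -1], size=(n_paths, horizon), p=[p, 1 - p])
    pos = np.cumsum(steps, axis=1)                  # positions X_1, ..., X_horizon
    return np.mean((pos == 0).any(axis=1))

print(return_fraction(0.5))   # ~0.98, creeping toward 1 as the horizon grows (recurrent)
print(return_fraction(0.7))   # ~0.6 = 2 min(p, 1-p), bounded away from 1 (transient)
```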
4.5 Gambler's Ruin Problem

4.5.1 There are two gamblers, player 1 and player 2, with respective initial wealth $W_1, W_2 \in \mathbb{N} \cup \{0\}$. Let $W = W_1 + W_2$ be their total initial wealth, and suppose that $W$ is finite. The two players gamble against each other on a zero-sum game, gaining 1 unit of wealth on winning a round and losing 1 unit of wealth on losing a round. Assume that neither player quits the game unless one of them goes broke.

4.5.2 From the perspective of player 1, let $p \in (0, 1)$ be the probability of winning a round, so that $1 - p$ is the probability of losing a round. (From the perspective of player 2, $1 - p$ is the winning probability and $p$ is the losing probability.) Let $\{\xi_n\}_{n=1}^{\infty}$ be a sequence of random variables, where, for each $n = 1, 2, \dots$, $\xi_n$ represents the net gain of wealth for player 1 at the $n$-th round of the game. Hence, $\{\xi_n\}_{n=1}^{\infty}$ are independent and identically distributed with $P(\xi_1 = 1) = p$ and $P(\xi_1 = -1) = 1 - p$. Let $X = \{X_n\}_{n=0}^{\infty}$ be another sequence of random variables, where, for each $n = 0, 1, \dots$, $X_n$ represents the wealth of player 1 after $n$ rounds of the game; these are recursively given by, for any $n = 0, 1, \dots$,
$$X_{n+1} = \begin{cases} X_n + \xi_{n+1} & \text{if } X_n \in \{1, 2, \dots, W-1\} \\ X_n & \text{if } X_n \in \{0, W\} \end{cases}.$$
In particular, $X_0 = W_1$.

4.5.3 The process $X$ is a homogeneous Markov chain with state space $S = \{0, 1, \dots, W\} \ni W_1, W_2$, and the one-step transition probability matrix is given by
$$P = \begin{bmatrix} 1 & \cdots & 0 & 0 & 0 & 0 & 0 & \cdots & 0 \\ \vdots & \ddots & \vdots & \vdots & \vdots & \vdots & \vdots & & \vdots \\ 0 & \cdots & 1-p & 0 & p & 0 & 0 & \cdots & 0 \\ 0 & \cdots & 0 & 1-p & 0 & p & 0 & \cdots & 0 \\ 0 & \cdots & 0 & 0 & 1-p & 0 & p & \cdots & 0 \\ \vdots & & \vdots & \vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & \cdots & 0 & 0 & 0 & 0 & 0 & \cdots & 1 \end{bmatrix}.$$

4.5.4 This homogeneous Markov chain $X$ is reducible, with two absorbing states $0$ and $W$. More precisely, the Markov chain $X$ has three communicating classes $\{0\}$, $\{1, 2, \dots, W-1\}$, and $\{W\}$, which are respectively closed, non-closed, and closed. By 4.3.6, the states $0$ and $W$ are recurrent, while the states $1, 2, \dots, W-1$ are transient, regardless of the value of $p \in (0, 1)$.

4.5.5 For any subset $A \subseteq S$, let $H_A$ be the random variable defined by
$$H_A = \inf\{n = 0, 1, \dots : X_n \in A\}.$$
This random variable $H_A$ is called the first hitting time of $A$. For any $i \in S$, the probability $h_{i,A} = P(H_A < \infty \mid X_0 = i)$ defines the hitting probability of $A$ from the state $i$, while the expectation $k_{i,A} = E[H_A \mid X_0 = i]$ defines the mean time taken to hit $A$ from the state $i$.

4.5.6 If $h_{i,A} < 1$, or equivalently $P(H_A = \infty \mid X_0 = i) > 0$, then
$$k_{i,A} = E[H_A \mid X_0 = i] = E[H_A \mid H_A < \infty, X_0 = i]\,P(H_A < \infty \mid X_0 = i) + E[H_A \mid H_A = \infty, X_0 = i]\,P(H_A = \infty \mid X_0 = i) = \infty.$$
On the other hand, if $h_{i,A} = 1$, or equivalently $P(H_A = \infty \mid X_0 = i) = 0$, then $k_{i,A} = E[H_A \mid H_A < \infty, X_0 = i]$.

4.5.12 For any $i \in \{1, 2, \dots, W-1\}$, by the law of total expectation,
$$k_{i,\{0,W\}} = E[H_{\{0,W\}} \mid X_0 = i] = E[H_{\{0,W\}} \mid X_1 = i+1, X_0 = i]\,p_{i,i+1} + E[H_{\{0,W\}} \mid X_1 = i-1, X_0 = i]\,p_{i,i-1}$$
$$= E[H_{\{0,W\}} \mid X_1 = i+1, X_0 = i]\,p + E[H_{\{0,W\}} \mid X_1 = i-1, X_0 = i]\,(1-p).$$
However, for any $j \in \{0, 1, \dots, W\}$, $E[H_{\{0,W\}} \mid X_1 = j, X_0 = i] = 1 + k_{j,\{0,W\}}$. Therefore, the mean time to absorption $k_{i,\{0,W\}}$ satisfies the following difference equation:
$$k_{i,\{0,W\}} = 1 + p\,k_{i+1,\{0,W\}} + (1-p)\,k_{i-1,\{0,W\}}, \quad \forall i \in \{1, 2, \dots, W-1\},$$
with the boundary conditions $k_{0,\{0,W\}} = k_{W,\{0,W\}} = 0$. Rearranging the difference equation,
$$k_{i+1,\{0,W\}} - k_{i,\{0,W\}} = \frac{1-p}{p}\left(k_{i,\{0,W\}} - k_{i-1,\{0,W\}}\right) - \frac{1}{p}, \quad \forall i \in \{1, 2, \dots, W-1\},$$
which implies that, for any $i \in \{1, 2, \dots, W\}$,
$$k_{i,\{0,W\}} - k_{i-1,\{0,W\}} = \left(\frac{1-p}{p}\right)^{i-1}\left(k_{1,\{0,W\}} - k_{0,\{0,W\}}\right) - \frac{1}{p}\left(\left(\frac{1-p}{p}\right)^{i-2} + \left(\frac{1-p}{p}\right)^{i-3} + \cdots + 1\right).$$
If $p \ne \tfrac12$, then, for any $i \in \{1, 2, \dots, W\}$,
$$k_{i,\{0,W\}} - k_{i-1,\{0,W\}} = \left(\frac{1-p}{p}\right)^{i-1}\left(k_{1,\{0,W\}} - k_{0,\{0,W\}}\right) - \frac{1}{p}\cdot\frac{1 - \left(\frac{1-p}{p}\right)^{i-1}}{1 - \frac{1-p}{p}} = \left(\frac{1-p}{p}\right)^{i-1}\left(\left(k_{1,\{0,W\}} - k_{0,\{0,W\}}\right) + \frac{1}{2p-1}\right) - \frac{1}{2p-1},$$
and thus, by telescoping,
$$k_{i,\{0,W\}} = \frac{1 - \left(\frac{1-p}{p}\right)^{i}}{1 - \frac{1-p}{p}}\left(\left(k_{1,\{0,W\}} - k_{0,\{0,W\}}\right) + \frac{1}{2p-1}\right) - \frac{i}{2p-1}.$$
If $p = \tfrac12$, then, for any $i \in \{1, 2, \dots, W\}$, $k_{i,\{0,W\}} - k_{i-1,\{0,W\}} = \left(k_{1,\{0,W\}} - k_{0,\{0,W\}}\right) - 2(i-1)$, and thus, by telescoping,
$$k_{i,\{0,W\}} = i\left(k_{1,\{0,W\}} - k_{0,\{0,W\}}\right) - i(i-1).$$
Therefore, together with the boundary condition $k_{W,\{0,W\}} = 0$, the mean time until either player 1, with initial wealth $W_1 = i \in \{0, 1, \dots, W\}$, or player 2 eventually wins the whole game is given by
$$k_{i,\{0,W\}} = \begin{cases} \dfrac{1 - \left(\frac{1-p}{p}\right)^{i}}{1 - \left(\frac{1-p}{p}\right)^{W}}\cdot\dfrac{W}{2p-1} - \dfrac{i}{2p-1} & \text{if } p \ne \tfrac12 \\[2ex] i\,(W - i) & \text{if } p = \tfrac12 \end{cases}.$$
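The closed form for $k_{i,\{0,W\}}$ can be cross-checked by solving the difference equation of 4.5.12 directly as a linear system. A sketch; the $W$, $p$, $i$ values are illustrative, and the second call uses the numbers of Example 12 below:

```python
import numpy as np

def mean_absorption_times(W, p):
    """Solve k_i = 1 + p k_{i+1} + (1-p) k_{i-1}, k_0 = k_W = 0, as a linear system."""
    A = np.zeros((W + 1, W + 1))
    b = np.zeros(W + 1)
    A[0, 0] = A[W, W] = 1.0                # boundary conditions k_0 = k_W = 0
    for i in range(1, W):
        A[i, i] = 1.0
        A[i, i + 1] = -p
        A[i, i - 1] = -(1 - p)
        b[i] = 1.0
    return np.linalg.solve(A, b)

print(mean_absorption_times(10, 0.5)[3])    # 21.0 = i (W - i) with i = 3, W = 10
print(mean_absorption_times(120, 0.7)[20])  # ~250, matching Example 12 below
```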
Example 11 [CAS Exam S 2015 October Q8]: Ben and Allison each decide to wager 1 unit against each other on flips of a fair coin, until one of them runs out of money. At the start of the contest, Ben has 20 units and Allison has 55 units. Find the variance of Ben's final wealth.

Solution: The coin is fair, so $p = 1/2$. Let Ben be player 1, so $W_1 = 20$; let Allison be player 2, so $W_2 = 55$ and $W = W_1 + W_2 = 75$. Ben's final wealth $X$ is either $0$ or $75$. By 4.5.10,
$$P(X = 0 \mid X_0 = 20) = P(H_{\{0\}} < \infty \mid X_0 = 20) = h_{20,\{0\}} = 1 - \frac{20}{75} = \frac{11}{15},$$
and $P(X = 75 \mid X_0 = 20) = \frac{4}{15}$. Thus $E[X] = 20$, $E[X^2] = 1500$, and $\mathrm{Var}(X) = 1500 - 20^2 = 1100$.
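A short check of Example 11's arithmetic, using the hitting probability $h_{W_1,\{0\}} = 1 - W_1/W$ from 4.5.10 for the fair game:

```python
W1, W = 20, 75
p_win = W1 / W               # 4/15: Ben ends with all W = 75 units
p_ruin = 1 - p_win           # 11/15: Ben ends with 0
EX = W * p_win               # 20 (a fair game preserves the mean)
EX2 = W**2 * p_win           # 1500
print(EX, EX2 - EX**2)       # mean 20, variance 1100
```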
Exercise 11 [CAS Exam MAS-I 2018 October Q10]: You are given the following information about flips of a fair coin:
• If the outcome is Heads, 1 chip is won.
• If the outcome is Tails, 1 chip is lost.
• A gambler starts with 20 chips and will stop playing when he has either lost all of his chips or reached 50 chips.
• Of the first 10 flips, 7 are Heads and 3 are Tails.
Calculate the probability that the gambler will lose all of his chips, given the results of the first 10 flips.

Example 12: Alfred and Daniel decide to gamble against each other by flipping an unfair coin, until one of them runs out of money. At the beginning, Alfred has \$20 while Daniel has \$100. When the coin lands on heads, which happens with probability 0.7, Alfred wins \$1 and Daniel loses \$1; when the coin lands on tails, with probability 0.3, Daniel wins \$1 and Alfred loses \$1. Find the average number of flips needed until either Alfred or Daniel eventually goes broke.

Solution: Let Alfred be player 1, so $p = 0.7$ and $W_1 = 20$. Let Daniel be player 2, so $W_2 = 100$ and $W = W_1 + W_2 = 120$. By 4.5.12, with $i = 20$,
$$k_{20,\{0,120\}} = \frac{1 - \left(\frac{1-p}{p}\right)^{i}}{1 - \left(\frac{1-p}{p}\right)^{W}}\cdot\frac{W}{2p-1} - \frac{i}{2p-1} = \frac{1 - (3/7)^{20}}{1 - (3/7)^{120}}\cdot\frac{120}{0.4} - \frac{20}{0.4} \approx 250.$$

Exercise 12 [Modified from CAS Exam MAS-I 2018 October Q10]: You are given the following information about flips of an unfair coin:
• If the outcome is Heads, with probability 0.3, 1 chip is won.

4.6 Branching Processes

In this section, $X_n$ denotes the size of the $n$-th generation of a population, $X_n = \sum_{k=1}^{X_{n-1}} \xi_{n,k}$, where the offspring counts $\xi_{n,k}$ are independent and identically distributed with mean $\mu$ and variance $\sigma^2$. In fact, by 2.3.2,
$$\mathrm{Var}(X_{n+1}) = \sigma^2\mu^n + \mathrm{Var}(X_n)\,\mu^2 = \cdots = \sigma^2\left(\mu^n + \mu^{n+1} + \cdots + \mu^{2n}\right).$$

4.6.8 One of the important aspects of branching processes is studying the probability that extinction occurs in the population:
$$P(E) = P(\exists\, n \in \mathbb{N} \text{ s.t. } X_n = 0) = P\left(\bigcup_{n=1}^{\infty} \{X_n = 0\}\right) = \lim_{n\to\infty} P(X_n = 0).$$

4.6.9 If $P(\xi_{11} = 1) = 1$, then nothing interesting happens with the population, and obviously $P(E) = 0$ in this case. So we will assume from now on that $P(\xi_{11} = 1) < 1$. The mean $\mu = E[\xi_{11}]$ alone provides a sufficient condition for whether extinction eventually occurs in this family almost surely: if $\mu \le 1$, then $P(E) = 1$; if $\mu > 1$, then $P(E) < 1$. Here is a proof for the case $\mu < 1$: then $P(X_n \ge 1) \le E[X_n] = \mu^n \to 0$, and thus $P(E) = \lim_{n\to\infty} P(X_n = 0) = 1$. (The case $\mu = 1$ is treated in 4.6.10 below.)

4.6.10 We now prove the conclusion of the paragraph above in the case $\mu \ge 1$. We may assume $P(\xi_{11} = 0) > 0$ (otherwise the population can never die out and $P(E) = 0$); then also $P(\xi_{11} = 0) + P(\xi_{11} = 1) < 1$. We first observe the following recursive formula in terms of probability generating functions: for any $n = 1, 2, \dots$,
$$G_{X_n}(z) = \left(G_{X_{n-1}} \circ \varphi\right)(z) = (\underbrace{\varphi \circ \varphi \circ \cdots \circ \varphi}_{n})(z) = \left(\varphi \circ G_{X_{n-1}}\right)(z),$$
where $G_{X_n}$ is the probability generating function of $X_n$ and $\varphi$ is the probability generating function of $\xi_{11}$. Indeed, by the law of total expectation and the independence of the random variables $\xi_{n,k}$,
$$G_{X_n}(z) = E\left[z^{X_n}\right] = E\left[z^{\sum_{k=1}^{X_{n-1}} \xi_{n,k}}\right] = E\left[E\left[z^{\sum_{k=1}^{X_{n-1}} \xi_{n,k}} \,\Big|\, X_{n-1}\right]\right] = \sum_{i=0}^{\infty} E\left[z^{\sum_{k=1}^{i} \xi_{n,k}}\right] P(X_{n-1} = i) = \sum_{i=0}^{\infty} \varphi(z)^i\, P(X_{n-1} = i) = E\left[\varphi(z)^{X_{n-1}}\right] = \left(G_{X_{n-1}} \circ \varphi\right)(z),$$
and iterating gives the $n$-fold composition, which also equals $\left(\varphi \circ G_{X_{n-1}}\right)(z)$. In particular, when $z = 0$, for any $n = 1, 2, \dots$,
$$P(X_n = 0) = \varphi\left(P(X_{n-1} = 0)\right).$$
Since any probability generating function is a continuous function, taking $n \to \infty$ on both sides yields that the extinction probability $P(E)$ satisfies the implicit equation $z = \varphi(z)$; that is,
$$P(E) = \varphi(P(E)) = \sum_{j=0}^{\infty} P(E)^j\, P(\xi_{11} = j).$$
Obviously $z = 1$ is a solution, but there could be other solutions. We claim that the extinction probability $P(E)$ is the smallest positive solution. Let $\hat a$ be the smallest positive solution. We show by induction that, for every $n$, $a_n = P(X_n = 0) \le \hat a$, which implies $P(E) = \lim_n P(X_n = 0) = \lim_n a_n \le \hat a$. This is obviously true for $n = 0$, since $a_0 = 0$. Suppose that $a_{n-1} \le \hat a$. Then $P(X_n = 0) = \varphi(P(X_{n-1} = 0)) = \varphi(a_{n-1}) \le \varphi(\hat a) = \hat a$, where the last inequality is due to the fact that $\varphi$ is an increasing function.

Suppose $\mu = 1$. Then $\varphi'(1) = \sum_{k=1}^{\infty} k\,P(\xi_{11} = k) = \mu = 1$. Recall that we are assuming $P(\xi_{11} = 0) > 0$ and $P(\xi_{11} = 0) + P(\xi_{11} = 1) < 1$; thus $P(\xi_{11} \ge 2) > 0$. From this we easily get that $\varphi''(z) > 0$ for $z \in (0, 1)$, and thus $\varphi'(z) < 1$ for $z \in (0, 1)$. Hence, for $s \in (0, 1)$,
$$1 - \varphi(s) = \int_s^1 \varphi'(r)\,dr < 1 - s,$$
that is, $\varphi(s) > s$. Therefore, in this case, $z = 1$ is the only positive root, and consequently $P(E) = 1$.

Now suppose $\mu > 1$. Then $\varphi'(1) = \sum_{k=1}^{\infty} k\,P(\xi_{11} = k) = \mu > 1$. Since $\varphi(1) = 1$, by considering the function $\psi(z) = \varphi(z) - z$, we must have $\varphi(z_0) < z_0$ for some $z_0 < 1$ close to 1. Since $\varphi(0) > 0$, we have $\psi(0) > 0$ and $\psi(z_0) < 0$; thus, there exists $a \in (0, z_0)$ such that $\psi(a) = 0$, i.e., $\varphi(a) = a$. Since $\varphi''(z) > 0$ for $z \in (0, 1)$, the function $\varphi$ is convex, so there is at most one $a \in (0, 1)$ such that $\varphi(a) = a$. The extinction probability $P(E)$ is this positive solution, which is less than 1.

4.6.11 Suppose $X_0 = m > 1$. Then $E[X_n] = m\mu^n$ and
$$\mathrm{Var}(X_n) = \begin{cases} m\sigma^2\mu^{n-1}\left(\dfrac{1-\mu^n}{1-\mu}\right) & \text{if } \mu \ne 1 \\ mn\sigma^2 & \text{if } \mu = 1 \end{cases}.$$

Example 13 [Modified from CAS Exam MAS-I 2018 October Q11]: A scientist has discovered a way to create a new element. This scientist studied the new element and observed some of its properties:
• The element has a lifespan of 1 week, after which it evaporates.
• The element can produce offspring right before evaporating.
• The number of offspring is distributed as follows:

Number of Offspring   Probability
0                     0.75
1                     0.20
2                     0.05
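Since the iterates $a_n = P(X_n = 0) = \varphi(a_{n-1})$ increase to the extinction probability (4.6.10), $P(E)$ can be computed by fixed-point iteration of the offspring pgf. A sketch applied to Example 13's offspring law, plus an illustrative supercritical law that is not from the notes:

```python
def extinction_probability(offspring_pmf, n_iter=200):
    """Smallest positive root of z = phi(z) via the iterates a_n = P(X_n = 0)."""
    phi = lambda z: sum(p * z**k for k, p in offspring_pmf.items())
    z = 0.0
    for _ in range(n_iter):
        z = phi(z)                       # a_n increases monotonically to P(E)
    return z

# Example 13's law: mu = 0.20 + 2 * 0.05 = 0.30 <= 1, so extinction is certain.
print(extinction_probability({0: 0.75, 1: 0.20, 2: 0.05}))   # -> 1.0 (approx)

# Illustrative supercritical law: mu = 1.25 > 1, and z = 0.25 + 0.25 z + 0.5 z^2
# has smallest positive root 0.5.
print(extinction_probability({0: 0.25, 1: 0.25, 2: 0.50}))   # -> 0.5
```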
4.7.3 Suppose the state $j$ is recurrent. If $T_j = \inf\{n \ge 1 : X_n = j\}$ is the first passage time to $j$, then $P(T_j < \infty \mid X_0 = j) = 1$. Let $m_j = E[T_j \mid X_0 = j]$; then $m_j$ can be finite or infinite. We say the recurrent state $j$ is positive recurrent if $m_j < \infty$ and null recurrent if $m_j = \infty$.

4.7.4 Suppose $i$ is recurrent and $i \leftrightarrow j$; then $P(T_j < \infty \mid X_0 = i) = 1$. Indeed, since $i \leftrightarrow j$, there is a positive integer $n$ such that $p^{(n)}_{ij} > 0$. Let $X_0 = i$ and say that the first opportunity is a success if $X_n = j$; note that the first opportunity is a success with probability $p^{(n)}_{ij} > 0$. If the first opportunity is not a success, then consider the next time (after time $n$) that the chain enters the state $i$. (Because $i$ is recurrent, the chain will definitely enter the state $i$ again after time $n$.) Say that the second opportunity is a success if $n$ time periods later the chain is in state $j$. If the second opportunity is not a success, then wait until the next time the chain enters the state $i$, and say that the third opportunity is a success if $n$ time periods later the chain is in state $j$. Continuing in this manner, we define infinitely many opportunities, each of which is a success with the same probability $p^{(n)}_{ij} > 0$. Thus the number of opportunities until the first success is a geometric random variable with parameter $p^{(n)}_{ij} > 0$; it follows that with probability 1 a success will eventually occur, and so, with probability 1, the state $j$ will eventually be hit.

4.7.5 Suppose $\{X_n\}_{n=0}^{\infty}$ is irreducible and recurrent. For any state $j$, let $\pi_j$ be the long-run proportion of time that $\{X_n\}_{n=0}^{\infty}$ spends in the state $j$. Then, for any initial state,
$$\pi_j = \frac{1}{m_j}.$$
Here is a proof of this assertion. Suppose $X_0 = i$. Let $S_1 = T_j$ be the first passage time to the state $j$; let $S_2$ be the additional time after $S_1$ until the chain next enters the state $j$; let $S_3$ be the additional time after $S_1 + S_2$ until the chain next enters the state $j$; and so on. Note that $S_1$ is finite by 4.7.4. Also, for $n \ge 2$, because $S_n$ is the number of time steps between the $(n-1)$-st and the $n$-th visit to the state $j$, it follows from the Markov property that $S_2, S_3, \dots$ are i.i.d. random variables with mean $m_j$. Because the $n$-th visit to the state $j$ occurs at time $S_1 + \cdots + S_n$, the long-run proportion of time that the chain is in the state $j$ is
$$\pi_j = \lim_{n\to\infty} \frac{n}{\sum_{i=1}^{n} S_i} = \lim_{n\to\infty} \frac{1}{\frac{1}{n}\sum_{i=1}^{n} S_i} = \lim_{n\to\infty} \frac{1}{\frac{S_1}{n} + \frac{\sum_{i=2}^{n} S_i}{n}} = \frac{1}{m_j},$$
where the last equality follows because $\lim_{n\to\infty} S_1/n = 0$ and, from the strong law of large numbers,
$$\lim_{n\to\infty} \frac{S_2 + \cdots + S_n}{n} = \lim_{n\to\infty} \frac{S_2 + \cdots + S_n}{n-1}\cdot\frac{n-1}{n} = m_j.$$

4.7.6 If $i$ is positive recurrent and $i \leftrightarrow j$, then $j$ is positive recurrent. Consequently, positive recurrence is a class property. Here is a proof. Let $n$ be such that $p^{(n)}_{ij} > 0$. Because $\pi_i$ is the long-run proportion of time that the chain is in the state $i$, and $p^{(n)}_{ij}$ is the long-run proportion of the time spent in state $i$ after which the chain is in state $j$ an additional $n$ steps later,
$$\pi_i\, p^{(n)}_{ij} = \text{long-run proportion of time that } X \text{ is in } i \text{ and will be in } j \text{ after an additional time } n$$
$$= \text{long-run proportion of time that } X \text{ is in } j \text{ and was in } i \text{ } n \text{ units of time ago} \le \text{long-run proportion of time that } X \text{ is in } j.$$
Hence $\pi_j \ge \pi_i\, p^{(n)}_{ij} > 0$, showing that $j$ is positive recurrent.

4.7.7 Null recurrence is also a class property. Here is a proof. Suppose $i$ is null recurrent and $i \leftrightarrow j$. Because $i$ is recurrent and $i \leftrightarrow j$, we get that $j$ is recurrent. If $j$ were positive recurrent, then by the previous paragraph $i$ would also be positive recurrent, which contradicts the assumption.

4.7.8 An irreducible finite state Markov chain must be positive recurrent. Indeed, we know that such a chain must be recurrent; hence all its states are either positive recurrent or null recurrent. If they were all null recurrent, then all the long-run proportions would be 0, which is impossible when there are only finitely many states. Consequently, the chain is positive recurrent.
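A simulation sketch of the identity $\pi_j = 1/m_j$ from 4.7.5, run on the transition matrix of Example 15 below (states re-indexed from 0; the step count and seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(3)
P = np.array([[0.5, 0.5, 0.0],
              [0.4, 0.4, 0.2],
              [0.3, 0.4, 0.3]])
n_steps = 200_000
x, visits, returns, last_seen = 0, np.zeros(3), [], None
for t in range(n_steps):
    visits[x] += 1
    if x == 2:                               # track return times to state 3 (index 2)
        if last_seen is not None:
            returns.append(t - last_seen)
        last_seen = t
    x = rng.choice(3, p=P[x])

print(visits[2] / n_steps)       # long-run proportion of time in state 3
print(1 / np.mean(returns))      # 1/m_3; both are ~ 10/79 = 0.1266
```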
4.7.9 A distribution $\pi = \left[\pi_1, \pi_2, \dots, \pi_{|S|}\right]^T$ of states ($\pi_i \ge 0$ and $\sum_i \pi_i = 1$) for the Markov chain $X$ is called a stationary/invariant/equilibrium distribution if $\pi$ satisfies $\pi^T P = \pi^T$. Since $\pi$ is a distribution, $\sum_{i=1}^{|S|} \pi_i = 1$, or equivalently $\pi^T \mathbf{1} = 1$.

4.7.10 Such a stationary/invariant/equilibrium distribution of states for the Markov chain $X$ exists if and only if there exists a solution $\pi$ of the following system of linear equations:
$$\begin{cases} \pi^T P = \pi^T \\ \pi^T \mathbf{1} = 1 \end{cases}.$$

4.7.11 If there exists a time $n = 0, 1, 2, \dots$ such that
$$\mu_n = \begin{bmatrix} P(X_n = 1) \\ P(X_n = 2) \\ \vdots \\ P(X_n = |S|) \end{bmatrix} = \pi,$$
then, by 4.1.12, $\mu_{n+1}^T = \mu_n^T P = \pi^T P = \pi^T$, and thus $\mu_{n+1} = \pi$. Hence, if there exists a time $n$ such that $\mu_n = \pi$, then $\mu_m = \pi$ for any time $m = n, n+1, \dots$. This is the reason why the distribution $\pi$ is called the stationary/invariant/equilibrium distribution.

4.7.12 Since $\pi_i$ is the long-run proportion of time that the chain is in state $i$, it is also the long-run proportion of transitions out of state $i$. Thus
$$\pi_i\, p_{ij} = \text{long-run proportion of transitions from state } i \text{ to state } j.$$
Summing over all states $i$, we get
$$\pi_j = \sum_{i=1}^{|S|} \pi_i\, p_{ij}.$$
Thus, if the long-run proportions $\{\pi_i : i \in S\}$ satisfy $\sum_i \pi_i = 1$, then $\pi = (\pi_1, \pi_2, \dots)^T$ is a stationary/invariant/equilibrium distribution. In particular, if $\{X_n\}_{n=0}^{\infty}$ is an irreducible finite state Markov chain, $\pi = (\pi_1, \pi_2, \dots, \pi_{|S|})^T$ is a stationary/invariant/equilibrium distribution.

4.7.13 In the next few paragraphs we prove the following result. If the chain $X$ is irreducible, then the following three assertions are equivalent: (i) every state is positive recurrent; (ii) some state $i$ is positive recurrent; (iii) there is an invariant distribution. Moreover, when (iii) holds, we have $\pi_i = 1/m_i$ for all $i$.

To prove the last assertion, we return to the argument of (iii) $\Rightarrow$ (i) armed with the knowledge that our chain is recurrent, so $\lambda = \mu_k$ and the inequality (3) is in fact an equality.

4.7.19 A measure $\alpha = \left[\alpha_1, \alpha_2, \dots, \alpha_{|S|}\right]^T$ on $S$ is called a limiting probability if, for any $i \in S$,
$$\alpha_i = \lim_{n\to\infty} P(X_n = i \mid X_0 = j) = \lim_{n\to\infty} p^{(n)}_{ji}, \quad\text{for any } j \in S.$$

4.7.20 Suppose the Markov chain $X$ is irreducible, aperiodic, and positive recurrent, and let $\pi$ be the invariant distribution. Then, for any $i, j \in S$,
$$\lim_{n\to\infty} P(X_n = j \mid X_0 = i) = \pi_j, \quad\text{or equivalently,}\quad \lim_{n\to\infty} p^{(n)}_{ij} = \pi_j.$$
In particular, if $X$ is an irreducible and aperiodic finite state Markov chain, then, for any $i, j \in S$, $\lim_{n\to\infty} p^{(n)}_{ij} = \pi_j$.

4.7.21 Suppose the Markov chain $X$ is irreducible and aperiodic. If $X$ is transient or null recurrent, then, for any $i, j \in S$, $\lim_{n\to\infty} p^{(n)}_{ij} = 0$.

4.7.22 A Markov chain $X$ that is irreducible, aperiodic, and positive recurrent is called ergodic.

4.7.23 The following result on the long-run transformed mean is proved in Proposition 4.6 of Ross (2014): if the Markov chain $X$ is irreducible with a stationary distribution $\pi$, then, for any bounded function $r$ on $S$,
$$\lim_{N\to\infty} \frac{\sum_{n=1}^{N} r(X_n)}{N} = \sum_{i=1}^{|S|} r(i)\,\pi_i.$$
If the function $r$ is regarded as a reward function, in the sense that a reward $r(i)$ is earned whenever the chain is in state $i \in S$, then this long-run transformed mean represents the average reward per unit of time.
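The system in 4.7.10 is routine to solve numerically: replace one (redundant) balance equation with the normalization $\pi^T \mathbf{1} = 1$ and solve the resulting square linear system. A sketch, applied to the matrix of Example 15 below:

```python
import numpy as np

def stationary_distribution(P):
    """Solve pi^T P = pi^T together with pi^T 1 = 1 (see 4.7.10)."""
    m = P.shape[0]
    A = np.vstack([(P.T - np.eye(m))[:-1],   # m-1 balance equations (one is redundant)
                   np.ones(m)])              # normalization row
    b = np.zeros(m)
    b[-1] = 1.0
    return np.linalg.solve(A, b)

P = np.array([[0.5, 0.5, 0.0],
              [0.4, 0.4, 0.2],
              [0.3, 0.4, 0.3]])
print(stationary_distribution(P))   # [34/79, 35/79, 10/79] ~ [0.4304, 0.4430, 0.1266]
```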
Example 15 [CAS Exam S 2016 May Q11]: A three-state Markov chain, with the following transition probability matrix, is used to model the movement of policyholders between three states:
$$P = \begin{bmatrix} 0.5 & 0.5 & 0.0 \\ 0.4 & 0.4 & 0.2 \\ 0.3 & 0.4 & 0.3 \end{bmatrix} \quad (\text{rows and columns indexed by states } 1, 2, 3).$$
Calculate the stationary percentage of policyholders in State 3.

Solution: This Markov chain is irreducible. Let $\pi_i$ be the stationary percentage of policyholders in State $i$, $i = 1, 2, 3$. Solving the system
$$\begin{cases} 0.5\pi_1 + 0.4\pi_2 + 0.3\pi_3 = \pi_1 \\ 0.5\pi_1 + 0.4\pi_2 + 0.4\pi_3 = \pi_2 \\ 0 + 0.2\pi_2 + 0.3\pi_3 = \pi_3 \\ \pi_1 + \pi_2 + \pi_3 = 1 \end{cases},$$
we get $\pi_1 = \frac{34}{79}$, $\pi_2 = \frac{35}{79}$, $\pi_3 = \frac{10}{79}$. Hence the stationary percentage of policyholders in State 3 is $\frac{10}{79}$.

Exercise 15 [CAS Exam S 2015 October Q10]: An insurance company classifies its customers into three categories: 1. Good Risk, 2. Acceptable Risk, 3. Bad Risk. Customers independently transition between categories at the end of each year according to a Markov process with the following transition matrix:
$$P = \begin{bmatrix} 0.5 & 0.5 & 0.0 \\ 0.4 & 0.4 & 0.2 \\ 0.0 & 0.5 & 0.5 \end{bmatrix}$$
Calculate the stationary probability of a customer being classified as a Good Risk.

Example 16 [Modified from CAS Exam S 2017 October Q14]: You are given the following information about a Markov chain with a transition probability matrix:
• $P = \begin{bmatrix} 0.60 & 0.30 & 0.10 \\ 0.70 & 0.30 & 0.00 \\ 0.30 & 0.00 & 0.70 \end{bmatrix}$
• The three states are 1, 2, and 3.
• At time 0, the Markov chain is in State 3.
Calculate the long-run proportion of time in State 3 and the expected number of steps needed to return to State 3.

Solution: The Markov chain is irreducible, so the vector of long-run proportions of time $(\pi_1, \pi_2, \pi_3)^T$ is equal to the invariant distribution. Solving the system
$$\begin{cases} 0.6\pi_1 + 0.7\pi_2 + 0.3\pi_3 = \pi_1 \\ 0.3\pi_1 + 0.3\pi_2 + 0 = \pi_2 \\ 0.1\pi_1 + 0 + 0.7\pi_3 = \pi_3 \\ \pi_1 + \pi_2 + \pi_3 = 1 \end{cases},$$
we get $\pi_1 = \frac{21}{37}$, $\pi_2 = \frac{9}{37}$, $\pi_3 = \frac{7}{37}$. The long-run proportion of time in State 3 is $\frac{7}{37}$, and the expected number of steps needed to return to State 3 is $m_3 = \frac{1}{\pi_3} = \frac{37}{7}$.

Exercise 16 [Modified from CAS Exam S 2017 May Q13]: You are given the following information about a Markov chain with a transition probability matrix:
• $P = \begin{bmatrix} 0.60 & 0.20 & 0.20 \\ 0.40 & 0.40 & 0.20 \\ 0.10 & 0.00 & 0.90 \end{bmatrix}$
• The three states are 1, 2, and 3.
• At time 0, the Markov chain is in State 2.
Calculate the long-run proportion of time in State 2 and the expected number of steps needed to return to State 2.

Example 17 [Modified from CAS Exam S 2016 May Q11]: A three-state Markov chain, with the following transition probability matrix, is used to model the movement of policyholders between three states:
$$P = \begin{bmatrix} 0.3 & 0.0 & 0.7 \\ 0.3 & 0.4 & 0.3 \\ 0.5 & 0.5 & 0.0 \end{bmatrix} \quad (\text{rows and columns indexed by states } 1, 2, 3).$$
Calculate the limiting probabilities $\lim_{n\to\infty} p^{(n)}_{ji}$, $j, i = 1, 2, 3$.

Solution: This Markov chain is irreducible and aperiodic, with state space $\{1, 2, 3\}$.
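Example 17's limiting probabilities can be read off numerically: by 4.7.20, $\lim_{n\to\infty} p^{(n)}_{ji} = \pi_i$ for every starting state $j$, so every row of $P^n$ converges to $\pi^T$. A sketch:

```python
import numpy as np

P = np.array([[0.3, 0.0, 0.7],
              [0.3, 0.4, 0.3],
              [0.5, 0.5, 0.0]])
# All three rows of P^n agree in the limit; here each row approaches
# [45/122, 35/122, 42/122] ~ [0.3689, 0.2869, 0.3443].
print(np.linalg.matrix_power(P, 60))
```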