Markov Chains – steady state and eigenvalues

Tim Chartier
Department of Mathematics
Math 381

Announcement

Wednesday's class will be in Communications B-027. We will be using MATLAB.

Taking lots of steps

Recall

v1 = Av0,
v2 = Av1 = A(Av0) = A^2 v0,
v3 = Av2 = A(A^2 v0) = A^3 v0,
...
v(n+1) = Avn = A(A^n v0) = A^(n+1) v0.

Therefore, v100 = A^100 v0. In MATLAB, v100 would be found using the following command:

v100 = A^100*v0

Stepping in place

Last time we noticed that for large n, v(n+1) ≈ vn. Moreover, we are converging so that

Av = v.

That is, a time step leaves the percentages unchanged. This relationship means that the vector v is an eigenvector of A with eigenvalue 1.

Eigenvalue and eigenvector review

Let A be a square m × m matrix with real entries. Recall from linear algebra that a scalar λ is an eigenvalue of A if there is a nonzero vector v (the corresponding eigenvector) for which Av = λv. If v is an eigenvector, then so is αv for any nonzero scalar α.

A more complex example

Keep in mind that even real matrices can have complex eigenvalues. Consider the matrix

A = (  1  2
      −4  3 ).

To find the eigenvalues we compute

det(A − λI) = (1 − λ)(3 − λ) + 8 = λ^2 − 4λ + 11.

Setting this polynomial to zero, we find

λ = (4 ± √−28)/2 = 2 ± i√7.

Return to example

We know the eigenvalues of

A = ( 1  2
      4  3 )

are −1 and 5. Let's find the eigenvectors associated with λ = −1.
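The two eigenvalue computations above are easy to check numerically. The class session uses MATLAB; the sketch below uses Python with NumPy as a stand-in (the matrices are the two examples from the slides, nothing else is assumed).

```python
import numpy as np

# Matrix with complex eigenvalues: det(A - lambda*I) = lambda^2 - 4*lambda + 11
A_complex = np.array([[1.0, 2.0],
                      [-4.0, 3.0]])
# Roots of the characteristic polynomial: 2 +/- i*sqrt(7), about 2 +/- 2.6458i
print(np.linalg.eigvals(A_complex))   # two values 2 +/- i*sqrt(7), order may vary

# Matrix from the running example: eigenvalues -1 and 5
A = np.array([[1.0, 2.0],
              [4.0, 3.0]])
print(np.sort(np.linalg.eigvals(A).real))   # -1 and 5
```

In MATLAB the analogous call would be `eig(A)`.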
To find the eigenvector(s) associated with the eigenvalue λ = −1, find nonzero vectors v = (v1, v2)^T satisfying

( 1 − λ    2    ) ( v1 )   ( 2  2 ) ( v1 )   ( 0 )
(   4    3 − λ  ) ( v2 ) = ( 4  4 ) ( v2 ) = ( 0 ).

Using Gaussian elimination on the augmented matrix, we find

( 2  2 | 0 )      ( 2  2 | 0 )
( 4  4 | 0 )  →   ( 0  0 | 0 ).

Example, cont.

Gaussian elimination left us with the single equation 2v1 + 2v2 = 0. Therefore, the solution set consists of all vectors v such that v2 = −v1. We can take v1 to be any nonzero value, say v1 = 1, and then the eigenvector is (1, −1)^T.

Converging

Why does the Markov process converge to this unique dominant eigenvector? Convergence is guaranteed when the transition matrix M satisfies two conditions specified in the Perron–Frobenius theorem. First, M must be irreducible: every state is reachable from every other state via a path in the transition diagram. Our matrix has this property, which in our setting also guarantees the second condition of the theorem. Therefore, convergence is guaranteed.

Converging to what?

We have established convergence, but not yet identified the limit vector. In the development to come, we will use the following facts about powers of an eigenvalue λ:

|λ^n| → 0 as n → ∞ if |λ| < 1,
|λ^n| = 1 for all n if |λ| = 1,
|λ^n| → ∞ as n → ∞ if |λ| > 1.

Full set of eigenvectors

Assume M has n linearly independent eigenvectors v1, v2, ..., vn. Then an arbitrary initial vector x(0) (chosen so that its entries sum to 1) can be expressed as a linear combination of these eigenvectors:

x(0) = c1 v1 + c2 v2 + ... + cn vn.
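The expansion above explains the convergence we observed: applying M repeatedly shrinks every term whose eigenvalue has modulus below 1, leaving the eigenvalue-1 term. The experiment below sketches this in Python/NumPy (standing in for the MATLAB session); the 3-state matrix M is a made-up positive column-stochastic example, not one from the lecture.

```python
import numpy as np

# A made-up 3-state transition matrix: columns sum to 1 and all entries are
# positive, so the chain is irreducible and Perron-Frobenius applies
M = np.array([[0.5, 0.2, 0.3],
              [0.3, 0.6, 0.1],
              [0.2, 0.2, 0.6]])

# An arbitrary starting distribution with entries summing to 1
x0 = np.array([1.0, 0.0, 0.0])

# Take many steps: x100 = M^100 x0 (in MATLAB: x100 = M^100*x0)
x100 = np.linalg.matrix_power(M, 100) @ x0

# Compare with the eigenvector of M for eigenvalue 1, rescaled to sum to 1
vals, vecs = np.linalg.eig(M)
k = int(np.argmin(np.abs(vals - 1.0)))   # index of the eigenvalue closest to 1
steady = np.real(vecs[:, k] / vecs[:, k].sum())

print(x100)    # after 100 steps the iterate has settled
print(steady)  # the same steady-state vector, up to rounding
```

Note the rescaling of the eigenvector: `eig` returns eigenvectors of unit length, but the steady-state interpretation needs entries summing to 1, which is exactly the scalar-multiple freedom (αv) noted in the eigenvector review.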