PHYS-4007/5007: Computational Physics
Course Lecture Notes, Section VIII
Dr. Donald G. Luttermoser
East Tennessee State University
Version 4.1

Abstract: These class notes are designed for use by the instructor and students of the course PHYS-4007/5007: Computational Physics taught by Dr. Donald Luttermoser at East Tennessee State University.

[...]

i) Vector addition is commutative:
      |α〉 + |β〉 = |β〉 + |α〉.   (VIII-12)

ii) Vector addition is associative:
      |α〉 + (|β〉 + |γ〉) = (|α〉 + |β〉) + |γ〉.   (VIII-13)

iii) There exists a zero (or null) vector, |0〉, with the property that
      |α〉 + |0〉 = |α〉,   (VIII-14)
for every vector |α〉.

iv) For every vector |α〉 there is an associated inverse vector (|−α〉) such that
      |α〉 + |−α〉 = |0〉.   (VIII-15)

b) Scalar multiplication: The product of any scalar with any vector is another vector:
      a|α〉 = |γ〉.   (VIII-16)

i) Scalar multiplication is distributive with respect to vector addition:
      a(|α〉 + |β〉) = a|α〉 + a|β〉,   (VIII-17)
and with respect to scalar addition:
      (a + b)|α〉 = a|α〉 + b|α〉.   (VIII-18)

ii) It is also associative:
      a(b|α〉) = (ab)|α〉.   (VIII-19)

iii) Multiplication by the scalars 0 and 1 gives
      0|α〉 = |0〉;   1|α〉 = |α〉.   (VIII-20)
(Note that |−α〉 = (−1)|α〉.)

c) A linear combination of the vectors |α〉, |β〉, |γ〉, ... is an expression of the form
      a|α〉 + b|β〉 + c|γ〉 + · · · .   (VIII-21)

i) A vector |λ〉 is said to be linearly independent of the set |α〉, |β〉, |γ〉, ... if it cannot be written as a linear combination of them (e.g., the unit vectors x̂, ŷ, and ẑ).

ii) A collection of vectors is said to span the space if every vector can be written as a linear combination of the members of this set.

iii) A set of linearly independent vectors that spans the space is called a basis =⇒ x̂, ŷ, and ẑ (or i, j, k) define the Cartesian basis, which is a 3-D orthogonal basis.

iv) The number of vectors in any basis is called the dimension of the space. Here we will introduce the finite bases (analogous to unit vectors),
      |e1〉, |e2〉, ..., |en〉,   (VIII-22)
of any given vector:
      |α〉 = a1|e1〉 + a2|e2〉 + · · · + an|en〉,   (VIII-23)
which is uniquely represented by the (ordered) n-tuple of its components:
      |α〉 ↔ (a1, a2, ..., an).   (VIII-24)

v) It is often easier to work with components than with the abstract vectors themselves. Use whichever method you are most comfortable with.

3. In 3 dimensions, we encounter 2 kinds of vector products: the dot product and the cross product. The latter does not generalize in any natural way to n-dimensional vector spaces, but the former does and is called the inner product.

a) The inner product of 2 vectors (|α〉 and |β〉) is a complex number (which we write as 〈α|β〉) with the following properties:
      〈β|α〉 = 〈α|β〉∗   (VIII-25)
      〈α|α〉 ≥ 0   (VIII-26)
      〈α|α〉 = 0 ⇔ |α〉 = |0〉   (VIII-27)
      〈α|(b|β〉 + c|γ〉) = b〈α|β〉 + c〈α|γ〉   (VIII-28)
      〈α|β〉 = ∑_{n=1}^{N} α∗n βn.   (VIII-29)

b) A vector space with an inner product is called an inner product space.

c) Because the inner product of any vector with itself is a non-negative number (Eq. VIII-26), its square root is real — we call this the norm (think of this as the "length") of the vector:
      ‖α‖ ≡ √〈α|α〉.   (VIII-30)

d) A unit vector, whose norm is 1, is said to be normalized.
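As a short numerical aside, the Fortran 90 sketch below evaluates the inner product of Eq. (VIII-29) and the norm of Eq. (VIII-30) for two complex 3-component vectors; the vector values are made up purely for illustration, and the intrinsic DOT_PRODUCT (which conjugates its first argument for complex arrays) is shown as a cross-check on the explicit sum.

      program inner_product_demo
        ! Minimal sketch: inner product (Eq. VIII-29) and norm (Eq. VIII-30)
        ! of two complex vectors.  The sample values are arbitrary.
        implicit none
        integer, parameter :: n = 3
        complex :: alpha(n), beta(n), inner
        real    :: norm_alpha
        integer :: i

        alpha = (/ (1.0, 0.0), (0.0, 1.0), (2.0, -1.0) /)
        beta  = (/ (0.0, 2.0), (1.0, 0.0), (1.0,  1.0) /)

        ! <alpha|beta> = sum over n of conjg(alpha_n)*beta_n
        inner = (0.0, 0.0)
        do i = 1, n
           inner = inner + conjg(alpha(i)) * beta(i)
        end do

        ! ||alpha|| = sqrt(<alpha|alpha>), which is real and non-negative;
        ! DOT_PRODUCT performs the same conjugated sum as the loop above.
        norm_alpha = sqrt(real(dot_product(alpha, alpha)))

        print *, 'inner product <alpha|beta> = ', inner
        print *, 'norm of alpha              = ', norm_alpha
      end program inner_product_demo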
e) Two vectors whose inner product is zero are called orthogonal =⇒ a collection of mutually orthogonal normalized vectors,
      〈αi|αj〉 = δij,   (VIII-31)

[...]

f) The transpose of a matrix (T̃) is the same set of elements as in T, but with the rows and columns interchanged:
      T̃ = [ T11 T21 · · · Tn1 ;  T12 T22 · · · Tn2 ;  ... ;  T1n T2n · · · Tnn ].   (VIII-45)
Note that the transpose of a column matrix is a row matrix!

g) A square matrix is symmetric if it is equal to its transpose (reflection in the main diagonal — upper left to lower right — leaves it unchanged); it is antisymmetric if this operation reverses the sign:
      SYMMETRIC: T̃ = T;   ANTISYMMETRIC: T̃ = −T.   (VIII-46)

h) The (complex) conjugate (T∗) is obtained by taking the complex conjugate of every element:
      T∗ = [ T∗11 T∗12 · · · T∗1n ;  T∗21 T∗22 · · · T∗2n ;  ... ;  T∗n1 T∗n2 · · · T∗nn ];   a∗ = [ a∗1 ; a∗2 ; ... ; a∗n ].   (VIII-47)

i) A matrix is real if all its elements are real and imaginary if they are all imaginary:
      REAL: T∗ = T;   IMAGINARY: T∗ = −T.   (VIII-48)

j) A square matrix is Hermitian (or self-adjoint, as defined by T† ≡ T̃∗) if it is equal to its Hermitian conjugate; if Hermitian conjugation introduces a minus sign, the matrix is skew Hermitian (or anti-Hermitian):
      HERMITIAN: T† = T;   SKEW HERMITIAN: T† = −T.   (VIII-49)

k) With this notation, the inner product of 2 vectors (with respect to an orthonormal basis) can be written in matrix form:
      〈α|β〉 = a†b.   (VIII-50)

l) Matrix multiplication is not, in general, commutative (ST ≠ TS) — the difference between the 2 orderings is called the commutator:
      [S, T] ≡ ST − TS.   (VIII-51)
It can also be shown that one can write the following commutator relation:
      [ÂB̂, Ĉ] = Â[B̂, Ĉ] + [Â, Ĉ]B̂.   (VIII-52)

m) The transpose of a product is the product of the transposes in reverse order:
      (ST)~ = T̃S̃,   (VIII-53)
and the same goes for Hermitian conjugates:
      (ST)† = T†S†.   (VIII-54)

n) The unit matrix is defined by
      1 = [ 1 0 · · · 0 ;  0 1 · · · 0 ;  ... ;  0 0 · · · 1 ].   (VIII-55)
In other words,
      1ij = δij.   (VIII-56)

o) The inverse of a matrix (written T−1) is defined by
      T−1T = TT−1 = 1.   (VIII-57)

i) A matrix has an inverse if and only if its determinant is nonzero; in fact,
      T−1 = (1/det T) C̃,   (VIII-58)
where C is the matrix of cofactors.

ii) The cofactor of element Tij is (−1)^(i+j) times the determinant of the submatrix obtained from T by erasing the ith row and the jth column.

iii) As an example of taking the inverse of a matrix, let's assume that T is a 3x3 matrix of the form
      T = [ T11 T12 T13 ;  T21 T22 T23 ;  T31 T32 T33 ].   (VIII-59)
Its determinant is then
      det T = |T| = | T11 T12 T13 ;  T21 T22 T23 ;  T31 T32 T33 |
            = T11 | T22 T23 ; T32 T33 | − T12 | T21 T23 ; T31 T33 | + T13 | T21 T22 ; T31 T32 |
            = T11 (T22T33 − T23T32) − T12 (T21T33 − T23T31) + T13 (T21T32 − T22T31).   (VIII-60)

[...]

where the coefficients Ci depend on the elements of T. This is called the characteristic equation for the matrix — its solutions determine the eigenvalues. Note that it is an nth-order equation, so it has n (complex) roots.

i) Some of these roots may be duplicates, so all we can say for certain is that an n x n matrix has at least one and at most n distinct eigenvalues.

ii) In the cases where duplicates exist, such states are said to be degenerate.
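For a 2 x 2 matrix the characteristic equation is simply the quadratic λ² − (T11 + T22)λ + det T = 0, so the eigenvalues can be read off with the quadratic formula. The Fortran 90 sketch below does this in complex arithmetic; the sample matrix is arbitrary, and a zero discriminant would signal a degenerate (repeated) eigenvalue.

      program eigen2x2
        ! Minimal sketch: eigenvalues of a 2x2 complex matrix from its
        ! characteristic equation  lambda**2 - tr(T)*lambda + det(T) = 0.
        implicit none
        complex :: T(2,2), tr, det, disc, lam1, lam2

        ! Sample matrix (arbitrary values for illustration)
        T(1,1) = (2.0, 0.0);  T(1,2) = (0.0, -1.0)
        T(2,1) = (0.0, 1.0);  T(2,2) = (3.0,  0.0)

        tr   = T(1,1) + T(2,2)                     ! trace
        det  = T(1,1)*T(2,2) - T(1,2)*T(2,1)       ! determinant
        disc = sqrt(tr*tr - 4.0*det)               ! complex square root;
                                                   ! disc = 0 means a repeated root
        lam1 = 0.5*(tr + disc)
        lam2 = 0.5*(tr - disc)
        print *, 'eigenvalues: ', lam1, lam2
      end program eigen2x2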
iii) To construct the corresponding eigenvectors, it is generally easiest simply to plug each λ back into Eq. (VIII-68) and solve (by hand) for the components of a (see Examples VIII-2 and VIII-3).

6. In many physical problems involving matrices, in both classical mechanics and quantum mechanics, it is desirable to carry out a (real) orthogonal similarity transformation or a unitary transformation to reduce the matrix to its diagonal form (i.e., all non-diagonal elements equal to zero).

a) If the eigenvectors span the space, we are free to use them as a basis:
      T̂|f1〉 = λ1|f1〉
      T̂|f2〉 = λ2|f2〉
      · · ·
      T̂|fn〉 = λn|fn〉

b) The matrix representing T̂ takes on a very simple form in this basis, with the eigenvalues strung out along the main diagonal and all other elements zero:
      T = [ λ1 0 · · · 0 ;  0 λ2 · · · 0 ;  ... ;  0 0 · · · λn ].   (VIII-72)

c) The (normalized) eigenvectors are equally simple:
      a(1) = [ 1 ; 0 ; 0 ; ... ; 0 ],   a(2) = [ 0 ; 1 ; 0 ; ... ; 0 ],   ...,   a(n) = [ 0 ; 0 ; 0 ; ... ; 1 ].   (VIII-73)

d) A matrix that can be brought to diagonal form (Eq. VIII-72) by a change of basis is said to be diagonalizable.

e) In a geometrical sense, diagonalizing a matrix is equivalent to rotating the basis of the matrix about some point in the space until all of the off-diagonal elements go to zero. If D is the diagonalized form of the matrix M, the operation that diagonalizes M is
      D = SMS−1,   (VIII-74)
where the matrix S is called a similarity transformation. Note that the inverse of the similarity matrix can be constructed by using the eigenvectors (in the old basis) as the columns of S−1:
      (S−1)ij = (a(j))i.   (VIII-75)

f) There is great advantage in bringing a matrix to diagonal form — it is much easier to work with. Unfortunately, not every matrix can be diagonalized — the eigenvectors have to span the space for a matrix to be diagonalizable.

7. The Hermitian conjugate of a linear transformation (called a Hermitian transformation) is that transformation T̂† which, when applied to the first member of an inner product, gives the same result as if T̂ itself had been applied to the second vector:
      〈T̂†α|β〉 = 〈α|T̂β〉   (VIII-76)
(for all vectors |α〉 and |β〉).

a) Note that the notation used in Eq. (VIII-76) is commonly used but incorrect: |T̂β〉 actually means T̂|β〉, and 〈T̂†α|β〉 means the inner product of the vector T̂†|α〉 with the vector |β〉.

b) Note that we can also write
      〈α|T̂β〉 = a†Tb = (T†a)†b = 〈T̂†α|β〉.   (VIII-77)

c) In quantum mechanics, a fundamental role is played by Hermitian transformations (T̂† = T̂). The eigenvectors and eigenvalues of a Hermitian transformation have 3 crucial properties:

i) The eigenvalues of a Hermitian transformation are real.

ii) The eigenvectors of a Hermitian transformation belonging to distinct eigenvalues are orthogonal.

iii) The eigenvectors of a Hermitian transformation span the space.

[...]

where
      C = [  | 1 0 ; 3 2 |   −| 0 0 ; i 2 |    | 0 1 ; i 3 |
            −| 0 −i ; 3 2 |   | 2 −i ; i 2 |  −| 2 0 ; i 3 |
             | 0 −i ; 1 0 |  −| 2 −i ; 0 0 |   | 2 0 ; 0 1 |  ]
        = [ 2 0 −i ;  −3i 3 −6 ;  i 0 2 ],
then
      B−1 = (1/3) [ 2 −3i i ;  0 3 0 ;  −i −6 2 ].
As a check,
      BB−1 = (1/3) [ (4 + 0 − 1)  (−6i + 0 + 6i)  (2i + 0 − 2i) ;  (0 + 0 + 0)  (0 + 3 + 0)  (0 + 0 + 0) ;  (2i + 0 − 2i)  (3 + 9 − 12)  (−1 + 0 + 4) ]
           = (1/3) [ 3 0 0 ;  0 3 0 ;  0 0 3 ] = [ 1 0 0 ;  0 1 0 ;  0 0 1 ].  √

If det(A) ≠ 0, then A has an inverse:
      det(A) = −1(0 + 6i) − 1(4 − 6i) + i(−4i − 0) = −6i − 4 + 6i + 4 = 0.
As such, A does not have an inverse.

Example VIII–2. Find the eigenvalues and normalized eigenvectors of the following matrix:
      M = [ 1 1 ;  0 1 ].
Can this matrix be diagonalized?

Solution:
      0 = det(M − λ1) = | (1 − λ) 1 ;  0 (1 − λ) | = (1 − λ)²
λ = 1 (only one eigenvalue). From Eq. (VIII-68) we get
      [ 1 1 ;  0 1 ] [ a1 ; a2 ] = 1 · [ a1 ; a2 ].
We get two equations from this eigenvector equation:
      a1 + a2 = a1
      a2 = a2.
The second equation tells us nothing, but the first equation shows us that a2 = 0. We still need to figure out the value for a1. We can do this by normalizing our eigenvector a = |α〉:
      1 = 〈α|α〉 = ∑_{i=1}^{2} |ai|² = |a1|² + |a2|²,
or a1 = 1. Hence our normalized eigenvector, |α〉 = a = [ a1 ; a2 ], is
      a = [ 1 ; 0 ].
Since the eigenvectors do not span the space (as described on page VIII-4, §A.2.c.ii), this matrix cannot be diagonalized.

Example VIII–3. Find the eigenvalues and eigenvectors of the following matrix:
      M = [ 2 0 −2 ;  −2i i 2i ;  1 0 −1 ].

Solution:
The characteristic equation is
      |M − λ1| = | (2 − λ) 0 −2 ;  −2i (i − λ) 2i ;  1 0 (−1 − λ) |
               = (2 − λ) | (i − λ) 2i ;  0 (−1 − λ) | − 0 − 2 | −2i (i − λ) ;  1 0 |
               = (2 − λ)[(i − λ)(−1 − λ) − 0] − 2[0 − (i − λ)]
               = (2 − λ)(−i − iλ + λ + λ²) + 2i − 2λ
               = −2i − 2iλ + 2λ + 2λ² + iλ + iλ² − λ² − λ³ + 2i − 2λ
               = −λ³ + (1 + i)λ² − iλ = 0.
To find the roots of this characteristic equation, factor out a λ and use the quadratic formula:
      0 = −λ³ + (1 + i)λ² − iλ = [−λ² + (1 + i)λ − i]λ
      λ1 = 0
      λ2,3 = [−(1 + i) ± √((1 + i)² − 4i)] / (−2) = [−(1 + i) ± √((1 + 2i − 1) − 4i)] / (−2) = [−(1 + i) ± √(−2i)] / (−2).
However, note that (1 − i)² = −2i. As such, the equation above becomes
      λ2,3 = [−(1 + i) ± √((1 − i)²)] / (−2) = [−(1 + i) ± (1 − i)] / (−2)
      λ2 = [−(1 + i) − (1 − i)] / (−2) = (−2)/(−2) = 1
      λ3 = [−(1 + i) + (1 − i)] / (−2) = (−2i)/(−2) = i.

[...]

or c2 = 1, which gives our third eigenvector:
      |γ〉 = c = [ 0 ; 1 ; 0 ],   for λ3 = i.

C. Computational Linear Algebra.

1. Here, we will be solving the set of equations described in Eqs. (VIII-1, 2, 3).

a) However, for the present, let N = number of unknowns and M = number of equations.

b) If N = M, there is a good chance we can obtain a unique solution.

i) However, if one or more of the M equations is a linear combination of the others =⇒ row degeneracy occurs =⇒ we will not be able to obtain a unique solution.

ii) Or, if all equations contain some variables only in exactly the same linear combination =⇒ column degeneracy occurs =⇒ no unique solution can be found.

iii) For square (N = M) matrices, row degeneracy implies column degeneracy and vice versa.

iv) If a set of equations is degenerate, the matrix is said to be singular.

c) Numerically, other things can go wrong:

i) If some equations are close to being a linear combination of the other equations in the set, roundoff errors may render them linearly dependent at some stage of the calculations.
ii) Accumulated roundoff errors in the solution can swamp the true solution =⇒ this can occur if N is too large =⇒ direct substitution of the solution back into the original equations can verify this.

d) Linear sets with 2 < N ≲ 50 can be routinely solved in single precision without resorting to sophisticated methods =⇒ in double precision, N → 200 without worrying about roundoff error.

e) As we have discussed, the solution of a set of linear equations involves inverting matrices. To write the most efficient matrix inverter, one needs to know how numbers are stored. Assume we have the matrix
      A = [ a11 a12 · · · a1N ;  a21 a22 · · · a2N ;  ... ;  aM1 aM2 · · · aMN ].

i) Column storage (which IDL calls "row-major storage"):
      a11, a21, ..., aM1, a12, a22, ..., aM2, ..., a1N, a2N, ..., aMN
=⇒ Fortran and IDL use this method.

ii) Row storage (which IDL calls "column-major storage"):
      a11, a12, ..., a1N, a21, a22, ..., a2N, ..., aM1, aM2, ..., aMN
=⇒ C and C++ use this method.

iii) The techniques we will be discussing here are designed with column storage in mind.

2. The basic process of solving linear systems of equations is to eliminate variables until you have a single equation with a single unknown.

3. For nonlinear problems, an iterative scheme is developed that solves a linearized version of the equations.

4. Equations that do not depend upon time are called autonomous systems, that is,
      f(x, t) = f(x).   (VIII-78)

a) If the initial conditions are of the form xi(t) = xi(0) for all i (1 ≤ i ≤ N) and all t, the solution points in the N-dimensional space of the variables are called a steady state.

b) If we start at a steady state, we stay there forever.

c) Locating steady states for linear equations (or ODEs) is important since they are used in stability analysis problems.

d) It is easy to see that x∗ = [x∗1, ..., x∗N] is a steady state if and only if
      f(x∗) = 0.   (VIII-79)

[...]

that might perform such a Gaussian elimination would be written as

      * Forward elimination
            DO K = 1, N-1                 ! Go to column (k), operate
              DO I = K+1, N               ! on the rows (i) below column k.
                COEFF = A(I,K) / A(K,K)
                DO J = K+1, N
                  A(I,J) = A(I,J) - COEFF * A(K,J)
                ENDDO
                A(I,K) = COEFF
                B(I) = B(I) - COEFF * B(K)
              ENDDO
            ENDDO

Then the backsubstitution is performed via

      * Backsubstitution
            X(N) = B(N) / A(N,N)          ! Start from the bottom and
            DO I = N-1, 1, -1             ! work upward. (Note: This loop
              SUM = B(I)                  ! goes from N-1 to 1 in steps of -1.)
              DO J = I+1, N               ! Skip the lower triangular part.
                SUM = SUM - A(I,J)*X(J)
              ENDDO
              X(I) = SUM/A(I,I)
            ENDDO

f) Gaussian elimination is a simple procedure, yet it has its pitfalls. Consider the set of equations
      εx1 + x2 + x3 = 5
      x1 + x2 = 3
      x1 + x3 = 4
In the limit ε → 0, the solution is x1 = 1, x2 = 2, x3 = 3. For these equations, the forward elimination step would start by multiplying the first equation by (1/ε) and subtracting it from the second and third equations, giving
      εx1 + x2 + x3 = 5
      (1 − 1/ε)x2 − (1/ε)x3 = 3 − 5/ε
      −(1/ε)x2 + (1 − 1/ε)x3 = 4 − 5/ε

i) Of course, if ε = 0 we have big problems, since the (1/ε) factors blow up.

ii) Even if ε ≠ 0 but is small, we are going to have serious roundoff problems. In this case, 1/ε ≫ 1, so the equations above become
      εx1 + x2 + x3 = 5
      −(1/ε)x2 − (1/ε)x3 = −5/ε
      −(1/ε)x2 − (1/ε)x3 = −5/ε
At this point it is clear that we may not proceed, since the second and third equations are now identical =⇒ 3 unknowns with only 2 equations.
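A small driver (my own sketch, not from the notes) that feeds this ε system through the forward-elimination and backsubstitution loops above makes the breakdown concrete: with ε chosen near the single-precision machine epsilon, the tiny pivot A(1,1) corrupts the answer badly (the exact output is compiler- and precision-dependent, but it will be far from the true solution 1, 2, 3).

      program naive_gauss_demo
        ! Sketch: naive Gaussian elimination (no pivoting) applied to
        !   eps*x1 + x2 + x3 = 5,   x1 + x2 = 3,   x1 + x3 = 4.
        ! For eps comparable to machine precision the small pivot A(1,1)
        ! wrecks the computed solution.  The value of eps is an assumption.
        implicit none
        integer, parameter :: n = 3
        real :: a(n,n), b(n), x(n), coeff, s, eps
        integer :: i, j, k

        eps = 1.0e-7                      ! small leading pivot (assumed value)
        a = reshape( (/ eps, 1.0, 1.0,  &
                        1.0, 1.0, 0.0,  &
                        1.0, 0.0, 1.0 /), (/ n, n /), order=(/ 2, 1 /) )
        b = (/ 5.0, 3.0, 4.0 /)

        ! Forward elimination (same loops as in the notes, no pivoting)
        do k = 1, n-1
           do i = k+1, n
              coeff = a(i,k) / a(k,k)
              do j = k+1, n
                 a(i,j) = a(i,j) - coeff*a(k,j)
              end do
              b(i) = b(i) - coeff*b(k)
           end do
        end do

        ! Backsubstitution
        x(n) = b(n) / a(n,n)
        do i = n-1, 1, -1
           s = b(i)
           do j = i+1, n
              s = s - a(i,j)*x(j)
           end do
           x(i) = s / a(i,i)
        end do

        print *, 'computed x = ', x, '  (exact answer is 1, 2, 3)'
      end program naive_gauss_demo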
g) Fortunately, there is a simple fix; we can just interchange the order of the equations before doing the forward elimination:
      x1 + x2 = 3
      εx1 + x2 + x3 = 5
      x1 + x3 = 4

i) The next step of forward elimination gives
      x1 + x2 = 3
      (1 − ε)x2 + x3 = 5 − 3ε
      −x2 + x3 = 4 − 3

ii) Roundoff eliminates the ε terms, giving
      x1 + x2 = 3
      x2 + x3 = 5
      −x2 + x3 = 1

iii) The second step of forward elimination removes x2 from the third equation using the second equation:
      x1 + x2 = 3
      x2 + x3 = 5
      2x3 = 6

iv) You can easily substitute back, giving x1 = 1, x2 = 2, x3 = 3.

h) Algorithms that rearrange the equations when they spot small diagonal elements are said to pivot. The price of pivoting is just a little extra bookkeeping in the program, but it is essential to use pivoting in all but the smallest matrices.

i) Even with pivoting, one cannot guarantee being safe from roundoff problems when dealing with very large matrices. The program below performs Gaussian elimination with pivoting.

            subroutine ge(aa,bb,n,np,x)
      * Perform Gaussian elimination to solve aa*x = bb
      * Matrix aa is physically np by np but only n by n is used (n <= np)
            parameter( nmax = 100 )
            real aa(np,np),bb(np),x(np)
            real a(nmax,nmax), b(nmax)
            integer index(nmax)
            real scale(nmax)
      *
            if( np .gt. nmax ) then
              print *, 'ERROR - Matrix is too large for ge routine'
              stop
            end if
      *
            do i=1,n
              b(i) = bb(i)            ! Copy vector
              do j=1,n
                a(i,j) = aa(i,j)      ! Copy matrix
              end do
            end do
      *
      [...]

iii) As we have already discussed, the inverse of a matrix is defined by
      AA−1 = I,   (VIII-86)
where I is the identity matrix:
      I = [ 1 0 0 · · · ;  0 1 0 · · · ;  0 0 1 · · · ;  ... ].   (VIII-87)

iv) Defining the column vectors
      e1 = [ 1 ; 0 ; 0 ; ... ],   e2 = [ 0 ; 1 ; 0 ; ... ],   · · · ,   eN = [ ... ; 0 ; 0 ; 1 ],   (VIII-88)
we may write the identity matrix as a row vector of column vectors,
      I = [ e1 e2 · · · eN ].   (VIII-89)

v) If we solve the linear set of equations
      Ax1 = e1,   (VIII-90)
the solution vector x1 is the first column of the inverse matrix A−1.

vi) If we proceed this way with the other e's, we will compute all of the columns of A−1. In other words, our matrix inverse equation (Eq. VIII-86) is solved by writing it as
      A [ x1 x2 · · · xN ] = [ e1 e2 · · · eN ].   (VIII-91)

vii) After computing the x's, we build A−1 as
      A−1 = [ x1 x2 · · · xN ].   (VIII-92)

viii) It is usually not necessary to write your own routines to do matrix inversion, since virtually all programming languages have routines that will do this for you. For instance, Matlab has the built-in function inv(A), IDL has INVERT(A), Fortran uses the Numerical Recipes LUDCMP subroutine, which can be freely downloaded (LINPACK also has matrix-inverting routines), and C has similar routines to Fortran.

ix) A handy formula to remember is the inverse of a 2 x 2 matrix:
      A−1 = 1/(a11a22 − a12a21) [ a22 −a12 ;  −a21 a11 ].   (VIII-93)
For larger matrices the formulas quickly become very messy.

c) Singular and Ill-Conditioned Matrices.

i) A matrix that has no inverse is said to be singular, e.g.,
      A = [ 1 1 ;  2 2 ].
And remember, a singular matrix has a determinant of zero.

ii) Sometimes a matrix is not singular but is so close to being singular that roundoff errors may push it over the edge. A trivial example would be
      [ 1 + ε  1 ;  2  2 ],
where ε ≪ 1.
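The effect is easy to see with the 2 x 2 inverse formula (Eq. VIII-93): the determinant of this matrix is 2ε, so the elements of the inverse are of order 1/ε, and any roundoff in ε is enormously amplified. The Fortran 90 sketch below illustrates this; the value of ε is an assumption chosen for demonstration.

      program nearly_singular_demo
        ! Sketch: apply the 2x2 inverse formula (Eq. VIII-93) to the nearly
        ! singular matrix [[1+eps, 1], [2, 2]].  Its determinant is ~2*eps,
        ! so the inverse's elements are of order 1/eps and roundoff in eps
        ! is amplified enormously -- the hallmark of an ill-conditioned matrix.
        implicit none
        real :: eps, a(2,2), ainv(2,2), det

        eps = 1.0e-6                                           ! assumed perturbation
        a = reshape( (/ 1.0+eps, 2.0, 1.0, 2.0 /), (/ 2, 2 /) ) ! column-major fill

        det = a(1,1)*a(2,2) - a(1,2)*a(2,1)
        ainv(1,1) =  a(2,2)/det
        ainv(1,2) = -a(1,2)/det
        ainv(2,1) = -a(2,1)/det
        ainv(2,2) =  a(1,1)/det

        print *, 'det           = ', det          ! roughly 2*eps, nearly zero
        print *, 'inverse row 1 = ', ainv(1,:)    ! elements of order 1/eps
        print *, 'inverse row 2 = ', ainv(2,:)
      end program nearly_singular_demo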
iii) The condition of a matrix indicates how close it is to being singular; a matrix is said to be ill-conditioned if it is almost singular.

iv) Formally, the condition criterion is defined as the normalized distance between a matrix and the nearest singular matrix. All of the programming languages mentioned above also have the ability to return this normalized distance with either the inverse function or a separate function.

E. LU Decomposition.

1. Suppose we are able to write a matrix as the product of 2 matrices,
      L · U = A,   (VIII-94)
where L is lower triangular (has elements only on the diagonal and below) and U is upper triangular (has elements only on the diagonal and above).

2. In the case of a 3 x 3 matrix A, we would have
      [ α11 0 0 ;  α21 α22 0 ;  α31 α32 α33 ] · [ β11 β12 β13 ;  0 β22 β23 ;  0 0 β33 ] = [ a11 a12 a13 ;  a21 a22 a23 ;  a31 a32 a33 ].   (VIII-95)

a) We can use a decomposition such as Eq. (VIII-94) to solve the linear set
      A · x = (L · U) · x = L · (U · x) = b   (VIII-96)
by first solving for the vector y such that
      L · y = b,   (VIII-97)

[...]

ii) For each j = 1, 2, 3, ..., N do these 2 procedures: First, for i = 1, 2, ..., j, use Eqs. (VIII-101), (VIII-102), and (VIII-103) to solve for βij, namely
      βij = aij − ∑_{k=1}^{i−1} αik βkj.   (VIII-105)
(When i = 1 in Eq. (VIII-105), the summation term is taken to mean zero.)

iii) Second, for i = j + 1, j + 2, ..., N, use Eq. (VIII-103) to solve for αij, namely
      αij = (1/βjj) [ aij − ∑_{k=1}^{j−1} αik βkj ].   (VIII-106)
Be sure to do both procedures before going on to the next j.

iv) In brief, Crout's method fills in the combined matrix of α's and β's,
      [ β11 β12 β13 ;  α21 β22 β23 ;  α31 α32 β33 ],
by columns from left to right, and within each column from top to bottom.

f) Pivoting is absolutely essential for the stability of Crout's method. Partial pivoting (interchange of rows) can be implemented efficiently, and this is enough to make the method stable. The Numerical Recipes subroutine LUDCMP is an LU decomposition routine using Crout's method with partial pivoting. I recommend its use whenever you need to solve a linear set of equations.

Figure VIII–1: A two-mass coupled harmonic oscillator with the origin set at the position of the left wall (i.e., Case A).

F. Coupled Harmonic Oscillators.

1. A canonical example of a system of linear equations is the case of a coupled harmonic oscillator as shown in Figure VIII-1. Each spring has an unstretched length of L1, L2, and L3 in this example and a spring constant of k1, k2, and k3. Between the springs are objects of mass m1 and m2. Finally, the distance between the non-moving walls is Lw.

2. The equation of motion for block i is
      dxi/dt = vi;   dvi/dt = Fi/mi,   (VIII-107)
where Fi is the net force on block i.

3. At the steady state, the velocities, vi, are zero and the net forces, Fi, are zero =⇒ static equilibrium.

4. When working with coupled oscillators, one must define a frame of reference from which the measurements are made. For instance, one could define the reference point to be the left wall of the system (Case A) as shown in Figure (VIII-1); then the net
force equations become
      F1 = m1ẍ1 = −k1(x1 − L1) + k2(x2 − x1 − L2)   (VIII-108)
      F2 = m2ẍ2 = −k2(x2 − x1 − L2) + k3(Lw − x2 − L3).   (VIII-109)

5. To solve these equations, we can use the matrix techniques that have been described earlier in this section (e.g., Gaussian elimination or LU decomposition) by writing these equations in matrix form:
      [ F1 ; F2 ] = [ −k1 − k2   k2 ;  k2   −k2 − k3 ] [ x1 ; x2 ] − [ −k1L1 + k2L2 ;  −k2L2 + k3(L3 − Lw) ]   (VIII-110)
or
      F = KA · x − b.   (VIII-111)

Figure VIII–2: A two-mass coupled harmonic oscillator with the origins for each coordinate set at the mass's equilibrium positions (i.e., Case B).

6. One could also choose the equilibrium positions of each block as the reference (Case B — see Figure VIII-2), and write the net force equations as

[...]
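As a closing illustration of the Case A formulation above, the Fortran 90 sketch below finds the static equilibrium positions by setting F = 0 in Eq. (VIII-111) and solving KA · x = b with the 2 x 2 inverse formula (Eq. VIII-93). The spring constants, unstretched lengths, and wall separation are assumed values chosen only for this example.

      program coupled_equilibrium
        ! Sketch (not from the notes): static equilibrium of the Case A
        ! two-mass oscillator, Eq. (VIII-111) with F = 0, i.e. KA*x = b.
        ! All parameter values below are assumptions for illustration.
        implicit none
        real :: k1, k2, k3, L1, L2, L3, Lw
        real :: ka(2,2), b(2), x(2), det

        k1 = 1.0;  k2 = 1.0;  k3 = 1.0        ! spring constants (assumed)
        L1 = 1.0;  L2 = 1.0;  L3 = 1.0        ! unstretched lengths (assumed)
        Lw = 4.0                              ! wall separation (assumed)

        ! Coefficient matrix and right-hand side from Eq. (VIII-110)
        ka(1,1) = -k1 - k2;   ka(1,2) =  k2
        ka(2,1) =  k2;        ka(2,2) = -k2 - k3
        b(1) = -k1*L1 + k2*L2
        b(2) = -k2*L2 + k3*(L3 - Lw)

        ! Solve KA*x = b with the 2x2 inverse formula (Eq. VIII-93)
        det  = ka(1,1)*ka(2,2) - ka(1,2)*ka(2,1)
        x(1) = ( ka(2,2)*b(1) - ka(1,2)*b(2)) / det
        x(2) = (-ka(2,1)*b(1) + ka(1,1)*b(2)) / det

        ! With the assumed values this gives x1 = 4/3 and x2 = 8/3
        print *, 'equilibrium positions x1, x2 = ', x
      end program coupled_equilibrium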