Math 54 Cheat Sheet

Vector spaces

Subspace: $W$ is a subspace if $0$ is in $W$, and whenever $u$ and $v$ are in $W$, both $u + v$ and $cu$ are in $W$.
Nul(A): Solutions of $Ax = 0$. Row-reduce $A$.
Row(A): Space spanned by the rows of $A$. Row-reduce $A$ and choose the rows that contain the pivots.
Col(A): Space spanned by the columns of $A$. Row-reduce $A$ and choose the columns of the original $A$ that contain the pivots.
Rank(A): $= \dim(\operatorname{Col}(A)) =$ number of pivots.
Rank-Nullity theorem: $\operatorname{Rank}(A) + \dim(\operatorname{Nul}(A)) = n$, where $A$ is $m \times n$.
Linear transformation: $T(u + v) = T(u) + T(v)$ and $T(cu) = cT(u)$, where $c$ is a scalar. $T$ is one-to-one if $T(u) = 0 \Rightarrow u = 0$; $T$ is onto if the range of $T$ (the column space of its matrix) is $\mathbb{R}^m$.
Linear independence: $a_1 v_1 + a_2 v_2 + \cdots + a_n v_n = 0 \Rightarrow a_1 = a_2 = \cdots = a_n = 0$. To show linear independence, form the matrix $A$ of the vectors and show that $\operatorname{Nul}(A) = \{0\}$.
Linear dependence: $a_1 v_1 + \cdots + a_n v_n = 0$ for some $a_1, \dots, a_n$ not all zero.
Span: Set of all linear combinations of $v_1, \dots, v_n$.
Basis B for V: A linearly independent set such that $\operatorname{Span}(B) = V$. To show something is a basis, show it is linearly independent and spans. To find a basis from a collection of vectors, form the matrix $A$ of the vectors and find $\operatorname{Col}(A)$ (the pivot columns). To find a basis for an abstract vector space, take any element of that space, express it as a linear combination of 'simpler' vectors, and then show those vectors form a basis.
Dimension: Number of elements in a basis. To find the dimension, find a basis and count its elements.
Theorem: If $V$ has a basis of $n$ vectors, then every basis of $V$ must have $n$ vectors.
Basis theorem: If $V$ is an $n$-dimensional vector space, then any linearly independent set with $n$ elements is a basis, and any set of $n$ elements which spans $V$ is a basis.
Matrix of a linear transformation T with respect to bases B and C: For every vector $v$ in $B$, evaluate $T(v)$ and express $T(v)$ as a linear combination of the vectors in $C$. Put the coefficients in a column vector, and then form the matrix of the column vectors you found!
Coordinates: To find $[x]_B$, express $x$ in terms of the vectors in $B$. Then $x = P_B [x]_B$, where $P_B$ is the matrix whose columns are the vectors in $B$.
Invertible matrix theorem: If $A$ is invertible, then: $A$ is row-equivalent to $I$, $A$ has $n$ pivots, $T(x) = Ax$ is one-to-one and onto, $Ax = b$ has a unique solution for every $b$, $A^T$ is invertible, $\det(A) \neq 0$, the columns of $A$ form a basis for $\mathbb{R}^n$, $\operatorname{Nul}(A) = \{0\}$, and $\operatorname{Rank}(A) = n$.
\[ \begin{pmatrix} a & b \\ c & d \end{pmatrix}^{-1} = \frac{1}{ad - bc} \begin{pmatrix} d & -b \\ -c & a \end{pmatrix}, \qquad [\,A \mid I\,] \to [\,I \mid A^{-1}\,]. \]
Change of basis: $[x]_C = P_{C \leftarrow B}[x]_B$ (think of $C$ as the new, cool basis). $[\,C \mid B\,] \to [\,I \mid P_{C \leftarrow B}\,]$; $P_{C \leftarrow B}$ is the matrix whose columns are $[b]_C$, where $b$ runs over $B$.

Diagonalization

Diagonalizability: $A$ is diagonalizable if $A = PDP^{-1}$ for some diagonal $D$ and invertible $P$. $A$ and $B$ are similar if $A = PBP^{-1}$ for some invertible $P$.
Theorem: $A$ is diagonalizable $\Leftrightarrow$ $A$ has $n$ linearly independent eigenvectors.
Theorem: If $A$ has $n$ distinct eigenvalues, then $A$ is diagonalizable, but the converse is not always true!
Notes: $A$ can be diagonalizable even if it is not invertible (e.g. $A = \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}$). Not all matrices are diagonalizable (e.g. $\begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}$).
Consequence: $A = PDP^{-1} \Rightarrow A^n = PD^nP^{-1}$.
How to diagonalize: To find the eigenvalues, compute $\det(A - \lambda I)$ and find its roots. To find the eigenvectors, for each $\lambda$ find a basis for $\operatorname{Nul}(A - \lambda I)$, which you do by row-reducing. Then $A = PDP^{-1}$, where $D$ is the diagonal matrix of eigenvalues and $P$ is the matrix of corresponding eigenvectors (worked example below).
Rational roots theorem: If $p(\lambda) = 0$ has a rational root $r = a/b$, then $a$ divides the constant term of $p$ and $b$ divides the leading coefficient. Use this to guess zeros of $p$; once you have a zero that works, use long division (worked example below).
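Worked example (diagonalization). A quick sketch of the recipe above, with a matrix chosen here for illustration (not from the original sheet): let $A = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}$. Then
\[ \det(A - \lambda I) = (2 - \lambda)^2 - 1 = \lambda^2 - 4\lambda + 3 = (\lambda - 1)(\lambda - 3), \]
so the eigenvalues are $\lambda = 1, 3$. Row-reducing $A - I = \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}$ gives the eigenvector $\begin{pmatrix} 1 \\ -1 \end{pmatrix}$, and row-reducing $A - 3I = \begin{pmatrix} -1 & 1 \\ 1 & -1 \end{pmatrix}$ gives $\begin{pmatrix} 1 \\ 1 \end{pmatrix}$. Hence
\[ P = \begin{pmatrix} 1 & 1 \\ -1 & 1 \end{pmatrix}, \qquad D = \begin{pmatrix} 1 & 0 \\ 0 & 3 \end{pmatrix}, \qquad A = PDP^{-1}. \]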
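Worked example (rational roots + long division). Again with a polynomial chosen for illustration: suppose $p(\lambda) = \lambda^3 - 2\lambda^2 - \lambda + 2$. The rational candidates $r = a/b$ are $\pm 1, \pm 2$; testing gives $p(1) = 1 - 2 - 1 + 2 = 0$, so $\lambda = 1$ works. Long division by $(\lambda - 1)$ leaves
\[ \lambda^2 - \lambda - 2 = (\lambda - 2)(\lambda + 1), \]
so the roots are $\lambda = 1, 2, -1$.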
Complex eigenvalues

If $\lambda = a + bi$ and $v$ is an eigenvector, then $A = PCP^{-1}$, where
\[ P = \begin{pmatrix} \operatorname{Re}(v) & \operatorname{Im}(v) \end{pmatrix}, \qquad C = \begin{pmatrix} a & b \\ -b & a \end{pmatrix}. \]
$C$ is a scaling by $\sqrt{\det(A)}$ followed by a rotation by $\theta$, where
\[ \frac{1}{\sqrt{\det(A)}}\, C = \begin{pmatrix} \cos(\theta) & \sin(\theta) \\ -\sin(\theta) & \cos(\theta) \end{pmatrix}. \]

Orthogonality

$u, v$ are orthogonal if $u \cdot v = 0$; $\|u\| = \sqrt{u \cdot u}$.
$\{u_1, \dots, u_n\}$ is orthogonal if $u_i \cdot u_j = 0$ for $i \neq j$, and orthonormal if in addition $u_i \cdot u_i = 1$.
$W^\perp$: Set of vectors $v$ which are orthogonal to every $w$ in $W$.
If $\{u_1, \dots, u_n\}$ is an orthogonal basis, then $y = c_1 u_1 + \cdots + c_n u_n$ with $c_j = \frac{y \cdot u_j}{u_j \cdot u_j}$.
An orthogonal matrix $Q$ has orthonormal columns! Consequences: $Q^T Q = I$, $QQ^T$ is the orthogonal projection onto $\operatorname{Col}(Q)$, $\|Qx\| = \|x\|$, and $(Qx) \cdot (Qy) = x \cdot y$.
Orthogonal projection: If $\{u_1, \dots, u_k\}$ is an orthogonal basis for $W$, then the orthogonal projection of $y$ onto $W$ is
\[ \hat{y} = \left( \frac{y \cdot u_1}{u_1 \cdot u_1} \right) u_1 + \cdots + \left( \frac{y \cdot u_k}{u_k \cdot u_k} \right) u_k. \]
$y - \hat{y}$ is orthogonal to every vector in $W$ (in particular to $\hat{y}$), and the shortest distance between $y$ and $W$ is $\|y - \hat{y}\|$.
Gram-Schmidt: Start with a basis $B = \{u_1, \dots, u_n\}$. Let
\[ v_1 = u_1, \quad v_2 = u_2 - \left( \frac{u_2 \cdot v_1}{v_1 \cdot v_1} \right) v_1, \quad v_3 = u_3 - \left( \frac{u_3 \cdot v_1}{v_1 \cdot v_1} \right) v_1 - \left( \frac{u_3 \cdot v_2}{v_2 \cdot v_2} \right) v_2, \ \dots \]
Then $\{v_1, \dots, v_n\}$ is an orthogonal basis for $\operatorname{Span}(B)$, and if $w_i = \frac{v_i}{\|v_i\|}$, then $\{w_1, \dots, w_n\}$ is an orthonormal basis for $\operatorname{Span}(B)$ (see the worked example at the end of this section).
QR-factorization: To find $Q$, apply Gram-Schmidt to the columns of $A$. Then $R = Q^T A$.
Least squares: To solve $Ax = b$ in the least-squares sense, solve $A^T A \hat{x} = A^T b$. The least-squares solution makes $\|Ax - b\|$ smallest. Also $\hat{x} = R^{-1} Q^T b$, where $A = QR$ (see the worked example at the end of this section).

Inner product spaces

$\langle f, g \rangle = \int_a^b f(t)\,g(t)\,dt$. Gram-Schmidt applies with this inner product as well.
Cauchy-Schwarz: $|\langle u, v \rangle| \leq \|u\|\,\|v\|$. Triangle inequality: $\|u + v\| \leq \|u\| + \|v\|$.

Symmetric matrices ($A = A^T$)

A symmetric matrix has $n$ real eigenvalues (counted with multiplicity), is always diagonalizable, and is orthogonally diagonalizable ($A = PDP^T$ with $P$ orthogonal; this property is equivalent to symmetry!).
Theorem: If $A$ is symmetric, then any two eigenvectors from different eigenspaces are orthogonal.
How to orthogonally diagonalize: First diagonalize, then apply Gram-Schmidt within each eigenspace and normalize. Then $P$ = matrix of (orthonormal) eigenvectors, $D$ = diagonal matrix of eigenvalues.
Quadratic forms: To find the matrix, put the $x_i^2$-coefficients on the diagonal and split each cross term evenly. For example, if the $x_1 x_2$-term is $6$, then the $(1,2)$ and $(2,1)$ entries of $A$ are each $3$. Then orthogonally diagonalize $A = PDP^T$ and let $y = P^T x$; the quadratic form becomes $\lambda_1 y_1^2 + \cdots + \lambda_n y_n^2$, where the $\lambda_i$ are the eigenvalues.
Spectral decomposition: $A = \lambda_1 u_1 u_1^T + \lambda_2 u_2 u_2^T + \cdots + \lambda_n u_n u_n^T$.

Second-order and higher-order differential equations

Homogeneous solutions: Form the auxiliary equation by replacing the equation with a polynomial, so $y'''$ becomes $r^3$, etc. Then find the zeros (use the rational roots theorem and long division; see the Diagonalization section). Simple zeros give $e^{rt}$; a repeated zero of multiplicity $m$ gives $Ae^{rt} + Bte^{rt} + \cdots + Zt^{m-1}e^{rt}$; complex zeros $r = a \pm bi$ give $Ae^{at}\cos(bt) + Be^{at}\sin(bt)$.
Undetermined coefficients: $y(t) = y_0(t) + y_p(t)$, where $y_0$ solves the homogeneous equation (right-hand side $= 0$) and $y_p$ is a particular solution. To find $y_p$: if the inhomogeneous term is $Ct^m e^{rt}$, try
\[ y_p = t^s (A_m t^m + \cdots + A_1 t + A_0) e^{rt}, \]
where $s$ is the multiplicity of $r$ as a root of the auxiliary polynomial ($s = 0$ if $r$ is not a root). If the inhomogeneous term is $Ct^m e^{at}\cos(\beta t)$ or $Ct^m e^{at}\sin(\beta t)$, try
\[ y_p = t^s (A_m t^m + \cdots + A_0)\, e^{at}\cos(\beta t) + t^s (B_m t^m + \cdots + B_0)\, e^{at}\sin(\beta t), \]
where $s$ is the multiplicity of $a + \beta i$ as a root of the auxiliary polynomial ($s = 0$ if it is not a root). $\cos$ always goes with $\sin$ and vice versa; also, you have to look at $a + \beta i$ as one entity. A worked example follows below.
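Worked example (Gram-Schmidt). An illustration with vectors chosen here, not from the original sheet: let $u_1 = (1, 1, 0)$ and $u_2 = (1, 0, 1)$. Then $v_1 = u_1$ and
\[ v_2 = u_2 - \left( \frac{u_2 \cdot v_1}{v_1 \cdot v_1} \right) v_1 = (1, 0, 1) - \tfrac{1}{2}(1, 1, 0) = \left( \tfrac{1}{2}, -\tfrac{1}{2}, 1 \right), \]
and indeed $v_1 \cdot v_2 = 0$. Normalizing gives the orthonormal basis $w_1 = \tfrac{1}{\sqrt{2}}(1, 1, 0)$, $w_2 = \tfrac{1}{\sqrt{6}}(1, -1, 2)$.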
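Worked example (least squares). Fitting a line $y = \beta_0 + \beta_1 t$ to the made-up data points $(0,1), (1,2), (2,2)$:
\[ A = \begin{pmatrix} 1 & 0 \\ 1 & 1 \\ 1 & 2 \end{pmatrix}, \quad b = \begin{pmatrix} 1 \\ 2 \\ 2 \end{pmatrix}, \quad A^T A = \begin{pmatrix} 3 & 3 \\ 3 & 5 \end{pmatrix}, \quad A^T b = \begin{pmatrix} 5 \\ 6 \end{pmatrix}. \]
Solving $A^T A \hat{x} = A^T b$ gives $\beta_1 = \tfrac{1}{2}$ and $\beta_0 = \tfrac{7}{6}$, so the best-fit line is $y = \tfrac{7}{6} + \tfrac{1}{2}t$.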
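Worked example (undetermined coefficients). An equation chosen for illustration: solve $y'' - y = te^t$. The auxiliary equation $r^2 - 1 = 0$ has roots $r = \pm 1$, so $y_0 = c_1 e^t + c_2 e^{-t}$. The forcing term is $t^1 e^t$ with $r = 1$ a root of multiplicity $1$, so $s = 1$ and we try $y_p = t(A_1 t + A_0)e^t$. Substituting gives $y_p'' - y_p = (4A_1 t + 2A_1 + 2A_0)e^t$, so $A_1 = \tfrac{1}{4}$, $A_0 = -\tfrac{1}{4}$, and
\[ y = c_1 e^t + c_2 e^{-t} + \tfrac{1}{4}(t^2 - t)e^t. \]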
Variation of parameters: First, make sure the leading coefficient (usually the coefficient of $y''$) is $1$. Then $y = y_0 + y_p$ as above. Now suppose $y_p(t) = v_1(t) y_1(t) + v_2(t) y_2(t)$, where $y_1$ and $y_2$ are your homogeneous solutions. Then
\[ \begin{pmatrix} y_1 & y_2 \\ y_1' & y_2' \end{pmatrix} \begin{pmatrix} v_1' \\ v_2' \end{pmatrix} = \begin{pmatrix} 0 \\ f(t) \end{pmatrix}. \]
Invert the matrix and solve for $v_1'$ and $v_2'$, integrate to get $v_1$ and $v_2$, and finally use $y_p(t) = v_1(t) y_1(t) + v_2(t) y_2(t)$ (worked example at the end of this sheet).
Useful formulas:
\[ \begin{pmatrix} a & b \\ c & d \end{pmatrix}^{-1} = \frac{1}{ad - bc} \begin{pmatrix} d & -b \\ -c & a \end{pmatrix}, \]
\[ \int \sec(t)\,dt = \ln|\sec(t) + \tan(t)|, \quad \int \tan(t)\,dt = \ln|\sec(t)|, \quad \int \tan^2(t)\,dt = \tan(t) - t, \quad \int \ln(t)\,dt = t\ln(t) - t. \]
Linear independence: $f, g, h$ are linearly independent if $af(t) + bg(t) + ch(t) = 0$ (for all $t$) $\Rightarrow a = b = c = 0$. To show linear independence, show that the Wronskian
\[ W(t) = \det \begin{pmatrix} f & g & h \\ f' & g' & h' \\ f'' & g'' & h'' \end{pmatrix} \]
is nonzero for some $t$.
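Worked example (variation of parameters). A classic illustration, not from the original sheet: solve $y'' + y = \sec(t)$. Here $y_1 = \cos(t)$, $y_2 = \sin(t)$, and inverting the matrix (its determinant is $\cos^2(t) + \sin^2(t) = 1$) gives $v_1' = -\sin(t)\sec(t) = -\tan(t)$ and $v_2' = \cos(t)\sec(t) = 1$. Using the integrals above, $v_1 = -\ln|\sec(t)|$ and $v_2 = t$, so
\[ y_p = \cos(t)\ln|\cos(t)| + t\sin(t). \]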