Math 54 Cheat Sheet


Vector spaces

Subspace: If u and v are in W, then u + v is in W, and cu is in W.
Nul(A): Solutions of Ax = 0. Row-reduce A.
Row(A): Space spanned by the rows of A. Row-reduce A and choose the rows that contain the pivots.
Col(A): Space spanned by the columns of A. Row-reduce A and choose the columns of A that contain the pivots.
Rank(A) = dim(Col(A)) = number of pivots.
Rank-Nullity theorem: Rank(A) + dim(Nul(A)) = n, where A is m × n.
Linear transformation: T(u + v) = T(u) + T(v) and T(cu) = cT(u), where c is a number. T is one-to-one if T(u) = 0 ⇒ u = 0. T is onto if Col(T) = R^m.
Linear independence: $a_1v_1 + a_2v_2 + \cdots + a_nv_n = 0 \Rightarrow a_1 = a_2 = \cdots = a_n = 0$. To show linear independence, form the matrix A whose columns are the vectors and show that Nul(A) = {0}.
Linear dependence: $a_1v_1 + a_2v_2 + \cdots + a_nv_n = 0$ for $a_1, a_2, \ldots, a_n$ not all zero.
Span: Set of linear combinations of $v_1, \ldots, v_n$.
Basis B for V: A linearly independent set such that Span(B) = V. To show something is a basis, show it is linearly independent and spans. To find a basis from a collection of vectors, form the matrix A of the vectors and find Col(A). To find a basis for a vector space, take any element of that vector space, express it as a linear combination of "simpler" vectors, and then show those vectors form a basis.
Dimension: Number of elements in a basis. To find the dimension, find a basis and count its elements.
Theorem: If V has a basis of n vectors, then every basis of V must have n vectors.
Basis theorem: If V is an n-dimensional vector space, then any linearly independent set with n elements is a basis, and any set of n elements that spans V is a basis.
Matrix of a linear transformation T with respect to bases B and C: For every vector v in B, evaluate T(v) and express T(v) as a linear combination of vectors in C. Put the coefficients in a column vector, and then form the matrix of the column vectors you found.
Coordinates: To find $[x]_B$, express x in terms of the vectors in B. Then $x = P_B[x]_B$, where $P_B$ is the matrix whose columns are the vectors in B.
Invertible matrix theorem: If A is invertible, then: A is row-equivalent to I, A has n pivots, T(x) = Ax is one-to-one and onto, Ax = b has a unique solution for every b, $A^T$ is invertible, det(A) ≠ 0, the columns of A form a basis for $\mathbb{R}^n$, Nul(A) = {0}, and Rank(A) = n.
Inverse of a 2 × 2 matrix: $\begin{bmatrix} a & b \\ c & d \end{bmatrix}^{-1} = \frac{1}{ad-bc}\begin{bmatrix} d & -b \\ -c & a \end{bmatrix}$. In general, row-reduce $[\,A \mid I\,] \to [\,I \mid A^{-1}\,]$.
Change of basis: $[x]_C = P_{C\leftarrow B}[x]_B$ (think of C as the new, cool basis). Row-reduce $[\,C \mid B\,] \to [\,I \mid P_{C\leftarrow B}\,]$. $P_{C\leftarrow B}$ is the matrix whose columns are $[b]_C$, where b runs over the vectors in B.

Diagonalization

Diagonalizability: A is diagonalizable if $A = PDP^{-1}$ for some diagonal D and invertible P. A and B are similar if $A = PBP^{-1}$ for P invertible.
Theorem: A is diagonalizable ⇔ A has n linearly independent eigenvectors.
Theorem: If A has n distinct eigenvalues, then A is diagonalizable, but the converse is not always true!
Notes: A can be diagonalizable even if it is not invertible (example: $\begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}$). Not all matrices are diagonalizable (example: $\begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix}$).
Consequence: $A = PDP^{-1} \Rightarrow A^n = PD^nP^{-1}$.
How to diagonalize: To find the eigenvalues, compute det(A − λI) and find the roots of that polynomial. To find the eigenvectors, for each λ find a basis for Nul(A − λI), which you do by row-reducing. Then $A = PDP^{-1}$, where D is the diagonal matrix of eigenvalues and P is the matrix of eigenvectors.
Rational roots theorem: If p(λ) = 0 has a rational root r = a/b, then a divides the constant term of p and b divides the leading coefficient. Use this to guess zeros of p. Once you have a zero that works, use long division.
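As a quick numerical sanity check of the diagonalization recipe, here is a minimal sketch in Python with numpy (the example matrix is made up for illustration and is not part of the original sheet): np.linalg.eig returns the eigenvalues and a matrix P whose columns are eigenvectors, from which $A = PDP^{-1}$ and the consequence $A^n = PD^nP^{-1}$ can be verified.

```python
import numpy as np

# A diagonalizable example matrix (2 distinct eigenvalues => diagonalizable)
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# Eigenvalues and eigenvectors: the columns of P are eigenvectors of A
eigvals, P = np.linalg.eig(A)
D = np.diag(eigvals)

# Verify A = P D P^{-1}
assert np.allclose(A, P @ D @ np.linalg.inv(P))

# Consequence: A^n = P D^n P^{-1} (powers of a diagonal matrix are cheap)
n = 5
assert np.allclose(np.linalg.matrix_power(A, n),
                   P @ np.diag(eigvals**n) @ np.linalg.inv(P))
print("A = PDP^-1 and A^n = PD^nP^-1 verified")
```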
Complex eigenvalues

If λ = a + bi and v is an eigenvector, then $A = PCP^{-1}$, where $P = [\,\mathrm{Re}(v) \;\; \mathrm{Im}(v)\,]$ and $C = \begin{bmatrix} a & b \\ -b & a \end{bmatrix}$.
C is a scaling by $\sqrt{\det(A)}$ followed by a rotation by θ, where $\frac{1}{\sqrt{\det(A)}}\,C = \begin{bmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{bmatrix}$.

Orthogonality

u and v are orthogonal if u · v = 0. $\lVert u \rVert = \sqrt{u \cdot u}$.
$\{u_1, \ldots, u_n\}$ is orthogonal if $u_i \cdot u_j = 0$ for i ≠ j, and orthonormal if in addition $u_i \cdot u_i = 1$.
$W^\perp$: Set of vectors v which are orthogonal to every w in W.
If $\{u_1, \ldots, u_n\}$ is an orthogonal basis, then $y = c_1u_1 + \cdots + c_nu_n \Rightarrow c_j = \frac{y \cdot u_j}{u_j \cdot u_j}$.
Orthogonal matrix: Q has orthonormal columns. Consequences: $Q^TQ = I$; $QQ^T$ is the orthogonal projection onto Col(Q); $\lVert Qx \rVert = \lVert x \rVert$; $(Qx) \cdot (Qy) = x \cdot y$.
Orthogonal projection: If $\{u_1, \ldots, u_k\}$ is an orthogonal basis for W, then the orthogonal projection of y on W is $\hat{y} = \left(\frac{y \cdot u_1}{u_1 \cdot u_1}\right)u_1 + \cdots + \left(\frac{y \cdot u_k}{u_k \cdot u_k}\right)u_k$. The vector $y - \hat{y}$ is orthogonal to $\hat{y}$, and the shortest distance between y and W is $\lVert y - \hat{y} \rVert$.
Gram-Schmidt: Start with $B = \{u_1, \ldots, u_n\}$. Let $v_1 = u_1$, $v_2 = u_2 - \left(\frac{u_2 \cdot v_1}{v_1 \cdot v_1}\right)v_1$, $v_3 = u_3 - \left(\frac{u_3 \cdot v_1}{v_1 \cdot v_1}\right)v_1 - \left(\frac{u_3 \cdot v_2}{v_2 \cdot v_2}\right)v_2$, and so on. Then $\{v_1, \ldots, v_n\}$ is an orthogonal basis for Span(B), and if $w_i = \frac{v_i}{\lVert v_i \rVert}$, then $\{w_1, \ldots, w_n\}$ is an orthonormal basis for Span(B).
QR factorization: To find Q, apply Gram-Schmidt to the columns of A. Then $R = Q^TA$.
Least squares: To solve Ax = b in the least-squares sense, solve $A^TAx = A^Tb$. The least-squares solution makes $\lVert Ax - b \rVert$ smallest. Equivalently, $\hat{x} = R^{-1}Q^Tb$, where A = QR.
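To illustrate the least-squares recipe, here is a small numpy sketch (the overdetermined system below is made up for the example): it solves the normal equations $A^TAx = A^Tb$ and confirms that the QR route $\hat{x} = R^{-1}Q^Tb$ gives the same answer.

```python
import numpy as np

# Overdetermined system Ax = b (more equations than unknowns)
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])
b = np.array([1.0, 2.0, 2.0])

# Normal equations: solve A^T A x = A^T b
x_normal = np.linalg.solve(A.T @ A, A.T @ b)

# QR route: A = QR with Q having orthonormal columns, then x = R^{-1} Q^T b
Q, R = np.linalg.qr(A)
x_qr = np.linalg.solve(R, Q.T @ b)

assert np.allclose(x_normal, x_qr)
print("least-squares solution:", x_normal)
```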
Inner product spaces

$\langle f, g \rangle = \int_a^b f(t)\,g(t)\,dt$. Gram-Schmidt applies with this inner product as well.
Cauchy-Schwarz: $|u \cdot v| \le \lVert u \rVert \lVert v \rVert$.
Triangle inequality: $\lVert u + v \rVert \le \lVert u \rVert + \lVert v \rVert$.

Symmetric matrices (A = A^T)

A symmetric matrix has n real eigenvalues, is always diagonalizable, and is orthogonally diagonalizable ($A = PDP^T$ with P an orthogonal matrix; this is equivalent to symmetry).
Theorem: If A is symmetric, then any two eigenvectors from different eigenspaces are orthogonal.
How to orthogonally diagonalize: First diagonalize, then apply Gram-Schmidt on each eigenspace and normalize. Then P = matrix of (orthonormal) eigenvectors and D = matrix of eigenvalues.
Quadratic forms: To find the matrix, put the $x_i^2$-coefficients on the diagonal and distribute the cross terms evenly. For example, if the $x_1x_2$-term is 6, then the (1,2) and (2,1) entries of A are each 3. Then orthogonally diagonalize $A = PDP^T$ and let $y = P^Tx$; the quadratic form becomes $\lambda_1y_1^2 + \cdots + \lambda_ny_n^2$, where the $\lambda_i$ are the eigenvalues.
Spectral decomposition: $A = \lambda_1u_1u_1^T + \lambda_2u_2u_2^T + \cdots + \lambda_nu_nu_n^T$.

Second-order and higher-order differential equations

Homogeneous solutions: Auxiliary equation: replace the equation by a polynomial, so y′′′ becomes $r^3$, etc., then find the zeros (use the rational roots theorem and long division; see the Diagonalization section). Simple zeros give you $e^{rt}$; a repeated zero of multiplicity m gives you $Ae^{rt} + Bte^{rt} + \cdots + Zt^{m-1}e^{rt}$; complex zeros r = a + bi give you $Ae^{at}\cos(bt) + Be^{at}\sin(bt)$.
Undetermined coefficients: $y(t) = y_0(t) + y_p(t)$, where $y_0$ solves the homogeneous equation (equation = 0) and $y_p$ is a particular solution. To find $y_p$: If the inhomogeneous term is $Ct^me^{rt}$, then $y_p = t^s(A_mt^m + \cdots + A_1t + A_0)e^{rt}$, where s = m if r is a root of the auxiliary equation with multiplicity m, and s = 0 if r is not a root. If the inhomogeneous term is $Ct^me^{at}\sin(\beta t)$, then $y_p = t^s(A_mt^m + \cdots + A_1t + A_0)e^{at}\cos(\beta t) + t^s(B_mt^m + \cdots + B_1t + B_0)e^{at}\sin(\beta t)$, where s = m if a + βi is a root of the auxiliary equation with multiplicity m (s = 0 if not). cos always goes with sin and vice versa; also, treat a + βi as one entity.
Variation of parameters: First, make sure the leading coefficient (usually the coefficient of y′′) is 1. Then $y = y_0 + y_p$ as above. Now suppose $y_p(t) = v_1(t)y_1(t) + v_2(t)y_2(t)$, where $y_1$ and $y_2$ are your homogeneous solutions. Then $\begin{bmatrix} y_1 & y_2 \\ y_1' & y_2' \end{bmatrix}\begin{bmatrix} v_1' \\ v_2' \end{bmatrix} = \begin{bmatrix} 0 \\ f(t) \end{bmatrix}$. Invert the matrix, solve for $v_1'$ and $v_2'$, integrate to get $v_1$ and $v_2$, and finally use $y_p(t) = v_1(t)y_1(t) + v_2(t)y_2(t)$.
Useful formulas: $\begin{bmatrix} a & b \\ c & d \end{bmatrix}^{-1} = \frac{1}{ad-bc}\begin{bmatrix} d & -b \\ -c & a \end{bmatrix}$; $\int \sec(t)\,dt = \ln|\sec(t) + \tan(t)|$; $\int \tan(t)\,dt = \ln|\sec(t)|$; $\int \tan^2(t)\,dt = \tan(t) - t$; $\int \ln(t)\,dt = t\ln(t) - t$.
Linear independence: f, g, h are linearly independent if $af(t) + bg(t) + ch(t) = 0 \Rightarrow a = b = c = 0$. To show linear dependence, do it directly. To show linear independence, form the Wronskian matrix $\widetilde{W}(t) = \begin{bmatrix} f(t) & g(t) \\ f'(t) & g'(t) \end{bmatrix}$ (for 2 functions) or $\widetilde{W}(t) = \begin{bmatrix} f(t) & g(t) & h(t) \\ f'(t) & g'(t) & h'(t) \\ f''(t) & g''(t) & h''(t) \end{bmatrix}$ (for 3 functions). Then pick a point $t_0$ where $\det(\widetilde{W}(t_0))$ is easy to evaluate. If the determinant is nonzero, then f, g, h are linearly independent. Look for simplifications before you differentiate.
Fundamental solution set: Solutions f, g, h that are linearly independent.
Largest interval of existence: First make sure the leading coefficient equals 1. Then look at the domain of each term. For each domain, consider the part of the interval which contains the initial condition. Finally, intersect the intervals and change any brackets to parentheses.
Harmonic oscillator: my′′ + by′ + ky = 0 (m = inertia, b = damping, k = stiffness).

Systems of differential equations

To solve x′ = Ax: $x(t) = Ae^{\lambda_1t}v_1 + Be^{\lambda_2t}v_2 + Ce^{\lambda_3t}v_3$ (the $\lambda_i$ are your eigenvalues, the $v_i$ are your eigenvectors).
Fundamental matrix: Matrix whose columns are the solutions, without the constants (the columns are solutions and linearly independent).
Complex eigenvalues: If λ = α + iβ and v = a + ib, then $x(t) = A\left(e^{\alpha t}\cos(\beta t)\,a - e^{\alpha t}\sin(\beta t)\,b\right) + B\left(e^{\alpha t}\sin(\beta t)\,a + e^{\alpha t}\cos(\beta t)\,b\right)$. Notes: you only need to consider one of the two conjugate complex eigenvalues; for real eigenvalues, use the formula above. Also, $\frac{1}{a+bi} = \frac{a-bi}{a^2+b^2}$.
Generalized eigenvectors: If you find only one eigenvector v (even though there are supposed to be 2), solve $(A - \lambda I)u = v$ for u (one solution is enough). Then $x(t) = Ae^{\lambda t}v + B\left(te^{\lambda t}v + e^{\lambda t}u\right)$.
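To make the eigenvalue formula for x′ = Ax concrete, here is an illustrative numpy sketch for the case of real, distinct eigenvalues (the matrix and initial condition are made up; scipy is used only to compute the reference solution $e^{At}x_0$). The sheet's constants A and B are called c here to avoid clashing with the matrix: at t = 0 the formula reads Vc = x₀, which fixes them.

```python
import numpy as np
from scipy.linalg import expm

# x' = Ax with real, distinct eigenvalues
A = np.array([[1.0, 2.0],
              [4.0, 3.0]])
x0 = np.array([1.0, 0.0])

# Eigen-decomposition: the columns of V are the eigenvectors v_i
lam, V = np.linalg.eig(A)

# x(t) = c1 e^{lam1 t} v1 + c2 e^{lam2 t} v2; at t = 0 this is V c = x0
c = np.linalg.solve(V, x0)

def x(t):
    # Sum of c_i * e^{lam_i t} * v_i, written as a matrix-vector product
    return V @ (c * np.exp(lam * t))

# Check against the exact solution x(t) = e^{At} x0
t = 0.7
assert np.allclose(x(t), expm(A * t) @ x0)
print("x(0.7) =", x(t))
```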