Computational Physics Lecture Notes: Numerical Differentiation and Integration - Prof. Don, Study notes of Physics

These lecture notes cover numerical differentiation and integration for the computational physics course taught by Dr. Donald Luttermoser at East Tennessee State University. They discuss the limitations of numerical differentiation, the Taylor series expansion, and the forward- and central-difference methods for approximating derivatives, as well as numerical integration, including the trapezoid rule, Simpson's rule, and Gaussian quadrature.

Uploaded on 08/13/2009
PHYS-4007/5007: Computational Physics
Course Lecture Notes, Section VII
Dr. Donald G. Luttermoser
East Tennessee State University
Version 4.1

Abstract: These class notes are designed for use of the instructor and students of the course PHYS-4007/5007: Computational Physics taught by Dr. Donald Luttermoser at East Tennessee State University.

[...]

[Figure VII-1: Forward difference (solid line) and central difference (dashed line) methods for numerical differentiation.]

i) The symbol $D_c$ represents the central difference.

ii) Carrying out the Taylor series for $f(x \pm h)$ gives
$$f'_c \simeq f'(x) + \frac{1}{24} h^2 f^{(3)}(x) + \cdots. \qquad \text{(VII-9)}$$

iii) The important difference from Eq. (VII-3) is that when $f(x - h/2)$ is subtracted from $f(x + h/2)$, all terms containing an odd power of $h$ in the Taylor series cancel.

iv) Therefore, the central-difference algorithm is accurate to one order higher in $h$: $h \Longrightarrow h^2$.

v) If $(f^{(3)} h^2)/24 \ll (f^{(2)} h)/2$, then the error with the central-difference method should be smaller than that of the forward-difference method.

b) For the polynomial given in Eq. (VII-5), the central difference gives the exact answer regardless of the size of $h$:
$$f'_c(x) \approx \frac{f(x + h/2) - f(x - h/2)}{h} = 2bx. \qquad \text{(VII-10)}$$

4. Errors in Taking Derivatives.

a) One should always try to keep the total calculation error, $\epsilon_{\rm tot}$, at a minimum. This minimum typically occurs when $\epsilon_{\rm ro} \approx \epsilon_{\rm approx}$ in Eq. (V-33).

b) Because differentiation subtracts two numbers that are close in value, the limit of the roundoff error is essentially machine precision $\epsilon_m$:
$$f'(x) \approx \frac{f(x + h) - f(x)}{h} \approx \frac{\epsilon_m}{h}, \qquad \text{(VII-11)}$$
$$\Longrightarrow \epsilon_{\rm ro} \approx \frac{\epsilon_m}{h}. \qquad \text{(VII-12)}$$

c) The approximation error with the forward-difference algorithm (Eq. VII-4) is an $O(h)$ term, while that with the central-difference algorithm (Eq. VII-9) is an $O(h^2)$ term:
$$\epsilon^{\rm fd}_{\rm approx} \approx \frac{f^{(2)} h}{2}, \qquad \text{(VII-13)}$$
$$\epsilon^{\rm cd}_{\rm approx} \approx \frac{f^{(3)} h^2}{24}. \qquad \text{(VII-14)}$$

d) The value of $h$ for which the roundoff and approximation errors are equal is therefore given by
$$\frac{\epsilon_m}{h} \approx \epsilon^{\rm fd}_{\rm approx} = \frac{f^{(2)} h}{2}, \qquad \text{(VII-15)}$$
$$\frac{\epsilon_m}{h} \approx \epsilon^{\rm cd}_{\rm approx} = \frac{f^{(3)} h^2}{24}, \qquad \text{(VII-16)}$$
$$\Longrightarrow h^2_{\rm fd} = \frac{2\epsilon_m}{f^{(2)}}, \qquad h^3_{\rm cd} = \frac{24\epsilon_m}{f^{(3)}}. \qquad \text{(VII-17)}$$

e) As an example, for the $e^x$ and $\cos x$ functions, $f' \approx f^{(2)} \approx f^{(3)}$; in single precision with $\epsilon_m \approx 10^{-7}$, one would get $h_{\rm fd} \approx 0.0005$ and $h_{\rm cd} \approx 0.01$. This makes the central-difference algorithm better for calculating derivatives, since a larger $h$ means a smaller roundoff error ⟹ here the error in the central-difference method is 20 times smaller than the error in the forward-difference method.

5. The Method of Calculating Second Derivatives.

a) Taking second derivatives involves an additional subtraction step as compared to the first-derivative calculation, leading to a larger subtraction error.

b) We can remedy this with a little algebra in the central-difference method. Taking the second derivative of Eq. (VII-8) and then rewriting the first derivatives with forward and backward difference equations, we get
$$f^{(2)} \simeq \frac{f'(x + h/2) - f'(x - h/2)}{h} \qquad \text{(VII-18)}$$
$$\simeq \frac{1}{h^2} \left\{ [f(x + h) - f(x)] - [f(x) - f(x - h)] \right\}. \qquad \text{(VII-19)}$$
Note, however, that one must keep the second-derivative formula in this form to minimize cancellation error.

B. Integration.

1. The method of numerical integration is sometimes referred to as numerical quadrature ⟹ summing boxes.

a) The definition of an integral given by Riemann is
$$\int_a^b f(x)\, dx = \lim_{h \to 0} \left[ h \sum_{i=1}^{(b-a)/h} f(x_i) \right]. \qquad \text{(VII-20)}$$

[...]

... $[a, b]$ (the square bracket means we include the endpoints, i.e., there are $N - 2$ points in between the endpoints). Note that the trapezoid rule requires an odd number of points $N$.

i) Straight lines connect the points, and this piecewise linear function serves as our fitting curve.
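A minimal sketch of the area under one such straight-line panel; the function name and the linear test integrand are illustrative choices, not from the notes:

```python
def panel_area(x1, x2, f1, f2):
    """Area under the straight line joining (x1, f1) and (x2, f2):
    one trapezoid panel, T = (x2 - x1) * (f1 + f2) / 2."""
    return 0.5 * (x2 - x1) * (f1 + f2)

# A straight-line fit is exact whenever the integrand itself is linear,
# e.g. f(x) = 3x + 1 on [0, 2], whose integral is exactly 8.
f = lambda x: 3.0 * x + 1.0
print(panel_area(0.0, 2.0, f(0.0), f(2.0)))  # prints 8.0
```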
ii) The integral of this fitting function is easy to compute since it is the sum of the areas of trapezoids. The area of a single trapezoid is
$$T_i = \frac{1}{2} (x_{i+1} - x_i)(f_{i+1} + f_i). \qquad \text{(VII-26)}$$

iii) The true integral is estimated as the sum of the areas of the trapezoids, so
$$I \approx I_T = T_1 + T_2 + \cdots + T_{N-1} = \sum_{i=1}^{N-1} T_i. \qquad \text{(VII-27)}$$
Notice that the last term in the sum is $T_{N-1}$, since there is one fewer panel than grid points.

iv) The general formula simplifies if we take equally spaced grid points as given by Eq. (VII-25).

v) Then the area of an individual trapezoid (i.e., the area of the one trapezoid within that interval) becomes
$$T_i = \frac{1}{2} h (f_{i+1} + f_i), \qquad \text{(VII-28)}$$
or, in our original integral notation,
$$\int_{x_i}^{x_i + h} f(x)\, dx \simeq \frac{h (f_i + f_{i+1})}{2} = \frac{1}{2} h f_i + \frac{1}{2} h f_{i+1}. \qquad \text{(VII-29)}$$
As such, our weights in Eq. (VII-21) for the individual points are $w_i = \frac{1}{2} h$.

d) We now need to add up all of the trapezoids in the subintervals across the entire interval $[a, b]$ ⟹ the trapezoid rule:
$$I_T(h) = \frac{1}{2} h f_1 + h f_2 + h f_3 + \cdots + h f_{N-1} + \frac{1}{2} h f_N = \frac{1}{2} h (f_1 + f_N) + h \sum_{i=2}^{N-1} f_i, \qquad \text{(VII-30)}$$
or
$$\int_a^b f(x)\, dx \approx \frac{h}{2} f_1 + h f_2 + h f_3 + \cdots + h f_{N-1} + \frac{h}{2} f_N. \qquad \text{(VII-31)}$$

e) Note that since each internal point gets counted twice, it has a weight of $h$, whereas the endpoints get counted just once and thus have weights of $h/2$:
$$w_i = \left\{ \frac{h}{2}, h, \ldots, h, \frac{h}{2} \right\}. \qquad \text{(VII-32)}$$

f) Our approximation error, also called the truncation error or the algorithmic error here, for the trapezoid rule can be written either as
$$\epsilon_{\rm approx} = I - I_T(x, h) = -\frac{1}{12} (b - a) h^2 f''(\zeta) \qquad \text{(VII-33)}$$
for some value $x = \zeta$ that lies in $[a, b]$, or as
$$\epsilon_{\rm approx} = -\frac{1}{12} h^2 [f'(b) - f'(a)] + O(h^4). \qquad \text{(VII-34)}$$

g) This error is proportional to $h^2$, and Eq. (VII-34) warns you that the trapezoid rule will have difficulties if the derivative diverges at the endpoints.

3. Romberg Integration.

a) But how many "panels" (i.e., trapezoids) should one use (i.e., how small should $h$ be)?
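The composite trapezoid rule of Eq. (VII-31) can be sketched as follows; the choice of Python, the $\sin x$ test integrand, and the grid sizes are illustrative, not from the notes:

```python
import math

def trapezoid(f, a, b, N):
    """Composite trapezoid rule with N equally spaced points (N - 1 panels).

    Endpoints carry weight h/2; interior points carry weight h,
    matching the weight set {h/2, h, ..., h, h/2}.
    """
    h = (b - a) / (N - 1)
    total = 0.5 * (f(a) + f(b))   # endpoint weights h/2
    for i in range(1, N - 1):     # interior points, weight h
        total += f(a + i * h)
    return h * total

# sin(x) on [0, pi] integrates to exactly 2; the error falls as h^2,
# so doubling the number of panels should cut it by roughly a factor of 4.
print(trapezoid(math.sin, 0.0, math.pi, 101))
print(trapezoid(math.sin, 0.0, math.pi, 201))
```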
One way to decide is to repeat the calculation with a smaller interval. If the answer doesn't vary significantly, then we accept it as correct.

i) This doesn't work with pathological functions or in unusual scenarios (but this won't happen very often).

ii) In the trapezoid rule, if the number of panels is a power of two, we can halve the interval size without having to recompute all the points.

iii) Define the sequence of interval sizes as
$$h_1 = (b - a), \quad h_2 = \frac{1}{2} (b - a), \quad \cdots, \quad h_n = \frac{1}{2^{n-1}} (b - a). \qquad \text{(VII-35)}$$

iv) For $n = 1$, we have only one panel, so
$$I_T(h_1) = \frac{1}{2} (b - a) [f(a) + f(b)]. \qquad \text{(VII-36)}$$

v) There is a simple recursive formula for calculating $I_T(h_{n+1})$ from $I_T(h_n)$:
$$I_T(h_{n+1}) = \frac{1}{2} I_T(h_n) + h_{n+1} \sum_{i=1}^{2^{n-1}} f[a + (2i - 1) h_{n+1}]. \qquad \text{(VII-37)}$$

b) By using this recursive method, we can keep adding panels until the answer appears to converge. However, we can greatly improve this process by using a method called Romberg integration.

[...]

    do k = 1, (np-1), 2    ! This loop goes k=1,3,5,...,np-1
       sum = sum + func(a+k*h, param)
    enddo
    !
    ! Compute the first column entry R(i,1)
    !
    R(i,1) = 0.5*R(i-1,1) + h*sum
    m = 1
    do j = 2, i            ! Compute the other columns R(i,2),...,R(i,i)
       m = 4 * m
       R(i,j) = R(i,j-1) + (R(i,j-1) - R(i-1,j-1))/(m-1)
    enddo
    enddo
    return
    end

4. Simpson's Rule (Simpson, eh?).

a) Instead of fitting two adjacent points with trapezoids, we will now fit three adjacent points with parabolas:
$$f(x) \approx \alpha x^2 + \beta x + \gamma, \qquad \text{(VII-43)}$$
for each interval, still keeping the intervals evenly spaced.

b) The area of each section is then the integral of this parabola:
$$\int_{x_i}^{x_i + h} (\alpha x^2 + \beta x + \gamma)\, dx = \left. \frac{\alpha x^3}{3} + \frac{\beta x^2}{2} + \gamma x \,\right|_{x_i}^{x_i + h}. \qquad \text{(VII-44)}$$

c) This is equivalent to integrating the Taylor series up to the quadratic term. Hence the Simpson rule is a second-order polynomial method.

d) In order to relate the parameters $\alpha$, $\beta$, and $\gamma$ to the function, we consider an interval from $-1$ to $+1$:
$$\int_{-1}^{1} (\alpha x^2 + \beta x + \gamma)\, dx = \frac{2\alpha}{3} + 2\gamma. \qquad \text{(VII-45)}$$

i) Note, however, for the function itself,
$$f(-1) = \alpha - \beta + \gamma, \qquad \alpha = \frac{f(1) + f(-1)}{2} - f(0),$$
$$f(0) = \gamma, \qquad \beta = \frac{f(1) - f(-1)}{2}, \qquad \text{(VII-46)}$$
$$f(1) = \alpha + \beta + \gamma, \qquad \gamma = f(0).$$

ii) Using the results of Eq. (VII-46) in Eq. (VII-45) yields
$$\int_{-1}^{1} (\alpha x^2 + \beta x + \gamma)\, dx = \frac{f(-1)}{3} + \frac{4 f(0)}{3} + \frac{f(1)}{3}. \qquad \text{(VII-47)}$$

iii) Because 3 values of the function are needed, we evaluate the integral over two adjacent intervals ⟹ evaluate the function at the two endpoints and the middle:
$$\int_{x_i - h}^{x_i + h} f(x)\, dx = \int_{x_i}^{x_i + h} f(x)\, dx + \int_{x_i - h}^{x_i} f(x)\, dx \simeq \frac{h}{3} f_{i-1} + \frac{4h}{3} f_i + \frac{h}{3} f_{i+1}. \qquad \text{(VII-48)}$$

e) Simpson's rule requires the elementary integration to be over pairs of intervals ⟹ this requires the number of intervals to be even and, hence, the number of points $N$ to be odd.

f) To integrate across the entire range $[a, b]$, we add up contributions from each pair of subintervals, counting all but the first and last endpoints twice:
$$\int_a^b f(x)\, dx \approx \frac{h}{3} f_1 + \frac{4h}{3} f_2 + \frac{2h}{3} f_3 + \frac{4h}{3} f_4 + \cdots + \frac{4h}{3} f_{N-1} + \frac{h}{3} f_N. \qquad \text{(VII-49)}$$

i) From this integral, we see that the total set of weights is
$$w_i = \left\{ \frac{h}{3}, \frac{4h}{3}, \frac{2h}{3}, \frac{4h}{3}, \ldots, \frac{2h}{3}, \frac{4h}{3}, \frac{h}{3} \right\}. \qquad \text{(VII-50)}$$

ii) The sum of your weights provides a useful check on your integration:
$$\sum_{i=1}^{N} w_i = (N - 1) h. \qquad \text{(VII-51)}$$
Remember, $N$ must be odd.

5. Errors in the Newton-Cotes Methods.

a) As we have said above, and as was the case for differentiation, our Newton-Cotes methods are essentially just Taylor series expansions of a function. We will highlight here what is derived in more detail in the textbook.

b) The approximation (i.e., truncation or algorithmic) error $\epsilon_{\rm approx}$ can be estimated by the next higher term in the Taylor series that is not used in the evaluation of the integral:
$$\epsilon_{\rm approx\text{-}t} = O\!\left( \frac{[b - a]^3}{N^2} \right) f^{(2)}, \qquad \text{(VII-52)}$$
$$\epsilon_{\rm approx\text{-}s} = O\!\left( \frac{[b - a]^5}{N^4} \right) f^{(4)}, \qquad \text{(VII-53)}$$
where the subscripts t and s refer to the trapezoid rule and Simpson's rule, respectively.
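The composite Simpson's rule of Eq. (VII-49), with the odd-$N$ requirement made explicit, can be sketched as follows (Python and the $\sin x$ test integrand are illustrative choices, not from the notes):

```python
import math

def simpson(f, a, b, N):
    """Composite Simpson's rule with N equally spaced points.

    N must be odd (an even number of panels), since each fitted
    parabola spans a pair of adjacent intervals.
    """
    if N % 2 == 0:
        raise ValueError("Simpson's rule needs an odd number of points N")
    h = (b - a) / (N - 1)
    total = f(a) + f(b)   # endpoint weights h/3
    for i in range(1, N - 1):
        # interior weights alternate 4h/3, 2h/3, 4h/3, ...
        total += (4 if i % 2 == 1 else 2) * f(a + i * h)
    return h * total / 3.0

# sin(x) on [0, pi] integrates to exactly 2; Simpson's error falls as h^4,
# so even a modest grid should be accurate to many digits.
print(simpson(math.sin, 0.0, math.pi, 101))
```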
c) The relative error $\epsilon$ is just these approximation errors divided by the value of the function:
$$\epsilon_{t,s} = \frac{\epsilon_{\rm approx\text{-}t,s}}{f}. \qquad \text{(VII-54)}$$

6. There are higher-order Newton-Cotes methods: the 3rd-degree "3/8th" method and the 4th-degree Milne method. See Table 4.1 in your textbook on page 48 for the elementary weights that one would use for these two methods.

7. Gaussian Quadrature.

a) Often, one cannot choose an evenly spaced interval of grid points to do a numerical integration. From Eq. (VII-21) we have
$$\int_a^b f(x)\, dx \approx w_1 f(x_1) + \cdots + w_N f(x_N). \qquad \text{(VII-68)}$$
One then can ask the question: is there an optimal choice for the grid points (or nodes) $x_i$ and the weights $w_i$ to solve this integral?

b) This question leads us to formulate a new class of integration formulas, known collectively as Gaussian quadrature.

i) In this class, we will use only the most common formula, namely Gauss-Legendre quadrature.

• There are many other kinds of Gaussian quadrature that treat specific types of integrands, such as the Gauss-Laguerre formulation, which is optimal for integrals of the form $\int_0^\infty e^{-x} f(x)\, dx$.

• The derivation of the other Gaussian formulas (see Table 4.2 in your textbook) is similar to our analysis of Gauss-Legendre quadrature.

ii) The theory of Gaussian integration is based on the following theorem.

• Let $q(x)$ be a polynomial of degree $N$ such that
$$\int_a^b q(x)\, \rho(x)\, x^k\, dx = 0, \qquad \text{(VII-69)}$$
where $k = 0, 1, \ldots, N - 1$ and $\rho(x)$ is a specified weight function.

• Call $x_1, x_2, \ldots, x_N$ the roots of the polynomial $q(x)$. Using these roots as grid points, plus a set of weights $w_1, w_2, \ldots, w_N$, we construct an integration formula of the form
$$\int_a^b f(x)\, \rho(x)\, dx \approx w_1 f(x_1) + \cdots + w_N f(x_N). \qquad \text{(VII-70)}$$

• There is a set of $w$'s for which the integral formula will be exact if $f(x)$ is a polynomial of degree $< 2N$.

iii) The weights can be determined from the three-point Gauss-Legendre rule.
For example, consider the interval $[-1, 1]$ with $\rho(x) = 1$. This gives us a Gauss-Legendre formula. For integrals over the interval $[a, b]$, it is easy to transform them as
$$\int_a^b f(x)\, dx = \frac{b - a}{2} \int_{-1}^{1} f(z)\, dz \qquad \text{(VII-71)}$$
using the change of variable $x = \frac{1}{2} [b + a + (b - a) z]$.

• The first step is to find the polynomial $q(x)$. We want a three-point rule, so $q(x)$ is a cubic:
$$q(x) = c_0 + c_1 x + c_2 x^2 + c_3 x^3. \qquad \text{(VII-72)}$$

• From the theorem of Eq. (VII-69), we know that
$$\int_{-1}^{1} q(x)\, dx = \int_{-1}^{1} x\, q(x)\, dx = \int_{-1}^{1} x^2\, q(x)\, dx = 0. \qquad \text{(VII-73)}$$

• Plugging in and doing each integral, we get the equations
$$2 c_0 + \frac{2}{3} c_2 = \frac{2}{3} c_1 + \frac{2}{5} c_3 = \frac{2}{3} c_0 + \frac{2}{5} c_2 = 0. \qquad \text{(VII-74)}$$

• The general solution is $c_0 = 0$, $c_1 = -a$, $c_2 = 0$, $c_3 = 5a/3$, where $a$ is some constant. This arbitrary constant cancels out in the second step.

• Using this in Eq. (VII-72) (choosing $a = 3/2$) gives a polynomial solution of
$$q(x) = \frac{5}{2} x^3 - \frac{3}{2} x. \qquad \text{(VII-75)}$$
Notice that this is just the Legendre polynomial $P_3(x)$.

• Next we need to find the roots of $q(x) = P_3(x)$. This cubic is easy to factor. The roots are $x_1 = -\sqrt{3/5}$, $x_2 = 0$, and $x_3 = \sqrt{3/5}$. Using these grid points in Eq. (VII-70) gives
$$\int_{-1}^{1} f(x)\, dx \approx w_1 f(-\sqrt{3/5}) + w_2 f(0) + w_3 f(\sqrt{3/5}). \qquad \text{(VII-76)}$$

• Finally, we find the weights. The above formula must be exact for $f(x) = 1, x, \ldots, x^5$. We can use this to work out the values of $w_1$, $w_2$, and $w_3$. It turns out to be sufficient to consider just $f(x) = 1$, $x$, and $x^2$, which produces 3 equations:
$$2 = w_1 + w_2 + w_3,$$
$$0 = -\sqrt{3/5}\, w_1 + \sqrt{3/5}\, w_3, \qquad \text{(VII-77)}$$
$$\frac{2}{3} = \frac{3}{5} w_1 + \frac{3}{5} w_3.$$

• This linear system of equations is easy to solve, giving $w_1 = 5/9$, $w_2 = 8/9$, $w_3 = 5/9$.
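A minimal sketch of the resulting three-point Gauss-Legendre rule, including the change of variable of Eq. (VII-71); the function names and test integrands are illustrative choices, not from the notes:

```python
import math

# Nodes and weights of the three-point Gauss-Legendre rule derived above
NODES = (-math.sqrt(3.0 / 5.0), 0.0, math.sqrt(3.0 / 5.0))
WEIGHTS = (5.0 / 9.0, 8.0 / 9.0, 5.0 / 9.0)

def gauss3(f, a, b):
    """Three-point Gauss-Legendre quadrature on [a, b].

    Maps [a, b] onto [-1, 1] via x = [b + a + (b - a) z] / 2, which
    brings in the Jacobian factor (b - a)/2 of Eq. (VII-71).
    """
    total = 0.0
    for z, w in zip(NODES, WEIGHTS):
        x = 0.5 * (b + a + (b - a) * z)
        total += w * f(x)
    return 0.5 * (b - a) * total

# With N = 3 the rule is exact (to roundoff) for polynomials of
# degree < 2N = 6, e.g. x^5 on [0, 1], whose integral is 1/6.
print(gauss3(lambda x: x**5, 0.0, 1.0))
print(gauss3(math.cos, 0.0, math.pi / 2))  # close to the exact value 1
```

Only three function evaluations are needed, yet the rule matches the accuracy Simpson's rule would need many more points to reach.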