ECS231 Handout 9                                                May 19, 2009

1. In the convergence analysis of the CG and GMRES algorithms, we saw that the convergence rate of CG depends on the condition number of A or, more generally, on the distribution of A's eigenvalues. Other Krylov subspace methods have a similar property. Preconditioning means replacing the system Ax = b with the modified system

       M^{-1}Ax = M^{-1}b,                                          (1)

   or

       AM^{-1}x̂ = b,   x = M^{-1}x̂.                                (2)

   These are referred to as left and right preconditioning, respectively. If M is SPD, then one can precondition symmetrically and solve the modified linear system

       L^{-1}AL^{-T}y = L^{-1}b,   x = L^{-T}y,                     (3)

   where M = LL^T. The matrix L could be the Cholesky factor of M or any other matrix satisfying M = LL^T. The desired preconditioner M should be chosen so that

   (a) M^{-1}A or L^{-1}AL^{-T} is "well-conditioned" or approximates "the identity matrix",
   (b) linear systems with coefficient matrix M are easy to solve.

   A careful, problem-dependent choice of M can often make the condition number of the modified system much smaller than the condition number of the original one, and thus accelerate convergence dramatically. Indeed, a good preconditioner is often necessary for an iterative method to converge at all, and much current research in iterative methods is directed at finding better preconditioners. More specifically, a good preconditioner M depends on the iterative method being used.

   • For CG and related methods, one would like the condition number of the symmetrically preconditioned matrix L^{-1}AL^{-T} to be close to one, so that the error bound based on the Chebyshev polynomial is small, or, alternatively, one would like the preconditioned matrix to have few extreme eigenvalues.
   • For GMRES, a preconditioned matrix that is close to normal and whose eigenvalues are tightly clustered around some point away from the origin would be good, but other properties might also suffice to define a good preconditioner.

2. Preconditioned Conjugate Gradient. If the CG algorithm is applied directly to the symmetrically preconditioned system (3), the iterative kernels satisfy

       y_{j+1} = y_j + α̂_j p̂_j
       r̂_{j+1} = r̂_j − α̂_j L^{-1}AL^{-T} p̂_j
       p̂_{j+1} = r̂_{j+1} + β̂_j p̂_j

   with

       α̂_j = (r̂_j^T r̂_j) / (p̂_j^T L^{-1}AL^{-T} p̂_j)   and   β̂_j = (r̂_{j+1}^T r̂_{j+1}) / (r̂_j^T r̂_j).

   Defining x_j = L^{-T}y_j, r_j = L r̂_j and p_j = L^{-T}p̂_j, the iterative kernels become

       x_{j+1} = x_j + α_j p_j
       r_{j+1} = r_j − α_j A p_j
       p_{j+1} = M^{-1}r_{j+1} + β_j p_j

   with

       α_j = (r_j^T M^{-1}r_j) / (p_j^T A p_j)   and   β_j = (r_{j+1}^T M^{-1}r_{j+1}) / (r_j^T M^{-1}r_j).

   We obtain the following preconditioned CG algorithm for solving Ax = b using the preconditioner M = LL^T (see the Python sketch following item 3).

   Preconditioned Conjugate Gradient (PCG)
   1. compute r_0 = b − Ax_0, solve Mz_0 = r_0, and set p_0 := z_0
   2. for j = 0, 1, 2, ..., until convergence do
   3.    α_j = (r_j^T z_j)/(p_j^T A p_j)
   4.    x_{j+1} = x_j + α_j p_j
   5.    r_{j+1} = r_j − α_j A p_j
   6.    solve Mz_{j+1} = r_{j+1}
   7.    β_j = (r_{j+1}^T z_{j+1})/(r_j^T z_j)
   8.    p_{j+1} = z_{j+1} + β_j p_j
   9. endfor

3. Preconditioned GMRES. Applying GMRES to the left-preconditioned system (1) is straightforward.

   Preconditioned GMRES
   1. compute r_0 = M^{-1}(b − Ax_0), β = ‖r_0‖_2 and v_1 := r_0/β
   2. for j = 1, 2, ..., m do
   3.    compute w := M^{-1}Av_j
   4.    for i = 1, 2, ..., j do
   5.       h_{ij} = v_i^T w
   6.       w := w − h_{ij} v_i
   7.    end do
   8.    compute h_{j+1,j} = ‖w‖_2 and v_{j+1} = w/h_{j+1,j}
   9. end do
   10. let y_m be the solution of min_y ‖βe_1 − Ĥ_m y‖_2
   11. x_m = x_0 + V_m y_m
   12. if satisfied, stop; else set x_0 := x_m and go to 1

   Note that in the above algorithm, V_m = [v_1, v_2, ..., v_m] and Ĥ_m is the (m + 1) × m upper Hessenberg matrix with entries h_{ij} computed at steps 5 and 8.
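To make the PCG algorithm of item 2 concrete, here is a minimal Python/NumPy sketch; it is not part of the original handout, and the function name pcg, the random SPD test matrix, and the Jacobi choice M = diag(A) are all our own assumptions, made only so the example is self-contained and easy to run.

```python
import numpy as np

def pcg(A, b, M_solve, x0=None, tol=1e-8, maxit=1000):
    """Preconditioned CG for SPD A; M_solve(r) returns z solving M z = r."""
    x = np.zeros_like(b, dtype=float) if x0 is None else x0.astype(float)
    r = b - A @ x                      # step 1: r0 = b - A x0
    z = M_solve(r)                     #         solve M z0 = r0
    p = z.copy()                       #         p0 := z0
    rz = r @ z
    for _ in range(maxit):             # step 2: until convergence
        Ap = A @ p
        alpha = rz / (p @ Ap)          # step 3
        x = x + alpha * p              # step 4
        r = r - alpha * Ap             # step 5
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        z = M_solve(r)                 # step 6: solve M z_{j+1} = r_{j+1}
        rz_next = r @ z
        beta = rz_next / rz            # step 7
        p = z + beta * p               # step 8
        rz = rz_next
    return x

# Usage on a random SPD test problem with a Jacobi preconditioner,
# M = diag(A), for which the solves in steps 1 and 6 are trivial:
rng = np.random.default_rng(0)
n = 200
Q = rng.standard_normal((n, n))
A = Q @ Q.T + n * np.eye(n)            # SPD by construction
b = rng.standard_normal(n)
x = pcg(A, b, M_solve=lambda r: r / np.diag(A))
print("relative residual:", np.linalg.norm(b - A @ x) / np.linalg.norm(b))
```

Note that the preconditioner enters only through the solves Mz = r in steps 1 and 6, which is exactly why property (b) of item 1 (linear systems with M must be easy to solve) matters in practice.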
[...]

   (a) If A is a nonsingular diagonally dominant matrix, then A has an incomplete LU factorization for any sparsity set Z.

       Note: A matrix A of order n is diagonally dominant if

           |a_{ii}| ≥ Σ_{j=1, j≠i}^{n} |a_{ij}|,   for i = 1, 2, ..., n.

       It is strictly diagonally dominant if strict inequality holds for all i. It can be shown that a strictly diagonally dominant matrix is nonsingular. Be aware that diagonal dominance alone implies neither nonsingularity nor singularity. For example, let

           A = [  2  −1   0 ]          B = [ 1  1  0 ]
               [ −1   2  −1 ]              [ 1  1  0 ]
               [  0  −1   2 ]              [ 1  2  4 ]

       Then A is nonsingular. On the other hand, B is singular.

   (b) The incomplete LU factorization also exists for any M-matrix.

       Note: A matrix is said to be an M-matrix if it satisfies the following properties: (1) a_{ii} > 0 for i = 1, ..., n; (2) a_{ij} ≤ 0 for i ≠ j, i, j = 1, ..., n; (3) A is nonsingular; and (4) A^{-1} is a nonnegative matrix (all entries are nonnegative).

6. The following figure shows the LU factorization of a sparse 20 by 20 matrix.

   [Figure: sparsity patterns of matrix A (nz = 72) and of the "dense LU" factors, matrix L (nz = 93) and matrix U (nz = 82).]

   Then an ILU factorization (and E-factor) of the same matrix:

   [Figure: sparsity patterns of matrix A (nz = 72), the "sparse LU" factors L (nz = 42) and U (nz = 50), and the E-factor A − LU (nz = 44).]

7. Block preconditioning is a popular technique for the block-tridiagonal matrices arising from the discretization of elliptic problems, such as Poisson's equation. It can also be generalized to other sparse matrices. For example, the matrix arising in the solution of the 2D Poisson equation has the form

       A = [  T  −I              ]
           [ −I   T  −I          ]
           [      ⋱   ⋱   ⋱      ]
           [         −I   T  −I  ]
           [             −I   T  ]

   where T is a symmetric tridiagonal matrix with diagonal entries all 4 and off-diagonal entries all −1. In this case, a natural preconditioner is M = diag(T, T, ..., T) (exercised in the sketch following item 9).

8. The following figure shows the convergence history of GMRES with and without preconditioning for solving a linear system of equations arising from a discretization of a model convection-diffusion equation. The preconditioner used here is ILU(0), i.e., the ILU factorization with the same sparsity pattern as A.

   [Figure: "GMRES (x−) vs. P−GMRES (o−), no restarting"; log of residual norm versus number of iterations.]

9. Iterative methods in Matlab:

       pcg        Preconditioned Conjugate Gradients Method
       gmres      Generalized Minimum Residual Method
       bicg       BiConjugate Gradients Method
       bicgstab   BiConjugate Gradients Stabilized Method
       cgs        Conjugate Gradients Squared Method
       minres     Minimum Residual Method
       qmr        Quasi-Minimal Residual Method
       symmlq     Symmetric LQ Method

   Preconditioners in Matlab:

       luinc      Incomplete LU factorization
       cholinc    Incomplete Cholesky factorization
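To connect items 7-9, the following Python/SciPy sketch (not part of the original handout) builds the block-tridiagonal Poisson matrix of item 7 and compares plain GMRES against two preconditioned variants. Two caveats on the assumptions made here: SciPy's spilu computes a threshold-based incomplete LU (ILUTP), which only approximates the ILU(0) used for the figure in item 8, and scipy.sparse.linalg.gmres and spilu stand in for the Matlab functions gmres and luinc of item 9.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Block-tridiagonal 2D Poisson matrix from item 7:
# T on the block diagonal, -I on the off-block diagonals.
m = 30                                              # grid points per direction
T = sp.diags([-1.0, 4.0, -1.0], [-1, 0, 1], shape=(m, m))
S = sp.diags([1.0, 1.0], [-1, 1], shape=(m, m))     # pattern of the -I blocks
A = (sp.kron(sp.eye(m), T) - sp.kron(S, sp.eye(m))).tocsc()
b = np.ones(A.shape[0])

def run_gmres(M=None, label=""):
    resids = []
    x, info = spla.gmres(A, b, M=M, restart=A.shape[0],   # no restarting
                         callback=lambda rk: resids.append(rk),
                         callback_type="pr_norm")
    print(f"{label:32s} iterations = {len(resids)}, info = {info}")
    return x

run_gmres(label="GMRES, no preconditioner")

# Incomplete LU preconditioner (cf. items 8-9). gmres expects an operator
# that applies M^{-1}; ilu.solve applies (LU)^{-1} = U^{-1} L^{-1}.
ilu = spla.spilu(A, drop_tol=1e-4, fill_factor=10)
M_ilu = spla.LinearOperator(A.shape, matvec=ilu.solve)
run_gmres(M=M_ilu, label="GMRES + incomplete LU")

# Item 7's natural block preconditioner M = diag(T, ..., T): apply M^{-1}
# by solving with T once per length-m block of the vector.
T_lu = spla.splu(T.tocsc())
M_blk = spla.LinearOperator(
    A.shape, matvec=lambda v: T_lu.solve(v.reshape(m, m).T).T.ravel())
run_gmres(M=M_blk, label="GMRES + block diag(T,...,T)")
```

The printed iteration counts will not match the figure in item 8 (different matrix and a different flavor of ILU), but the qualitative behavior is the same: both preconditioners cut the number of GMRES iterations dramatically.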
10. Further Reading

   • Yousef Saad, Iterative Methods for Sparse Linear Systems, 2nd Edition, SIAM, 2003.
   • A. Greenbaum, Iterative Methods for Solving Linear Systems, SIAM, 1997.
   • H. van der Vorst, Iterative Krylov Methods for Large Linear Systems, Cambridge University Press, 2003.
   • R. Barrett et al., Templates for the Solution of Linear Systems: Building Blocks for Iterative Methods, SIAM, 1994 (linked to our class website).


