Conjugate Gradients
Emilio Ashton Vital Brazil <emilio at impa.br>

Abstract
The Conjugate Gradient (CG) algorithm is one of the best-known iterative techniques for solving sparse Symmetric Positive Definite (SPD) linear systems Ax = b. Described in one sentence, the method is a realization of an orthogonal projection technique onto the Krylov subspace [1].

Lack of robustness is a widely recognized weakness of iterative solvers relative to direct solvers. This drawback hampers the acceptance of iterative methods in industrial applications, despite their intrinsic appeal for very large linear systems. Both the efficiency and the robustness of iterative techniques can be improved by preconditioning [1].

Preconditioning is a technique for improving the condition number of the system. Suppose M is an SPD matrix that approximates A but is easier to invert. We can then solve Ax = b indirectly by solving the equivalent system M^(-1)Ax = M^(-1)b [2].

Application Programming Interface (API)
CG follows the algorithm described on p. 15 of the SIAM Templates book [3] and in "An Introduction to the Conjugate Gradient Method Without the Agonizing Pain" by Jonathan Richard Shewchuk [2].

The CG return value indicates convergence within max_iter (input) iterations (0) or no convergence within max_iter iterations (1). Upon successful return, the output arguments hold the following values: x -- the approximate solution to Ax = b; max_iter -- the number of iterations performed before the tolerance was reached; tol -- the residual after the final iteration.
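The loop and the in/out convention above can be sketched as follows. This is a minimal stand-in, not the package's actual code: it uses plain dense std::vector storage and no preconditioner (M = I) instead of the Matrix, Vector, and Preconditioner classes, but it keeps the same return-value and argument semantics:

```cpp
#include <cmath>
#include <vector>

using Vec = std::vector<double>;
using Mat = std::vector<Vec>;   // dense stand-in for the sparse Matrix class

static double dot(const Vec& a, const Vec& b) {
    double s = 0.0;
    for (std::size_t i = 0; i < a.size(); ++i) s += a[i] * b[i];
    return s;
}

static Vec matvec(const Mat& A, const Vec& x) {
    Vec y(x.size(), 0.0);
    for (std::size_t i = 0; i < A.size(); ++i)
        for (std::size_t j = 0; j < A[i].size(); ++j)
            y[i] += A[i][j] * x[j];
    return y;
}

// Returns 0 on convergence within max_iter iterations, 1 otherwise.
// On exit, max_iter holds the iterations performed and tol the final
// residual norm, mirroring the API described above.
int cg(const Mat& A, Vec& x, const Vec& b, int& max_iter, double& tol) {
    Vec r(b);
    Vec Ax = matvec(A, x);
    for (std::size_t i = 0; i < r.size(); ++i) r[i] -= Ax[i];  // r = b - Ax
    double rr = dot(r, r);
    Vec p(r);                                                  // first search direction
    for (int k = 1; k <= max_iter; ++k) {
        Vec Ap = matvec(A, p);
        double alpha = rr / dot(p, Ap);            // step length along p
        for (std::size_t i = 0; i < x.size(); ++i) {
            x[i] += alpha * p[i];
            r[i] -= alpha * Ap[i];
        }
        double rr_new = dot(r, r);
        if (std::sqrt(rr_new) <= tol) {            // converged
            tol = std::sqrt(rr_new);
            max_iter = k;
            return 0;
        }
        double beta = rr_new / rr;                 // make p A-conjugate
        for (std::size_t i = 0; i < p.size(); ++i)
            p[i] = r[i] + beta * p[i];
        rr = rr_new;
    }
    tol = std::sqrt(rr);
    return 1;
}
```

For an SPD system of dimension n, exact arithmetic CG terminates in at most n iterations; e.g. for A = [[4,1],[1,3]], b = (1,2), starting from x = 0, it reaches the solution (1/11, 7/11) in two iterations.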


CG requires three classes: Matrix, Vector, and Preconditioner.

Files
Conjugate_Gradients.tar.gz :: 140 kb ::
Documentation :: Doxygen html page ::

References
[1] Iterative Methods for Sparse Linear Systems by Yousef Saad.
[2] An Introduction to the Conjugate Gradient Method Without the Agonizing Pain by Jonathan Richard Shewchuk.
[3] Templates for the Solution of Linear Systems (Netlib Templates).

Instituto Nacional de Matemática Pura e Aplicada - IMPA
Vision and Graphics Laboratory - Visgraf
Last update:: Oct. 18, 2007