Conjugate Gradient Methods

It is possible to solve sets of linear equations using iterative methods, which have a rich historical background. Some of these are discussed in the chapter on the solution of partial differential equations; they include the Jacobi, Gauss-Seidel, and over-relaxation methods. As the speed of computers has increased, direct methods have become preferable for the general case, and those are the methods presented here. One iterative method deserves special mention, however, because of its importance in solving partial differential equations.

The conjugate gradient method is an iterative method that can solve a set of n linear equations in at most n iterations (in exact arithmetic). Since the method mostly requires multiplying a matrix by a vector, which can be done very efficiently on parallel computers and for sparse matrices, it is viable for large problems. The original method was devised by Hestenes and Stiefel [1952], but current implementations use a preconditioned conjugate gradient method because it converges faster, provided a good "preconditioner" can be found. An efficient implementation of such a method is described by Eisenstat [1981]. The system of n linear equations Ax = f, where A is symmetric and positive definite, is to be solved. A preconditioning matrix M is defined in such a way that the problem Mt = r is easy to solve exactly (M might be diagonal, for example). Then the preconditioned conjugate gradient method is:
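
In the standard formulation (with r_k the residual, t_k the preconditioned residual, and p_k the search direction), choose an initial guess x_0, compute r_0 = f - A x_0, solve M t_0 = r_0, and set p_0 = t_0. Then, for k = 0, 1, 2, ...:

    alpha_k = (r_k, t_k) / (p_k, A p_k)
    x_{k+1} = x_k + alpha_k p_k
    r_{k+1} = r_k - alpha_k A p_k
    (stop when r_{k+1} is sufficiently small)
    solve M t_{k+1} = r_{k+1}
    beta_k = (r_{k+1}, t_{k+1}) / (r_k, t_k)
    p_{k+1} = t_{k+1} + beta_k p_k

where (u, v) denotes the inner product u^T v. Each iteration requires one matrix-vector product with A, one preconditioning solve with M, and a few vector operations, which is what makes the method attractive for large sparse systems.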

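A minimal sketch of this iteration in Python with NumPy follows, assuming a diagonal (Jacobi) preconditioner as suggested above; the function name pcg, the tolerance, and the sample system are illustrative only, not taken from any particular published implementation:

    import numpy as np

    def pcg(A, f, M_diag, tol=1e-10, max_iter=None):
        """Preconditioned conjugate gradient for A x = f, with A symmetric
        positive definite and M a diagonal preconditioner (stored as a
        vector), so each solve M t = r is an elementwise division."""
        n = len(f)
        if max_iter is None:
            max_iter = n              # at most n steps in exact arithmetic
        x = np.zeros(n)
        r = f - A @ x                 # initial residual
        t = r / M_diag                # solve M t = r (trivial for diagonal M)
        p = t.copy()                  # first search direction
        rho = r @ t
        for k in range(max_iter):
            Ap = A @ p                # the one matrix-vector product per step
            alpha = rho / (p @ Ap)    # step length along p
            x += alpha * p
            r -= alpha * Ap
            if np.linalg.norm(r) < tol:
                break
            t = r / M_diag            # preconditioning solve
            rho_new = r @ t
            beta = rho_new / rho      # keeps search directions A-conjugate
            p = t + beta * p
            rho = rho_new
        return x

    # Example: a small symmetric positive definite system with
    # Jacobi (diagonal) preconditioning.
    A = np.array([[4.0, 1.0, 0.0],
                  [1.0, 3.0, 1.0],
                  [0.0, 1.0, 2.0]])
    f = np.array([1.0, 2.0, 3.0])
    x = pcg(A, f, M_diag=np.diag(A))
    print(x, A @ x - f)
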
Additional information is available in books by McIntosh [1982] and Hestenes [1980].