Introduction to methods dependent on matrix structure.

Sparse matrices are matrices in which the majority of the elements are zero. If the zero entries occur in special patterns, efficient techniques exist to exploit the structure, as for tridiagonal matrices, block tridiagonal matrices, and arrow matrices. These structures typically arise in the numerical solution of differential equations. Other problems, such as the modeling of chemical processes, lead to sparse matrices without a neatly defined structure, just a large number of zeros in the matrix. For matrices such as these, special techniques must be employed, and efficient codes are available [Eisenstat et al., 1981]. These codes usually begin with a symbolic factorization, which must be repeated only once for each structure of the matrix. An LU factorization is then performed, followed by a solution step using the resulting triangular matrices. There is significant overhead in the symbolic factorization step, but this overhead becomes negligible when matrices with exactly the same structure are used over and over. The interested reader is referred to books by Barker [1977], Bunch and Rose [1976], Duff [1981, 1986], and Rice [1983].
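The tridiagonal case mentioned above is the simplest illustration of exploiting structure: the Thomas algorithm solves such a system in O(n) operations rather than the O(n^3) of dense Gaussian elimination. The following is a minimal sketch in Python (the function name and argument layout are choices for this example, not from the references cited here); it omits pivoting, which is safe for the diagonally dominant matrices that typically arise from discretized differential equations.

```python
def solve_tridiagonal(a, b, c, d):
    """Solve A x = d for tridiagonal A by the Thomas algorithm.

    a: sub-diagonal  (length n; a[0] is unused)
    b: main diagonal (length n)
    c: super-diagonal (length n; c[-1] is unused)
    d: right-hand side (length n)

    Runs in O(n) time and storage. No pivoting is performed, so the
    matrix is assumed diagonally dominant (or otherwise well behaved).
    """
    n = len(b)
    cp = [0.0] * n                      # modified super-diagonal
    dp = [0.0] * n                      # modified right-hand side
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):               # forward elimination sweep
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):      # back substitution sweep
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

For example, the 3-by-3 system with 2 on the diagonal, -1 on the off-diagonals (the standard 1-D Laplacian), and right-hand side (1, 1, 1) has the solution (1.5, 2, 1.5).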

The efficiency of a technique for solving sets of linear equations depends greatly on the arrangement of the equations and unknowns, since a good arrangement can reduce the bandwidth of the matrix, for example. Techniques for renumbering the equations and unknowns arising from elliptic partial differential equations are available for finite difference methods [Price and Coats, 1974] and for finite element methods [Bykat, 1977].
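One standard renumbering heuristic of this kind is the (reverse) Cuthill-McKee ordering: a breadth-first traversal of the sparsity graph, visiting neighbors in order of increasing degree, then reversing the result. The sketch below (pure Python; the function names and the dict-of-sets representation of the sparsity pattern are choices for this example, not the specific methods of the references cited above) shows how such a reordering can shrink the bandwidth.

```python
from collections import deque

def cuthill_mckee(adj):
    """Reverse Cuthill-McKee ordering for a symmetric sparsity pattern.

    adj: dict mapping each node (equation/unknown index) to the set of
    its neighbors, i.e. the nonzero off-diagonal positions. Returns a
    permutation of the nodes that tends to reduce the bandwidth.
    """
    order, visited = [], set()
    # Iterate over start nodes by increasing degree; the outer loop
    # also handles disconnected sparsity graphs.
    for start in sorted(adj, key=lambda v: (len(adj[v]), v)):
        if start in visited:
            continue
        visited.add(start)
        queue = deque([start])
        while queue:                    # breadth-first traversal
            v = queue.popleft()
            order.append(v)
            for w in sorted(adj[v] - visited,
                            key=lambda u: (len(adj[u]), u)):
                visited.add(w)
                queue.append(w)
    return order[::-1]                  # reversed order: the RCM variant

def bandwidth(adj, order):
    """Half-bandwidth of the matrix under the given node ordering."""
    pos = {v: i for i, v in enumerate(order)}
    return max((abs(pos[v] - pos[w]) for v in adj for w in adj[v]),
               default=0)
```

For instance, a chain of five unknowns numbered so that the chain runs 0-4-1-3-2 has bandwidth 4 in its natural ordering, while the reordered system has bandwidth 1, so banded-solver storage and work drop accordingly.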