A great many algorithms are available for all these problems. The choice of algorithm
depends on three things:
- The structure of A has a large influence on the algorithm. For example,
solving Ax=b using Gaussian elimination takes about 2n^3/3 flops (floating point
operations) if A is a dense n-by-n matrix. If A is symmetric and positive definite,
we can use the Cholesky algorithm, which costs half as much: about n^3/3 flops.
If A is triangular (i.e. either zero
above the diagonal or zero below the diagonal), then we can solve Ax=b in
only about n^2 flops by simple substitution, which is much faster.
Recognizing triangularity is easy, but other structures are more subtle. For example,
if A arises from solving certain elliptic partial differential equations
(like Poisson's equation), then Ax=b can actually be solved in O(n) flops using
multigrid, i.e. a constant amount of work per unknown, which is essentially as
little work as possible.
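To make the triangular case concrete, here is a minimal sketch of back substitution for an upper-triangular system (the function name and NumPy usage are illustrative, not taken from the text); each unknown x_i costs about 2(n-i) flops, giving roughly n^2 flops in total:

```python
import numpy as np

def back_substitute(U, b):
    """Solve Ux = b for upper-triangular U in about n^2 flops.

    Works backward from the last equation, which involves only x[n-1],
    substituting each solved component into the equations above it.
    """
    n = len(b)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        # b[i] minus the already-solved part, divided by the diagonal entry
        x[i] = (b[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x
```

For example, with U = [[2,1,1],[0,3,2],[0,0,4]] and b = U @ [1,2,3], the routine recovers x = [1,2,3]. Lower-triangular systems are solved the same way by forward substitution, starting from the first equation.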
- The desired accuracy limits which algorithms are suitable. For example,
if one needs to solve Ax=b to 16 decimals of accuracy (more on what this means below),
one might use a careful and expensive implementation of Gaussian elimination, but if
one only needs 3 decimal digits, a few iterations of a much cheaper iterative
method like conjugate gradients might suffice.
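The accuracy/cost trade-off above can be sketched in code. The following is a minimal, unpreconditioned conjugate gradient iteration for a symmetric positive definite A (an illustrative implementation, not one prescribed by the text); the tolerance `tol` is where the user trades accuracy for work, since a loose tolerance means fewer iterations:

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-3, max_iter=1000):
    """Iteratively solve Ax = b for symmetric positive definite A,
    stopping once the residual norm ||b - Ax|| drops below tol."""
    x = np.zeros_like(b)
    r = b - A @ x          # residual
    p = r.copy()           # search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)   # step length along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break               # accurate enough; stop early
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x
```

Each iteration costs only one matrix-vector product plus a few vector operations, so stopping after a few iterations (3 digits) is far cheaper than iterating to full machine precision (16 digits).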
- The computer system on which one wants to solve the problem influences
the algorithm choice greatly, and is the main topic of this chapter. Differences in
the computer architecture, available programming languages, and compiler quality
can make large differences in the way one implements a given algorithm;
we saw this in Table 1 above, and will see it later as well.
The computer system can influence algorithm performance
so much that one would choose very different algorithms on different machines.
New parallel architectures have also led to the invention of new and interesting