
10 Error Bounds for Solving Ax=b (continued)

In other words, to guarantee a small error we need the condition number κ(A) = ||A|| · ||A^-1|| not to be too large (one can show it is always at least 1), and we need the scaled residual ||r|| / (||A|| · ||x̂||) to be small, where x̂ is the computed solution and r = A·x̂ - b is its residual. When κ(A) is not large, we say A is well conditioned. If κ(A) is large, we say A is ill conditioned. (The exact definition of these terms depends on the context, but their use is widespread.) κ(A) depends only on the matrix A, not on the algorithm we use; we will describe cheap ways to estimate it below.
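
To make the distinction concrete, here is a minimal sketch in Python (assuming NumPy; the example matrices and size are illustrative choices, not from the text) contrasting a typically well-conditioned random matrix with the classically ill-conditioned Hilbert matrix:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 8

    # A random matrix is usually well conditioned.
    A_good = rng.standard_normal((n, n))

    # The Hilbert matrix H[i, j] = 1/(i + j + 1) is famously ill conditioned.
    A_bad = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])

    for name, A in [("random", A_good), ("Hilbert", A_bad)]:
        x_true = np.ones(n)
        b = A @ x_true
        x_hat = np.linalg.solve(A, b)
        kappa = np.linalg.cond(A)  # kappa(A) = ||A|| * ||A^-1|| in the 2-norm
        err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
        print(f"{name:8s} kappa(A) = {kappa:.1e}   relative error = {err:.1e}")

Both systems are solved by the same (stable) algorithm, yet the ill-conditioned one loses roughly log10(κ(A)) decimal digits of accuracy.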

The other property we need to guarantee a small error bound, the scaled residual ||r|| / (||A|| · ||x̂||) being small, is called numerical stability, and it is a property of the algorithm. A well designed algorithm will be numerically stable, and a poor algorithm will not be. More formally, we say an algorithm to solve Ax=b is numerically stable if the solution x̂ it computes satisfies ||r|| = ||A·x̂ - b|| = O(ε) · ||A|| · ||x̂|| (or equivalently, x̂ exactly solves a nearby problem (A+E)·x̂ = b with ||E|| = O(ε) · ||A||), where ε is close to the machine precision.
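
As an illustration (again assuming NumPy; the dimension and random data are arbitrary), one can check this criterion directly for np.linalg.solve, which performs Gaussian elimination with partial pivoting:

    import numpy as np

    rng = np.random.default_rng(1)
    n = 200
    A = rng.standard_normal((n, n))
    b = rng.standard_normal(n)

    x_hat = np.linalg.solve(A, b)  # LU factorization with partial pivoting
    r = A @ x_hat - b              # residual of the computed solution

    # Stability criterion: ||r|| / (||A|| * ||x_hat||) should be O(eps).
    scaled = np.linalg.norm(r) / (np.linalg.norm(A, 2) * np.linalg.norm(x_hat))
    print(f"scaled residual = {scaled:.1e}")
    print(f"machine eps     = {np.finfo(float).eps:.1e}")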

Recall that the machine precision ε measures the roundoff error in individual floating point operations. In IEEE standard single precision floating point arithmetic, which has 23 bits to store the significant bits of a floating point number, ε = 2^-24 ≈ 6·10^-8. In IEEE standard double precision floating point arithmetic, which has 52 bits to store the significant bits of a floating point number, ε = 2^-53 ≈ 1.1·10^-16.
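
These constants can be queried directly from the IEEE types; a tiny sketch (assuming NumPy) follows. Note that np.finfo reports eps as the gap between 1 and the next representable number (2^-23 and 2^-52), of which the rounding error bound quoted above is one half:

    import numpy as np

    print(np.finfo(np.float32).eps)  # 2**-23 ~ 1.2e-7; rounding unit 2**-24 ~ 6e-8
    print(np.finfo(np.float64).eps)  # 2**-52 ~ 2.2e-16; rounding unit 2**-53 ~ 1.1e-16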