No matter what algorithm is used to solve **Ax=b**, one can compute
the residual **r** very cheaply given the
approximate solution **x̂**. Just use the following formula

**r = Ax̂ - b**.

In the case of dense matrices, this formula costs just 2n² flops to evaluate, much less than the 2n³/3 required by Gaussian elimination. With many iterative methods, **r** is available anyway. To summarize, one can bound the error in the computed solution by

**‖x̂ - x‖∞ / ‖x̂‖∞ ≤ ‖A⁻¹‖∞ · ‖r‖∞ / ‖x̂‖∞**.
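The residual computation and the resulting bound can be sketched in NumPy (an illustration, not from the text; here **A⁻¹** is formed explicitly only so the bound is computable in a few lines):

```python
import numpy as np

# Sketch: bound the error in a computed solution of Ax = b using the
# residual r = A @ x_hat - b, for a dense system with known solution.
rng = np.random.default_rng(0)
n = 50
A = rng.standard_normal((n, n))
x_true = np.ones(n)
b = A @ x_true

x_hat = np.linalg.solve(A, b)   # approximate solution (via LU)
r = A @ x_hat - b               # residual: about 2n^2 flops for dense A

norm = lambda v: np.linalg.norm(v, np.inf)
# A^-1 is formed only for illustration; in practice its norm is estimated.
bound = norm(np.linalg.inv(A)) * norm(r) / norm(x_hat)
error = norm(x_hat - x_true) / norm(x_hat)
assert error <= bound           # the bound always covers the true error
```

Since **x̂ - x = A⁻¹r** exactly, taking norms gives the inequality above, so the computed `error` can never exceed `bound`.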

It remains to describe how to estimate the condition number
**κ∞(A) = ‖A‖∞ · ‖A⁻¹‖∞**.
**‖A‖∞** is easy to compute from its definition,
**‖A‖∞ = maxᵢ Σⱼ |aᵢⱼ|**,
so we concentrate on estimating
**‖A⁻¹‖∞**.
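The definition can be applied directly as a maximum absolute row sum (a small NumPy check, not part of the text):

```python
import numpy as np

# ||A||_inf is the maximum absolute row sum, computable in O(n^2) flops.
A = np.array([[ 1.0, -7.0,  2.0],
              [ 3.0,  4.0, -1.0],
              [-2.0,  5.0,  6.0]])
norm_inf = np.abs(A).sum(axis=1).max()
print(norm_inf)   # 13.0, from the last row: |-2| + |5| + |6| = 13
```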
An obvious approach is to compute **A⁻¹** explicitly, and
then compute its infinity norm. This approach would cost
twice again as much as solving **Ax=b** in the first place, and
so is not done. Instead, we settle for approximations to
**‖A⁻¹‖∞** which can be computed very cheaply
once the factorization **PA=LU** has been produced by
Gaussian elimination. In fact, one can usually estimate
**‖A⁻¹‖∞** to within a factor of two with just
**O(n²)** work, given **PA=LU**. Since computing **PA=LU** costs
**O(n³)** using Gaussian elimination, estimating the
condition number is a negligible and worthwhile extra cost.
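A rough sketch of such an estimator, in the style of Hager's method as used in LAPACK's condition estimators: it bounds **‖A⁻¹‖₁** using only a few pairs of triangular solves with the LU factors, each costing **O(n²)** (the infinity norm is handled the same way, applied to **Aᵀ**). This uses SciPy's `lu_factor`/`lu_solve` and omits the refinements of the production code:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def hager_inv_norm1(lu_piv, n, iters=5):
    """Estimate ||A^{-1}||_1 from PA=LU, O(n^2) work per iteration."""
    x = np.full(n, 1.0 / n)                 # start with ||x||_1 = 1
    est = 0.0
    for _ in range(iters):
        y = lu_solve(lu_piv, x)             # y = A^{-1} x (two triangular solves)
        est = np.linalg.norm(y, 1)          # lower bound on ||A^{-1}||_1
        xi = np.where(y >= 0, 1.0, -1.0)    # subgradient of ||.||_1 at y
        z = lu_solve(lu_piv, xi, trans=1)   # z = A^{-T} xi
        j = int(np.argmax(np.abs(z)))
        if abs(z[j]) <= z @ x:              # no unit vector improves the bound
            break
        x = np.zeros(n)
        x[j] = 1.0                          # restart from the best e_j
    return est

rng = np.random.default_rng(1)
n = 40
A = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned test matrix
lu_piv = lu_factor(A)
est = hager_inv_norm1(lu_piv, n)
exact = np.linalg.norm(np.linalg.inv(A), 1)       # O(n^3) check, for comparison only
```

The estimate is always a lower bound on the true norm (it is **‖A⁻¹x‖₁** for some unit vector **x**), and in practice it is usually within a small factor of it.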
Note that computing the condition number to within a factor
of two is more than accurate enough for an error bound; indeed,
an order-of-magnitude estimate is enough to say how many
correct decimal places are in the answer. The algorithm for
estimating **‖A⁻¹‖∞** is described in
[113,114,115], and implemented in
LAPACK. The routines which compute error bounds have
the same names as above, but with an 'x' appended. For example,
the routine that solves **Ax=b** for general, dense **A** and
computes error bounds is called `sgesvx`.

(See exercise 13.)