Note that the original $a_{22} = 1$ has been entirely ``lost'' from the computation by subtracting
$10^4$ from it. We would have gotten the same LU factors whether
$a_{22}$ had been 1, 0, $-2$,
or any number such that $\mathrm{fl}(a_{22} - 10^4) = -10^4$.
Since the algorithm proceeds to work only with **L** and **U**, it will
get the same answer for all these different values of $a_{22}$, which correspond to completely different
matrices **A** and so completely different solutions $\mathbf{x} = \mathbf{A}^{-1}\mathbf{b}$; there is no way to guarantee an accurate answer.
This is called *numerical instability*,
since **L** and **U** are *not* the exact factors of a matrix **A+E** close to **A**
(another way to say this is that $\|\mathbf{A} - \mathbf{L}\mathbf{U}\|$ is about as
large as $\|\mathbf{A}\|$).
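This loss of information can be replayed numerically. The following is a minimal sketch, assuming the example matrix $\mathbf{A} = \begin{pmatrix} 10^{-4} & 1 \\ 1 & 1 \end{pmatrix}$ and 3-decimal-digit floating-point arithmetic; these specific values come from the standard version of this example and do not appear verbatim in this excerpt.

```python
import math

def fl(x):
    """Round x to 3 significant decimal digits, simulating low-precision arithmetic."""
    if x == 0:
        return 0.0
    d = math.floor(math.log10(abs(x)))
    scale = 10.0 ** (d - 2)          # keep 3 significant digits
    return round(x / scale) * scale

# Assumed example matrix A = [[1e-4, 1], [1, 1]]
a11, a12, a21, a22 = 1e-4, 1.0, 1.0, 1.0

# Gaussian elimination without pivoting, rounding every operation:
l21 = fl(a21 / a11)                  # 10^4, a huge multiplier
u11, u12 = a11, a12
u22 = fl(a22 - fl(l21 * u12))        # fl(1 - 10^4) = -10^4; the original 1 is lost

# Multiply the computed factors back together exactly: the (2,2) entry of L*U
# is 0, not 1, so A - LU has an entry as large as ||A|| itself.
lu22 = l21 * u12 + u22
print(u22, lu22, a22 - lu22)         # -10000.0 0.0 1.0
```

Note that `u22` would be exactly the same had `a22` been 1, 0, or $-2$: the factorization cannot distinguish these matrices.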

Let us see what happens when we go on to solve $\mathbf{A}\mathbf{x} = \mathbf{b}$
for **x** using this LU factorization. The correct answer is $\mathbf{x} \approx [1, 1]^T$.
Instead we get the following. Solving $\mathbf{L}\mathbf{y} = \mathbf{b}$ yields $y_1 = 1$ and
$y_2 = \mathrm{fl}(2 - 10^4 \cdot 1) = -10^4$; note that the value 2 has been
``lost'' by subtracting $10^4$ from it. Solving $\mathbf{U}\mathbf{x} = \mathbf{y}$ yields
$x_2 = \mathrm{fl}\big((-10^4)/(-10^4)\big) = 1$ and
$x_1 = \mathrm{fl}\big((1 - 1 \cdot x_2)/10^{-4}\big) = 0$, a completely erroneous solution.
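The two triangular solves can be replayed the same way. A minimal sketch, again assuming the standard data for this example ($\mathbf{b} = [1, 2]^T$ and the rounded factors with $\ell_{21} = 10^4$, $u_{22} = -10^4$), none of which appears verbatim in this excerpt:

```python
import math

def fl(x):
    """Round x to 3 significant decimal digits, simulating low-precision arithmetic."""
    if x == 0:
        return 0.0
    d = math.floor(math.log10(abs(x)))
    scale = 10.0 ** (d - 2)
    return round(x / scale) * scale

# Assumed right-hand side and computed (rounded) factors:
b1, b2 = 1.0, 2.0
l21 = 1e4
u11, u12, u22 = 1e-4, 1.0, -1e4

# Forward substitution, L y = b: the 2 in b2 is lost when 10^4 is subtracted.
y1 = b1
y2 = fl(b2 - fl(l21 * y1))           # fl(2 - 10^4) = -10^4

# Back substitution, U x = y:
x2 = fl(y2 / u22)                    # fl(-10^4 / -10^4) = 1
x1 = fl(fl(y1 - fl(u12 * x2)) / u11) # fl((1 - 1) / 1e-4) = 0

print(x1, x2)                        # 0.0 1.0, versus the true answer ~ [1, 1]
```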

The intuition behind this is that if intermediate quantities in computing the
product $\mathbf{L} \cdot \mathbf{U}$ are very large compared to the entries of **A**, then this can lead to
loss of accuracy in the factorization of **A**, because large entries of **L** and **U** get
added to small entries of **A**, and these small entries of **A** get rounded away. If the
intermediate quantities in the product $\mathbf{L} \cdot \mathbf{U}$ were comparable to those of
**A**, however, we would expect a tiny error **E=A-LU** in the factorization.
This is what pivoting tries to guarantee.