We couch most of our discussion in terms of the Cholesky factorization,
**A** = **LL**ᵀ, where **A** is symmetric positive definite (SPD) and **L** is
lower triangular with positive diagonal entries. We focus on Cholesky
factorization primarily because this allows us to discuss parallelism
in relative isolation, without the additional complications of pivoting
for numerical stability. Most of the lessons learned are also
applicable to other matrix factorizations, such as *LU* and *QR*.
We do not attempt an exhaustive survey of research in this area, which is
currently very active; instead, we refer the reader to existing
surveys, such as [54].
Our main point in the current
discussion is to explain how the sparse case differs from the dense
case, and examine the performance implications of those differences.
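To make the object of discussion concrete, the following is a minimal sketch of an unblocked dense Cholesky factorization (the dense baseline against which the sparse case is contrasted); the function name and the small example matrix are illustrative choices, not taken from the text. Note that no row interchanges appear anywhere in the loop, which is the property the passage highlights: SPD matrices need no pivoting for stability.

```python
import numpy as np

def cholesky(A):
    """Unblocked dense Cholesky: returns lower-triangular L with A = L @ L.T.

    Assumes A is symmetric positive definite, so no pivoting is required;
    the only data dependence is on previously computed columns of L.
    """
    n = A.shape[0]
    L = np.zeros_like(A, dtype=float)
    for j in range(n):
        # Diagonal entry: remove contributions of columns 0..j-1 of L.
        d = A[j, j] - L[j, :j] @ L[j, :j]
        L[j, j] = np.sqrt(d)
        # Entries of column j below the diagonal.
        for i in range(j + 1, n):
            L[i, j] = (A[i, j] - L[i, :j] @ L[j, :j]) / L[j, j]
    return L

# Illustrative SPD matrix (hypothetical example, not from the text).
A = np.array([[4.0, 2.0, 2.0],
              [2.0, 5.0, 3.0],
              [2.0, 3.0, 6.0]])
L = cholesky(A)
```

In the sparse setting the same recurrence applies, but most of the inner-product terms vanish, which changes both the data structures and the available parallelism.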