The CG method was originally designed to minimize convex
quadratic functions but, through several variations, has been
extended to the general case [23,52].
The first iteration in CG is
the same as in SD, but successive directions are constructed so that
they form a set of mutually conjugate vectors with respect to the
(positive-definite) Hessian of a general convex quadratic
function. Whereas the rate of convergence for SD depends on the
ratio of the extremal eigenvalues of the Hessian, the convergence
properties of CG depend
on the entire matrix spectrum. Faster convergence is expected when
the eigenvalues are clustered. In exact arithmetic, convergence is
obtained in at most **n** steps. In particular, if the Hessian has **m** distinct
eigenvalues, convergence is obtained in at most **m** iterations.
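The termination property above can be illustrated with a minimal sketch of CG applied to a convex quadratic f(x) = ½ xᵀAx − bᵀx (equivalently, solving Ax = b). The matrix below is a hypothetical example constructed to have exactly two distinct eigenvalues, so the iteration should stop after about two steps; the function name and test problem are illustrative, not taken from the text.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10):
    """Minimal CG sketch for A x = b with A symmetric positive definite."""
    n = len(b)
    x = np.zeros(n)
    r = b - A @ x          # residual = negative gradient of the quadratic
    p = r.copy()           # first direction coincides with steepest descent
    rs_old = r @ r
    iters = 0
    for _ in range(n):     # at most n steps in exact arithmetic
        Ap = A @ p
        alpha = rs_old / (p @ Ap)   # exact line search along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        iters += 1
        if np.sqrt(rs_new) < tol:
            break
        beta = rs_new / rs_old      # keeps p conjugate to earlier directions
        p = r + beta * p
        rs_old = rs_new
    return x, iters

# Diagonal matrix with eigenvalues {1, 4}: m = 2 distinct eigenvalues.
A = np.diag([1.0, 1.0, 4.0, 4.0])
b = np.array([1.0, 2.0, 3.0, 4.0])
x, iters = conjugate_gradient(A, b)
print(iters)                  # 2 iterations, matching m = 2
print(np.allclose(A @ x, b))  # True
```

With a clustered or low-multiplicity spectrum such as this one, CG terminates far sooner than the worst-case **n** steps, which is the practical content of the clustering remark above.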