SD is simple to implement and requires modest storage.
However, progress toward a minimum may be very slow, especially
near a solution. The convergence rate of SD when applied to a convex quadratic
function $f(x) = \frac{1}{2}x^T A x - b^T x$ (with $A$ symmetric positive-definite)
is only *linear*. The associated convergence ratio
is no greater than

$$\left(\frac{\lambda_{\max} - \lambda_{\min}}{\lambda_{\max} + \lambda_{\min}}\right)^2 = \left(\frac{\kappa - 1}{\kappa + 1}\right)^2,$$

where $\lambda_{\max}$ and $\lambda_{\min}$ are the largest and smallest eigenvalues of $A$, and $\kappa = \lambda_{\max}/\lambda_{\min}$ is the condition number of $A$. Since the convergence ratio measures the reduction of the error at every step (for a linear rate), the relevant SD value can be arbitrarily close to 1 when $\kappa$ is large. Thus, the SD search vectors may in some cases exhibit very inefficient paths toward a solution, especially close to the solution.
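This slow linear rate is easy to observe numerically. The sketch below (an illustration, not from the source; the matrix, starting point, and iteration count are chosen for demonstration) runs SD with exact line search on a convex quadratic $f(x) = \frac{1}{2}x^T A x$ with $\kappa = 100$, starting from a direction that realizes the worst-case bound:

```python
import numpy as np

# Illustrative example (assumed setup): steepest descent with exact line
# search on the convex quadratic f(x) = 0.5 * x^T A x, minimized at x* = 0.
A = np.diag([1.0, 100.0])                  # eigenvalues 1 and 100 -> kappa = 100
kappa = 100.0
bound = ((kappa - 1) / (kappa + 1)) ** 2   # theoretical per-step ratio bound

f = lambda v: 0.5 * v @ A @ v

x = np.array([100.0, 1.0])                 # worst-case starting direction
ratios = []
for _ in range(50):
    g = A @ x                              # gradient of the quadratic
    alpha = (g @ g) / (g @ A @ g)          # exact line-search step length
    x_new = x - alpha * g
    ratios.append(f(x_new) / f(x))         # per-step reduction of f(x) - f*
    x = x_new

print(f"bound: {bound:.4f}, observed ratio: {ratios[-1]:.4f}")
# -> bound: 0.9606, observed ratio: 0.9606
```

From this starting point the iterates zigzag between two directions and the error in $f$ shrinks by only a factor of about $0.96$ per step, matching $((\kappa-1)/(\kappa+1))^2$ exactly; reducing the error by a single decimal digit takes roughly 58 iterations.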

Minimization performance for Rosenbrock's and Beale's functions with
**n = 2** is shown in Figures 9 and 13
for SD and the other methods
discussed in this section.