Before we continue with the algorithmic details of gradient methods, some programming tips are in order. To use gradient and second-derivative minimization methods, the user must supply derivative subroutines. Errors in the code for the function and the derivatives are common. Such errors, especially in the derivatives, are often difficult to find, since the function value may still decrease during minimization. It is therefore essential to test the derivative routines of the objective function before applying minimization routines.

We have developed one such testing procedure
using a Taylor series approach. Our general subroutine `TESTGH` tests
the compatibility of the gradient and Hessian components of a
function against the function routine. Its calling sequence is:

`TESTGH(N,XC,FC,GC,Y,YHY,VEC)`

where `N` is the dimension; `XC(N)` the
current vector of control variables; `FC` the function value at `XC`;
`GC(N)` the gradient vector at `XC`; `Y(N)`
a random perturbation vector;
`YHY` the inner product of `Y` with `HY`
(the product of the Hessian
evaluated at `XC` and the vector `Y`); and `VEC(N)`
a work vector. On
input, all quantities except for `VEC` must be given. The vector `Y`
should be chosen so that the function value at `XC+Y` is in a
reasonable range for the problem (see below).