The simplest test for optimality of each iterate $x_k$ in the basic local minimizer Algorithm 2.1 involves the following gradient condition:

$$\|g(x_k)\| \leq \epsilon_g \,\bigl(1 + |f(x_k)|\bigr). \qquad (26)$$

The parameter $\epsilon_g$ is a small positive number such as the
square root of *machine precision*, $\sqrt{\epsilon_m}$
($\epsilon_m$ is the smallest number such that the floating-point
value of $1 + \epsilon_m$
is greater than the floating-point representation of 1).
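As a rough illustration, $\epsilon_m$ can be found by repeated halving until $1 + \epsilon$ no longer exceeds 1 in floating point. A minimal Python sketch (the function name is illustrative, not from the text):

```python
import math

def machine_epsilon():
    # Smallest eps such that the floating-point value of 1 + eps
    # exceeds the floating-point representation of 1.
    eps = 1.0
    while 1.0 + eps / 2.0 > 1.0:
        eps /= 2.0
    return eps

eps_m = machine_epsilon()   # 2**-52 in IEEE double precision
eps_g = math.sqrt(eps_m)    # a typical gradient tolerance
```

In IEEE double precision this gives $\epsilon_m = 2^{-52} \approx 2.22 \times 10^{-16}$, so $\epsilon_g \approx 1.5 \times 10^{-8}$.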
For large-scale problems, the Euclidean
norm divided by $\sqrt{n}$, $\|g\|_2/\sqrt{n}$, or the *max* norm,
$\|g\|_\infty = \max_{1 \le i \le n} |g_i|$,
may be used to replace $\|g(x_k)\|$
on the left side of equation (26).
The scaled Euclidean norm measures an average gradient element rather than the total gradient magnitude.
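The two large-scale alternatives can be sketched as follows (a hypothetical helper for illustration, not part of the algorithm in the text):

```python
import math

def grad_measures(g):
    # Euclidean norm divided by sqrt(n): an average gradient element.
    n = len(g)
    euclid_avg = math.sqrt(sum(gi * gi for gi in g)) / math.sqrt(n)
    # Max norm: the largest gradient element in absolute value.
    max_norm = max(abs(gi) for gi in g)
    return euclid_avg, max_norm
```

For example, for $g = (3, -4)$ the scaled Euclidean norm is $5/\sqrt{2} \approx 3.54$, while the max norm is $4$.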

To obtain a measure of progress at each iteration (function reduction, change in $x$, etc.) and possibly halt computations if necessary, the following combination can be used [29]:

$$f(x_{k-1}) - f(x_k) < \epsilon_f \,\bigl(1 + |f(x_k)|\bigr), \qquad (27)$$
$$\|x_{k-1} - x_k\| < \sqrt{\epsilon_f}\,\bigl(1 + \|x_k\|\bigr), \qquad (28)$$
$$\|g(x_k)\| \leq \epsilon_f^{1/3}\,\bigl(1 + |f(x_k)|\bigr). \qquad (29)$$

Here $\epsilon_f$ is a small number that specifies the desired accuracy in the function value. Each step of Algorithm 2.1 can then check conditions (26) and (27), (28), (29). For $k = 0$, only the first, condition (26), is checked, since conditions (27) and (28) require a previous iterate. If either the triplet (27), (28), (29) or condition (26) holds, the iteration process can be halted. While the conditions above are quite useful in practice, many minimization algorithms incorporate only a gradient-norm test in some form.
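As one possible reading of this stopping strategy, the tests can be combined as below. The exact inequalities follow standard Gill–Murray–Wright-style relative tests and are an assumption, as are the function and argument names:

```python
import math

def converged(f_prev, f_curr, x_prev, x_curr, g_curr, eps_f, eps_g):
    # Halt if either the gradient condition (26) holds, or the
    # triplet of progress conditions (27), (28), (29) all hold.
    fval = abs(f_curr)
    gnorm = math.sqrt(sum(gi * gi for gi in g_curr))

    # (26): gradient norm small relative to 1 + |f|
    grad_test = gnorm <= eps_g * (1.0 + fval)

    # (27): function reduction has become small
    f_test = (f_prev - f_curr) < eps_f * (1.0 + fval)

    # (28): change between successive iterates is small
    step = math.sqrt(sum((a - b) ** 2 for a, b in zip(x_prev, x_curr)))
    xnorm = math.sqrt(sum(xi * xi for xi in x_curr))
    x_test = step < math.sqrt(eps_f) * (1.0 + xnorm)

    # (29): gradient norm small on the eps_f**(1/3) scale
    g_test = gnorm <= eps_f ** (1.0 / 3.0) * (1.0 + fval)

    return grad_test or (f_test and x_test and g_test)
```

A caller would evaluate this after each step of the minimizer with the current and previous iterates; at $k = 0$ only the gradient branch is meaningful.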