Before continuing to consider individual optimization algorithms, we describe the conditions which hold at the optimum sought.
The strict definition of the global optimum $\mathbf{x}^*$ of $f(\mathbf{x})$ is that
$$ f(\mathbf{x}^*) \le f(\mathbf{x}) \quad \forall\, \mathbf{x} \in \mathcal{F}, $$
where $\mathcal{F}$ is the set of feasible values of the control variables $\mathbf{x}$. Obviously, for an unconstrained problem $\mathcal{F}$ is infinitely large.
A point $\mathbf{x}^*$ is a strong local minimum of $f(\mathbf{x})$ if
$$ f(\mathbf{x}^*) < f(\mathbf{x}) \quad \forall\, \mathbf{x} \in N(\mathbf{x}^*, \delta),\ \mathbf{x} \ne \mathbf{x}^*, $$
where $N(\mathbf{x}^*, \delta)$ is defined as the set of feasible points contained in the neighborhood of $\mathbf{x}^*$, i.e., within some arbitrarily small distance $\delta$ of $\mathbf{x}^*$. For $\mathbf{x}^*$ to be a weak local minimum only an inequality need be satisfied:
$$ f(\mathbf{x}^*) \le f(\mathbf{x}) \quad \forall\, \mathbf{x} \in N(\mathbf{x}^*, \delta). $$
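These definitions can be tested numerically by sampling points in a small neighborhood of a candidate minimum. The sketch below is illustrative only (the function name and sampling scheme are our own, not from the text): it checks whether $f(\mathbf{x}^*) < f(\mathbf{x})$ holds for randomly sampled neighbors.

```python
import numpy as np

def is_strong_local_min(f, x_star, delta=1e-3, n_samples=1000, seed=0):
    """Numerically test the strong-local-minimum condition
    f(x*) < f(x) for sampled x within a delta-neighborhood of x*."""
    rng = np.random.default_rng(seed)
    x_star = np.atleast_1d(np.asarray(x_star, dtype=float))
    f_star = f(x_star)
    for _ in range(n_samples):
        step = rng.uniform(-delta, delta, size=x_star.shape)
        if not np.any(step):
            continue  # skip the (measure-zero) case x == x*
        if f(x_star + step) <= f_star:
            return False
    return True

# f(x) = x^2 has a strong local minimum at 0 ...
print(is_strong_local_min(lambda x: float(x[0]**2), [0.0]))              # True
# ... while f(x) = max(|x| - 1, 0) is flat near 0: only a weak minimum
print(is_strong_local_min(lambda x: max(abs(x[0]) - 1.0, 0.0), [0.0]))   # False
```

The second example illustrates why the distinction matters: on the flat region every point satisfies the weak inequality, but none satisfies the strict one.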
More useful definitions, i.e., more easily identified optimality conditions, can be provided if $f(\mathbf{x})$ is a smooth function with continuous first and second derivatives for all feasible $\mathbf{x}$. Then a point $\mathbf{x}^*$ is a stationary point of $f(\mathbf{x})$ if
$$ \nabla f(\mathbf{x}^*) = 0, $$
where $\nabla f$ is the gradient of $f(\mathbf{x})$. This first-derivative vector has components given by
$$ (\nabla f)_i = \frac{\partial f}{\partial x_i}. $$
The point $\mathbf{x}^*$ is also a strong local minimum of $f(\mathbf{x})$ if the Hessian matrix $\mathbf{H}$, the symmetric matrix of second derivatives with components
$$ H_{ij} = \frac{\partial^2 f}{\partial x_i \,\partial x_j}, $$
is positive-definite at $\mathbf{x}^*$, i.e., if
$$ \mathbf{v}^{\mathrm{T}} \mathbf{H}(\mathbf{x}^*)\, \mathbf{v} > 0 \quad \text{for all } \mathbf{v} \ne 0. $$
This condition is a generalization of convexity, or positive curvature, to higher dimensions.
Figure 1 illustrates the different types of stationary points for unconstrained univariate functions.
As shown in Figure 2, the situation is slightly more complex for constrained optimization problems. The presence of a constraint boundary, in Figure 2 in the form of a simple bound on the permitted values of the control variable, can cause the global minimum to lie at an extreme value of the variable (an endpoint of the feasible interval) rather than at a true stationary point. Some methods of treating constraints transform the optimization problem into an equivalent unconstrained one with a different objective function. Such techniques are discussed in Section 4.
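A one-dimensional sketch of this situation (the function and bound are invented for illustration): $f(x) = (x - 3)^2$ has its stationary point at $x = 3$, but with the bound $x \le 1$ the constrained minimum sits at the endpoint $x = 1$, where the derivative is nonzero.

```python
import numpy as np

f = lambda x: (x - 3.0)**2
df = lambda x: 2.0 * (x - 3.0)   # analytic derivative of f

xs = np.linspace(0.0, 1.0, 100001)   # feasible region: the bound restricts x to [0, 1]
x_min = xs[np.argmin(f(xs))]         # constrained minimizer found by grid search

print(x_min)       # 1.0  -- the boundary, not an interior stationary point
print(df(x_min))   # -4.0 -- gradient nonzero at the constrained minimum
```

The nonzero derivative at the minimizer is exactly why boundary extrema require separate treatment from stationary points.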