Techniques for constrained nonlinear programming problems are clearly more challenging than their unconstrained analogs, and no single best approach has yet emerged. When the constraints are linear, the problems are simpler, and a better framework for analysis is available. In general, algorithms for constrained problems are based on the optimization methods for the unconstrained case, as introduced in the previous section. The basic approach for solving a constrained nonlinear multivariate problem is to reformulate it as a sequence of related subproblems, each of which can be solved by more familiar methods for the unconstrained case. For example, Newton methods for unconstrained problems based on a local quadratic approximation to the objective function may be adapted to the constrained problem by restricting the region in which the quadratic model is considered valid. This restriction is similar in flavor to trust-region methods, mentioned in section 2.2.2. A search vector for the objective function might be chosen by a procedure similar to that described for the unconstrained case (e.g., a quasi-Newton method), but the step length along it might be restricted so that the next iterate is a feasible point, i.e., satisfies the constraints of the problem.
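As a concrete illustration of the last idea, the following sketch applies a steepest-descent search direction but truncates the step length so that every iterate stays feasible. It assumes simple box constraints (lower <= x <= upper); the function names, the fixed nominal step, and the gradient-based direction are illustrative choices, not a prescription from the text.

```python
import numpy as np

def max_feasible_step(x, d, lower, upper):
    """Largest t >= 0 such that x + t*d stays inside the box
    lower <= x <= upper (the step-length restriction described above)."""
    t_max = np.inf
    for xi, di, lo, hi in zip(x, d, lower, upper):
        if di > 0:
            t_max = min(t_max, (hi - xi) / di)
        elif di < 0:
            t_max = min(t_max, (lo - xi) / di)
    return max(t_max, 0.0)

def feasible_descent(grad, x0, lower, upper, step=0.1, iters=100):
    """Steepest-descent iteration (an illustrative stand-in for a
    quasi-Newton direction) whose step length is cut back so each
    iterate remains feasible for the box constraints."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        d = -grad(x)                                  # descent direction
        t = min(step, max_feasible_step(x, d, lower, upper))
        x = x + t * d
    return x

# Minimize f(x) = ||x - (2, 2)||^2 subject to 0 <= x <= 1;
# the constrained minimizer sits on the boundary at (1, 1).
x_star = feasible_descent(lambda x: 2 * (x - 2.0),
                          x0=[0.5, 0.5],
                          lower=[0.0, 0.0], upper=[1.0, 1.0])
```

In this toy run the unconstrained minimizer (2, 2) lies outside the feasible box, so the step-length restriction is what forces the iterates to stop at the boundary point (1, 1) rather than leaving the feasible region.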

An excellent introduction to the optimization of constrained nonlinear functions can be found in the chapter by Gill et al. (pp. 171-210) in the optimization volume edited by Nemhauser et al. [48].