Specific parallel strategies for optimization algorithms are quite recent and not yet unified. For parallel computers, natural improvements may involve the following ideas: (1) performing multiple minimization procedures concurrently from different starting points; (2) evaluating the function and derivatives concurrently at different points (e.g., for a finite-difference approximation of the gradient or Hessian, or for an improved line search); (3) performing matrix operations or decompositions in parallel for specially structured systems (e.g., Cholesky factorizations of block-band preconditioners).
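Idea (2) can be made concrete with a minimal sketch in Python: the n + 1 function evaluations needed for a forward-difference gradient are independent, so they can be dispatched to a worker pool and collected in order. The function names (`fd_gradient_parallel`, `rosenbrock`) and the use of `concurrent.futures` are illustrative assumptions, not part of the original text; for genuinely CPU-bound evaluations one would typically swap `ThreadPoolExecutor` for `ProcessPoolExecutor`.

```python
from concurrent.futures import ThreadPoolExecutor

import numpy as np


def rosenbrock(x):
    # Classic nonconvex test function, used here only for illustration.
    return float(np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1.0 - x[:-1]) ** 2))


def fd_gradient_parallel(f, x, h=1e-6, max_workers=None):
    """Forward-difference gradient with all n + 1 evaluations run concurrently.

    Each coordinate perturbation x + h * e_i is an independent evaluation of f,
    so the pool may execute them in parallel; results are gathered in order.
    """
    x = np.asarray(x, dtype=float)
    n = x.size
    points = [x]
    for i in range(n):
        xp = x.copy()
        xp[i] += h
        points.append(xp)
    # ThreadPoolExecutor keeps the sketch self-contained; a process pool would
    # avoid the GIL for expensive, pure-Python objective functions.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        vals = list(pool.map(f, points))
    f0 = vals[0]
    return np.array([(vals[i + 1] - f0) / h for i in range(n)])


grad = fd_gradient_parallel(rosenbrock, np.array([1.2, 1.0, 0.8]))
```

The same pool pattern covers idea (1) as well: mapping a serial minimizer over a list of different starting points and keeping the best result.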

With increased computer storage and speed, the feasible methods for solving very large nonlinear optimization problems arising in important applications (macromolecular structure, meteorology, economics) will undoubtedly expand considerably, making possible the solution of larger and far more complex problems in all fields of science and engineering.