Truncated Newton methods were introduced in the early 1980s [18] and have since gained popularity. They are based on the idea that an exact solution of the Newton equation at every step is unnecessary and can be computationally wasteful in the framework of a basic descent method. Any descent direction suffices when the objective function is not well approximated by a convex quadratic, and more effort in solving the Newton equation is warranted only as a solution to the minimization problem is approached. Their appeal for scientific applications lies in their ability to exploit function structure to accelerate convergence.

Thus, the Newton equation is solved only *approximately*: a nonzero
residual norm is allowed at each step, and its size is monitored
systematically according to the progress made.
This formulation leads to a doubly-nested iteration structure: for every
outer Newton iteration **k** there corresponds an inner loop that
computes the search direction.
As a computationally economical method for solving large
positive-definite linear systems, the preconditioned conjugate gradient
(PCG) method is the most suitable choice
for the inner loop in this context.
Function structure is introduced by using a preconditioner
that is a sparse approximation to the Hessian.
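
The doubly-nested structure described above can be sketched as follows. This is a minimal illustration, assuming a NumPy environment; the function and routine names (`truncated_newton`, `pcg`), the forcing sequence, and the use of the Hessian diagonal as the sparse preconditioner are illustrative choices, not details taken from the text.

```python
import numpy as np

def pcg(H, b, M_inv, tol, max_iter=50):
    """Preconditioned CG for H x = b, stopped once the residual
    norm drops below tol (the "truncation")."""
    x = np.zeros_like(b)
    r = b - H @ x
    z = M_inv * r          # apply diagonal preconditioner
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        if np.linalg.norm(r) <= tol:   # truncation test
            break
        Hp = H @ p
        alpha = rz / (p @ Hp)
        x += alpha * p
        r -= alpha * Hp
        z = M_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

def truncated_newton(f, grad, hess, x0, tol=1e-8, max_outer=50):
    x = x0.astype(float)
    for k in range(max_outer):          # outer Newton iteration k
        g = grad(x)
        gnorm = np.linalg.norm(g)
        if gnorm <= tol:
            break
        H = hess(x)
        # Forcing sequence: the inner accuracy tightens as the
        # gradient norm shrinks, so little effort is spent on the
        # Newton equation far from the solution.
        eta = min(0.5, np.sqrt(gnorm))
        # Diagonal of H as a simple sparse Hessian approximation.
        M_inv = 1.0 / np.maximum(np.abs(np.diag(H)), 1e-8)
        p = pcg(H, -g, M_inv, tol=eta * gnorm)   # inner PCG loop
        # Simple backtracking line search on the descent direction.
        t = 1.0
        while f(x + t * p) > f(x) and t > 1e-10:
            t *= 0.5
        x = x + t * p
    return x

# Usage: minimize the convex quadratic f(x) = 0.5 x^T A x - b^T x,
# whose minimizer solves A x = b.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
f = lambda x: 0.5 * x @ A @ x - b @ x
grad = lambda x: A @ x - b
hess = lambda x: A
x_star = truncated_newton(f, grad, hess, np.zeros(2))
```

On a quadratic the outer loop converges in a few steps, since each inner PCG solve already delivers a near-exact Newton direction; the truncation pays off on general nonlinear functions, where an accurate solve far from the minimizer would be wasted effort.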