Once the shift vector $\vec{s}$ has been calculated the fraction of this
vector, or $\lambda$, must be determined. We want to find a value for
$\lambda$ which will cause $f(\vec{x} + \lambda\vec{s})$ to be as small
as possible. Since we know $\vec{x}$ and now know $\vec{s}$, the problem
is a one dimensional minimization problem. The strategy used in TNT to
find $\lambda$ is to evaluate the function at a series of trial values
of $\lambda$, fit a parabola through the resulting points, take the
location of the minimum of that parabola as the next trial value, and
stop when the predicted improvement becomes insignificant.
The particular definition of ``insignificant'' used is ``when the predicted
improvement in function value is less than five percent''. The function
value at the current point is smaller than it was at the start of the search.
The function value should be even lower at the new value of $\lambda$.
From the parabola we can predict how much lower the function value will
be at the new point. This difference is the predicted
improvement in function value. If the predicted improvement is small there
is little reason to deliberate further over the merits of different values
for $\lambda$.
It is time to start an entirely new cycle.
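The parabola-fitting search described above can be sketched as follows. The trial points, the iteration limit, and the choice of baseline for the five percent test are assumptions made for illustration; they are not TNT's actual choices.

```python
import numpy as np

def parabolic_line_search(f, x, s, trials=(0.0, 0.5, 1.0), frac=0.05, max_iter=8):
    """One dimensional search for the shift fraction along s.

    Repeatedly fits a parabola through three trial points and moves to
    its minimum, stopping when the predicted improvement is
    "insignificant" (here taken as less than five percent of the
    decrease achieved so far -- an assumed baseline).
    """
    alphas, values = list(trials), [f(x + a * s) for a in trials]
    f_start = values[0]                  # function value with no shift applied
    for _ in range(max_iter):
        # Fit a parabola through the three current trial points.
        coeffs = np.polyfit(alphas, values, 2)
        if coeffs[0] <= 0:
            break                        # parabola opens downward: no interior minimum
        a_new = -coeffs[1] / (2.0 * coeffs[0])    # location of the parabola's minimum
        predicted = np.polyval(coeffs, a_new)     # predicted function value there
        best = min(values)
        alphas.append(a_new)
        values.append(f(x + a_new * s))
        # Stop once the predicted improvement is insignificant.
        if best - predicted < frac * max(f_start - best, 1e-12):
            break
        # Keep the three lowest points for the next parabola fit.
        keep = np.argsort(values)[:3]
        alphas = [alphas[i] for i in keep]
        values = [values[i] for i in keep]
    return alphas[int(np.argmin(values))]
```

Only function values are needed inside the loop, which is what makes these searches cheap relative to computing the shift vector itself.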
The one dimensional minimization loops require the calculation of the value of the function, but no gradients or curvatures. They take a small proportion of the computer time of an entire cycle of refinement and are therefore called ``short loops''. In a fit of bad analogy, the name ``long loop'' has been given to the portion of a cycle which calculates the shift vector. A cycle of refinement consists of a single long loop and two short loops. If the approximations underlying the calculations are violated in some fashion, more short loops will be required. When additional short loops are executed there may be nothing wrong with the cycle, but it should be examined for errors.
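The division of a cycle into one long loop and its short loops can be drawn schematically. Every name below is a placeholder for illustration, not a TNT routine; the long loop is represented only as a callable that produces a shift vector.

```python
def refinement(x0, long_loop, short_search, n_cycles=3):
    """Schematic refinement driver: each cycle is one "long loop"
    (computes the shift vector, and so needs gradients and curvatures)
    followed by "short loops" (function values only) to pick the
    fraction of the shift to apply."""
    x = x0
    for _ in range(n_cycles):
        s = long_loop(x)                  # expensive: gradient and curvature work
        alpha, n_short = short_search(x, s)   # cheap: function values only
        if n_short > 2:
            # More than two short loops does not prove the cycle is wrong,
            # but it is a hint that the approximations were violated.
            print("warning: %d short loops in this cycle" % n_short)
        x = x + alpha * s
    return x
```

For example, with a long loop that returns an exact Newton step and a short search that always accepts the full shift in two loops, `refinement` converges in a single cycle.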