The function minimized in least-squares optimization is

\[
f(\mathbf{x}) = \sum_i \left( \frac{Q_i^{\mathrm{obs}} - Q_i(\mathbf{x})}{\sigma_i} \right)^2
\]

where \(Q_i^{\mathrm{obs}}\) is the \(i\)th observation, known with the standard deviation \(\sigma_i\), and \(Q_i(\mathbf{x})\) is the corresponding prediction of the model given a set of parameters, \(\mathbf{x}\). If the model is being restrained by several types of observations it is more convenient to use a separate term for each type. The function then becomes

\[
f(\mathbf{x}) = f_1(\mathbf{x}) + f_2(\mathbf{x}) + \cdots
\]

where \(f_1(\mathbf{x})\) is the sum over the first class of observations, and so forth.
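To make these definitions concrete, here is a minimal sketch (in Python, which TNT itself does not use) of evaluating such a function. The names `obs`, `sigma`, and `predict` are placeholders assumed for the example, not part of TNT.

```python
import numpy as np

def term_value(obs, sigma, predict, x):
    """One class of observations: sum_i ((obs_i - prediction_i(x)) / sigma_i)^2."""
    residuals = (obs - predict(x)) / sigma
    return np.sum(residuals ** 2)

def total_value(terms, x):
    """f(x) = f1(x) + f2(x) + ...  -- one term per class of observations."""
    return sum(term_value(obs, sigma, predict, x)
               for obs, sigma, predict in terms)
```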
In the long loop we want to calculate \(f(\mathbf{x})\) as well as its gradient and usually its curvature. The first and second derivative operators are both linear, which means that the derivative of a sum is the sum of the derivatives of its terms. We can therefore calculate the gradient and curvature for each type of observation separately and add them together later.
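For example, the gradient of each term follows from the chain rule, and a Gauss-Newton approximation gives a convenient curvature. The sketch below (hypothetical Python continuing the example above, with an assumed `jacobian` callback) accumulates both across terms exactly as the linearity argument allows.

```python
import numpy as np

def term_grad_curv(obs, sigma, predict, jacobian, x):
    """Gradient and Gauss-Newton curvature of one least-squares term.

    jacobian(x)[i, j] = d prediction_i / d x_j (an assumed callback).
    """
    r = (obs - predict(x)) / sigma        # weighted residuals
    J = jacobian(x) / sigma[:, None]      # sigma-weighted Jacobian
    grad = -2.0 * (J.T @ r)               # df/dx, by the chain rule
    curv = 2.0 * (J.T @ J)                # Gauss-Newton approximation
    return grad, curv

def total_grad_curv(terms, x, n_params):
    """Linearity of the derivative: sum the per-term gradients and curvatures."""
    grad = np.zeros(n_params)
    curv = np.zeros((n_params, n_params))
    for obs, sigma, predict, jacobian in terms:
        g, c = term_grad_curv(obs, sigma, predict, jacobian, x)
        grad += g
        curv += c
    return grad, curv
```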
Using this property, TNT has been broken into separate programs. There is a different program for each term of the function, and each calculates the function value, gradient, and curvature for its own term. The central control program, Shift, combines the information from each term and performs the tasks that coordinate the cycle of refinement. Separating the calculations into programs by observation type allows new data types to be used as restraints, or existing ones to be ignored, without modifying the programs that handle the minimization itself. To add a new type of observation you simply write a program that can calculate the value of that term and its gradient. If possible it should also calculate the curvature, so that the more powerful methods of function minimization can be used.
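One way to picture this division of labor is as an interface that every term program satisfies, with a controller playing the role of Shift. The sketch below is a hypothetical Python illustration of the idea, not TNT's actual program or file interface; the step-control details are invented for the example.

```python
import numpy as np

class Term:
    """Interface each observation-type program fulfills: report the value,
    gradient, and (if possible) curvature of its own term of the function."""
    def value(self, x):
        raise NotImplementedError
    def gradient(self, x):
        raise NotImplementedError
    def curvature(self, x):
        return None  # optional, but enables the more powerful minimizers

def shift_cycle(terms, x, rate=0.01):
    """Combine the terms' contributions (the role played by Shift) and
    apply one shift to the parameters."""
    grad = sum(t.gradient(x) for t in terms)
    curvs = [t.curvature(x) for t in terms]
    if all(c is not None for c in curvs):
        # curvature available from every term: take a Newton-like step
        return x + np.linalg.solve(sum(curvs), -grad)
    # otherwise fall back to steepest descent (step size assumed here)
    return x - rate * grad
```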