
Least Squares Refinement

The function minimized in least-squares optimization is

\[ f(\mathbf{p}) = \sum_i \left( \frac{q_i - Q_i(\mathbf{p})}{\sigma_i} \right)^2 \]

where $q_i$ is the $i$th observation, known with standard deviation $\sigma_i$, and $Q_i(\mathbf{p})$ is the corresponding prediction of the model given a set of parameters $\mathbf{p}$. If the model is being restrained by several types of observations it is more convenient to use a separate term for each type. The function then becomes

\[ f(\mathbf{p}) = \sum_{i \in 1} \left( \frac{q_i - Q_i(\mathbf{p})}{\sigma_i} \right)^2 + \sum_{i \in 2} \left( \frac{q_i - Q_i(\mathbf{p})}{\sigma_i} \right)^2 + \cdots \]

where the first sum runs over the first class of observations, and so forth.

In the long loop we want to calculate $f(\mathbf{p})$ as well as its gradient and, usually, its curvature. The first- and second-derivative operators are both linear, which means that the derivative of a sum is the sum of the derivatives of its terms. We can therefore calculate the gradient and curvature for each type of observation separately and add the results together afterwards.
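The additivity of the gradient and curvature can be checked numerically. The sketch below (not TNT code; the linear toy model and variable names are invented for illustration) computes the value, gradient, and Gauss-Newton curvature of each term separately and verifies that their sums equal the statistics of all observations treated as one term:

```python
import numpy as np

rng = np.random.default_rng(0)

def term_stats(q, sigma, A, p):
    """Value, gradient, and Gauss-Newton curvature of one term
    f_k(p) = sum_i ((q_i - (A p)_i) / sigma_i)^2  for a linear model Q(p) = A p."""
    r = (q - A @ p) / sigma          # weighted residuals
    Jw = A / sigma[:, None]          # weighted Jacobian dQ/dp
    return np.sum(r**2), -2.0 * Jw.T @ r, 2.0 * Jw.T @ Jw

p = np.array([1.0, -2.0, 0.5])

# two classes of observations (e.g. diffraction data and geometry
# restraints -- the split is illustrative only)
A1, q1, s1 = rng.normal(size=(6, 3)), rng.normal(size=6), np.full(6, 0.3)
A2, q2, s2 = rng.normal(size=(4, 3)), rng.normal(size=4), np.full(4, 0.1)

f1, g1, c1 = term_stats(q1, s1, A1, p)
f2, g2, c2 = term_stats(q2, s2, A2, p)

# because differentiation is linear, the totals are just the sums ...
f_tot, g_tot, c_tot = f1 + f2, g1 + g2, c1 + c2

# ... which match stacking all observations into one big term
fA, gA, cA = term_stats(np.concatenate([q1, q2]),
                        np.concatenate([s1, s2]),
                        np.vstack([A1, A2]), p)
assert np.isclose(f_tot, fA)
assert np.allclose(g_tot, gA) and np.allclose(c_tot, cA)
```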

Exploiting this property, TNT has been broken into separate programs. There is a different program for each term of the function, which calculates that term's value, gradient, and curvature. The central control program, Shift, combines the information from each term and coordinates the cycle of refinement. Separating the calculations into programs by observation type allows new data types to be used as restraints, or existing ones to be ignored, without modifying the programs that handle the minimization itself. To add a new type of observation you simply write a program that calculates the value of that term and its gradient. If possible, it should also calculate the curvature so that the more powerful methods of function minimization can be used.
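The division of labour can be sketched as follows. This is a minimal illustration, not the real TNT interface: each "program" is modelled as a function returning (value, gradient, curvature) for its term, and a combiner playing the role of Shift adds the statistics and applies one Newton-style shift to the parameters. All names here are hypothetical:

```python
import numpy as np

def quadratic_term(target, weight):
    """Stand-in for one observation-type program: builds a term
    f_k(p) = weight * sum((p - target)^2) and reports its statistics."""
    def stats(p):
        d = p - target
        value = weight * np.sum(d**2)
        grad = 2.0 * weight * d
        curv = 2.0 * weight * np.eye(len(p))
        return value, grad, curv
    return stats

def shift(p, terms):
    """Combine each term's value, gradient, and curvature, then take
    one Newton step: solve  C * step = g  and shift the parameters."""
    n = len(p)
    f, g, C = 0.0, np.zeros(n), np.zeros((n, n))
    for term in terms:
        fk, gk, Ck = term(p)
        f, g, C = f + fk, g + gk, C + Ck
    return p - np.linalg.solve(C, g)

# two restraint "programs" pulling toward different targets
terms = [quadratic_term(np.array([1.0, 0.0]), 1.0),
         quadratic_term(np.array([0.0, 2.0]), 3.0)]
p_new = shift(np.array([5.0, 5.0]), terms)
# for a purely quadratic function the Newton step lands on the
# minimum in one shift: p* = (1*[1,0] + 3*[0,2]) / 4 = [0.25, 1.5]
```

Because the combiner only sees (value, gradient, curvature) triples, a new observation type plugs in without touching the minimization code, which is the point of the design described above.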



Dale Edwin Tronrud
Thu Jul 6 23:24:57 PDT 2000