This note describes a solution to the problem of evaluating "zoned circularity" from measurements of relative circumference displacements. The algorithm is derived from the classical primal-feasible simplex method for the Linear Programming Problem (LP), hence it is guaranteed to locate an optimum in a finite number of steps. In practice, convergence is extremely fast, often reached within four or five pivot operations.
International standards such as ISO 1101 describe tolerancing specifications for manufactured machined elements, without providing any particular help in how to obtain or analyze measurements for evaluating the criteria. One such tolerance specification is "zoned circularity." This specification bounds a circumference between a largest inscribed circle and a smallest circumscribed circle, where the two circles have a common center. This center point is not necessarily an axis point as intended in the original drawings. Nor is it necessarily a point of best balance, because it is determined by the extrema of the deviations. To evaluate zoned circularity from measurements, it is necessary to determine four parameters: two identifying the radii of the bounding circles and two locating the common center point. Once these parameters are identified, the circularity zone estimate is the difference between the two radii.
An algorithm that I will call "the Simple Algorithm" is described by P. B. Dhanish in [1]. I will leave the discussion of the prior state of the art to the introduction of that paper, as there is nothing further I can contribute.
There is nothing specifically wrong with the Simple Algorithm. It is quite straightforward to implement. The only real difficulty, and it is a mostly surmountable one, is that real-world data sets are not always in the right form to apply it. If a part is measured along three orthogonal axes with a general purpose CMM (coordinate measuring machine), the data set is in the appropriate form. However, this is not the usual way of obtaining high precision measurements of cylindrical objects. Usually, a cylindrical piece is fixtured carefully on a rotation table, and radial deviations along a circumference are observed using a high precision linear displacement sensor. Compared to a multiple axis CMM, it is much easier to maintain tight tolerances in this manner, using a one degree of freedom mechanism and a sensor calibrated over a very short range, though difficult enough even so.
Consider Step 17 of the Simple Algorithm in [1]: "Calculate the distance from the center of the circle to the translated coordinates." This calculation cannot be done using the radial deviation data set alone, because the absolute radial distance information is missing, so the "center of the circle" is not known. The best that can be done is to supply a good estimate for the average radius and "cook" the data by this amount, artificially creating coordinates in a global cartesian reference frame. The algorithm does not appear to be at all sensitive to this adjustment, but nevertheless, the estimated adjustment will introduce some higher-order error.
The survey of prior art in reference [1] also mentions a simplex search algorithm by T. S. R. Murthy [2], stating that it had the problem of no guarantees of optimality. This note also employs a simplex search, but the problem is formulated here such that the classic proofs of convergence for Dantzig's primal-feasible simplex algorithm for linear programming (the LP problem, e.g. [3] or [4]) will assure convergence to the optimal solution in a finite number of steps. In actual practice, the optimum is often achieved after four or five pivot operations.
When measuring a cylindrical part with a rotation table, great effort is put into fixturing the piece so that its center axis aligns with the rotation axis of the table. However, this is not perfect, so there are placement errors, often much larger than the tolerances that are being tested. In this section, we briefly discuss the impacts of centering on the measurement data set and obtain from this a motivation for the bounding functions used by the algorithm.
If measuring a perfectly circular, perfectly manufactured, perfectly centered part of radius R with perfect noise-free sensors, the sensors would see no radial change whatsoever, hence the data set would be a "flat line." That is, displacement in the radial direction is constant at every rotation angle.

Now consider the impact of displacing this ideal object of radius R by a very small vector distance r. We are not concerned about the direction of this displacement here, so arbitrarily allow this displacement to be in the direction of the normal to the axis of rotation.
It is apparent in this illustration (with size of center shift exaggerated to magnify the effects) that equal angles of rotation with respect to the table — which is what we can control — no longer correspond to equal angles with respect to the part itself. Or equivalently, the sample locations at which displacements are measured are not the same in displaced and non-displaced data sets.
Analyzing the displacements with the assistance of the following vector diagram, we can observe that an expression for apparent radius, with respect to the fixed table rotation axis and measured at the circumference, is
R' = r cos θ + R cos( tan⁻¹( (r sin θ) / R ) )
where θ is the rotation angle of the table. Because r is very much smaller than R, the complicated multiplier on the term R is not significantly different from 1.0. The value of r might be on the order of 10 microns, where the value of R is many centimeters. In other words, a small fixturing displacement r has the apparent effect of introducing a sinusoidal displacement of peak magnitude r through the data set. This wave will influence each measurement when measuring a less-than-ideal piece in less-than-perfect fixturing with real-world sensors. The apparent radial displacements will range roughly from R - r to R + r.
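As a quick numerical check of this approximation, the exact expression above can be compared with the pure sinusoid R + r cos θ. The sketch below (Python, with hypothetical values r = 10 microns and R = 50 mm) confirms that the residual is second-order, on the order of r²/R:

```python
import math

R = 50.0e-3   # nominal part radius, 50 mm (hypothetical value)
r = 10.0e-6   # center offset, 10 microns (hypothetical value)

worst = 0.0
for k in range(360):
    theta = math.radians(k)
    # Apparent radius from the vector diagram above
    exact = r * math.cos(theta) + R * math.cos(math.atan(r * math.sin(theta) / R))
    # First-order sinusoidal approximation
    approx = R + r * math.cos(theta)
    worst = max(worst, abs(exact - approx))

# The residual is on the order of r^2/R, far below the offset itself
assert 0.0 < worst < r * r / R
```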
We know that the displacements introduced by any shift of the center, not just displacements due to fixturing errors, will have the first-order effects discussed in the previous section. The displacement effects upon measurement data are very nearly sinusoidal. The phase of the sinusoids accounts for the direction of the shifts, and the magnitudes account for axis displacement. The model for these curves is:
U(θ) = rh + ds sin(θ) + dc cos(θ)
L(θ) = rl + ds sin(θ) + dc cos(θ)

where the terms in this expression are the upper and lower bounding radius levels rh and rl, and the shared sinusoidal displacement components ds and dc. The problem is to select values of the variables rh, rl, ds and dc to minimize the separation distance z = rh - rl, subject to the bounding conditions on the measured data set

M(θ) <= U(θ)
M(θ) >= L(θ)
Bounding curves are defined as feasible if the constraints are satisfied, that is, if no data value is higher than the upper bounding curve U and no data value is lower than the lower bounding curve L. We can define feasible bounding curves in terms of slack variables. Define two non-negative slack variables for each measured term. Designate the slack variables for the upper bounding curve as row vector Su, and the slack variables for the lower bounding curve as row vector Sl. For the upper bound curve, the curve term minus a slack variable from Su yields the measured M value exactly. For the lower bound curve, the curve term plus a slack variable from Sl yields the M value exactly. Slack variables are not allowed to become negative, and this assures that the upper bound is truly above all of the measurement terms, and the lower bound is truly below.
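These definitions translate directly into code. The following Python sketch (the trial bounding parameters are made up for illustration) computes the Su and Sl vectors for the six-point data set used in the worked example later in this note, and tests feasibility from their signs:

```python
import math

# Measured deviations at six equal rotation angles (from the worked example)
M = [0.0545, 0.0542, 0.0488, 0.0506, 0.0519, 0.0469]
N = len(M)

# Trial bounding-curve parameters (illustrative values only)
rh, rl, ds, dc = 0.056, 0.046, 0.0, 0.0

Su, Sl = [], []
for i in range(N):
    theta = 2.0 * math.pi * i / N
    wave = ds * math.sin(theta) + dc * math.cos(theta)
    Su.append(rh + wave - M[i])    # U(theta_i) - M_i, must stay >= 0
    Sl.append(M[i] - (rl + wave))  # M_i - L(theta_i), must stay >= 0

feasible = all(s >= 0.0 for s in Su) and all(s >= 0.0 for s in Sl)
assert feasible
```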
While the sine and cosine curves are inherently nonlinear, these functions never change and can be pre-evaluated term by term, to yield constant coefficient vectors. Subsequently, the boundary curves can be considered a linear form, with scalar variable multipliers applied to constant vectors. The combination of the linear constraints with the linear objective criterion and sign constraints on slack variables yields a classical linear programming problem.
To represent the problem in LP tableau form, let

N be the number of terms in the data set,
I1 be a constant vector of length N with all 1 terms,
I0 be a constant vector of length N with all 0 terms,
R0 be a constant row of length N with all 0 terms,
S and C be the columns of pre-evaluated sine and cosine terms, and
I be the NxN square unit matrix.

The tableau then has 2N rows plus an additional cost coefficient row, with 2N+4 variable columns plus a right-hand column of constraint values. (For readers unfamiliar with tableau conventions, the top row is merely identification labels; each row in the middle section provides the coefficients of a linear constraint equation with the constraining value on the right; the last row provides the coefficients of the linear objective function to be optimized.)
  rh   rl   ds   dc   Su   Sl
 ========================================
  I1   I0    S    C   -I    0   =  M
  I0   I1    S    C    0    I   =  M
 ----------------------------------------
   1   -1    0    0   R0   R0   = -z
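Assembling this tableau is purely mechanical. A Python sketch (plain lists, no libraries; the function name is of course arbitrary) that builds the 2N+1 by 2N+5 array for any data set:

```python
import math

def build_tableau(M):
    """Build the full LP tableau: columns are rh, rl, ds, dc,
    2N slack columns, and the right-hand constraint column."""
    N = len(M)
    S = [math.sin(2.0 * math.pi * i / N) for i in range(N)]
    C = [math.cos(2.0 * math.pi * i / N) for i in range(N)]
    rows = []
    # Upper-bound constraints: rh + S ds + C dc - su_i = M_i
    for i in range(N):
        slack = [-1.0 if j == i else 0.0 for j in range(2 * N)]
        rows.append([1.0, 0.0, S[i], C[i]] + slack + [M[i]])
    # Lower-bound constraints: rl + S ds + C dc + sl_i = M_i
    for i in range(N):
        slack = [1.0 if j == N + i else 0.0 for j in range(2 * N)]
        rows.append([0.0, 1.0, S[i], C[i]] + slack + [M[i]])
    # Cost row: minimize z = rh - rl
    rows.append([1.0, -1.0, 0.0, 0.0] + [0.0] * (2 * N) + [0.0])
    return rows

tab = build_tableau([0.0545, 0.0542, 0.0488, 0.0506, 0.0519, 0.0469])
assert len(tab) == 13 and len(tab[0]) == 2 * 6 + 5
```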
All we need to do at this point is apply a classic linear programming solver, with unconstrained sign on the first four variables and nonnegative sign restrictions on the 2N slack variables. Such a conventional method does not take advantage of the special structure of the problem, however, and a more compact scheme is developed in the following sections.
A well known result from the theory of linear optimization (see [3] or [4] for example) is that if an optimal solution exists (as it will for any adequate data set), a basic optimal solution will exist. As the four critical variables rh, rl, ds and dc will always be involved in the basic solution that satisfies the constraints, this means that there will be at least four slack variables that are zero in the basic optimal solution. Hence, there will be four and possibly a few more "points of contact" between the two bounding curves and measured positions at the optimum.
While general simplex method solvers do not assume anything about the structure of the constraints, this particular problem has an extremely sparse special structure. The first N constraints can be scaled by -1 "on both sides" by inverting all signs in the first N rows. This results in the following tableau.

  rh   rl   ds   dc   Su   Sl
 =======================================
 -I1   I0   -S   -C    I    0   = -M
  I0   I1    S    C    0    I   =  M
 ---------------------------------------
   1   -1    0    0   R0   R0   = -z
By inspection, the slack variable columns now form a trivial basis, but not a feasible basis, because positive slack variables cannot be equal both to a non-trivial measurement vector and to its negative. However, it is easy to construct an initial feasible basis by establishing flat-line bounding curves (with ds and dc temporarily remaining zero) that bound the entire data set. To do this, find the index kmax in the first N rows of the last column where -M is minimum (that is, where M is maximum). Also find the index kmin in the second N rows where M is minimum.
For readers unfamiliar with conventional terminology of the LP problem, digress briefly and define a pivot operation as a special set of row operations applied to a matrix. A pivot operation changes the appearance of the constraints, but does not invalidate them. One nonzero term of the matrix is designated as the pivot element. For every row k other than the row kp containing the pivot element, compute the ratio of the two terms in pivot column v:

ratio = -v(k) / v(kp)

Pivot row kp is then multiplied times this ratio and added to matrix row k. Finally, all of the terms in the pivot row are scaled by 1/v(kp). The net effect of the pivot operation is to zero all terms in pivot column v except for term v(kp), which gets a value 1.0. Various other terms throughout the matrix are modified by the row operations in the process. Notice that the pivot affects the last cost row as well as the constraint rows, but there is no effect on identity matrix columns for which the nonzero term does not lie in the pivot row. Any slack variable columns that are known to be unchanging can be omitted from the tableau to save storage. They can be generated very easily later as needed.
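A pivot operation is only a few lines of code. The following Python sketch performs the row operations just described on a list-of-lists matrix, and a tiny demonstration confirms that the pivot column becomes a unit column:

```python
def pivot(mat, prow, pcol):
    """Row operations that turn column pcol into a unit column
    with 1.0 in row prow (the pivot element must be nonzero)."""
    piv = mat[prow][pcol]
    # Scale the pivot row by 1/v(kp)
    mat[prow] = [t / piv for t in mat[prow]]
    # Add a multiple of the scaled pivot row to every other row
    for k in range(len(mat)):
        if k != prow and mat[k][pcol] != 0.0:
            m = mat[k][pcol]
            mat[k] = [a - m * b for a, b in zip(mat[k], mat[prow])]

# Tiny demonstration on a 3x4 matrix (made-up numbers)
A = [[2.0, 1.0, 0.0, 4.0],
     [4.0, 3.0, 1.0, 2.0],
     [6.0, 2.0, 5.0, 8.0]]
pivot(A, 0, 0)
assert [row[0] for row in A] == [1.0, 0.0, 0.0]
```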
To start, the first pivot operation is performed in the first column at location kmax, and in the second column at location kmin. These two pivot operations will have the effect of eliminating the nonzero cost coefficients from the last row under columns 1 and 2, creating new nonzero cost coefficients below the slack variable columns 4+kmax and 4+N+kmin. Only one nonzero term remains in column one and in column two after the two pivot operations. The values in the last column in rows kmax and kmin provide initial bounds on the locations of the outer and inner bounding wave. The other terms of the modified last column will show the values of slack variables, which at this point are all non-negative. Columns 1 and 2 and all of the slack variable columns except 4+kmax and 4+N+kmin form a feasible basic solution. This solution is not optimal, however, except in the unlikely case of a perfect test object and perfect alignment.
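In code, establishing the initial flat-line bounds is just a matter of locating the extreme measurements. A Python sketch using the data set of the worked example (list indices here start at zero, unlike the tableau rows):

```python
# Measured deviations from the worked example later in this note
M = [0.0545, 0.0542, 0.0488, 0.0506, 0.0519, 0.0469]

# kmax marks the largest measurement (first upper contact),
# kmin the smallest (first lower contact)
kmax = max(range(len(M)), key=lambda i: M[i])
kmin = min(range(len(M)), key=lambda i: M[i])

# Flat-line bounding curves through these two extremes
rh, rl = M[kmax], M[kmin]
assert all(rl <= m <= rh for m in M)   # feasible, though rarely optimal
```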
We know that in the optimal solution we are seeking, there will be some adjustment to the center location for the bounding curves. The center point shift will result in a nonzero sinusoidal component. In anticipation of this, introduce variables ds and dc into the basis immediately. The values of ds and dc are not sign constrained, so we need to modify the usual minimum ratio rule of linear programming for selecting a pivot element. Start with the third column, the one for variable ds. To locate the pivot element in this column, compute for each row the ratio of the term in the last column divided by the term in the pivot column, excluding any row that is row kmin, that is row kmax, or that has a zero value in the pivot column. Select the row kp of the ratio value that is smallest in magnitude. The purpose of the testing is to guarantee that the pivot operation will not result in making any of the slack variables negative, which would result in invalid bounding of the dataset. Perform a pivot operation on column 3 with the pivot element in row kp.

Perform the same steps again, this time using column 4 as the pivot column instead of column 3, and also excluding the previous pivot row from the ratio test.
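The modified ratio rule can be expressed as a small search. A Python sketch (the numbers are made up for illustration) that skips excluded rows and rows with a zero pivot-column term, and returns the row whose ratio has the smallest magnitude:

```python
def unsigned_ratio_test(rhs, col, excluded):
    """Return the index k whose ratio rhs[k]/col[k] has the smallest
    magnitude, skipping excluded rows and zero column terms."""
    best, best_row = None, None
    for k in range(len(col)):
        if k in excluded or col[k] == 0.0:
            continue
        ratio = abs(rhs[k] / col[k])
        if best is None or ratio < best:
            best, best_row = ratio, k
    return best_row

# Illustrative numbers only
rhs = [0.4, 0.1, 0.3, 0.2]
col = [2.0, 0.0, -1.5, 0.5]
assert unsigned_ratio_test(rhs, col, {0}) == 2
```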
After completing these operations, there will be a single term with
value 1 in each of the first four columns, and all other terms in
these columns including the cost row will be 0. The pivot operations
will result in changes to the values in the last
column, and will produce non-zero cost coefficient values at the
end of as many as four slack variable columns. From this point on,
the first four columns will remain in the basis and will not change;
they can be ignored, and their pivot rows are not involved in
subsequent ratio testing.
This section is just a brief review of the simplex solution method for the LP problem. Readers familiar with the algorithm can skip to the next section.
If all of the coefficients are zero or positive in the cost coefficient row, the current solution is optimal — any change to the basis must either make the optimality criterion worse or violate one of the sign constraints on a slack variable. Since the basis is full rank, the solution corresponding to that basis must be unique. Actually, it is common for a few extreme points to dominate the data set, hence it is not unusual for the initial basis that includes the first four columns to be optimal.
At this point, if any of the four nonbasic slack variable columns has a negative cost coefficient, select that column as a new basis column. If there is more than one, the tie can be broken in any desired manner. This particular problem is not subject to the phenomenon of basis cycling, so picking the column with the most negative cost coefficient is most likely to yield an optimal solution quickly.
For the elements in the selected pivot column, excluding the rows
corresponding to the four permanent basis variables, find the pivot
row kp such that in this row the term is positive, and
the ratio of the term in the
M column divided by the term
in the pivot column is minimum. Pivot on the selected element.
Repeat all of the steps in this section until the condition for optimality is reached. For the same reason that it is not unusual to land upon an optimal solution immediately when first establishing the feasible basis, it is not unusual for a very small number of pivot operations, one or two perhaps, to yield the optimum.
Once the optimum is reached, slack variables corresponding to columns that are not in the basis set are assigned the value zero, and will have a nonzero relative cost value. There might also be additional points where it happens that the slack values also go to zero. These four or more columns correspond to the "points of contact" between the bounding curves and the measurement set. The terms of the M column for the rows of basis set variables yield the parameter values for the bounding curves at the optimum. The estimate of the zone circularity is then obtained as the difference rh - rl at the optimal solution.
The 2N columns for the slack variables start as an identity matrix, and most of these terms never change. The columns that are actually needed can be generated on the fly. The following algorithm can be compared to the previous description to verify that it performs all of the same computations as the classical algorithm, yielding exactly the same solution, but generating data only when needed.
Allocate a working tableau of 2N+1 rows and 6 columns, initially all zeros.

Build column 1 with the negated data values in the first N rows, the data values in the second N rows, and cost term 0. Find the row kmax where column 1 is most negative within the first N rows, and the row kmin where column 1 is most negative within the second N rows.

Build in column 2 a vector with all values 0 except for a term 1.0 in row kmax.

Build in column 3 a vector with all values 0 except for a term 1.0 in row kmin.

Build column 4 with one negative sine cycle, one ordinary sine cycle, and cost coefficient 0.

Build column 5 with one negative cosine cycle, one ordinary cosine cycle, and cost coefficient 0.

Build in column 6 a vector with all values -1 in the first N terms and all values 0 in the second N terms, plus one more cost term of 1. Pivot on column 6 at row kmax. Add kmax to the list of solution variable rows, associated with parameter variable rh.

Build in column 6 a vector with all values 1 in the second N terms and all values 0 in the first N terms, plus one more cost term of -1. Pivot on column 6 at row kmin. Add kmin to the list of solution variable rows, associated with parameter variable rl.

Copy column 4 into column 6. Using column 1, perform a test for pivot without sign restriction to find pivot row kp. Set all terms to zero in column 4 except the term in the pivot row, which is set to 1.0. Pivot on column 6 at the selected pivot row. Add kp to the list of solution variable rows, associated with parameter variable ds.

Copy column 5 into column 6. Using column 1, perform a test for pivot without sign restriction to find pivot row kp. Set all terms to zero in column 5 except the term in the pivot row, which is set to 1.0. Pivot on column 6 at the selected pivot row. Add kp to the list of solution variable rows, associated with parameter variable dc.

Repeat the following steps until the test for optimality is satisfied. Examine the cost coefficients in columns 2 through 5. If they are all non-negative, the solution is optimal. Otherwise, select a column with a negative cost coefficient and copy it into column 6, clearing the selected column to zero. Perform a ratio test to find pivot row kp under sign restrictions, using columns 1 and 6. Set the term of the cleared column to 1.0 at the selected pivot row. Pivot on column 6 at the selected pivot row.
When optimality is reached, the index terms associated with columns 2 and 3 locate the terms in the first column where the values rh and rl can be obtained. For purposes of a simple tolerance evaluation, report the value rh-rl. For purposes of a more detailed presentation of the data, index terms associated with columns 4 and 5 locate the values of the ds and dc parameters in the first column. For all other row indices that are not associated with one of the solution variables, the first column provides the values of the slack variables at the optimum.
The method described here for zoned circularity can be modified to solve the related problems of finding the minimal circumscribing circle or maximal inscribed circle. Since there is only one bounding curve, one of the radius parameters is omitted, one set of slack variables is dropped, and one set of N bounding constraints is dropped. This reduces the problem from a tableau of 2N+1 rows and 2N+5 columns to one of N+1 rows and N+4 columns. Only three operations are required to establish the initial feasible basic solution, and there will be three or more "points of contact" at the optimum. In other respects, the method of solution is similar.
The following data set contains six points of radial displacement measurement, at equal rotation angles, covering one object rotation.
.0545; .0542; .0488; .0506; .0519; .0469;
After initial construction of the solution matrix first column, it is observed that the lowest number in the upper half of column 1 occurs in row 1, and the lowest number in the second half occurs in row 12. The basic slack variable columns with terms 1.0 in these two rows are constructed in columns 2 and 3. One negative cycle and one positive sine cycle appear in column 4. One negative and one positive cosine cycle appear in column 5. Column 6 is a working scratchpad column and is not shown.
 -5.4500e-002   1.0000e+000   0             0            -1.0000e+000
 -5.4200e-002   0             0            -8.6603e-001  -5.0000e-001
 -4.8800e-002   0             0            -8.6603e-001   5.0000e-001
 -5.0600e-002   0             0             0             1.0000e+000
 -5.1900e-002   0             0             8.6603e-001   5.0000e-001
 -4.6900e-002   0             0             8.6603e-001  -5.0000e-001
  5.4500e-002   0             0             0             1.0000e+000
  5.4200e-002   0             0             8.6603e-001   5.0000e-001
  4.8800e-002   0             0             8.6603e-001  -5.0000e-001
  5.0600e-002   0             0             0            -1.0000e+000
  5.1900e-002   0             0            -8.6603e-001  -5.0000e-001
  4.6900e-002   0             1.0000e+000  -8.6603e-001   5.0000e-001
  0             0             0             0             0

After the first two pivoting operations, row 1 is associated with the rh parameter, row 12 is associated with the rl parameter, and the matrix consists of the following. The parameter variables in rows 1 and 12 are not sign restricted, but happen to be positive because the original data set was all positive. All other values in column 1 are positive, indicating that slack variables are positive and a feasible solution is at hand.
  5.4500e-002  -1.0000e+000   0             0             1.0000e+000
  3.0000e-004  -1.0000e+000   0            -8.6603e-001   5.0000e-001
  5.7000e-003  -1.0000e+000   0            -8.6603e-001   1.5000e+000
  3.9000e-003  -1.0000e+000   0             0             2.0000e+000
  2.6000e-003  -1.0000e+000   0             8.6603e-001   1.5000e+000
  7.6000e-003  -1.0000e+000   0             8.6603e-001   5.0000e-001
  7.6000e-003   0            -1.0000e+000   8.6603e-001   5.0000e-001
  7.3000e-003   0            -1.0000e+000   1.7321e+000   0
  1.9000e-003   0            -1.0000e+000   1.7321e+000  -1.0000e+000
  3.7000e-003   0            -1.0000e+000   8.6603e-001  -1.5000e+000
  5.0000e-003   0            -1.0000e+000   0            -1.0000e+000
  4.6900e-002   0             1.0000e+000  -8.6603e-001   5.0000e-001
 -7.6000e-003   1.0000e+000   1.0000e+000  -8.6603e-001  -5.0000e-001
When processing the nonbasic column 4, it has terms of sign opposite the cost coefficient (excluding rows 1 and 12) in six locations, with minimum ratio in row 9. After the column copy, and construction of nonbasic slack variable column with a 1.0 term in row 9, the pivoting operation yields the following.
  5.4500e-002  -1.0000e+000   0             0             1.0000e+000
  1.2500e-003  -1.0000e+000  -5.0000e-001   5.0000e-001   0
  6.6500e-003  -1.0000e+000  -5.0000e-001   5.0000e-001   1.0000e+000
  3.9000e-003  -1.0000e+000   0             0             2.0000e+000
  1.6500e-003  -1.0000e+000   5.0000e-001  -5.0000e-001   2.0000e+000
  6.6500e-003  -1.0000e+000   5.0000e-001  -5.0000e-001   1.0000e+000
  6.6500e-003   0            -5.0000e-001  -5.0000e-001   1.0000e+000
  5.4000e-003   0             0            -1.0000e+000   1.0000e+000
  1.0970e-003   0            -5.7735e-001   5.7735e-001  -5.7735e-001
  2.7500e-003   0            -5.0000e-001  -5.0000e-001  -1.0000e+000
  5.0000e-003   0            -1.0000e+000   0            -1.0000e+000
  4.7850e-002   0             5.0000e-001   5.0000e-001   0
 -6.6500e-003   1.0000e+000   5.0000e-001   5.0000e-001  -1.0000e+000
When processing the nonbasic column 5, it has terms of sign
opposite the cost coefficient (excluding rows 1, 12 and 9) in six
locations, with minimum ratio in row 4. After the column copy,
and construction of nonbasic slack variable column with a 1.0
term in row 4, the pivoting operation yields the following. After the last two pivot operations, the ds parameter is associated with row 9 and the dc parameter is associated with row 4.
  5.2550e-002  -5.0000e-001   0             0            -5.0000e-001
  1.2500e-003  -1.0000e+000  -5.0000e-001   5.0000e-001   0
  4.7000e-003  -5.0000e-001  -5.0000e-001   5.0000e-001  -5.0000e-001
  1.9500e-003  -5.0000e-001   0             0             5.0000e-001
 -2.2500e-003   0             5.0000e-001  -5.0000e-001  -1.0000e+000
  4.7000e-003  -5.0000e-001   5.0000e-001  -5.0000e-001  -5.0000e-001
  4.7000e-003   5.0000e-001  -5.0000e-001  -5.0000e-001  -5.0000e-001
  3.4500e-003   5.0000e-001   0            -1.0000e+000  -5.0000e-001
  2.2228e-003  -2.8868e-001  -5.7735e-001   5.7735e-001   2.8868e-001
  4.7000e-003  -5.0000e-001  -5.0000e-001  -5.0000e-001   5.0000e-001
  6.9500e-003  -5.0000e-001  -1.0000e+000   0             5.0000e-001
  4.7850e-002   0             5.0000e-001   5.0000e-001   0
 -4.7000e-003   5.0000e-001   5.0000e-001   5.0000e-001   5.0000e-001
Inspecting the cost coefficients in the last terms of the four
nonbasic columns 2 through 5, it is observed that all of these terms
are positive, hence any pivot operation that re-introduced these
slack variables into the basis would only serve to increase the
separation of the bounding curves, contrary to the goal of minimizing
the separation. Hence, an optimal solution is indicated. The optimal
values of ds and dc are picked from
rows 9 and 4 in column 1. The radial separation between the two curves
is picked from rows 1 and 12 of column 1. The difference is 4.700e-3,
which is the width of the circularity zone. Alternatively, it can be
observed that the negative of this result appears in the last term of
column 1 and can be accessed directly.
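Because implementations of this kind are easy to get subtly wrong, an independent cross-check on synthetic data with a known answer is worthwhile. The Python sketch below minimizes the separation directly by a refining grid search over the two center-offset parameters; this is only a slow illustrative check, not the simplex method of this note. For a pure third-harmonic form deviation of amplitude d superimposed on an eccentric sinusoid, the minimum zone should be exactly 2d:

```python
import math

N = 12
# Synthetic data (hypothetical): nominal level 0.05, eccentricity
# components (0.002, 0.0015), plus a third-harmonic form deviation
# of amplitude 0.0005; the minimum zone is then 2 * 0.0005 = 0.001.
M = [0.05 + 0.002 * math.sin(2 * math.pi * i / N)
          + 0.0015 * math.cos(2 * math.pi * i / N)
          + 0.0005 * math.cos(6 * math.pi * i / N) for i in range(N)]
S = [math.sin(2 * math.pi * i / N) for i in range(N)]
C = [math.cos(2 * math.pi * i / N) for i in range(N)]

def zone(ds, dc):
    """Separation of flat bounds after removing a trial eccentricity."""
    f = [M[i] - ds * S[i] - dc * C[i] for i in range(N)]
    return max(f) - min(f)

# The objective is convex in (ds, dc), so a refining grid search converges.
ds0, dc0, span = 0.0, 0.0, 0.01
for _ in range(8):
    step = span / 10.0
    best = min((zone(ds0 + i * step, dc0 + j * step),
                ds0 + i * step, dc0 + j * step)
               for i in range(-10, 11) for j in range(-10, 11))
    _, ds0, dc0 = best
    span /= 5.0

assert abs(zone(ds0, dc0) - 0.001) < 1e-5
```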
The following reference implementation is intended for clarity of exposition; in practice, the tenths of a second one might gain from a more efficient implementation probably do not matter.
% -- FILE ZONEFIT.M --
% Load displacement data cycle 'dat' (column vector) and its length 'N' here.

% Reserve storage for the tableau matrix.
np1 = N+1;
twon = 2*N;
tnp1 = 2*N+1;
tabl = zeros(tnp1,6);

% Build initial contents of the tableau.
tabl(1:N,1) = -dat;
tabl(np1:twon,1) = dat;
[~,kmax] = min(tabl(1:N,1));
[~,kmin] = min(tabl(np1:twon,1));
kmin = kmin + N;
tabl(kmax,2) = 1.0;
tabl(kmin,3) = 1.0;
for iterm = 1:N
    ang = 2*pi*(iterm-1)/N;
    tabl(iterm,4) = -sin(ang);
    tabl(iterm+N,4) = sin(ang);
    tabl(iterm,5) = -cos(ang);
    tabl(iterm+N,5) = cos(ang);
end

% Add rh to basis set
basis = zeros(1,5);
basis(2) = kmax;
tabl(1:N,6) = -ones(N,1);
tabl(np1:twon,6) = zeros(N,1);
tabl(tnp1,6) = 1.0;
tabl = pivot(tabl,kmax,tnp1);

% Add rl to basis set
basis(3) = kmin;
tabl(1:N,6) = zeros(N,1);
tabl(np1:twon,6) = ones(N,1);
tabl(tnp1,6) = -1.0;
tabl = pivot(tabl,kmin,tnp1);

% Add ds to the basis set
tabl(:,6) = tabl(:,4);
tabl(:,4) = zeros(tnp1,1);
kpiv = unsignpiv(tabl,basis,twon);
tabl(kpiv,4) = 1.0;
tabl = pivot(tabl,kpiv,tnp1);
basis(4) = kpiv;

% Add dc to the basis set
tabl(:,6) = tabl(:,5);
tabl(:,5) = zeros(tnp1,1);
kpiv = unsignpiv(tabl,basis,twon);
tabl(kpiv,5) = 1.0;
tabl = pivot(tabl,kpiv,tnp1);
basis(5) = kpiv;

% Iterate until an optimal basis is found
[~,iterm] = min(tabl(tnp1,2:5));
iterm = iterm + 1;
while tabl(tnp1,iterm) < 0.0
    tabl(:,6) = tabl(:,iterm);
    tabl(:,iterm) = zeros(tnp1,1);
    kpiv = signpiv(tabl,basis,twon);
    tabl(kpiv,iterm) = 1.0;
    tabl = pivot(tabl,kpiv,tnp1);
    [~,iterm] = min(tabl(tnp1,2:5));
    iterm = iterm + 1;
end

% Report the results
rh = tabl(basis(2),1);
rl = tabl(basis(3),1);
ds = tabl(basis(4),1);
dc = tabl(basis(5),1);
zone = -tabl(tnp1,1);

% -- FILE UNSIGNPIV.M --
% Determine pivot row without sign restriction on the entering variable.
function pivrow = unsignpiv(tab,bas,nrows)
pivval = 1.0e20;
pivrow = 0;
for row = 1:nrows
    % Skip rows of basis variables
    inbasis = 0;
    for ibas = 1:5
        if bas(ibas) == row
            inbasis = 1;
        end
    end
    if inbasis
        continue;
    end
    % Examine rows with a nonzero term of sign opposite the cost term
    if tab(row,6) ~= 0.0 && tab(row,6)*tab(nrows+1,6) < 0.0
        testval = -tab(row,1)/tab(row,6);
        if testval < pivval
            pivrow = row;
            pivval = testval;
        end
    end
end
end

% -- FILE SIGNPIV.M --
% Determine pivot row with sign restriction on the slack variables.
function pivrow = signpiv(tab,bas,nrows)
pivval = 1.0e20;
pivrow = 0;
for row = 1:nrows
    % Skip rows of basis variables
    inbasis = 0;
    for ibas = 1:5
        if bas(ibas) == row
            inbasis = 1;
        end
    end
    if inbasis
        continue;
    end
    % Examine rows containing a positive work column term
    if tab(row,6) > 0.0
        testval = tab(row,1)/tab(row,6);
        if testval < pivval
            pivrow = row;
            pivval = testval;
        end
    end
end
end

% -- FILE PIVOT.M --
% Perform a full pivot at the given row, using work column 6.
function newtabl = pivot(tabl, prow, nrows)
newtabl = tabl;
for irow = 1:nrows
    if irow ~= prow
        mult = -tabl(irow,6)/tabl(prow,6);
        for iterm = 1:6
            newtabl(irow,iterm) = tabl(irow,iterm) + mult*tabl(prow,iterm);
        end
    end
end
for icol = 1:6
    newtabl(prow,icol) = tabl(prow,icol)/tabl(prow,6);
end
end
[1] P. B. Dhanish, "A simple algorithm for evaluation of minimum zone circularity error from coordinate data," International Journal of Machine Tools & Manufacture 42 (2002) 1589-1594.

[2] T. S. R. Murthy, "A comparison of different algorithms for circularity evaluation," Precision Engineering 8-1 (1986) 19-23.

[3] G. B. Dantzig, Linear Programming and Extensions, Princeton University Press, 1963. (A classic reference.)

[4] E. D. Nering and A. W. Tucker, Linear Programs and Related Problems, Academic Press, 1990. (A more recent text.)
 I'd love to have a real data set to thank somebody for here in this footnote.
Site:         RidgeRat's Barely Operating Site  http://home.earthlink.net/~ltrammell
Author:       Larry Trammell (RidgeRat)
Created:      Sept 06, 2003
Revised:
Status:       Experimental
Contact:      NOSPAM ltramme1476 AT earthlink DOT net NOSPAM
Related:      (none)
Restrictions: This information is public domain