Improved algebraic methods for circle fitting

Abstract: Inspired by the circle fitting algorithm "Hyper" of Al-Sharadqah and Chernov [1], which eliminates the second order "essential bias" that excludes terms of O(σ²/N²), we extend their analysis and show that by a small modification the second order bias can be eliminated completely. By numerical experiments, we show that this results in better performance when the number N of points is small and the noise is large.


Introduction
Fitting a circle to a noisy point sequence is a fundamental task in many scientific areas, including pattern recognition, computer vision, and various domains of physics, and existing circle fitting algorithms were extensively reviewed by Al-Sharadqah and Chernov [1]. They also provided a thorough error analysis and presented a "hyperaccurate" method, called "Hyper", and showed that it numerically outperforms all existing algebraic methods.
This paper extends their contribution and shows that their error analysis allows for additional insight. Their Hyper eliminates the second order "essential bias" that excludes terms of O(σ²/N²), but it turns out that by a small modification the second order bias can be eliminated completely. Numerical experiments show that this results in better performance when the number N of points is small and the noise is large.

Formulation
The analysis of Al-Sharadqah and Chernov [1] is based on the perturbation theory of Kanatani [3], so we use the symbols and notations in [3]. We write the circle equation as

    A(x² + y²) + 2f0(Bx + Cy) + f0²D = 0,  (2.1)

where f0 is an appropriate scale constant, without which finite precision numerical computation would incur serious accuracy loss. If we introduce a new variable ξ and a parameter vector u,

    ξ = (x² + y², 2f0x, 2f0y, f0²)⊤,  u = (A, B, C, D)⊤,  (2.2)

Eq. (2.1) can be written as the inner product of the vectors ξ and u:

    (ξ, u) = 0.  (2.3)

Suppose we observe N points (xα, yα), α = 1, ..., N, and let ξα be the value of ξ for (xα, yα). Al-Sharadqah and Chernov [1] studied algebraic circle fitting that minimizes

    (1/N) Σα (ξα, u)² = (u, M u),  (2.4)

where we define

    M = (1/N) Σα ξα ξα⊤.  (2.5)

Since the trivial solution u = 0 minimizes Eq. (2.4), a scale normalization is necessary for u. However, the solution depends on the normalization we impose. Al-Sharadqah and Chernov [1] exploited this freedom and "optimized" the normalization so that the solution has the highest accuracy in some sense. They considered the class of normalizations

    (u, N u) = 1  (2.6)

for some symmetric matrix N. They showed that this class covers all existing algebraic fits (e.g., Kåsa [4], Pratt [6], and Taubin [5]), for which N is positive definite or semidefinite. Al-Sharadqah and Chernov [1] extended this class to allow nondefinite N. If N is positive definite or semidefinite, the right-hand side of Eq. (2.6) is positive (or 0, which is of no interest), so no generality is lost by assuming it to be 1. If N is nondefinite, on the other hand, (u, N u) can be negative. Al-Sharadqah and Chernov [1] restricted it to be positive (or 1), but here we admit a negative value as well.
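As a concrete illustration, the data matrix M of Eq. (2.5) can be assembled as follows. This is a sketch only: the component form of ξ is the one assumed in this formulation, and the function names are ours.

```python
import numpy as np

def xi_vector(x, y, f0=100.0):
    # Assumed 4-vector xi = (x^2 + y^2, 2*f0*x, 2*f0*y, f0^2)^T for a point
    # (x, y); f0 is the scale constant of the formulation.
    return np.array([x**2 + y**2, 2*f0*x, 2*f0*y, f0**2])

def build_M(points, f0=100.0):
    # M = (1/N) sum_alpha xi_alpha xi_alpha^T  (Eq. (2.5))
    xis = np.array([xi_vector(x, y, f0) for x, y in points])
    return xis.T @ xis / len(points)
```

For noise-free points on a circle, M annihilates the true parameter vector ū, since (ξ̄α, ū) = 0 for every α.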
As is well known, the solution u that minimizes Eq. (2.4) subject to Eq. (2.6) is obtained by solving the generalized eigenvalue problem

    M u = λ N u.  (2.7)

Once in this form, the magnitude of u is indeterminate, so we normalize it to ‖u‖ = 1 rather than imposing Eq. (2.6). Our task is to select, by error analysis, an appropriate N that gives rise to the best solution u in some sense.
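A minimal sketch of this step, assuming scipy is available: since the perturbation analysis assumes λ ≈ 0, we select the eigenvector for the λ of smallest absolute value and scale it to unit norm. The matrix contents below are illustrative.

```python
import numpy as np
from scipy.linalg import eig

def solve_gep(M, N):
    # Solve M u = lambda N u (Eq. (2.7)); generalized eigenvalues may be
    # complex or infinite in general, so we keep only finite real parts.
    lams, vecs = eig(M, N)
    lams = np.real(lams)
    finite = np.isfinite(lams)
    k = np.argmin(np.abs(lams[finite]))      # lambda closest to 0
    u = np.real(vecs[:, np.where(finite)[0][k]])
    return u / np.linalg.norm(u)             # unit-norm solution
```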

Error analysis
We summarize the error analysis of Al-Sharadqah and Chernov [1] in terms of our symbols and notations. Each (xα, yα) is regarded as perturbed from its true position (x̄α, ȳα) by (∆xα, ∆yα), so ξα is written as

    ξα = ξ̄α + ∆1ξα + ∆2ξα,  (3.1)

where ξ̄α is the true value of ξα, and ∆1ξα and ∆2ξα are terms of the first and the second order in noise, respectively:

    ∆1ξα = (2x̄α∆xα + 2ȳα∆yα, 2f0∆xα, 2f0∆yα, 0)⊤,  (3.2)
    ∆2ξα = (∆xα² + ∆yα², 0, 0, 0)⊤.  (3.3)

The term ∆2ξα is circle-specific and was not considered in the general theory of Kanatani [3]. In effect, the Hyper of Al-Sharadqah and Chernov [1] eliminates the bias caused by this term. If ∆xα and ∆yα are independent random Gaussian variables of mean 0 and standard deviation σ, the covariance matrix of ξα is V[ξα] = E[∆1ξα ∆1ξα⊤] = σ²V0[ξα], where E[·] denotes expectation and the matrix V0[ξα] is given by

    V0[ξα] = 4 ⎡ x̄α² + ȳα²   f0x̄α   f0ȳα   0 ⎤
               ⎢ f0x̄α         f0²     0       0 ⎥
               ⎢ f0ȳα         0       f0²     0 ⎥
               ⎣ 0             0       0       0 ⎦ .  (3.4)

P. Rangarajan and K. Kanatani/Improved algebraic methods for circle fitting 1078

The Taubin method [5] uses as N

    N_TB = (1/N) Σα V0[ξα],  (3.5)

where V0[ξα] is computed by substituting the observations (xα, yα) for (x̄α, ȳα) in Eq. (3.4). Al-Sharadqah and Chernov [1] expanded the matrix M in Eq. (2.5) in the form

    M = M̄ + ∆1M + ∆2M + · · · ,  (3.6)

where M̄ is the noise-free term, and · · · denotes terms of order three and higher in noise. The first and second order terms ∆1M and ∆2M are

    ∆1M = (1/N) Σα (ξ̄α ∆1ξα⊤ + ∆1ξα ξ̄α⊤),  (3.7)
    ∆2M = (1/N) Σα (ξ̄α ∆2ξα⊤ + ∆2ξα ξ̄α⊤ + ∆1ξα ∆1ξα⊤).  (3.8)

Accordingly, the solution u is expanded in the form

    u = ū + ∆1u + ∆2u + · · · .  (3.9)

Following the perturbation analysis of Kanatani [3], we obtain the first order error

    ∆1u = −M̄⁻∆1M ū  (3.10)

and an expression (3.11) for the second order error ∆2u⊥ in terms of a matrix T built from ∆1M and ∆2M. Here, M̄⁻ is the pseudoinverse of M̄, and ∆2u⊥ is the component of ∆2u orthogonal to ū. Note that since the norm of u is fixed, we are only interested in the perturbation orthogonal to ū. The first order error ∆1u in Eq. (3.10) is orthogonal to ū as is.
Equations (3.10) and (3.11) are mathematically identical to those reported by Al-Sharadqah and Chernov [1], but their expressions have a different appearance. From the above expressions, however, an important observation emerges that would be difficult to obtain had we started from a different expression.
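Under the same assumed component form of ξ as before, the matrix V0[ξα] of Eq. (3.4) and the Taubin normalization N_TB of Eq. (3.5) can be sketched as follows (function names are ours; true values are replaced by observations, as described above):

```python
import numpy as np

def V0(x, y, f0=100.0):
    # Covariance structure of xi at a point (x, y) (Eq. (3.4)); the last row
    # and column are zero, so V0 (and hence N_TB) is only positive semidefinite.
    return 4.0 * np.array([
        [x**2 + y**2, f0*x,  f0*y,  0.0],
        [f0*x,        f0**2, 0.0,   0.0],
        [f0*y,        0.0,   f0**2, 0.0],
        [0.0,         0.0,   0.0,   0.0]])

def taubin_N(points, f0=100.0):
    # N_TB = (1/N) sum_alpha V0[xi_alpha]  (Eq. (3.5))
    return sum(V0(x, y, f0) for x, y in points) / len(points)
```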

Covariance and bias
As Al-Sharadqah and Chernov [1] pointed out, Eq. (3.10) does not contain N, and neither does the leading covariance E[∆1u ∆1u⊤]. Hence, the covariance of all algebraic fits is the same to the leading order, and we are unable to reduce it by adjusting N. This leads us to concentrate on the bias. Since ∆1u in Eq. (3.10) has expectation 0, the leading bias term is E[∆2u]. From Eq. (3.11), we obtain the expression (4.1) for E[∆2u⊥]. From the analysis of Kanatani [3], the expectation E[T] is evaluated in Eqs. (4.2) and (4.3) in terms of e1 = (1, 0, 0, 0)⊤ and the mean ξ̄c of the vectors ξ̄α, where S[·] denotes symmetrization (S[A] = (A + A⊤)/2). If we define N by Eq. (4.4), we have E[T] = σ²N. Hence, Eq. (4.1) becomes Eq. (4.5). Since Eq. (4.4) contains the true values ξ̄α and M̄, we evaluate them by replacing the true values (x̄α, ȳα) in their definitions by the observations (xα, yα). This does not affect the result, because expectations of odd-order error terms vanish and hence the error in Eq. (4.5) is at most O(σ⁴). Thus, the second order bias is exactly 0.

In terms of our notation, the Hyper of Al-Sharadqah and Chernov [1] is equivalent to letting N = N_TB + 4S[ξ̄c e1⊤]. They argued that when N is large and σ is small, we can ignore terms of O(σ²/N²), which they called "non-essential". They dropped these terms and showed that choosing N = N_TB + 4S[ξ̄c e1⊤] can eliminate the "essential bias". Our numerical experiments, however, demonstrate that terms of O(σ²/N²) are essential when N is small.
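The Hyper normalization quoted above can be sketched as follows, assuming ξ̄c is the mean of the ξ-vectors (evaluated at the observations) and e1 = (1, 0, 0, 0)⊤; the fully bias-free matrix of Eq. (4.4) contains additional correction terms not reproduced in this sketch, and the function names are ours.

```python
import numpy as np

def sym(A):
    # Symmetrization S[A] = (A + A^T)/2
    return (A + A.T) / 2.0

def hyper_N(xis, N_TB):
    # N = N_TB + 4 S[xi_c e1^T], the Hyper normalization quoted in the text;
    # xis is the N x 4 array of xi-vectors, N_TB the Taubin matrix.
    xi_c = xis.mean(axis=0)                  # centroid of the xi-vectors
    e1 = np.array([1.0, 0.0, 0.0, 0.0])
    return N_TB + 4.0 * sym(np.outer(xi_c, e1))
```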

Numerical experiments
We considered a circle of radius 100 (pixels) centered at the origin and tested the following two cases. Eighth circle: 16 equidistant points over an eighth of the circumference. Semicircle: 61 equidistant points over a half of the circumference.
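The point configurations above can be generated as follows (a sketch; parameter and function names are ours):

```python
import numpy as np

def arc_points(n, arc_fraction, radius=100.0, sigma=0.0, rng=None):
    # n equidistant points over the given fraction of the circumference of a
    # circle of the given radius centered at the origin, with independent
    # Gaussian noise of standard deviation sigma added to each coordinate.
    if rng is None:
        rng = np.random.default_rng(0)
    t = np.linspace(0.0, 2*np.pi*arc_fraction, n)
    x = radius*np.cos(t) + sigma*rng.standard_normal(n)
    y = radius*np.sin(t) + sigma*rng.standard_normal(n)
    return np.column_stack([x, y])

pts_eighth = arc_points(16, 1/8)   # eighth circle: 16 points
pts_semi = arc_points(61, 1/2)     # semicircle: 61 points
```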
We added independent Gaussian noise of mean 0 and standard deviation σ (pixels) to the coordinates of the points and fitted a circle. The error is measured by the RMS error of Eq. (5.1), the root mean square of ‖∆u⊥^(a)‖ over independent trials, where ∆u⊥^(a) is the component of the a-th value u^(a) orthogonal to the true value ū. Standard linear algebra routines for solving the generalized eigenvalue problem (2.7) assume that N is positive definite. As can be seen from Eq. (3.4), however, the matrix N_TB in Eq. (3.5) is only positive semidefinite, having a row and a column of zeros. The matrix N in Eq. (4.4) is not positive definite, either. Still, Eq. (2.7) can be solved by combining the singular value decomposition (SVD) and singular value thresholding, as Al-Sharadqah and Chernov [1] did. However, a simpler method exists: Eq. (2.7) can be written as

    N u = (1/λ) M u.  (5.2)

The matrix M in Eq. (2.5) is positive definite for noisy data; its smallest eigenvalue is 0 only when there is no noise. Thus, we can solve Eq. (5.2) instead of Eq. (2.7), using a standard linear algebra routine. Since the perturbation analysis of Kanatani [3] is based on the assumption that λ ≈ 0, we compute the unit generalized eigenvector for λ with the smallest absolute value (or for 1/λ with the largest absolute value), while Al-Sharadqah and Chernov [1] computed the smallest positive generalized eigenvalue of Eq. (2.7), assuming that (u, N u) > 0.
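A sketch of this simpler method, assuming scipy: since M is positive definite for noisy data and N is symmetric, the symmetric-definite routine applies with M as the right-hand matrix, and we keep the eigenvector whose generalized eigenvalue 1/λ has the largest absolute value (i.e., λ closest to 0).

```python
import numpy as np
from scipy.linalg import eigh

def fit_u(M, N):
    # Solve N u = mu M u with mu = 1/lambda (Eq. (5.2)); eigh requires the
    # right-hand matrix M to be positive definite, which holds for noisy data.
    mus, vecs = eigh(N, M)
    k = np.argmax(np.abs(mus))        # largest |1/lambda|, i.e. lambda ~ 0
    u = vecs[:, k]
    return u / np.linalg.norm(u)      # unit-norm solution
```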
The results are listed in Tables 1 and 2, where we compared five methods: LS (the standard least squares obtained by letting N = I), the Taubin method (Eq. (3.5)), the Hyper [1], our proposal of using Eq. (4.4), and the geometric fit (or ML), for which we used the FNS of Chojnacki et al. [2]. As we see from Table 1, our proposal always outperforms the Hyper [1] because the number of points is small. We were unable to compute the geometric fit for σ ≥ 2: the FNS failed to converge. Thus, the geometric fit is not always practical, as pointed out by Al-Sharadqah and Chernov [1]. In the semicircle case, no practical difference was observed among Taubin, Hyper [1], and Eq. (4.4) with such "good" data (many points over a sufficiently long arc). In this case, the geometric fit exhibits slightly higher accuracy. In conclusion, when the number of points is small and the noise is large, in which case geometric fitting may fail to converge, Eq. (4.4) provides better results than the standard LS, Taubin, or Hyper [1].

Conclusion
We have modified the circle fitting algorithm "Hyper" of Al-Sharadqah and Chernov [1] and eliminated the bias of the solution up to the second order in noise, while the Hyper eliminates only the "essential bias" excluding terms of O(σ²/N²). Numerical experiments show that our solution outperforms the Hyper when the number N of points is small and the noise is large, the situation in which a stable circle fitting algorithm is most desired in practical applications.

Table 1
RMS error for 16 points over an eighth circle