Root-finding algorithm
A root-finding algorithm is a numerical method for finding a value x such that f(x) = 0, for a given function f. Such a value x, a single real number in this setting, is called a root of f.
When x is a vector, algorithms to find x such that f(x) = 0 are generally called "equation-solving algorithms". These algorithms are a generalization of root-finding and can operate on either linear or non-linear equations. Some root-finding algorithms (such as Newton's method) can be directly generalized to solve systems of non-linear equations.
Root-finding algorithms are studied in numerical analysis.
Specific algorithms
The simplest root-finding algorithm is the bisection method: we start with two points a and b which bracket a root, and at every iteration, we pick either the subinterval [a, c] or [c, b], where c = (a + b) / 2 is the midpoint between a and b. The algorithm always selects the subinterval on which f changes sign, so the bracket always contains a root. The bisection method is guaranteed to converge to a root; however, its progress is rather slow (the rate of convergence is linear).
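The bisection step described above can be sketched in Python as follows (a minimal illustration; the function name, tolerances, and iteration cap are choices made here, not part of any standard):

```python
def bisect(f, a, b, tol=1e-10, max_iter=100):
    """Bisection: halve a sign-changing bracket [a, b] until it is shorter than tol."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        c = (a + b) / 2          # midpoint of the current bracket
        fc = f(c)
        if fc == 0 or (b - a) / 2 < tol:
            return c
        if fa * fc < 0:          # root lies in [a, c]
            b, fb = c, fc
        else:                    # root lies in [c, b]
            a, fa = c, fc
    return (a + b) / 2
```

For example, `bisect(lambda x: x*x - 2, 1, 2)` brackets the positive root of x² − 2 and converges to √2; each iteration halves the interval, which is the linear convergence mentioned above.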
Newton's method, also called the Newton-Raphson method, linearizes the function f at the current approximation to the root. This yields the recurrence relation
- <math> x_{k+1} = x_k - \frac{f(x_k)}{f'(x_k)}. </math>
Newton's method may not converge if the initial approximation is too far from a root. However, when it does converge, it is faster than the bisection method (convergence is quadratic). Newton's method is also important because it readily generalizes to higher-dimensional problems.
If we replace the derivative in Newton's method with a finite difference, we get the secant method. It is defined by the recurrence relation
- <math>x_{n+1} = x_n - \frac{x_n-x_{n-1}}{f(x_n)-f(x_{n-1})} f(x_n). </math>
So, the secant method does not require the computation of a derivative, but the price is slower convergence (the order is approximately 1.6).
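As a sketch of the recurrence above (illustrative names; a robust version would handle the case f(x_n) = f(x_{n-1}) more carefully):

```python
def secant(f, x0, x1, tol=1e-12, max_iter=100):
    """Secant method: Newton's method with f' replaced by a finite difference."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        if abs(f1) < tol:
            return x1
        denom = f1 - f0
        if denom == 0:       # degenerate secant line; stop with the best iterate
            return x1
        # x_{n+1} = x_n - (x_n - x_{n-1}) / (f(x_n) - f(x_{n-1})) * f(x_n)
        x0, x1 = x1, x1 - (x1 - x0) / denom * f1
        f0, f1 = f1, f(x1)
    return x1
```

Note that only one new function evaluation is needed per iteration, and no derivative at all, which is the method's main practical advantage.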
The false position method, also called the regula falsi method, is like the bisection method. However, it does not cut the interval into two equal parts at every iteration; instead, it cuts the interval at the point given by the formula for the secant method. The false position method inherits the guaranteed convergence of the bisection method, but in its plain form the convergence is often only linear, because one endpoint of the bracket can remain fixed; modified variants (such as the Illinois algorithm) recover the superlinear convergence of the secant method.
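A minimal sketch of plain regula falsi, assuming a sign-changing bracket as in bisection (names and tolerances are illustrative):

```python
def false_position(f, a, b, tol=1e-10, max_iter=200):
    """Regula falsi: like bisection, but split the bracket at the secant point."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    c = a
    for _ in range(max_iter):
        c = b - fb * (b - a) / (fb - fa)   # secant point, always inside [a, b]
        fc = f(c)
        if abs(fc) < tol:
            return c
        if fa * fc < 0:                    # keep the sign-changing half
            b, fb = c, fc
        else:
            a, fa = c, fc
    return c
```

Running this on f(x) = x² − 2 with the bracket [1, 2] shows the typical behaviour: the endpoint b = 2 never moves (f is convex there), so the method converges, but only linearly.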
The secant method also arises if one approximates the unknown function f by linear interpolation. When quadratic interpolation is used instead, one arrives at Muller's method. It converges faster than the secant method. A particular feature of this method is that the iterates x_n may become complex. This can be avoided by interpolating the inverse of f, resulting in the inverse quadratic interpolation method. Again, convergence is asymptotically faster than the secant method, but inverse quadratic interpolation often behaves poorly when the iterates are not close to the root.
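The inverse quadratic interpolation step can be sketched as follows: fit x as a quadratic polynomial in y = f(x) through the last three iterates, then evaluate that polynomial at y = 0 (names are illustrative; a robust code would fall back to bisection when the interpolation degenerates, as Brent's method does):

```python
def inverse_quadratic(f, x0, x1, x2, tol=1e-12, max_iter=50):
    """Inverse quadratic interpolation: interpolate x(y) through three
    points (f(xi), xi) and take the value of that quadratic at y = 0."""
    f0, f1, f2 = f(x0), f(x1), f(x2)
    for _ in range(max_iter):
        if abs(f2) < tol:
            return x2
        if f0 == f1 or f0 == f2 or f1 == f2:
            return x2  # interpolation breaks down; stop with the best iterate
        # Lagrange form of the quadratic x(y), evaluated at y = 0
        x3 = (x0 * f1 * f2 / ((f0 - f1) * (f0 - f2))
              + x1 * f0 * f2 / ((f1 - f0) * (f1 - f2))
              + x2 * f0 * f1 / ((f2 - f0) * (f2 - f1)))
        x0, x1, x2 = x1, x2, x3
        f0, f1, f2 = f1, f2, f(x3)
    return x2
```

Because x is computed as a function of y, the new iterate is always real, which is exactly the property that distinguishes this method from Muller's.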
Finally, Brent's method is a combination of the bisection method, the secant method and inverse quadratic interpolation. At every iteration, Brent's method decides which method out of these three is likely to do best, and proceeds by doing a step according to that method. This gives a robust and fast method, which therefore enjoys considerable popularity.
Finding roots of polynomials
Much attention has been given to the special case in which the function f is a polynomial. Of course, the methods described in the previous section can be used. In particular, it is easy to find the derivative of a polynomial, so Newton's method is a viable candidate. But one can also choose a method that exploits the fact that f is a polynomial.
One possibility is to form the companion matrix of the polynomial. Since the eigenvalues of this matrix coincide with the roots of the polynomial, one can now use any eigenvalue algorithm to find the roots of the original polynomial.
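The companion-matrix approach can be sketched with NumPy as follows (this assumes NumPy is available; `numpy.roots` uses essentially this construction internally):

```python
import numpy as np

def polynomial_roots(coeffs):
    """Roots of c[0]*x^n + c[1]*x^(n-1) + ... + c[n] via the eigenvalues
    of the companion matrix of the (monic) polynomial."""
    c = np.asarray(coeffs, dtype=float)
    c = c / c[0]                        # normalize to a monic polynomial
    n = len(c) - 1
    companion = np.zeros((n, n))
    companion[0, :] = -c[1:]            # first row: negated lower coefficients
    companion[1:, :-1] = np.eye(n - 1)  # ones on the subdiagonal
    return np.linalg.eigvals(companion)
```

For example, the polynomial x² − 3x + 2 has companion matrix [[3, −2], [1, 0]], whose eigenvalues are the roots 1 and 2.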
Another possibility is Laguerre's method, which is a rather complicated method that converges very fast. In fact, it exhibits cubic convergence for simple roots, beating the quadratic convergence displayed by Newton's method.
If the polynomial has rational coefficients and only rational roots are of interest, then Ruffini's method can also be used.
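The search for rational roots can be sketched via the rational root theorem: every rational root p/q (in lowest terms) of an integer-coefficient polynomial has p dividing the constant term and q dividing the leading coefficient. The following is a brute-force illustration of that idea (it assumes a nonzero constant term; Ruffini's method proper would also use synthetic division to deflate the polynomial after each root found):

```python
from fractions import Fraction

def divisors(n):
    """Positive divisors of |n|."""
    n = abs(n)
    return [d for d in range(1, n + 1) if n % d == 0]

def rational_roots(coeffs):
    """All rational roots of an integer-coefficient polynomial
    coeffs[0]*x^n + ... + coeffs[-1], via the rational root theorem."""
    def p(x):  # Horner evaluation in exact rational arithmetic
        result = Fraction(0)
        for c in coeffs:
            result = result * x + c
        return result
    roots = set()
    for num in divisors(coeffs[-1]):        # p divides the constant term
        for den in divisors(coeffs[0]):     # q divides the leading coefficient
            for cand in (Fraction(num, den), Fraction(-num, den)):
                if p(cand) == 0:
                    roots.add(cand)
    return sorted(roots)
```

For instance, 2x² − 3x + 1 yields the candidates ±1 and ±1/2, of which 1/2 and 1 are actual roots; exact `Fraction` arithmetic avoids any floating-point rounding in the check.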
In any case, it should be borne in mind that the problem of finding roots can be ill-conditioned, as the example of Wilkinson's polynomial shows.
External links
- Numerical Recipes Homepage (http://www.nr.com)