Newton polynomial
In the mathematical subfield of numerical analysis, a Newton polynomial, named after its inventor Isaac Newton, is the interpolation polynomial for a given set of data points in the Newton form. The Newton polynomial is sometimes called Newton's divided differences interpolation polynomial because the coefficients of the polynomial are calculated using divided differences.
As there is only one interpolation polynomial for a given set of data points, it is somewhat misleading to call this polynomial the Newton interpolation polynomial. The more precise name is interpolation polynomial in the Newton form.
Definition
Given a set of k+1 data points
- <math>(x_0, y_0),\ldots,(x_k, y_k)</math>
where no two x_j are the same, the interpolation polynomial in the Newton form is a linear combination of Newton basis polynomials
- <math>N(x) := \sum_{j=0}^{k} a_{j} n_{j}(x)</math>
with the Newton basis polynomials defined as
- <math>n_j(x) := \prod_{i=0}^{j-1} (x - x_i)</math>
and the coefficients defined as
- <math>a_j := [y_0,\ldots,y_j]</math>
where
- <math>[y_0,\ldots,y_j]</math>
is the notation for divided differences.
Thus the Newton polynomial can be written as
- <math>N(x) := [y_0] + [y_0,y_1](x-x_0) + \ldots + [y_0,\ldots,y_k](x-x_0)\ldots(x-x_{k-1})</math>
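For illustration, the following Python sketch computes the coefficients a_j by the divided-difference recurrence and evaluates N(x) by nested multiplication. The function names divided_differences and newton_eval are ours and only meant as an example, not part of any standard library.

def divided_differences(xs, ys):
    # Returns [a_0, ..., a_k] with a_j = [y_0, ..., y_j].
    # After pass j, coef[i] holds [y_{i-j}, ..., y_i] for i >= j;
    # entries with index i < j are already final and stay untouched.
    coef = list(ys)
    n = len(xs)
    for j in range(1, n):
        for i in range(n - 1, j - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - j])
    return coef

def newton_eval(xs, coef, x):
    # Evaluates N(x) = a_0 + a_1 (x - x_0) + ... by nested multiplication
    # (a Horner-like scheme in the Newton basis).
    result = coef[-1]
    for j in range(len(coef) - 2, -1, -1):
        result = result * (x - xs[j]) + coef[j]
    return result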
Main idea
Solving an interpolation problem leads to a problem in linear algebra where we have to solve a system of linear equations. Using the standard monomial basis for our interpolation polynomial, we get the rather complicated Vandermonde matrix. By choosing another basis, the Newton basis, we get a much simpler lower triangular matrix, which can be solved faster.
For k+1 data points we construct the Newton basis as
- <math>n_j(x) := \prod_{i=0}^{j-1} (x - x_i) \qquad j=0,\ldots,k</math>
Using these polynomials as a basis for <math>\Pi_k</math> we have to solve
- <math>
\begin{bmatrix}
1 & & & & 0 \\
1 & x_1-x_0 & & & \\
1 & x_2-x_0 & (x_2-x_0)(x_2-x_1) & & \\
\vdots & \vdots & & \ddots & \\
1 & x_k-x_0 & \ldots & \ldots & \prod_{j=0}^{k-1}(x_k - x_j)
\end{bmatrix} \begin{bmatrix}
a_0 \\ \vdots \\ a_{k}
\end{bmatrix} = \begin{bmatrix}
y_0 \\ \vdots \\ y_{k}
\end{bmatrix} </math>
for the coefficients <math>a_0,\ldots,a_k</math> in order to solve the polynomial interpolation problem.
This system can be solved recursively, from the first row to the last, by solving
- <math> \sum_{i=0}^{j} a_{i} n_{i}(x_j) = y_j \qquad j = 0,\ldots,k</math>
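Written out, this is forward substitution: row j determines a_j from the previously computed coefficients a_0, ..., a_{j-1}. A minimal Python sketch of this (the function name solve_newton_system is ours, chosen for illustration):

def solve_newton_system(xs, ys):
    # Forward substitution on the lower triangular Newton-basis matrix:
    # row j reads sum_{i<=j} a_i n_i(x_j) = y_j with n_i(x) = (x-x_0)...(x-x_{i-1}).
    a = []
    for j, (xj, yj) in enumerate(zip(xs, ys)):
        s, basis = 0.0, 1.0
        for i in range(j):
            s += a[i] * basis        # basis equals n_i(x_j) at this point
            basis *= xj - xs[i]      # advance to n_{i+1}(x_j)
        a.append((yj - s) / basis)   # basis now equals n_j(x_j)
    return a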
Application
As can be seen from the definition of the divided differences, new data points can be added to the data set to create a new interpolation polynomial without recalculating the old coefficients, and when a data point changes we usually do not have to recalculate all coefficients. Furthermore, if the x_i are distributed equidistantly, the calculation of the divided differences becomes significantly easier. Therefore the Newton form of the interpolation polynomial is usually preferred over the Lagrange form for practical purposes.
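The following Python sketch illustrates the incremental update. It keeps the bottom diagonal of the divided-difference table, so adding a data point costs only one new coefficient; the class name NewtonInterpolator and its methods are ours, not part of any standard library.

class NewtonInterpolator:
    def __init__(self):
        self.xs = []      # nodes x_0, ..., x_k
        self.coeffs = []  # coefficients a_j = [y_0, ..., y_j]
        self.diag = []    # bottom diagonal [y_k], [y_{k-1}, y_k], ..., [y_0, ..., y_k]

    def add_point(self, x, y):
        # Only the divided differences involving the new node are computed;
        # the old coefficients are left untouched.
        new_diag = [y]
        for j, prev in enumerate(self.diag):
            new_diag.append((new_diag[-1] - prev) / (x - self.xs[-1 - j]))
        self.xs.append(x)
        self.diag = new_diag
        self.coeffs.append(new_diag[-1])

    def __call__(self, x):
        # Nested evaluation of the Newton form.
        result = self.coeffs[-1]
        for a, node in zip(self.coeffs[-2::-1], self.xs[-2::-1]):
            result = result * (x - node) + a
        return result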
Example
The divided differences can be written in the form of a table. For example, suppose a function <math>f</math> is to be interpolated at the points <math>x_0, \ldots, x_n</math>. Write
- <math>\begin{matrix}
x_0 & f(x_0) & & \\
& & {f(x_1)-f(x_0)\over x_1 - x_0} & \\
x_1 & f(x_1) & & {{f(x_2)-f(x_1)\over x_2 - x_1}-{f(x_1)-f(x_0)\over x_1 - x_0} \over x_2 - x_0} \\
& & {f(x_2)-f(x_1)\over x_2 - x_1} & \\
x_2 & f(x_2) & & \vdots \\
& & \vdots & \\
\vdots & & & \vdots \\
& & \vdots & \\
x_n & f(x_n) & & \\
\end{matrix}</math>
The interpolating polynomial is formed as above, using the topmost entry in each column as a coefficient.
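A short Python sketch of the same construction (the name divided_difference_table is ours, for illustration only): column j of the returned table holds the j-th order divided differences, and its topmost entry is the coefficient [y_0, ..., y_j].

def divided_difference_table(xs, ys):
    # table[0] is the column of function values; each further column is
    # built from its predecessor by the divided-difference recurrence.
    table = [list(ys)]
    for j in range(1, len(xs)):
        prev = table[-1]
        table.append([(prev[i + 1] - prev[i]) / (xs[i + j] - xs[i])
                      for i in range(len(prev) - 1)])
    return table

# The Newton coefficients are the topmost entries of the columns:
# coeffs = [column[0] for column in divided_difference_table(xs, ys)]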
For example, to construct the interpolating polynomial for <math>f(x)=\tan{x}</math> using divided differences at the points
<math>x_0=-1.5</math> | <math>x_1=-0.75</math> | <math>x_2=0</math> | <math>x_3=0.75</math> | <math>x_4=1.5</math>
<math>f(x_0)=-14.1014</math> | <math>f(x_1)=-0.931596</math> | <math>f(x_2)=0</math> | <math>f(x_3)=0.931596</math> | <math>f(x_4)=14.1014</math>
we construct the table
- <math>\begin{matrix}
-1.5 & -14.1014 & & & &\\
& & 17.5597 & & &\\
-0.75 & -0.931596 & & -10.8784 & &\\
& & 1.24213 & & 4.83484 & \\
0 & 0 & & 0 & & 0\\
& & 1.24213 & & -4.83484 &\\
0.75 & 0.931596 & & 10.8784 & &\\
& & 17.5597 & & &\\
1.5 & 14.1014 & & & &\\ \end{matrix}</math>
Thus, the interpolating polynomial is
- <math>-14.1014+17.5597(x+1.5)-10.8784(x+1.5)(x+0.75)+4.83484(x+1.5)(x+0.75)(x)+0(x+1.5)(x+0.75)(x)(x-0.75)</math>
- <math>=-0.00005-1.4775x-0.00001x^2+4.83484x^3</math>
Neglecting minor numerical stability problems in evaluating the entries of the table, this polynomial is essentially the same as that obtained with the same function and data points in the Lagrange polynomial article.
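As a quick check, the sketch from the Definition section above can be applied to the same data; by the interpolation property, the Newton polynomial reproduces tan(x_i) at the five nodes up to floating-point rounding.

import math

xs = [-1.5, -0.75, 0.0, 0.75, 1.5]
ys = [math.tan(x) for x in xs]

coef = divided_differences(xs, ys)   # from the sketch in the Definition section
for xi, yi in zip(xs, ys):
    assert abs(newton_eval(xs, coef, xi) - yi) < 1e-9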
See also
- Neville's algorithm
- Polynomial interpolation
- Lagrange form of the interpolation polynomial
- Bernstein form of the interpolation polynomial