This image shows, for four points ((−9, 5), (−4, 2), (−1, −2), (7, 9)), the (cubic) interpolation polynomial L(x) (dashed, black), which is the sum of the scaled basis polynomials $y_0\ell_0(x)$, $y_1\ell_1(x)$, $y_2\ell_2(x)$ and $y_3\ell_3(x)$. The interpolation polynomial passes through all four control points, and each scaled basis polynomial passes through its respective control point and is 0 where x corresponds to the other three control points.

In numerical analysis, Lagrange polynomials are used for polynomial interpolation. For a given set of points $(x_j, y_j)$ with no two $x_j$ values equal, the Lagrange polynomial is the polynomial of lowest degree that assumes at each value $x_j$ the corresponding value $y_j$ (i.e. the functions coincide at each point). Although the interpolating polynomial of least degree is unique, it can be arrived at through multiple methods, so referring to "the Lagrange polynomial" is perhaps less correct than referring to "the Lagrange form" of that unique polynomial.

Although named after Joseph-Louis Lagrange, who published it in 1795, the method was first discovered in 1779 by Edward Waring.[1] It is also an easy consequence of a formula published in 1783 by Leonhard Euler.[2]

Uses of Lagrange polynomials include the Newton–Cotes method of numerical integration and Shamir's secret sharing scheme in cryptography.

Lagrange interpolation is susceptible to Runge's phenomenon of large oscillation. As changing the points requires recalculating the entire interpolant, it is often easier to use Newton polynomials instead.


Definition

Here we plot the Lagrange basis functions of 1st, 2nd, and 3rd order on a bi-unit domain. Linear combinations of Lagrange basis functions are used to construct Lagrange interpolating polynomials. Lagrange basis functions are commonly used in finite element analysis as the bases for the element shape-functions. Furthermore, it is common to use a bi-unit domain as the natural space for the finite-element's definition.

Given a set of k + 1 data points

$$(x_0, y_0), \ldots, (x_j, y_j), \ldots, (x_k, y_k)$$

where no two $x_j$ are the same, the interpolation polynomial in the Lagrange form is a linear combination

$$L(x) = \sum_{j=0}^{k} y_j \ell_j(x)$$

of Lagrange basis polynomials

$$\ell_j(x) = \prod_{\substack{0 \le m \le k \\ m \ne j}} \frac{x - x_m}{x_j - x_m} = \frac{x - x_0}{x_j - x_0} \cdots \frac{x - x_{j-1}}{x_j - x_{j-1}} \cdot \frac{x - x_{j+1}}{x_j - x_{j+1}} \cdots \frac{x - x_k}{x_j - x_k},$$

where $0 \le j \le k$. Note how, given the initial assumption that no two $x_j$ are the same, $x_j - x_m \ne 0$, so this expression is always well-defined. The reason pairs $x_i = x_j$ with $y_i \ne y_j$ are not allowed is that no interpolation function $L$ such that $L(x_i) = y_i$ would exist; a function can only get one value for each argument $x_i$. On the other hand, if also $y_i = y_j$, then those two points would actually be one single point.

For all $i \ne j$, $\ell_j(x)$ includes the term $(x - x_i)$ in the numerator, so the whole product will be zero at $x = x_i$:

$$\ell_{j \ne i}(x_i) = \prod_{m \ne j} \frac{x_i - x_m}{x_j - x_m} = \frac{x_i - x_0}{x_j - x_0} \cdots \frac{x_i - x_i}{x_j - x_i} \cdots \frac{x_i - x_k}{x_j - x_k} = 0.$$

On the other hand,

$$\ell_i(x_i) = \prod_{m \ne i} \frac{x_i - x_m}{x_i - x_m} = 1.$$

In other words, all basis polynomials are zero at $x = x_i$, except $\ell_i(x)$, for which it holds that $\ell_i(x_i) = 1$, because it lacks the $(x - x_i)$ term.

It follows that $y_i \ell_i(x_i) = y_i$, so at each point $x_i$, $L(x_i) = y_i + 0 + 0 + \cdots + 0 = y_i$, showing that $L$ interpolates the function exactly.
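The definition translates directly into code. The following minimal Python sketch (function names are illustrative, not from any particular library) evaluates the basis polynomials and their linear combination in exact rational arithmetic, using the four control points from the figure above:

```python
from fractions import Fraction

def lagrange_basis(xs, j, x):
    """Evaluate the basis polynomial l_j at x (exact rational arithmetic)."""
    out = Fraction(1)
    for m, xm in enumerate(xs):
        if m != j:
            out *= Fraction(x - xm, xs[j] - xm)
    return out

def lagrange_interpolate(xs, ys, x):
    """L(x) = sum_j y_j * l_j(x)."""
    return sum(ys[j] * lagrange_basis(xs, j, x) for j in range(len(xs)))

# The four control points from the figure above:
xs = [-9, -4, -1, 7]
ys = [5, 2, -2, 9]
```

Evaluating `lagrange_basis(xs, j, x)` at the nodes reproduces the delta property: it is 1 at its own node and 0 at the other three, so `lagrange_interpolate` passes through every data point.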

Proof

The function L(x) being sought is a polynomial in $x$ of the least degree that interpolates the given data set; that is, it assumes the value $y_j$ at the corresponding $x_j$ for all data points $j$:

$$L(x_j) = y_j, \qquad j = 0, \ldots, k.$$

Observe that:

  1. In $\ell_j(x)$ there are k factors in the product and each factor contains one $x$, so L(x) (which is a sum of these degree-k polynomials) must be a polynomial of degree at most k.
  2. $$\ell_j(x_i) = \prod_{\substack{0 \le m \le k \\ m \ne j}} \frac{x_i - x_m}{x_j - x_m}$$

We consider what happens when this product is expanded. Because the product skips $m = j$, if $i = j$ then all factors are $\frac{x_j - x_m}{x_j - x_m} = 1$ (except where $x_j = x_m$, but that case is impossible, as pointed out in the definition section: since no two nodes are the same, $x_j \ne x_m$ whenever $m \ne j$). Also, if $i \ne j$ then, since $m \ne j$ does not preclude $m = i$, one factor in the product will be $\frac{x_i - x_i}{x_j - x_i} = 0$, zeroing the entire product. So

$$\ell_j(x_i) = \delta_{ji} = \begin{cases} 1, & \text{if } j = i \\ 0, & \text{if } j \ne i \end{cases}$$

where $\delta_{ji}$ is the Kronecker delta. So:

$$L(x_i) = \sum_{j=0}^{k} y_j \ell_j(x_i) = \sum_{j=0}^{k} y_j \delta_{ji} = y_i.$$

Thus the function L(x) is a polynomial of degree at most k with $L(x_i) = y_i$ at every data point.

Additionally, the interpolating polynomial is unique, as shown by the unisolvence theorem at the polynomial interpolation article.

A perspective from linear algebra

Solving an interpolation problem leads to a problem in linear algebra amounting to inversion of a matrix. Using a standard monomial basis for our interpolation polynomial $L(x) = \sum_{j=0}^{k} x^j m_j$, we must invert the Vandermonde matrix $(x_i)^j$ to solve $L(x_i) = y_i$ for the coefficients $m_j$ of $L(x)$. By choosing a better basis, the Lagrange basis, $L(x) = \sum_{j=0}^{k} \ell_j(x) y_j$, we merely get the identity matrix, $\delta_{ij}$, which is its own inverse: the Lagrange basis automatically inverts the analog of the Vandermonde matrix.
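As a concrete illustration (a sketch with hypothetical helper names, not a library routine), the code below solves the Vandermonde system by exact Gauss–Jordan elimination, then checks that the analogous matrix $\ell_j(x_i)$ in the Lagrange basis is already the identity:

```python
from fractions import Fraction

def solve(A, b):
    """Solve A m = b by Gauss-Jordan elimination in exact rational arithmetic."""
    n = len(A)
    M = [[Fraction(A[i][j]) for j in range(n)] + [Fraction(b[i])] for i in range(n)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]          # partial pivot
        M[col] = [v / M[col][col] for v in M[col]]
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [row[-1] for row in M]

def basis(xs, j, x):
    """The Lagrange basis polynomial l_j evaluated at x."""
    out = Fraction(1)
    for m, xm in enumerate(xs):
        if m != j:
            out *= Fraction(x - xm, xs[j] - xm)
    return out

xs, ys = [1, 2, 3], [1, 4, 9]                       # samples of f(x) = x^2
V = [[x ** j for j in range(len(xs))] for x in xs]  # Vandermonde matrix x_i^j
coeffs = solve(V, ys)                               # monomial coefficients m_j
# In the Lagrange basis the analogous matrix l_j(x_i) is the identity,
# so the "coefficients" are simply the data values y_j:
B = [[basis(xs, j, xi) for j in range(len(xs))] for xi in xs]
```

The monomial route requires the linear solve (recovering the coefficients of $x^2$ here), while in the Lagrange basis no solve is needed at all.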

This construction is analogous to the Chinese remainder theorem. Instead of checking for remainders of integers modulo prime numbers, we are checking for remainders of polynomials when divided by the linear factors $(x - x_i)$.


Furthermore, when the order is large, the fast Fourier transform can be used to solve for the coefficients of the interpolated polynomial.

Examples

Example 1

We wish to interpolate $f(x) = x^2$ over the range $1 \le x \le 3$, given these three points:

$$x_0 = 1, \quad f(x_0) = 1$$
$$x_1 = 2, \quad f(x_1) = 4$$
$$x_2 = 3, \quad f(x_2) = 9.$$

The interpolating polynomial is:

$$L(x) = 1 \cdot \frac{(x-2)(x-3)}{(1-2)(1-3)} + 4 \cdot \frac{(x-1)(x-3)}{(2-1)(2-3)} + 9 \cdot \frac{(x-1)(x-2)}{(3-1)(3-2)} = x^2.$$

Example 2

We wish to interpolate $f(x) = x^3$ over the range $1 \le x \le 3$, given these three points:

$$x_0 = 1, \quad f(x_0) = 1$$
$$x_1 = 2, \quad f(x_1) = 8$$
$$x_2 = 3, \quad f(x_2) = 27.$$

The interpolating polynomial is:

$$L(x) = 1 \cdot \frac{(x-2)(x-3)}{(1-2)(1-3)} + 8 \cdot \frac{(x-1)(x-3)}{(2-1)(2-3)} + 27 \cdot \frac{(x-1)(x-2)}{(3-1)(3-2)} = 6x^2 - 11x + 6.$$
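The second example can be checked numerically. The sketch below (illustrative code, taking the nodes to be x = 1, 2, 3) confirms in exact arithmetic that the interpolant of $x^3$ through three points is the quadratic $6x^2 - 11x + 6$: it matches $x^3$ at the nodes but not in between.

```python
from fractions import Fraction

def lagrange_eval(pts, x):
    """Evaluate the Lagrange interpolant of pts = [(x_j, y_j), ...] at x."""
    total = Fraction(0)
    for j, (xj, yj) in enumerate(pts):
        term = Fraction(yj)
        for m, (xm, _) in enumerate(pts):
            if m != j:
                term *= Fraction(x - xm, xj - xm)
        total += term
    return total

pts = [(1, 1), (2, 8), (3, 27)]              # samples of f(x) = x^3
quad = lambda x: 6 * x * x - 11 * x + 6      # the claimed interpolant
```

Between the nodes, say at $x = 5/2$, the interpolant gives 16 (the quadratic's value), whereas $x^3 = 125/8$: a degree-2 polynomial cannot reproduce a cubic everywhere.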

Notes

Example of interpolation divergence for a set of Lagrange polynomials.

The Lagrange form of the interpolation polynomial shows the linear character of polynomial interpolation and the uniqueness of the interpolation polynomial. Therefore, it is preferred in proofs and theoretical arguments. Uniqueness can also be seen from the invertibility of the Vandermonde matrix, due to the non-vanishing of the Vandermonde determinant.

But, as can be seen from the construction, each time a node xk changes, all Lagrange basis polynomials have to be recalculated. A better form of the interpolation polynomial for practical (or computational) purposes is the barycentric form of the Lagrange interpolation (see below) or Newton polynomials.

Lagrange and other interpolation at equally spaced points, as in the example above, yield a polynomial oscillating above and below the true function. This behaviour tends to grow with the number of points, leading to a divergence known as Runge's phenomenon; the problem may be eliminated by choosing interpolation points at Chebyshev nodes.[3]
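The effect is easy to reproduce. The illustrative sketch below interpolates Runge's function $1/(1 + 25x^2)$ on $[-1, 1]$ at 13 equispaced nodes and at 13 Chebyshev nodes, and compares the worst-case errors on a fine grid:

```python
import math

def interp(xs, ys, x):
    """Evaluate the Lagrange interpolant through (xs, ys) at x."""
    total = 0.0
    for j, xj in enumerate(xs):
        term = ys[j]
        for m, xm in enumerate(xs):
            if m != j:
                term *= (x - xm) / (xj - xm)
        total += term
    return total

def runge(x):
    return 1.0 / (1.0 + 25.0 * x * x)

n = 12
equi = [-1.0 + 2.0 * i / n for i in range(n + 1)]                  # equispaced
cheb = [math.cos((2 * i + 1) * math.pi / (2 * n + 2)) for i in range(n + 1)]
grid = [-1.0 + 2.0 * i / 400 for i in range(401)]

def max_err(nodes):
    ys = [runge(x) for x in nodes]
    return max(abs(interp(nodes, ys, x) - runge(x)) for x in grid)
```

With equispaced nodes the interpolant oscillates wildly near the endpoints (maximum error greater than 1), while the Chebyshev-node interpolant stays close to the function.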

The Lagrange basis polynomials can be used in numerical integration to derive the Newton–Cotes formulas.
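To illustrate (a sketch, not a library routine): integrating each basis polynomial $\ell_j$ exactly over the interval spanned by the equispaced nodes $0, 1, \ldots, n$ yields the Newton–Cotes weights, e.g. the trapezoidal rule for $n = 1$ and Simpson's rule for $n = 2$.

```python
from fractions import Fraction

def poly_mul(p, q):
    """Multiply two polynomials given as coefficient lists (lowest degree first)."""
    r = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def newton_cotes_weights(n):
    """w_j = integral of the basis l_j over [0, n], for the nodes 0, 1, ..., n."""
    xs = range(n + 1)
    weights = []
    for j in xs:
        p = [Fraction(1)]
        for m in xs:
            if m != j:
                # multiply by the factor (x - m) / (j - m)
                p = poly_mul(p, [Fraction(-m, j - m), Fraction(1, j - m)])
        # integrate the coefficient list term by term over [0, n]
        weights.append(sum(c * Fraction(n) ** (i + 1) / (i + 1)
                           for i, c in enumerate(p)))
    return weights
```

For $n = 2$ this returns the weights $[\tfrac13, \tfrac43, \tfrac13]$, i.e. Simpson's rule $\int_0^2 f \approx \tfrac13(f_0 + 4f_1 + f_2)$ with unit spacing.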

Barycentric form

Using

$$\ell(x) = (x - x_0)(x - x_1) \cdots (x - x_k)$$
$$\ell'(x_j) = \frac{\mathrm{d}\ell(x)}{\mathrm{d}x}\bigg|_{x = x_j} = \prod_{\substack{0 \le m \le k \\ m \ne j}} (x_j - x_m)$$

we can rewrite the Lagrange basis polynomials as

$$\ell_j(x) = \frac{\ell(x)}{\ell'(x_j)(x - x_j)}$$

or, by defining the barycentric weights[4]

$$w_j = \frac{1}{\prod_{\substack{0 \le m \le k \\ m \ne j}} (x_j - x_m)} = \frac{1}{\ell'(x_j)}$$

we can simply write

$$\ell_j(x) = \ell(x) \frac{w_j}{x - x_j},$$

which is commonly referred to as the first form of the barycentric interpolation formula.

The advantage of this representation is that the interpolation polynomial may now be evaluated as

$$L(x) = \ell(x) \sum_{j=0}^{k} \frac{w_j}{x - x_j} y_j,$$

which, if the weights $w_j$ have been pre-computed, requires only $\mathcal{O}(k)$ operations (evaluating $\ell(x)$ and the weights $w_j/(x - x_j)$) as opposed to $\mathcal{O}(k^2)$ for evaluating the Lagrange basis polynomials $\ell_j(x)$ individually.

The barycentric interpolation formula can also easily be updated to incorporate a new node $x_{k+1}$ by dividing each of the $w_j$, $j = 0, \ldots, k$, by $(x_j - x_{k+1})$ and constructing the new $w_{k+1}$ as above.

We can further simplify the first form by first considering the barycentric interpolation of the constant function $g(x) \equiv 1$:

$$g(x) = \ell(x) \sum_{j=0}^{k} \frac{w_j}{x - x_j}.$$

Dividing $L(x)$ by $g(x)$ does not modify the interpolation, yet yields

$$L(x) = \frac{\displaystyle\sum_{j=0}^{k} \frac{w_j}{x - x_j} y_j}{\displaystyle\sum_{j=0}^{k} \frac{w_j}{x - x_j}},$$

which is referred to as the second form or true form of the barycentric interpolation formula. This second form has the advantage that $\ell(x)$ need not be evaluated for each evaluation of $L(x)$.
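In code, the second form looks roughly like this (an illustrative sketch; production implementations such as the one described by Berrut and Trefethen[4] add safeguards for numerical edge cases):

```python
def bary_weights(xs):
    """Barycentric weights w_j = 1 / prod_{m != j} (x_j - x_m)."""
    ws = []
    for j, xj in enumerate(xs):
        w = 1.0
        for m, xm in enumerate(xs):
            if m != j:
                w /= xj - xm
        ws.append(w)
    return ws

def bary_eval(xs, ys, ws, x):
    """Second (true) form of the barycentric formula: O(k) per evaluation."""
    num = den = 0.0
    for xj, yj, wj in zip(xs, ys, ws):
        if x == xj:              # x hits a node exactly: return the data value
            return yj
        t = wj / (x - xj)
        num += t * yj
        den += t
    return num / den
```

The weights cost $\mathcal{O}(k^2)$ once; every subsequent evaluation is $\mathcal{O}(k)$, and since a cubic through four nodes is reproduced exactly, the formula recovers, e.g., $x^3 - 2x + 1$ everywhere from four of its samples.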

Remainder in Lagrange interpolation formula

When interpolating a given function f by a polynomial of degree k at the nodes $x_0, \ldots, x_k$ we get the remainder $R(x) = f(x) - L(x)$ which can be expressed as[5]

$$R(x) = f[x_0, \ldots, x_k, x] \, \ell(x) = \ell(x) \frac{f^{(k+1)}(\xi)}{(k+1)!}, \qquad x_0 < \xi < x_k,$$

where $f[x_0, \ldots, x_k, x]$ is the notation for divided differences. Alternatively, the remainder can be expressed as a contour integral in the complex domain as

$$R(x) = \frac{\ell(x)}{2\pi i} \oint_C \frac{f(t)}{(t - x_0)(t - x_1) \cdots (t - x_k)(t - x)} \, \mathrm{d}t = \frac{\ell(x)}{2\pi i} \oint_C \frac{f(t)}{(t - x)\,\ell(t)} \, \mathrm{d}t.$$

The remainder can be bounded as

$$|R(x)| \le \frac{(x_k - x_0)^{k+1}}{(k+1)!} \max_{x_0 \le \xi \le x_k} \left| f^{(k+1)}(\xi) \right|.$$
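As a numerical sanity check (an illustrative sketch): interpolating $\sin x$ at five nodes on $[0, 2]$ and using $|\sin^{(5)}(\xi)| \le 1$ gives the bound $2^5/5! \approx 0.267$, and the observed error stays well under it.

```python
import math

def interp(xs, ys, x):
    """Evaluate the Lagrange interpolant through (xs, ys) at x."""
    total = 0.0
    for j, xj in enumerate(xs):
        term = ys[j]
        for m, xm in enumerate(xs):
            if m != j:
                term *= (x - xm) / (xj - xm)
        total += term
    return total

xs = [0.0, 0.5, 1.0, 1.5, 2.0]          # k = 4, so k + 1 = 5 nodes
ys = [math.sin(x) for x in xs]
# Bound: (x_k - x_0)^(k+1) / (k+1)! * max |f^(k+1)|, with |sin^(5)| <= 1
bound = (xs[-1] - xs[0]) ** 5 / math.factorial(5)
grid = [2.0 * i / 200 for i in range(201)]
worst = max(abs(interp(xs, ys, x) - math.sin(x)) for x in grid)
```

The bound is pessimistic here: the actual maximum error is orders of magnitude smaller, since $|\ell(x)|$ is much smaller than $(x_k - x_0)^{k+1}$ on most of the interval.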

Derivation[6]

Clearly, $R(x)$ is zero at the nodes. To find $R(x)$ at a point $x_p$ that is not a node, define a new function $F(x) = R(x) - \tilde{R}\,W(x) = f(x) - L(x) - \tilde{R}\,W(x)$ and choose $W(x) = \prod_{i=0}^{k}(x - x_i)$ (this ensures $F(x) = 0$ at the nodes), where $\tilde{R}$ is the constant we are required to determine for a given $x_p$; choosing $\tilde{R} = \frac{f(x_p) - L(x_p)}{W(x_p)}$ makes $F(x_p) = 0$ as well. Now $F(x)$ has $k + 2$ zeros (at all the nodes and at $x_p$) between $x_0$ and $x_k$ (including endpoints). Assume that $f(x)$ is $(k+1)$-times differentiable; $L(x)$ and $W(x)$ are polynomials, hence infinitely differentiable. By Rolle's theorem, $F^{(1)}(x)$ has $k + 1$ zeros, $F^{(2)}(x)$ has $k$ zeros, ..., and $F^{(k+1)}(x)$ has 1 zero, say $\xi$. Explicitly writing $F^{(k+1)}(\xi)$:

$$F^{(k+1)}(\xi) = f^{(k+1)}(\xi) - L^{(k+1)}(\xi) - \tilde{R}\,W^{(k+1)}(\xi)$$

we have:

  $L^{(k+1)}(\xi) = 0$ and $W^{(k+1)}(\xi) = (k+1)!$ (because the highest power of $x$ in $W(x)$ is $x^{k+1}$, while $\deg L \le k$)

We get:

$$0 = F^{(k+1)}(\xi) = f^{(k+1)}(\xi) - \tilde{R}\,(k+1)!$$

Rearranging:

$$R(x_p) = f(x_p) - L(x_p) = \tilde{R}\,W(x_p) = \frac{f^{(k+1)}(\xi)}{(k+1)!} \prod_{i=0}^{k} (x_p - x_i).$$

Derivatives

The $d$th derivatives of the Lagrange polynomial can be written as

$$L^{(d)}(x) = \sum_{j=0}^{k} y_j \, \ell_j^{(d)}(x).$$

For the first derivative, the coefficients are given by

$$\ell_j^{(1)}(x) = \sum_{\substack{i=0 \\ i \ne j}}^{k} \frac{1}{x_j - x_i} \prod_{\substack{m=0 \\ m \ne (i,j)}}^{k} \frac{x - x_m}{x_j - x_m}$$

and for the second derivative

$$\ell_j^{(2)}(x) = \sum_{\substack{i=0 \\ i \ne j}}^{k} \frac{1}{x_j - x_i} \left[ \sum_{\substack{m=0 \\ m \ne (i,j)}}^{k} \frac{1}{x_j - x_m} \prod_{\substack{l=0 \\ l \ne (i,j,m)}}^{k} \frac{x - x_l}{x_j - x_l} \right].$$

Through recursion, one can compute formulas for higher derivatives.
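A direct transcription of the first-derivative formula (a sketch; the function names are illustrative) can be checked against a case where the interpolant is exact, e.g. a cubic through four nodes:

```python
def basis_deriv(xs, j, x):
    """l_j'(x) via the sum-over-i form of the first-derivative formula."""
    total = 0.0
    for i, xi in enumerate(xs):
        if i == j:
            continue
        term = 1.0 / (xs[j] - xi)
        for m, xm in enumerate(xs):
            if m not in (i, j):
                term *= (x - xm) / (xs[j] - xm)
        total += term
    return total

def lagrange_deriv(xs, ys, x):
    """L'(x) = sum_j y_j * l_j'(x)."""
    return sum(ys[j] * basis_deriv(xs, j, x) for j in range(len(xs)))
```

Since four nodes interpolate a cubic exactly, the derivative of the interpolant of $f(x) = x^3$ must agree with $f'(x) = 3x^2$ everywhere, which gives a simple correctness check.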

Finite fieldsEdit

The Lagrange polynomial can also be computed in finite fields. This has applications in cryptography, such as in Shamir's Secret Sharing scheme.
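For instance (an illustrative sketch, not a hardened implementation): in Shamir's scheme the secret is the constant term $f(0)$ of a polynomial over $\mathrm{GF}(p)$, and any $k + 1$ shares $(x_j, f(x_j))$ recover it by Lagrange interpolation at $x = 0$. The prime and coefficients below are arbitrary choices for the demonstration; `pow(d, -1, p)` (Python 3.8+) computes the modular inverse.

```python
P = 2_147_483_647   # the Mersenne prime 2^31 - 1; all arithmetic is mod P

def lagrange_at_zero(points, p=P):
    """Evaluate the interpolant of points [(x_j, y_j), ...] at x = 0, mod p."""
    acc = 0
    for j, (xj, yj) in enumerate(points):
        num, den = 1, 1
        for m, (xm, _) in enumerate(points):
            if m != j:
                num = num * (-xm) % p        # numerator of l_j(0)
                den = den * (xj - xm) % p    # denominator of l_j(0)
        acc = (acc + yj * num * pow(den, -1, p)) % p
    return acc
```

With a degree-2 secret polynomial, any three shares reconstruct the secret, while two shares do not.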


References

  1. ^ Waring, Edward (9 January 1779). "Problems concerning interpolations". Philosophical Transactions of the Royal Society. 69: 59–67. doi:10.1098/rstl.1779.0008.
  2. ^ Meijering, Erik (2002). "A chronology of interpolation: from ancient astronomy to modern signal and image processing" (PDF). Proceedings of the IEEE. 90 (3): 319–342. doi:10.1109/5.993400.
  3. ^ Quarteroni, Alfio; Saleri, Fausto (2003). Scientific Computing with MATLAB. Texts in computational science and engineering. 2. Springer. p. 66. ISBN 978-3-540-44363-6.
  4. ^ Berrut, Jean-Paul; Trefethen, Lloyd N. (2004). "Barycentric Lagrange Interpolation" (PDF). SIAM Review. 46 (3): 501–517. doi:10.1137/S0036144502417715.
  5. ^ Abramowitz, Milton; Stegun, Irene Ann, eds. (1983) [June 1964]. "Chapter 25, eqn 25.2.3". Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables. Applied Mathematics Series. 55 (Ninth reprint with additional corrections of tenth original printing with corrections (December 1972); first ed.). Washington D.C.; New York: United States Department of Commerce, National Bureau of Standards; Dover Publications. p. 878. ISBN 978-0-486-61272-0. LCCN 64-60036. MR 0167642. LCCN 65-12253.
  6. ^ "Interpolation" (PDF).
