Euler–Maclaurin formula

In mathematics, the Euler–Maclaurin formula is a formula for the difference between an integral and a closely related sum. It can be used to approximate integrals by finite sums, or conversely to evaluate finite sums and infinite series using integrals and the machinery of calculus. For example, many asymptotic expansions are derived from the formula, and Faulhaber's formula for the sum of powers is an immediate consequence.

The formula was discovered independently by Leonhard Euler and Colin Maclaurin around 1735. Euler needed it to compute slowly converging infinite series, while Maclaurin used it to calculate integrals. It was later generalized to Darboux's formula.

The formula

If m and n are natural numbers and f(x) is a real or complex valued continuous function for real numbers x in the interval [m, n], then the integral

$$I = \int_m^n f(x)\,dx$$

can be approximated by the sum (or vice versa)

$$S = f(m+1) + \cdots + f(n-1) + f(n)$$

(see rectangle method). The Euler–Maclaurin formula provides expressions for the difference between the sum and the integral in terms of the higher derivatives f^{(k)}(x) evaluated at the endpoints of the interval, that is to say when x = m and x = n.

Explicitly, for p a positive integer and a function f(x) that is p times continuously differentiable on the interval [m, n], we have

$$S - I = \sum_{k=1}^{p} \frac{B_k}{k!} \left(f^{(k-1)}(n) - f^{(k-1)}(m)\right) + R_p$$

where B_k is the kth Bernoulli number (with B_1 = 1/2) and R_p is an error term which depends on n, m, p, and f and is usually small for suitable values of p.

The formula is often written with the subscript taking only even values, since the odd Bernoulli numbers are zero except for B_1. In this case we have[1][2]

$$\sum_{i=m}^{n} f(i) = \int_m^n f(x)\,dx + \frac{f(m) + f(n)}{2} + \sum_{k=1}^{\lfloor p/2 \rfloor} \frac{B_{2k}}{(2k)!} \left(f^{(2k-1)}(n) - f^{(2k-1)}(m)\right) + R_p$$

or alternatively

$$\sum_{i=m+1}^{n} f(i) = \int_m^n f(x)\,dx + \frac{f(n) - f(m)}{2} + \sum_{k=1}^{\lfloor p/2 \rfloor} \frac{B_{2k}}{(2k)!} \left(f^{(2k-1)}(n) - f^{(2k-1)}(m)\right) + R_p$$
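As an illustration, the even-index form can be checked numerically. The Python sketch below (an illustration, not part of the standard presentation; the choice of f and of endpoints is arbitrary) applies the correction terms up to B_6 to f(x) = 1/x², whose derivatives and integral have simple closed forms:

```python
import math

# Even-index Bernoulli numbers B_2, B_4, B_6 (convention-independent)
B = {2: 1/6, 4: -1/30, 6: 1/42}

def f(x):
    return 1.0 / x**2

def df(x, k):
    # k-th derivative of 1/x^2 is (-1)^k * (k+1)! / x^(k+2)
    return (-1)**k * math.factorial(k + 1) / x**(k + 2)

m, n = 10, 100
exact = sum(f(i) for i in range(m, n + 1))

approx = 1/m - 1/n                    # closed-form integral of 1/x^2 over [m, n]
approx += (f(m) + f(n)) / 2           # endpoint average
for k in (1, 2, 3):                   # correction terms up to B_6
    approx += B[2*k] / math.factorial(2*k) * (df(n, 2*k - 1) - df(m, 2*k - 1))

print(abs(exact - approx))            # discrepancy well below 1e-9
```

With corrections only up to B_6, the sum and the corrected integral already agree to roughly ten significant digits; the leftover discrepancy is of the size of the first omitted term.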

The remainder term

The remainder term arises because the integral is usually not exactly equal to the sum. The formula may be derived by applying repeated integration by parts to successive intervals [k, k + 1] for k = m, m + 1, …, n − 1. The boundary terms in these integrations lead to the main terms of the formula, and the leftover integrals form the remainder term.

The remainder term has an exact expression in terms of the periodized Bernoulli functions P_k(x). The Bernoulli polynomials may be defined recursively by B_0(x) = 1 and, for n ≥ 1,

$$B_n'(x) = n B_{n-1}(x), \qquad \int_0^1 B_n(x)\,dx = 0.$$

The periodized Bernoulli functions are defined as

$$P_n(x) = B_n\bigl(x - \lfloor x \rfloor\bigr),$$

where ⌊x⌋ denotes the largest integer less than or equal to x (so that x − ⌊x⌋ always lies in the interval [0, 1)).

With this notation, the remainder term R_p equals

$$R_p = (-1)^{p+1} \int_m^n f^{(p)}(x)\, \frac{P_p(x)}{p!}\,dx.$$

When k > 0, it can be shown that for 0 ≤ x ≤ 1,

$$\bigl|B_k(x)\bigr| \le \frac{2 \cdot k!}{(2\pi)^k}\,\zeta(k),$$

where ζ denotes the Riemann zeta function; one approach to prove this inequality is to obtain the Fourier series for the polynomials B_k(x). The bound is achieved for even k when x is zero. The term ζ(k) may be omitted for odd k, but the proof in this case is more complex (see Lehmer).[3] Using this inequality, the size of the remainder term can be estimated as

$$\bigl|R_p\bigr| \le \frac{2\,\zeta(p)}{(2\pi)^p} \int_m^n \bigl|f^{(p)}(x)\bigr|\,dx.$$

Low-order cases

The Bernoulli numbers from B_1 to B_7 are 1/2, 1/6, 0, −1/30, 0, 1/42, 0. Therefore the low-order cases of the Euler–Maclaurin formula are:

$$\sum_{i=m}^{n} f(i) - \int_m^n f(x)\,dx = \frac{f(m)+f(n)}{2} + \frac{1}{12}\bigl(f'(n) - f'(m)\bigr) - \frac{1}{720}\bigl(f'''(n) - f'''(m)\bigr) + \frac{1}{30240}\bigl(f^{(5)}(n) - f^{(5)}(m)\bigr) - \cdots$$
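Because the correction terms involve only derivatives at the endpoints, the low-order formula is exact whenever the relevant higher derivatives vanish. A minimal Python sketch (an illustration, using exact rational arithmetic) checks this for f(x) = x², where only the B_2 term is needed:

```python
from fractions import Fraction as F

n = 25
exact = sum(i * i for i in range(n + 1))          # direct sum of squares

# Euler-Maclaurin with f(x) = x^2: integral + endpoint average + B_2 term.
# All higher correction terms and the remainder vanish, so this is exact.
em = F(n**3, 3) + F(n**2, 2) + F(1, 6) / 2 * (2 * n - 0)

print(exact, em)          # both equal n(n+1)(2n+1)/6
```

The formula reproduces the classical closed form n(n+1)(2n+1)/6 exactly, with no error term at all.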

Applications

The Basel problem

The Basel problem is to determine the sum

$$1 + \frac{1}{4} + \frac{1}{9} + \frac{1}{16} + \frac{1}{25} + \cdots = \sum_{n=1}^{\infty} \frac{1}{n^2}.$$

Euler computed this sum to 20 decimal places with only a few terms of the Euler–Maclaurin formula in 1735. This probably convinced him that the sum equals π²/6, which he proved in the same year.[4]
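Euler's strategy can be reproduced in a few lines: sum the first few terms directly and correct the tail with the Euler–Maclaurin expansion. The Python sketch below (a modern illustration, not Euler's actual computation; the cutoff n = 10 is an arbitrary choice) keeps tail terms up to B_4:

```python
import math

n = 10                                        # only ten terms summed directly
partial = sum(1 / k**2 for k in range(1, n + 1))

# Euler-Maclaurin tail correction with terms up to B_4
estimate = partial + 1/n - 1/(2 * n**2) + 1/(6 * n**3) - 1/(30 * n**5)

print(estimate)            # ~1.6449340666, matching pi^2/6 to about 8 digits
print(math.pi**2 / 6)
```

Ten terms plus a handful of corrections already give about eight correct digits, which is the kind of acceleration that allowed Euler to reach 20 places by hand.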

Sums involving a polynomial

If f is a polynomial and p is big enough, then the remainder term vanishes. For instance, if f(x) = x³, we can choose p = 2 to obtain, after simplification,

$$\sum_{i=0}^{n} i^3 = \left(\frac{n(n+1)}{2}\right)^2.$$
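This identity, a case of Faulhaber's formula, is easy to verify directly (a trivial check, included only for concreteness):

```python
# sum of cubes equals the square of the n-th triangular number
for n in (1, 5, 100):
    lhs = sum(i**3 for i in range(n + 1))
    rhs = (n * (n + 1) // 2) ** 2
    print(n, lhs, rhs)    # lhs == rhs for every n
```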

Approximation of integrals

The formula provides a means of approximating a finite integral. Let a < b be the endpoints of the interval of integration. Fix N, the number of points to use in the approximation, and denote the corresponding step size by h = (b − a)/(N − 1). Set x_i = a + (i − 1)h, so that x_1 = a and x_N = b. Then:[5]

$$\int_a^b f(x)\,dx \sim h\left(\frac{f(x_1)}{2} + f(x_2) + \cdots + f(x_{N-1}) + \frac{f(x_N)}{2}\right) + \frac{h^2}{12}\bigl(f'(x_1) - f'(x_N)\bigr) - \frac{h^4}{720}\bigl(f'''(x_1) - f'''(x_N)\bigr) + \cdots$$

This may be viewed as an extension of the trapezoid rule by the inclusion of correction terms. Note that this asymptotic expansion is usually not convergent; there is some p, depending upon f and h, such that the terms past order p increase rapidly. Thus, the remainder term generally demands close attention.[5]
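A short Python sketch illustrates the gain from the first correction term; the integrand f(x) = e^{−x} on [0, 1] and the grid of 11 points are arbitrary choices for illustration:

```python
import math

def f(x):  return math.exp(-x)     # integrand
def fp(x): return -math.exp(-x)    # its first derivative

a, b, N = 0.0, 1.0, 11             # 11 points -> step size h = 0.1
h = (b - a) / (N - 1)
xs = [a + i * h for i in range(N)]

# plain trapezoid rule
trap = h * (f(xs[0]) / 2 + sum(f(x) for x in xs[1:-1]) + f(xs[-1]) / 2)
# Euler-Maclaurin h^2/12 correction using endpoint derivatives
corrected = trap + h**2 / 12 * (fp(a) - fp(b))

exact = 1 - math.exp(-1)
print(abs(trap - exact), abs(corrected - exact))
```

The single h²/12 term reduces the error from the trapezoid rule's O(h²) to O(h⁴), here by several orders of magnitude.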

The Euler–Maclaurin formula is also used for detailed error analysis in numerical quadrature. It explains the superior performance of the trapezoidal rule on smooth periodic functions and is used in certain extrapolation methods. Clenshaw–Curtis quadrature is essentially a change of variables to cast an arbitrary integral in terms of integrals of periodic functions where the Euler–Maclaurin approach is very accurate (in that particular case the Euler–Maclaurin formula takes the form of a discrete cosine transform). This technique is known as a periodizing transformation.

Asymptotic expansion of sums

In the context of computing asymptotic expansions of sums and series, usually the most useful form of the Euler–Maclaurin formula is

$$\sum_{n=a}^{b} f(n) \sim \int_a^b f(x)\,dx + \frac{f(a) + f(b)}{2} + \sum_{k=1}^{\infty} \frac{B_{2k}}{(2k)!} \left(f^{(2k-1)}(b) - f^{(2k-1)}(a)\right),$$

where a and b are integers.[6] Often the expansion remains valid even after taking the limits a → −∞ or b → +∞ or both. In many cases the integral on the right-hand side can be evaluated in closed form in terms of elementary functions even though the sum on the left-hand side cannot. Then all the terms in the asymptotic series can be expressed in terms of elementary functions. For example,

$$\sum_{n=0}^{\infty} \frac{1}{(z+n)^2} \sim \frac{1}{z} + \frac{1}{2z^2} + \sum_{k=1}^{\infty} \frac{B_{2k}}{z^{2k+1}}.$$

Here the left-hand side is equal to ψ₁(z), namely the first-order polygamma function defined by ψ₁(z) = d²/dz² ln Γ(z); the gamma function Γ(z) is equal to (z − 1)! if z is a positive integer. This results in an asymptotic expansion for ψ₁(z). That expansion, in turn, serves as the starting point for one of the derivations of precise error estimates for Stirling's approximation of the factorial function.
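The connection to Stirling's approximation can be made concrete: applying the Euler–Maclaurin formula to the sum of ln k yields Stirling's series, whose first correction term is 1/(12n). The sketch below (an illustration; n = 20 is an arbitrary choice) compares it to the exact value of ln n!:

```python
import math

n = 20
exact = math.lgamma(n + 1)       # ln(n!) computed directly

# Stirling's series: Euler-Maclaurin applied to the sum of ln k,
# keeping a single 1/(12n) correction term
stirling = ((n + 0.5) * math.log(n) - n
            + 0.5 * math.log(2 * math.pi) + 1 / (12 * n))

print(exact, stirling)           # agree to about six decimal places
```

The leading omitted term is of size 1/(360n³), which matches the observed discrepancy.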

Examples

If s is an integer greater than 1 we have:

$$\sum_{k=1}^{n} \frac{1}{k^s} \approx \frac{1}{s-1} + \frac{1}{2} - \frac{1}{(s-1)n^{s-1}} + \frac{1}{2n^s} + \sum_{i=1}^{\infty} \frac{B_{2i}}{(2i)!}\,\frac{(s+2i-2)!}{(s-1)!} \left(1 - \frac{1}{n^{s+2i-1}}\right).$$

Collecting the constants into a value of the Riemann zeta function, we can write an asymptotic expansion:

$$\sum_{k=1}^{n} \frac{1}{k^s} \sim \zeta(s) - \frac{1}{(s-1)n^{s-1}} + \frac{1}{2n^s} - \sum_{i=1}^{\infty} \frac{B_{2i}}{(2i)!}\,\frac{(s+2i-2)!}{(s-1)!\,n^{s+2i-1}}.$$

For s equal to 2 this simplifies to

$$\sum_{k=1}^{n} \frac{1}{k^2} \sim \zeta(2) - \frac{1}{n} + \frac{1}{2n^2} - \sum_{i=1}^{\infty} \frac{B_{2i}}{n^{2i+1}},$$

or

$$\sum_{k=1}^{n} \frac{1}{k^2} \sim \frac{\pi^2}{6} - \frac{1}{n} + \frac{1}{2n^2} - \frac{1}{6n^3} + \frac{1}{30n^5} - \cdots$$

When s = 1, the corresponding technique gives an asymptotic expansion for the harmonic numbers:

$$\sum_{k=1}^{n} \frac{1}{k} \sim \ln n + \gamma + \frac{1}{2n} - \sum_{k=1}^{\infty} \frac{B_{2k}}{2k\,n^{2k}},$$

where γ ≈ 0.5772 is the Euler–Mascheroni constant.
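The quality of this expansion is easy to observe numerically. The Python sketch below (an illustration; n = 50 and the truncation after the B_4 term are arbitrary choices) compares the n-th harmonic number with the truncated series:

```python
import math

gamma = 0.5772156649015329        # Euler-Mascheroni constant
n = 50
H = sum(1 / k for k in range(1, n + 1))

# asymptotic expansion truncated after the B_4 term
approx = math.log(n) + gamma + 1/(2*n) - 1/(12*n**2) + 1/(120*n**4)

print(H, approx)                  # agree to roughly 12 decimal places
```

The first omitted term is B_6/(6n⁶) = 1/(252 n⁶), consistent with the tiny residual.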

Proofs

Derivation by mathematical induction

We outline the argument given in Apostol.[1]

The Bernoulli polynomials Bn(x) and the periodic Bernoulli functions Pn(x) for n = 0, 1, 2, ... were introduced above.

The first several Bernoulli polynomials are

$$\begin{aligned} B_0(x) &= 1, \\ B_1(x) &= x - \tfrac{1}{2}, \\ B_2(x) &= x^2 - x + \tfrac{1}{6}, \\ B_3(x) &= x^3 - \tfrac{3}{2}x^2 + \tfrac{1}{2}x, \\ B_4(x) &= x^4 - 2x^3 + x^2 - \tfrac{1}{30}, \\ &\;\;\vdots \end{aligned}$$

The values Bn(0) are the Bernoulli numbers Bn. Notice that for n ≠ 1 we have

$$B_n(1) = B_n(0) = B_n,$$

and for n = 1,

$$B_1(1) = -B_1(0) = \tfrac{1}{2}.$$

The functions Pn agree with the Bernoulli polynomials on the interval [0, 1] and are periodic with period 1. Furthermore, except when n = 1, they are also continuous. Thus,

$$P_n(0) = P_n(1) = B_n \quad \text{for } n \neq 1.$$

Let k be an integer, and consider the integral

$$\int_k^{k+1} f(x)\,dx = \int_k^{k+1} u\,dv,$$

where

$$u = f(x), \quad du = f'(x)\,dx, \quad dv = dx, \quad v = x - \lfloor x \rfloor - \tfrac{1}{2} = P_1(x).$$

Integrating by parts, we get

$$\int_k^{k+1} f(x)\,dx = \bigl[f(x)P_1(x)\bigr]_k^{k+1} - \int_k^{k+1} f'(x)P_1(x)\,dx.$$

Using P_1(k) = −1/2 and P_1(k + 1) = 1/2 (taken as limits from within the interval), and summing the above from k = 0 to k = n − 1, we get

$$\int_0^n f(x)\,dx = \frac{f(0)}{2} + f(1) + \cdots + f(n-1) + \frac{f(n)}{2} - \int_0^n f'(x)P_1(x)\,dx.$$

Adding (f(n) − f(0))/2 to both sides and rearranging, we have

$$\sum_{k=1}^{n} f(k) = \int_0^n f(x)\,dx + \frac{f(n) - f(0)}{2} + \int_0^n f'(x)P_1(x)\,dx.$$

This is the p = 1 case of the summation formula. To continue the induction, we apply integration by parts to the error term:

$$\int_k^{k+1} f'(x)P_1(x)\,dx = \int_k^{k+1} u\,dv,$$

where

$$u = f'(x), \quad du = f''(x)\,dx, \quad dv = P_1(x)\,dx, \quad v = \tfrac{1}{2}P_2(x).$$

The result of integrating by parts is

$$\begin{aligned} \int_k^{k+1} f'(x)P_1(x)\,dx &= \left[\frac{f'(x)P_2(x)}{2}\right]_k^{k+1} - \frac{1}{2}\int_k^{k+1} f''(x)P_2(x)\,dx \\ &= \frac{B_2}{2}\bigl(f'(k+1) - f'(k)\bigr) - \frac{1}{2}\int_k^{k+1} f''(x)P_2(x)\,dx. \end{aligned}$$

Summing from k = 0 to k = n − 1 and substituting this for the lower order error term results in the p = 2 case of the formula,

$$\sum_{k=1}^{n} f(k) = \int_0^n f(x)\,dx + \frac{f(n) - f(0)}{2} + \frac{B_2}{2}\bigl(f'(n) - f'(0)\bigr) - \frac{1}{2}\int_0^n f''(x)P_2(x)\,dx.$$

This process can be iterated. In this way we get a proof of the Euler–Maclaurin summation formula which can be formalized by mathematical induction, in which the induction step relies on integration by parts and on identities for periodic Bernoulli functions.
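The p = 2 identity derived above can also be checked numerically for a concrete non-polynomial function. The sketch below (an illustration; f = sin on [0, 5] is an arbitrary choice) evaluates the remainder integral with a fine midpoint rule:

```python
import math

def P2(x):
    t = x - math.floor(x)
    return t * t - t + 1/6          # periodized Bernoulli polynomial P_2(x)

f, fp, fpp = math.sin, math.cos, lambda x: -math.sin(x)
n = 5

lhs = sum(f(k) for k in range(1, n + 1))

# remainder integral of f''(x) P_2(x) over [0, n], by a fine midpoint rule
M = 200_000
h = n / M
quad = h * sum(fpp((j + 0.5) * h) * P2((j + 0.5) * h) for j in range(M))

integral = 1 - math.cos(n)          # exact integral of sin over [0, n]
rhs = integral + (f(n) - f(0)) / 2 + (1/6) / 2 * (fp(n) - fp(0)) - quad / 2
print(abs(lhs - rhs))               # ~0, up to quadrature error
```

The two sides agree to the accuracy of the numerical quadrature, confirming that the p = 2 identity is exact rather than merely asymptotic.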

Notes

  1. Apostol, T. M. (1 May 1999). "An Elementary View of Euler's Summation Formula". The American Mathematical Monthly. Mathematical Association of America. 106 (5): 409–418. doi:10.2307/2589145. ISSN 0002-9890. JSTOR 2589145.
  2. "Digital Library of Mathematical Functions: Sums and Sequences". National Institute of Standards and Technology.
  3. Lehmer, D. H. (1940). "On the maxima and minima of Bernoulli polynomials". The American Mathematical Monthly. 47 (8): 533–538. doi:10.2307/2303833.
  4. Pengelley, David J. "Dances between continuous and discrete: Euler's summation formula", in: Robert Bradley and Ed Sandifer (Eds), Proceedings, Euler 2K+2 Conference (Rumford, Maine, 2002), Euler Society, 2003.
  5. DeVries, Paul L.; Hasbun, Javier E. (2011). A First Course in Computational Physics (2nd ed.). Jones and Bartlett Publishers. p. 156.
  6. Abramowitz, M.; Stegun, I. A. (1972). Handbook of Mathematical Functions. 23.1.30.
