Grönwall's inequality

In mathematics, Grönwall's inequality (also called Grönwall's lemma or the Grönwall–Bellman inequality) allows one to bound a function that is known to satisfy a certain differential or integral inequality by the solution of the corresponding differential or integral equation. There are two forms of the lemma, a differential form and an integral form. For the latter there are several variants.

Grönwall's inequality is an important tool to obtain various estimates in the theory of ordinary and stochastic differential equations. In particular, it provides a comparison theorem that can be used to prove uniqueness of a solution to the initial value problem; see the Picard–Lindelöf theorem.

It is named for Thomas Hakon Grönwall (1877–1932). Grönwall is the Swedish spelling of his name, but he spelled his name as Gronwall in his scientific publications after emigrating to the United States.

The inequality was first proven by Grönwall in 1919 (the integral form below with α and β being constants).[1] Richard Bellman proved a slightly more general integral form in 1943.[2]

A nonlinear generalization of the Grönwall–Bellman inequality is known as the Bihari–LaSalle inequality. Other variants and generalizations can be found in Pachpatte, B.G. (1998).[3]

Differential form

Let I denote an interval of the real line of the form [a, ∞) or [a, b] or [a, b) with a < b. Let β and u be real-valued continuous functions defined on I. If u is differentiable in the interior I° of I (the interval I without the end points a and possibly b) and satisfies the differential inequality

    u'(t) \le \beta(t)\,u(t), \qquad t \in I^{\circ},

then u is bounded by the solution of the corresponding differential equation v'(t) = β(t) v(t):

    u(t) \le u(a) \exp\!\left(\int_a^t \beta(s)\,\mathrm{d}s\right)

for all t ∈ I.

Remark: There are no assumptions on the signs of the functions β and u.
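
As a numerical illustration of the differential form, the following Python sketch integrates the hypothetical example u′(t) = cos(t) u(t) − 0.5 with u(0) = 2 (so that u′(t) ≤ β(t) u(t) with β(t) = cos(t)) by the explicit Euler method and checks that the trajectory stays below the bound u(0) exp(∫_0^t β(s) ds) = 2 e^{sin t}. The equation, the interval [0, 5] and the helper euler_solve are illustrative choices only.

    import math

    def euler_solve(f, u0, t0, t1, steps=100_000):
        """Explicit Euler integration of u'(t) = f(t, u(t)) on [t0, t1]."""
        dt = (t1 - t0) / steps
        t, u = t0, u0
        values = [(t, u)]
        for _ in range(steps):
            u += dt * f(t, u)
            t += dt
            values.append((t, u))
        return values

    # Hypothetical example: u'(t) = cos(t)*u(t) - 0.5 satisfies u'(t) <= cos(t)*u(t).
    u0 = 2.0
    trajectory = euler_solve(lambda t, u: math.cos(t) * u - 0.5, u0, 0.0, 5.0)

    # Groenwall bound: u(t) <= u(0) * exp(int_0^t cos(s) ds) = u(0) * exp(sin(t)).
    violations = [t for t, u in trajectory if u > u0 * math.exp(math.sin(t)) + 1e-6]
    print("bound violated at", len(violations), "of", len(trajectory), "grid points")  # expect 0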

Proof

Define the function

    v(t) := \exp\!\left(\int_a^t \beta(s)\,\mathrm{d}s\right), \qquad t \in I.

Note that v satisfies

    v'(t) = \beta(t)\,v(t), \qquad t \in I^{\circ},

with v(a) = 1 and v(t) > 0 for all t ∈ I. By the quotient rule,

    \frac{\mathrm{d}}{\mathrm{d}t}\frac{u(t)}{v(t)} = \frac{u'(t)\,v(t) - v'(t)\,u(t)}{v^2(t)} \le \frac{\beta(t)\,u(t)\,v(t) - \beta(t)\,v(t)\,u(t)}{v^2(t)} = 0, \qquad t \in I^{\circ}.

Thus the derivative of the function u(t)/v(t) is non-positive and this function is bounded above by its value at the initial point a of the interval I:

    \frac{u(t)}{v(t)} \le \frac{u(a)}{v(a)} = u(a), \qquad t \in I,

which is Grönwall's inequality.

Integral form for continuous functions

Let I denote an interval of the real line of the form [a, ∞) or [a, b] or [a, b) with a < b. Let α, β and u be real-valued functions defined on I. Assume that β and u are continuous and that the negative part of α is integrable on every closed and bounded subinterval of I.

  • (a) If β is non-negative and if u satisfies the integral inequality
    u(t) \le \alpha(t) + \int_a^t \beta(s)\,u(s)\,\mathrm{d}s, \qquad \forall t \in I,
    then
    u(t) \le \alpha(t) + \int_a^t \alpha(s)\,\beta(s) \exp\!\left(\int_s^t \beta(r)\,\mathrm{d}r\right) \mathrm{d}s, \qquad t \in I.
  • (b) If, in addition, the function α is non-decreasing, then
    u(t) \le \alpha(t) \exp\!\left(\int_a^t \beta(s)\,\mathrm{d}s\right), \qquad t \in I
    (a numerical sketch of this case is given after the remarks below).

Remarks:

  • There are no assumptions on the signs of the functions α and u.
  • Compared to the differential form, differentiability of u is not needed for the integral form.
  • For a version of Grönwall's inequality which doesn't need continuity of β and u, see the version in the next section.
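
Part (b) can be illustrated numerically. The following Python sketch is a minimal illustration: the choices α(t) = 1 + t, β ≡ 1 and I = [0, 3], the grid size and the rectangle rule are all arbitrary. It builds the extremal u satisfying the integral inequality with equality on a grid and compares it with the bound α(t) exp(∫_0^t β(s) ds) from part (b).

    import math

    # Hypothetical data: alpha(t) = 1 + t is non-decreasing, beta(t) = 1 is non-negative.
    a, b_end, steps = 0.0, 3.0, 30_000
    dt = (b_end - a) / steps
    alpha = lambda t: 1.0 + t
    beta = lambda t: 1.0

    # Build the extremal u with u(t) = alpha(t) + int_a^t beta(s) u(s) ds on a grid,
    # approximating the integral by a left-endpoint rectangle rule.
    ts, us = [a], [alpha(a)]          # at t = a the integral is empty, so u(a) = alpha(a)
    integral = 0.0                    # running value of int_a^t beta(s) u(s) ds
    for k in range(1, steps + 1):
        integral += beta(ts[-1]) * us[-1] * dt
        t = a + k * dt
        ts.append(t)
        us.append(alpha(t) + integral)

    # Part (b) predicts u(t) <= alpha(t) * exp(int_a^t beta(s) ds) = (1 + t) * exp(t - a).
    worst_gap = max(u - alpha(t) * math.exp(t - a) for t, u in zip(ts, us))
    print("largest value of u(t) - bound(t) on the grid:", worst_gap)  # expect <= 0 up to rounding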

Proof

(a) Define

    v(s) := \exp\!\left(-\int_a^s \beta(r)\,\mathrm{d}r\right) \int_a^s \beta(r)\,u(r)\,\mathrm{d}r, \qquad s \in I.

Using the product rule, the chain rule, the derivative of the exponential function and the fundamental theorem of calculus, we obtain for the derivative

    v'(s) = \left(u(s) - \int_a^s \beta(r)\,u(r)\,\mathrm{d}r\right) \beta(s) \exp\!\left(-\int_a^s \beta(r)\,\mathrm{d}r\right) \le \alpha(s)\,\beta(s) \exp\!\left(-\int_a^s \beta(r)\,\mathrm{d}r\right), \qquad s \in I,

where we used the assumed integral inequality for the upper estimate. Since β and the exponential are non-negative, this gives an upper estimate for the derivative of v(s). Since v(a) = 0, integration of this inequality from a to t gives

    v(t) \le \int_a^t \alpha(s)\,\beta(s) \exp\!\left(-\int_a^s \beta(r)\,\mathrm{d}r\right) \mathrm{d}s.

Using the definition of v(t) from the first step, and then this inequality and the functional equation of the exponential function, we obtain

    \int_a^t \beta(s)\,u(s)\,\mathrm{d}s = \exp\!\left(\int_a^t \beta(r)\,\mathrm{d}r\right) v(t) \le \int_a^t \alpha(s)\,\beta(s) \exp\!\left(\int_a^t \beta(r)\,\mathrm{d}r - \int_a^s \beta(r)\,\mathrm{d}r\right) \mathrm{d}s = \int_a^t \alpha(s)\,\beta(s) \exp\!\left(\int_s^t \beta(r)\,\mathrm{d}r\right) \mathrm{d}s, \qquad t \in I.

Substituting this result into the assumed integral inequality gives Grönwall's inequality.

(b) If the function α is non-decreasing, then part (a), the fact α(s) ≤ α(t), and the fundamental theorem of calculus imply that

    u(t) \le \alpha(t) + \int_a^t \alpha(s)\,\beta(s) \exp\!\left(\int_s^t \beta(r)\,\mathrm{d}r\right) \mathrm{d}s
         \le \alpha(t) \left(1 + \int_a^t \beta(s) \exp\!\left(\int_s^t \beta(r)\,\mathrm{d}r\right) \mathrm{d}s\right)
         = \alpha(t) \left(1 + \Bigl[-\exp\!\Bigl(\int_s^t \beta(r)\,\mathrm{d}r\Bigr)\Bigr]_{s=a}^{s=t}\right)
         = \alpha(t) \exp\!\left(\int_a^t \beta(r)\,\mathrm{d}r\right), \qquad t \in I.

Integral form with locally finite measures

Let I denote an interval of the real line of the form [a, ∞) or [a, b] or [a, b) with a < b. Let α and u be measurable functions defined on I and let μ be a continuous non-negative measure on the Borel σ-algebra of I satisfying μ([a, t]) < ∞ for all t ∈ I (this is certainly satisfied when μ is a locally finite measure). Assume that u is integrable with respect to μ in the sense that

    \int_{[a,t)} |u(s)|\,\mu(\mathrm{d}s) < \infty, \qquad t \in I,

and that u satisfies the integral inequality

    u(t) \le \alpha(t) + \int_{[a,t)} u(s)\,\mu(\mathrm{d}s), \qquad t \in I.

If, in addition,

  • the function α is non-negative or
  • the function t ↦ μ([a, t]) is continuous for t ∈ I and the function α is integrable with respect to μ in the sense that
    \int_{[a,t)} |\alpha(s)|\,\mu(\mathrm{d}s) < \infty, \qquad t \in I,

then u satisfies Grönwall's inequality

    u(t) \le \alpha(t) + \int_{[a,t)} \alpha(s) \exp\!\bigl(\mu(I_{s,t})\bigr)\,\mu(\mathrm{d}s)

for all t ∈ I, where I_{s,t} denotes the open interval (s, t).
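
For a purely atomic measure the inequality specializes to a discrete (summation) form of Grönwall's inequality. The following Python sketch is a minimal illustration with made-up data: a measure μ = Σ_k c_k δ_{t_k} with non-negative weights c_k, a non-negative α at the atoms, and the function u that saturates the integral inequality at the atoms; it then checks the conclusion u(t_j) ≤ α(t_j) + Σ_{k<j} α(t_k) exp(μ((t_k, t_j))) c_k.

    import math
    import random

    rng = random.Random(1)

    # Hypothetical atomic measure mu = sum_k c_k * delta_{t_k} on I = [0, 1]
    # with non-negative weights, and non-negative values of alpha at the atoms.
    atoms = sorted(rng.uniform(0.0, 1.0) for _ in range(20))   # atom positions t_k, increasing
    weights = [rng.uniform(0.0, 0.5) for _ in atoms]           # masses c_k >= 0
    alpha = [rng.uniform(0.0, 2.0) for _ in atoms]             # alpha(t_k) >= 0

    # u saturating the integral inequality at the atoms:
    #     u(t_j) = alpha(t_j) + sum_{k < j} c_k * u(t_k)
    u = []
    for j in range(len(atoms)):
        u.append(alpha[j] + sum(weights[k] * u[k] for k in range(j)))

    # Conclusion of the theorem at the atoms, with mu((t_k, t_j)) = sum_{k < m < j} c_m:
    ok = True
    for j in range(len(atoms)):
        bound = alpha[j] + sum(
            alpha[k] * math.exp(sum(weights[m] for m in range(k + 1, j))) * weights[k]
            for k in range(j)
        )
        ok = ok and u[j] <= bound + 1e-9
    print("bound holds at every atom:", ok)  # expect True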

Remarks

  • There are no continuity assumptions on the functions α and u.
  • The integral on the right-hand side of Grönwall's inequality is allowed to take the value +∞; in that case the inequality holds trivially.
  • If α is the zero function and u is non-negative, then Grönwall's inequality implies that u is the zero function.
  • The integrability of u with respect to μ is essential for the result. For a counterexample, let μ denote Lebesgue measure on the unit interval [0, 1], define u(0) = 0 and u(t) = 1/t for t ∈ (0, 1], and let α be the zero function. Then u satisfies the assumed integral inequality (the right-hand side is infinite for every t ∈ (0, 1]), but the conclusion u(t) ≤ 0 fails for t ∈ (0, 1].
  • The version given in the textbook by S. Ethier and T. Kurtz[4] makes the stronger assumptions that α is a non-negative constant and u is bounded on bounded intervals, but doesn't assume that the measure μ is locally finite. Compared to the one given below, their proof does not discuss the behaviour of the remainder R_n(t).

Special cases

  • If the measure μ has a density β with respect to Lebesgue measure, then Grönwall's inequality can be rewritten as
    u(t) \le \alpha(t) + \int_a^t \alpha(s)\,\beta(s) \exp\!\left(\int_s^t \beta(r)\,\mathrm{d}r\right) \mathrm{d}s, \qquad t \in I.
  • If the function α is non-negative and the density β of μ is bounded by a constant c, then
    u(t) \le \alpha(t) + c \int_a^t \alpha(s)\,e^{c(t-s)}\,\mathrm{d}s, \qquad t \in I.
  • If, in addition, the non-negative function α is non-decreasing, then
    u(t) \le \alpha(t) \left(1 + c \int_a^t e^{c(t-s)}\,\mathrm{d}s\right) = \alpha(t)\,e^{c(t-a)}, \qquad t \in I
    (the last equality is verified below).
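
The last equality in the final special case follows from an elementary integration; for c > 0 (the case c = 0 is trivial),

    1 + c \int_a^t e^{c(t-s)}\,\mathrm{d}s
      = 1 + c \left[ -\frac{1}{c}\,e^{c(t-s)} \right]_{s=a}^{s=t}
      = 1 - \left( e^{0} - e^{c(t-a)} \right)
      = e^{c(t-a)}.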

Outline of proof

The proof is divided into three steps. The idea is to substitute the assumed integral inequality into itself n times. This is done in Claim 1 using mathematical induction. In Claim 2 we rewrite the measure of a simplex in a convenient form, using the permutation invariance of product measures. In the third step we pass to the limit as n tends to infinity to derive the desired variant of Grönwall's inequality.

Detailed proof

Claim 1: Iterating the inequality

For every natural number n including zero,

    u(t) \le \alpha(t) + \int_{[a,t)} \alpha(s) \sum_{k=0}^{n-1} \mu^{\otimes k}\bigl(A_k(s,t)\bigr)\,\mu(\mathrm{d}s) + R_n(t)

with remainder

    R_n(t) := \int_{[a,t)} u(s)\,\mu^{\otimes n}\bigl(A_n(s,t)\bigr)\,\mu(\mathrm{d}s), \qquad t \in I,

where

    A_n(s,t) := \bigl\{(s_1, \ldots, s_n) \in I_{s,t}^{\,n} \mid s_1 < s_2 < \cdots < s_n\bigr\} \qquad \text{for } n \ge 1

is an n-dimensional simplex and

    \mu^{\otimes 0}\bigl(A_0(s,t)\bigr) := 1.

Proof of Claim 1

We use mathematical induction. For n = 0 this is just the assumed integral inequality, because the empty sum is defined as zero.

Induction step from n to n + 1: Inserting the assumed integral inequality for the function u into the remainder gives

    R_n(t) \le \int_{[a,t)} \alpha(s)\,\mu^{\otimes n}\bigl(A_n(s,t)\bigr)\,\mu(\mathrm{d}s) + \tilde R_n(t)

with

    \tilde R_n(t) := \int_{[a,t)} \left(\int_{[a,q)} u(s)\,\mu(\mathrm{d}s)\right) \mu^{\otimes n}\bigl(A_n(q,t)\bigr)\,\mu(\mathrm{d}q), \qquad t \in I.

Using the Fubini–Tonelli theorem to interchange the two integrals, we obtain

    \tilde R_n(t) = \int_{[a,t)} u(s) \left(\int_{(s,t)} \mu^{\otimes n}\bigl(A_n(q,t)\bigr)\,\mu(\mathrm{d}q)\right) \mu(\mathrm{d}s) = \int_{[a,t)} u(s)\,\mu^{\otimes (n+1)}\bigl(A_{n+1}(s,t)\bigr)\,\mu(\mathrm{d}s) = R_{n+1}(t), \qquad t \in I.

Hence Claim 1 is proved for n + 1.

Claim 2: Measure of the simplex

For every natural number n including zero and all s < t in I,

    \mu^{\otimes n}\bigl(A_n(s,t)\bigr) \le \frac{\bigl(\mu(I_{s,t})\bigr)^n}{n!},

with equality in case t ↦ μ([a, t]) is continuous for t ∈ I.

Proof of Claim 2

For n = 0, the claim is true by our definitions. Therefore, consider n ≥ 1 in the following.

Let S_n denote the set of all permutations of the indices in {1, 2, …, n}. For every permutation σ ∈ S_n define

    A_{n,\sigma}(s,t) := \bigl\{(s_1, \ldots, s_n) \in I_{s,t}^{\,n} \mid s_{\sigma(1)} < s_{\sigma(2)} < \cdots < s_{\sigma(n)}\bigr\}.

These sets are disjoint for different permutations and

    \bigcup_{\sigma \in S_n} A_{n,\sigma}(s,t) \subset I_{s,t}^{\,n}.

Therefore,

    \sum_{\sigma \in S_n} \mu^{\otimes n}\bigl(A_{n,\sigma}(s,t)\bigr) \le \mu^{\otimes n}\bigl(I_{s,t}^{\,n}\bigr) = \bigl(\mu(I_{s,t})\bigr)^n.

Since they all have the same measure with respect to the n-fold product of μ, and since there are n! permutations in S_n, the claimed inequality follows.

Assume now that t ↦ μ([a, t]) is continuous for t ∈ I. Then, for different indices i, j ∈ {1, 2, …, n}, the set

    \bigl\{(s_1, \ldots, s_n) \in I_{s,t}^{\,n} \mid s_i = s_j\bigr\}

is contained in a hyperplane, hence by an application of Fubini's theorem its measure with respect to the n-fold product of μ is zero. Since

    I_{s,t}^{\,n} \subset \bigcup_{\sigma \in S_n} A_{n,\sigma}(s,t) \cup \bigcup_{1 \le i < j \le n} \bigl\{(s_1, \ldots, s_n) \in I_{s,t}^{\,n} \mid s_i = s_j\bigr\},

the claimed equality follows.
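
For Lebesgue measure, the equality case of Claim 2 has a probabilistic reading: after dividing by (μ(I_{s,t}))^n, it states that n independent points drawn uniformly from (s, t) are in strictly increasing order with probability 1/n!. The following Python sketch checks this by Monte Carlo simulation; the interval end points, sample size and seed are arbitrary illustrative choices.

    import math
    import random

    def sorted_fraction(n, s=0.2, t=0.9, trials=100_000, seed=0):
        """Monte Carlo estimate of P(U_1 < U_2 < ... < U_n) for i.i.d. U_i ~ Uniform(s, t)."""
        rng = random.Random(seed)
        hits = 0
        for _ in range(trials):
            points = [rng.uniform(s, t) for _ in range(n)]
            if all(points[i] < points[i + 1] for i in range(n - 1)):
                hits += 1
        return hits / trials

    # Compare the empirical frequency with the value 1/n! predicted by Claim 2 (equality case).
    for n in range(1, 5):
        print(n, sorted_fraction(n), 1 / math.factorial(n))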

Proof of Grönwall's inequality

For every natural number n, Claim 2 implies for the remainder of Claim 1 that

    |R_n(t)| \le \frac{\bigl(\mu(I_{a,t})\bigr)^n}{n!} \int_{[a,t)} |u(s)|\,\mu(\mathrm{d}s), \qquad t \in I.

By assumption we have μ(I_{a,t}) < ∞. Hence, the integrability assumption on u implies that

    \lim_{n \to \infty} R_n(t) = 0, \qquad t \in I.

Claim 2 and the series representation of the exponential function imply the estimate

    \sum_{k=0}^{n-1} \mu^{\otimes k}\bigl(A_k(s,t)\bigr) \le \sum_{k=0}^{n-1} \frac{\bigl(\mu(I_{s,t})\bigr)^k}{k!} \le \exp\bigl(\mu(I_{s,t})\bigr)

for all s < t in I. If the function α is non-negative, then it suffices to insert these results into Claim 1 to derive the above variant of Grönwall's inequality for the function u.

In case t ↦ μ([a, t]) is continuous for t ∈ I, Claim 2 gives

    \sum_{k=0}^{n-1} \mu^{\otimes k}\bigl(A_k(s,t)\bigr) = \sum_{k=0}^{n-1} \frac{\bigl(\mu(I_{s,t})\bigr)^k}{k!} \to \exp\bigl(\mu(I_{s,t})\bigr) \qquad (n \to \infty),

and the integrability of the function α allows one to use the dominated convergence theorem to derive Grönwall's inequality.

References

  1. ^ Gronwall, Thomas H. (1919), "Note on the derivatives with respect to a parameter of the solutions of a system of differential equations", Ann. of Math., 20 (2): 292–296, doi:10.2307/1967124, JFM 47.0399.02, JSTOR 1967124, MR 1502565
  2. ^ Bellman, Richard (1943), "The stability of solutions of linear differential equations", Duke Math. J., 10 (4): 643–647, doi:10.1215/s0012-7094-43-01059-2, MR 0009408, Zbl 0061.18502
  3. ^ Pachpatte, B.G. (1998). Inequalities for differential and integral equations. San Diego: Academic Press. ISBN 9780080534640.
  4. ^ Ethier, Stewart N.; Kurtz, Thomas G. (1986), Markov Processes: Characterization and Convergence, New York: John Wiley & Sons, p. 498, ISBN 0-471-08186-8, MR 0838085, Zbl 0592.60049

This article incorporates material from Gronwall's lemma on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.