Minkowski inequality

In mathematical analysis, the Minkowski inequality establishes that the Lp spaces are normed vector spaces. Let S be a measure space, let 1 ≤ p < ∞ and let f and g be elements of Lp(S). Then f + g is in Lp(S), and we have the triangle inequality

${\displaystyle \|f+g\|_{p}\leq \|f\|_{p}+\|g\|_{p}}$

with equality for 1 < p < ∞ if and only if f and g are positively linearly dependent, i.e., f = λg for some λ ≥ 0 or g = 0. Here, the norm is given by:

${\displaystyle \|f\|_{p}=\left(\int |f|^{p}d\mu \right)^{\frac {1}{p}}}$

if p < ∞, or in the case p = ∞ by the essential supremum

${\displaystyle \|f\|_{\infty }=\operatorname {ess\ sup} _{x\in S}|f(x)|.}$

The Minkowski inequality is the triangle inequality in Lp(S). In fact, it is a special case of the more general fact

${\displaystyle \|f\|_{p}=\sup _{\|g\|_{q}=1}\int |fg|d\mu ,\qquad {\tfrac {1}{p}}+{\tfrac {1}{q}}=1}$

where the right-hand side is easily seen to satisfy the triangle inequality.

Like Hölder's inequality, the Minkowski inequality can be specialized to sequences and vectors by using the counting measure:

${\displaystyle {\biggl (}\sum _{k=1}^{n}|x_{k}+y_{k}|^{p}{\biggr )}^{1/p}\leq {\biggl (}\sum _{k=1}^{n}|x_{k}|^{p}{\biggr )}^{1/p}+{\biggl (}\sum _{k=1}^{n}|y_{k}|^{p}{\biggr )}^{1/p}}$

for all real (or complex) numbers x1, ..., xn, y1, ..., yn, where n is the cardinality of S (the number of elements in S).
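The finite-dimensional form lends itself to a direct numerical spot-check. The sketch below uses arbitrary sample vectors (not drawn from any source) and verifies the triangle inequality for several exponents p:

```python
import random

def p_norm(v, p):
    """p-norm of a finite sequence, matching the counting-measure definition."""
    return sum(abs(t) ** p for t in v) ** (1.0 / p)

# Arbitrary sample data: two random real vectors of length n = 5.
random.seed(0)
x = [random.uniform(-10, 10) for _ in range(5)]
y = [random.uniform(-10, 10) for _ in range(5)]

for p in (1, 1.5, 2, 3, 10):
    lhs = p_norm([a + b for a, b in zip(x, y)], p)
    rhs = p_norm(x, p) + p_norm(y, p)
    assert lhs <= rhs + 1e-12  # Minkowski's inequality
```

Equality in such a check occurs precisely when one vector is a non-negative multiple of the other (for 1 < p < ∞).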

The inequality is named after the German mathematician Hermann Minkowski.

Proof

First, we prove that f+g has finite p-norm if f and g both do, which follows from the pointwise bound

${\displaystyle |f+g|^{p}\leq 2^{p-1}(|f|^{p}+|g|^{p}).}$

Indeed, here we use the fact that ${\displaystyle h(x)=|x|^{p}}$  is convex over R+ (for p > 1) and so, by the definition of convexity,

${\displaystyle \left|{\tfrac {1}{2}}f+{\tfrac {1}{2}}g\right|^{p}\leq \left|{\tfrac {1}{2}}|f|+{\tfrac {1}{2}}|g|\right|^{p}\leq {\tfrac {1}{2}}|f|^{p}+{\tfrac {1}{2}}|g|^{p}.}$

This means that

${\displaystyle |f+g|^{p}\leq {\tfrac {1}{2}}|2f|^{p}+{\tfrac {1}{2}}|2g|^{p}=2^{p-1}|f|^{p}+2^{p-1}|g|^{p}.}$
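This pointwise bound can be spot-checked numerically; the sample values below are arbitrary and serve only as a sanity check of the convexity estimate:

```python
# Check |a+b|^p <= 2**(p-1) * (|a|^p + |b|^p) pointwise on sample values,
# which follows from the convexity of t -> |t|^p for p >= 1.
def bound_holds(a, b, p):
    return abs(a + b) ** p <= 2 ** (p - 1) * (abs(a) ** p + abs(b) ** p) + 1e-9

samples = [(-2.5, 7.0), (3.0, 3.0), (0.0, -4.2), (1e3, -1e3)]
assert all(bound_holds(a, b, p) for a, b in samples for p in (1, 1.5, 2, 4))
```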

Now, we can legitimately talk about ${\displaystyle \|f+g\|_{p}}$ . If it is zero, then Minkowski's inequality holds. We now assume that ${\displaystyle \|f+g\|_{p}}$  is not zero. Using the triangle inequality and then Hölder's inequality, we find that

{\displaystyle {\begin{aligned}\|f+g\|_{p}^{p}&=\int |f+g|^{p}\,\mathrm {d} \mu \\&=\int |f+g|\cdot |f+g|^{p-1}\,\mathrm {d} \mu \\&\leq \int (|f|+|g|)|f+g|^{p-1}\,\mathrm {d} \mu \\&=\int |f||f+g|^{p-1}\,\mathrm {d} \mu +\int |g||f+g|^{p-1}\,\mathrm {d} \mu \\&\leq \left(\left(\int |f|^{p}\,\mathrm {d} \mu \right)^{\frac {1}{p}}+\left(\int |g|^{p}\,\mathrm {d} \mu \right)^{\frac {1}{p}}\right)\left(\int |f+g|^{(p-1)\left({\frac {p}{p-1}}\right)}\,\mathrm {d} \mu \right)^{1-{\frac {1}{p}}}&&{\text{ Hölder's inequality}}\\&=\left(\|f\|_{p}+\|g\|_{p}\right){\frac {\|f+g\|_{p}^{p}}{\|f+g\|_{p}}}\end{aligned}}}

We obtain Minkowski's inequality by multiplying both sides by

${\displaystyle {\frac {\|f+g\|_{p}}{\|f+g\|_{p}^{p}}}.}$

Minkowski's integral inequality

Suppose that (S1, μ1) and (S2, μ2) are two σ-finite measure spaces and F : S1 × S2 → R is measurable. Then, for p ≥ 1, Minkowski's integral inequality is (Stein 1970, §A.1), (Hardy, Littlewood & Pólya 1988, Theorem 202):

${\displaystyle \left[\int _{S_{2}}\left|\int _{S_{1}}F(x,y)\,\mu _{1}(\mathrm {d} x)\right|^{p}\mu _{2}(\mathrm {d} y)\right]^{\frac {1}{p}}\leq \int _{S_{1}}\left(\int _{S_{2}}|F(x,y)|^{p}\,\mu _{2}(\mathrm {d} y)\right)^{\frac {1}{p}}\mu _{1}(\mathrm {d} x),}$

with obvious modifications in the case p = ∞. If p > 1, and both sides are finite, then equality holds only if |F(x, y)| = φ(x)ψ(y) a.e. for some non-negative measurable functions φ and ψ.
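When both measures are counting measures on finite sets, the integral inequality becomes a finite-sum statement that can be checked directly. The following sketch uses an arbitrary sample array (rows indexed by x, columns by y):

```python
import random

def mixed_lhs(F, p):
    # (sum over y of |sum over x of F(x,y)|^p)^(1/p)
    cols = zip(*F)
    return sum(abs(sum(col)) ** p for col in cols) ** (1.0 / p)

def mixed_rhs(F, p):
    # sum over x of (sum over y of |F(x,y)|^p)^(1/p)
    return sum(sum(abs(v) ** p for v in row) ** (1.0 / p) for row in F)

random.seed(1)
F = [[random.uniform(-5, 5) for _ in range(6)] for _ in range(4)]  # 4 x-points, 6 y-points
for p in (1, 2, 3.5):
    assert mixed_lhs(F, p) <= mixed_rhs(F, p) + 1e-12  # integral inequality
```

With a product-form array F(x, y) = φ(x)ψ(y), φ, ψ ≥ 0, the two sides agree, consistent with the equality condition above.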

If μ1 is the counting measure on a two-point set S1 = {1,2}, then Minkowski's integral inequality recovers the usual Minkowski inequality as a special case: putting fi(y) = F(i, y) for i = 1, 2, the integral inequality gives

${\displaystyle \|f_{1}+f_{2}\|_{p}=\left(\int _{S_{2}}\left|\int _{S_{1}}F(x,y)\,\mu _{1}(\mathrm {d} x)\right|^{p}\mu _{2}(\mathrm {d} y)\right)^{\frac {1}{p}}\leq \int _{S_{1}}\left(\int _{S_{2}}|F(x,y)|^{p}\,\mu _{2}(\mathrm {d} y)\right)^{\frac {1}{p}}\mu _{1}(\mathrm {d} x)=\|f_{1}\|_{p}+\|f_{2}\|_{p}.}$

This notation has been generalized to

${\displaystyle \|f\|_{p,q}=\left(\int _{\mathbb {R} ^{m}}\left[\int _{\mathbb {R} ^{n}}|f(x,y)|^{q}\mathrm {d} y\right]^{\frac {p}{q}}\mathrm {d} x\right)^{\frac {1}{p}}}$

for ${\displaystyle f:\mathbb {R} ^{m+n}\to E}$ , with ${\displaystyle {\mathcal {L}}_{p,q}(\mathbb {R} ^{m+n},E)=\{f\in E^{\mathbb {R} ^{m+n}}:\|f\|_{p,q}<\infty \}}$ . Here ${\displaystyle \|f\|_{q,p}}$  is understood with the order of integration interchanged as well (the inner integral taken over ${\displaystyle \mathbb {R} ^{m}}$  with exponent p, the outer over ${\displaystyle \mathbb {R} ^{n}}$  with exponent q). With this convention, applying Minkowski's integral inequality to ${\displaystyle |f|^{q}}$  shows that, if ${\displaystyle p>q}$ , then ${\displaystyle \|f\|_{p,q}\leq \|f\|_{q,p}}$ .
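On a finite grid the mixed-norm comparison can be illustrated numerically. The sketch below adopts the convention just described for ‖f‖_{q,p} (inner p-sum over the x-index, outer q-sum over the y-index) and uses arbitrary sample data:

```python
import random

def norm_pq(F, p, q):
    # Discrete ||f||_{p,q}: inner q-sum over y (columns), outer p-sum over x (rows).
    return sum(sum(abs(v) ** q for v in row) ** (p / q) for row in F) ** (1.0 / p)

def norm_qp(F, p, q):
    # ||f||_{q,p} with exponents *and* order of summation exchanged:
    # inner p-sum over x, outer q-sum over y.
    cols = list(zip(*F))
    return sum(sum(abs(v) ** p for v in col) ** (q / p) for col in cols) ** (1.0 / q)

random.seed(3)
F = [[random.uniform(-3, 3) for _ in range(5)] for _ in range(4)]
p, q = 4, 2  # p > q >= 1
assert norm_pq(F, p, q) <= norm_qp(F, p, q) + 1e-12
```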

Reverse inequality

When ${\displaystyle p<1}$  the reverse inequality holds:

${\displaystyle \|f+g\|_{p}\geq \|f\|_{p}+\|g\|_{p}}$

We further need the restriction that both ${\displaystyle f}$  and ${\displaystyle g}$  are non-negative, as we can see from the example ${\displaystyle f=-1,g=1}$  and ${\displaystyle p=1}$ : ${\displaystyle \|f+g\|_{1}=0<2=\|f\|_{1}+\|g\|_{1}}$ .

The reverse inequality follows from the same argument as the standard Minkowski inequality, using the fact that Hölder's inequality is also reversed in this range.

Using the reverse Minkowski inequality, one can prove that power means with ${\displaystyle p\leq 1}$ , such as the harmonic mean and the geometric mean, are concave.
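The reversed inequality for non-negative sequences and 0 < p < 1 can also be spot-checked numerically; the vectors below are arbitrary sample data:

```python
def p_sum_norm(v, p):
    # The p-norm expression for 0 < p < 1 (no longer an actual norm in this range).
    return sum(t ** p for t in v) ** (1.0 / p)

x = [1.0, 4.0, 2.5]
y = [3.0, 0.5, 2.0]
p = 0.5
lhs = p_sum_norm([a + b for a, b in zip(x, y)], p)
rhs = p_sum_norm(x, p) + p_sum_norm(y, p)
assert lhs >= rhs - 1e-12  # reverse Minkowski for non-negative entries
```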

Generalizations to other functions

The Minkowski inequality can be generalized to other functions ${\displaystyle \phi (x)}$  beyond the power function ${\displaystyle x^{p}}$ . The generalized inequality has the form

${\displaystyle \phi ^{-1}{\biggl (}\sum _{i=1}^{n}\phi (x_{i}+y_{i}){\biggr )}\leq \phi ^{-1}{\biggl (}\sum _{i=1}^{n}\phi (x_{i}){\biggr )}+\phi ^{-1}{\biggl (}\sum _{i=1}^{n}\phi (y_{i}){\biggr )}}$

Various sufficient conditions on ${\displaystyle \phi }$  have been found by Mulholland[1] and others. For example, for ${\displaystyle x\geq 0}$  one set of sufficient conditions from Mulholland is

1. ${\displaystyle \phi (x)}$  is continuous and strictly increasing with ${\displaystyle \phi (0)=0}$ .
2. ${\displaystyle \phi (x)}$  is a convex function of ${\displaystyle x}$ .
3. ${\displaystyle \log \phi (x)}$  is a convex function of ${\displaystyle \log(x)}$ .
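As an illustration (a numerical sketch, not a proof), the function φ(x) = x·eˣ can be checked against conditions 1–3: it is continuous and strictly increasing with φ(0) = 0, it is convex on [0, ∞), and substituting x = eᵗ gives log φ(eᵗ) = t + eᵗ, which is convex in t. The code below verifies the generalized inequality for this φ on sample vectors, inverting φ by bisection:

```python
import math

def phi(x):
    # phi(x) = x * e^x: strictly increasing, phi(0) = 0, convex on [0, inf),
    # and log(phi(e^t)) = t + e^t is convex in t (Mulholland's conditions).
    return x * math.exp(x)

def phi_inv(y, lo=0.0, hi=100.0, tol=1e-12):
    # Bisection inverse of the strictly increasing phi on [lo, hi].
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if phi(mid) < y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def mulholland_side(v):
    # phi^{-1}(sum_i phi(v_i)), the quantity appearing in the generalized inequality.
    return phi_inv(sum(phi(t) for t in v))

# Arbitrary non-negative sample vectors.
x = [0.3, 1.2, 0.7]
y = [0.9, 0.1, 1.5]
lhs = mulholland_side([a + b for a, b in zip(x, y)])
rhs = mulholland_side(x) + mulholland_side(y)
assert lhs <= rhs + 1e-6  # generalized (Mulholland-type) inequality
```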