# Kolmogorov's inequality

In probability theory, Kolmogorov's inequality is a so-called "maximal inequality" that bounds the probability that the maximum of the partial sums of a finite collection of independent random variables exceeds some specified value. The inequality is named after the Russian mathematician Andrey Kolmogorov.

## Statement of the inequality

Let X1, ..., Xn : Ω → R be independent random variables defined on a common probability space (Ω, F, Pr), with expected value E[Xk] = 0 and variance Var[Xk] < +∞ for k = 1, ..., n. Then, for each λ > 0,

$\Pr \left(\max _{1\leq k\leq n}|S_{k}|\geq \lambda \right)\leq {\frac {1}{\lambda ^{2}}}\operatorname {Var} [S_{n}]\equiv {\frac {1}{\lambda ^{2}}}\sum _{k=1}^{n}\operatorname {Var} [X_{k}]={\frac {1}{\lambda ^{2}}}\sum _{k=1}^{n}{\text{E}}[X_{k}^{2}],$

where Sk = X1 + ... + Xk.

The usefulness of this result is that it bounds the worst-case deviation of a random walk at every intermediate time by the variance of its value at the end of the time interval.
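The bound can be tested numerically. The following is a minimal Monte Carlo sketch for i.i.d. Rademacher steps $X_k=\pm 1$, for which $\operatorname{Var}[S_n]=n$; the parameters `n`, `lam`, and `trials` are illustrative assumptions, not from the text.

```python
import random

# Monte Carlo check of Kolmogorov's inequality for i.i.d. Rademacher
# steps X_k = +/-1, so E[X_k] = 0, Var[X_k] = 1, and Var[S_n] = n.
# Parameters below are illustrative assumptions.
n, lam, trials = 100, 25.0, 20000
random.seed(0)

exceed = 0
for _ in range(trials):
    s, max_abs = 0, 0
    for _ in range(n):
        s += random.choice((-1, 1))          # one step of the walk
        max_abs = max(max_abs, abs(s))       # running maximum of |S_k|
    if max_abs >= lam:
        exceed += 1

empirical = exceed / trials      # estimate of Pr(max |S_k| >= lam)
bound = n / lam ** 2             # Var[S_n] / lam^2
print(f"empirical={empirical:.4f}  bound={bound:.4f}")
```

With these parameters the empirical frequency comes out well below the bound $n/\lambda^{2}=0.16$, as the inequality guarantees (up to sampling noise).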

## Proof

The following argument is due to Kareem Amin and employs discrete martingales. As argued in the discussion of Doob's martingale inequality, the sequence $S_{1},S_{2},\dots ,S_{n}$  is a martingale. Define $(Z_{i})_{i=0}^{n}$  as follows. Let $Z_{0}=0$ , and

$Z_{i+1}={\begin{cases}S_{i+1}&{\text{ if }}\displaystyle \max _{1\leq j\leq i}|S_{j}|<\lambda \\Z_{i}&{\text{ otherwise}}\end{cases}}$

for all $i$. Then $(Z_{i})_{i=0}^{n}$ is also a martingale: it is the process $(S_{i})$ stopped at the first time the running maximum $\max _{j}|S_{j}|$ reaches $\lambda$, and a stopped martingale is again a martingale.
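Concretely, $Z$ follows the partial sums until $|S_{j}|$ first reaches $\lambda$ and is frozen from then on. A short sketch of this construction (the helper name `stopped_path` and the sample input are hypothetical, for illustration):

```python
# Sketch of the stopped process Z_i from the proof: the partial sums S_i
# are frozen at the first index where |S_i| >= lam.
def stopped_path(steps, lam):
    S, Z = [], []
    s, frozen = 0, None
    for x in steps:
        s += x
        S.append(s)
        if frozen is None:
            Z.append(s)          # Z_i = S_i while the running max is below lam
            if abs(s) >= lam:
                frozen = s       # freeze Z from the next step onward
        else:
            Z.append(frozen)     # Z_i = Z_{i-1} once |S_j| has reached lam
    return S, Z

S, Z = stopped_path([1, 1, 1, -1, 1], lam=2)
print(S)  # [1, 2, 3, 2, 3]
print(Z)  # [1, 2, 2, 2, 2]
```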

For any martingale $M_{i}$  with $M_{0}=0$ , we have that

${\begin{aligned}\sum _{i=1}^{n}{\text{E}}[(M_{i}-M_{i-1})^{2}]&=\sum _{i=1}^{n}{\text{E}}[M_{i}^{2}-2M_{i}M_{i-1}+M_{i-1}^{2}]\\&=\sum _{i=1}^{n}{\text{E}}\left[M_{i}^{2}-2(M_{i-1}+M_{i}-M_{i-1})M_{i-1}+M_{i-1}^{2}\right]\\&=\sum _{i=1}^{n}\left({\text{E}}\left[M_{i}^{2}-M_{i-1}^{2}\right]-2{\text{E}}\left[M_{i-1}(M_{i}-M_{i-1})\right]\right)\\&={\text{E}}[M_{n}^{2}]-{\text{E}}[M_{0}^{2}]={\text{E}}[M_{n}^{2}].\end{aligned}}$

Here the cross terms vanish by the martingale property, ${\text{E}}[M_{i-1}(M_{i}-M_{i-1})]={\text{E}}\left[M_{i-1}\,{\text{E}}[M_{i}-M_{i-1}\mid M_{1},\dots ,M_{i-1}]\right]=0$, and the remaining sum telescopes.
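For a concrete instance, the identity ${\text{E}}[M_{n}^{2}]=\sum _{i=1}^{n}{\text{E}}[(M_{i}-M_{i-1})^{2}]$ can be verified exactly for the simple $\pm 1$ random walk (a martingale with $M_{0}=0$) by enumerating all equally likely paths; the parameter `n` below is an illustrative assumption.

```python
from itertools import product

# Exact check of E[M_n^2] = sum_i E[(M_i - M_{i-1})^2] for the simple
# +/-1 random walk, enumerating all 2^n equally likely paths.
n = 6
paths = list(product((-1, 1), repeat=n))
p = 1 / len(paths)

# E[M_n^2]: expected square of the final partial sum.
lhs = sum(p * sum(path) ** 2 for path in paths)
# sum_i E[(M_i - M_{i-1})^2]: the increments are exactly the steps.
rhs = sum(p * sum(x ** 2 for x in path) for path in paths)

print(lhs, rhs)  # both equal n, the variance of S_n
```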

Applying this result to the martingales $(Z_{i})_{i=0}^{n}$ and $(S_{i})_{i=0}^{n}$ (with $S_{0}=0$), we have

${\begin{aligned}{\text{Pr}}\left(\max _{1\leq i\leq n}|S_{i}|\geq \lambda \right)&={\text{Pr}}[|Z_{n}|\geq \lambda ]\\&\leq {\frac {1}{\lambda ^{2}}}{\text{E}}[Z_{n}^{2}]={\frac {1}{\lambda ^{2}}}\sum _{i=1}^{n}{\text{E}}[(Z_{i}-Z_{i-1})^{2}]\\&\leq {\frac {1}{\lambda ^{2}}}\sum _{i=1}^{n}{\text{E}}[(S_{i}-S_{i-1})^{2}]={\frac {1}{\lambda ^{2}}}{\text{E}}[S_{n}^{2}]={\frac {1}{\lambda ^{2}}}{\text{Var}}[S_{n}]\end{aligned}}$

where the first inequality follows from Chebyshev's inequality, and the second holds because each increment $Z_{i}-Z_{i-1}$ equals either $S_{i}-S_{i-1}$ or $0$, so $(Z_{i}-Z_{i-1})^{2}\leq (S_{i}-S_{i-1})^{2}$ pointwise.
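The whole chain of (in)equalities can likewise be verified exactly for the $\pm 1$ walk by enumeration; the parameters `n` and `lam` are illustrative assumptions.

```python
from itertools import product

# Exact verification, for the +/-1 walk, of:
# Pr(max |S_i| >= lam) = Pr(|Z_n| >= lam)
#                     <= E[Z_n^2]/lam^2 <= E[S_n^2]/lam^2.
n, lam = 8, 3
paths = list(product((-1, 1), repeat=n))
p = 1 / len(paths)

pr_max = pr_z = ez2 = es2 = 0.0
for path in paths:
    s, m, z = 0, 0, None
    for x in path:
        s += x
        if z is None and abs(s) >= lam:
            z = s                     # Z freezes at the first exit time
        m = max(m, abs(s))            # running maximum of |S_i|
    if z is None:
        z = s                         # never exited: Z_n = S_n
    pr_max += p * (m >= lam)
    pr_z += p * (abs(z) >= lam)
    ez2 += p * z * z
    es2 += p * s * s

print(pr_max, ez2 / lam**2, es2 / lam**2)
```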

This inequality was generalized by Hájek and Rényi in 1955.