Glivenko–Cantelli theorem

In the theory of probability, the Glivenko–Cantelli theorem (sometimes referred to as the Fundamental Theorem of Statistics), named after Valery Ivanovich Glivenko and Francesco Paolo Cantelli, describes the asymptotic behaviour of the empirical distribution function as the number of independent and identically distributed observations grows.[1] Specifically, the empirical distribution function converges uniformly to the true distribution function almost surely.

[Figure: the left panel illustrates the Glivenko–Cantelli theorem for uniform distributions, and the right panel the Donsker–Skorokhod–Kolmogorov theorem; a second figure shows the same comparison for normal distributions.]

The uniform convergence of more general empirical measures is an important property of Glivenko–Cantelli classes of functions or sets.[2] Glivenko–Cantelli classes arise in Vapnik–Chervonenkis theory, with applications to machine learning. Applications can also be found in econometrics, which makes use of M-estimators.

Statement

Assume that $X_1, X_2, \dots$ are independent and identically distributed random variables in $\mathbb{R}$ with common cumulative distribution function $F(x)$. The empirical distribution function for $X_1, \dots, X_n$ is defined by

$$F_n(x) = \frac{1}{n} \sum_{i=1}^{n} I_{[X_i, \infty)}(x) = \frac{1}{n} \bigl| \{\, 1 \le i \le n : X_i \le x \,\} \bigr|,$$

where $I_C$ is the indicator function of the set $C$. For every (fixed) $x$, $F_n(x)$ is a sequence of random variables which converges to $F(x)$ almost surely by the strong law of large numbers. Glivenko and Cantelli strengthened this result by proving uniform convergence of $F_n$ to $F$.

Theorem

$$\|F_n - F\|_\infty = \sup_{x \in \mathbb{R}} |F_n(x) - F(x)| \longrightarrow 0$$ almost surely.[3](p 265)

This theorem originates with Valery Glivenko[4] and Francesco Cantelli,[5] in 1933.
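
To make the statement concrete, the following Python sketch (an illustration only, not part of the original sources; it assumes Uniform(0, 1) samples so that $F(x) = x$ on $[0, 1]$) estimates $\sup_x |F_n(x) - F(x)|$ for increasing $n$ and shows it shrinking toward zero, as the theorem asserts.

```python
# Minimal numerical illustration of the Glivenko-Cantelli theorem for
# Uniform(0, 1) data (illustrative assumption: F(x) = x on [0, 1]).
import numpy as np

rng = np.random.default_rng(0)

def sup_distance_uniform(n: int) -> float:
    """Return sup_x |F_n(x) - F(x)| for n iid Uniform(0, 1) draws.

    For a continuous F the supremum is attained at the sample points:
    it equals max(max_i (i/n - x_(i)), max_i (x_(i) - (i-1)/n)), where
    x_(1) <= ... <= x_(n) are the order statistics and F(x) = x here.
    """
    x = np.sort(rng.uniform(size=n))
    i = np.arange(1, n + 1)
    return max(np.max(i / n - x), np.max(x - (i - 1) / n))

for n in (10, 100, 1000, 10000):
    print(n, sup_distance_uniform(n))
# The printed sup-norm distances shrink toward 0 as n grows.
```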

Remarks
  • If $(X_n)_{n \ge 1}$ is a stationary ergodic process, then $F_n(x)$ converges almost surely to $F(x) = \operatorname{E}\bigl[\mathbf{1}_{\{X_1 \le x\}}\bigr]$. The Glivenko–Cantelli theorem gives a stronger mode of convergence than this in the iid case.
  • An even stronger uniform convergence result for the empirical distribution function is available in the form of an extended type of law of the iterated logarithm.[3](p 268) See asymptotic properties of the empirical distribution function for this and related results.

Proof

For simplicity, consider the case of a continuous random variable $X$. Fix $-\infty = x_0 < x_1 < \cdots < x_{m-1} < x_m = \infty$ such that $F(x_j) - F(x_{j-1}) = \frac{1}{m}$ for $j = 1, \dots, m$. Now for all $x \in \mathbb{R}$ there exists $j \in \{1, \dots, m\}$ such that $x \in (x_{j-1}, x_j]$.

$$F_n(x) - F(x) \le F_n(x_j) - F(x_{j-1}) = F_n(x_j) - F(x_j) + \tfrac{1}{m},$$
$$F_n(x) - F(x) \ge F_n(x_{j-1}) - F(x_j) = F_n(x_{j-1}) - F(x_{j-1}) - \tfrac{1}{m}.$$

Therefore,

$$\|F_n - F\|_\infty = \sup_{x \in \mathbb{R}} |F_n(x) - F(x)| \le \max_{j \in \{1, \dots, m\}} |F_n(x_j) - F(x_j)| + \tfrac{1}{m}.$$

Since $\max_{j \in \{1, \dots, m\}} |F_n(x_j) - F(x_j)| \to 0$ almost surely by the strong law of large numbers, we can guarantee that for any positive $\varepsilon$ and any integer $m$ such that $1/m < \varepsilon$, we can find $N$ such that for all $n \ge N$ we have $\max_{j \in \{1, \dots, m\}} |F_n(x_j) - F(x_j)| \le \varepsilon - \tfrac{1}{m}$ almost surely. Combined with the bound above, this implies $\|F_n - F\|_\infty \le \varepsilon$ for all $n \ge N$ almost surely, which is exactly almost sure convergence of $\|F_n - F\|_\infty$ to $0$.
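
The partition bound at the heart of the proof can be checked numerically. The sketch below (illustrative only; it assumes Uniform(0, 1) data so that $F(x) = x$ and the quantile grid is simply $x_j = j/m$) verifies $\sup_x |F_n(x) - F(x)| \le \max_j |F_n(x_j) - F(x_j)| + 1/m$ for one simulated sample.

```python
# Numerical check of the partition bound from the proof, under the
# illustrative assumption of Uniform(0, 1) data (F(x) = x, x_j = j/m).
import numpy as np

rng = np.random.default_rng(1)
n, m = 5000, 20
sample = np.sort(rng.uniform(size=n))
i = np.arange(1, n + 1)

# Exact sup-norm distance ||F_n - F||_inf (attained at the sample points).
sup_dist = max(np.max(i / n - sample), np.max(sample - (i - 1) / n))

# Right-hand side of the bound, evaluated on the grid x_j = j/m.
grid = np.arange(1, m + 1) / m
F_n_grid = np.searchsorted(sample, grid, side="right") / n  # F_n(x_j)
rhs = np.max(np.abs(F_n_grid - grid)) + 1.0 / m             # max_j |F_n(x_j) - F(x_j)| + 1/m

print(sup_dist, "<=", rhs, ":", sup_dist <= rhs)  # the bound holds
```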

Empirical measures

One can generalize the empirical distribution function by replacing the set $(-\infty, x]$ by an arbitrary set $C$ from a class of sets $\mathcal{C}$ to obtain an empirical measure indexed by sets $C \in \mathcal{C}$:

$$P_n(C) = \frac{1}{n} \sum_{i=1}^{n} I_C(X_i), \qquad C \in \mathcal{C},$$

where $I_C$ is the indicator function of the set $C$.

A further generalization is the map induced by $P_n$ on measurable real-valued functions $f$, given by

$$f \mapsto P_n f = \int_S f \, dP_n = \frac{1}{n} \sum_{i=1}^{n} f(X_i), \qquad f \in \mathcal{F}.$$

Then it becomes an important property of these classes whether the strong law of large numbers holds uniformly on $\mathcal{F}$ or $\mathcal{C}$.
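
Both generalizations can be illustrated with a short sketch. The choices below are hypothetical and only for illustration: $S = \mathbb{R}$, standard normal data, the single half-line $(-\infty, 0]$ as a set $C$, and $f(x) = x^2$ as a single test function.

```python
# Illustrative computation of an empirical measure P_n(C) and of the
# induced map P_n f (assumptions: S = R, iid standard normal data,
# C = (-inf, 0], f(x) = x**2; none of these are from the article).
import numpy as np

rng = np.random.default_rng(2)
sample = rng.standard_normal(1000)  # X_1, ..., X_n

def P_n_set(indicator) -> float:
    """Empirical measure P_n(C) = (1/n) * sum_i I_C(X_i)."""
    return float(np.mean(indicator(sample)))

def P_n_fun(f) -> float:
    """Induced map P_n f = (1/n) * sum_i f(X_i)."""
    return float(np.mean(f(sample)))

# P_n((-inf, 0]) should be close to P((-inf, 0]) = 0.5 for standard normal data ...
print(P_n_set(lambda x: x <= 0.0))
# ... and P_n f for f(x) = x^2 should be close to P f = E[X^2] = 1.
print(P_n_fun(lambda x: x ** 2))
```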

Glivenko–Cantelli class

Consider a set $S$ with a sigma algebra of Borel subsets $A$ and a probability measure $P$. For a class of subsets,

$$\mathcal{C} \subset \{ C : C \text{ is a measurable subset of } S \}$$

and a class of functions

$$\mathcal{F} \subset \{ f : S \to \mathbb{R} \mid f \text{ is measurable} \},$$

define random variables

$$\|P_n - P\|_{\mathcal{C}} = \sup_{C \in \mathcal{C}} |P_n(C) - P(C)|,$$
$$\|P_n - P\|_{\mathcal{F}} = \sup_{f \in \mathcal{F}} |P_n f - P f|,$$

where $P_n(C)$ is the empirical measure, $P_n f$ is the corresponding map, and

$\operatorname{E} f = \int_S f \, dP = P f$, assuming that it exists.

Definitions

  • A class $\mathcal{C}$ is called a Glivenko–Cantelli class (or GC class, or sometimes strong GC class) with respect to a probability measure $P$ if $\|P_n - P\|_{\mathcal{C}} \to 0$ almost surely as $n \to \infty$.
  • A class $\mathcal{C}$ is a weak Glivenko–Cantelli class with respect to $P$ if it instead satisfies the weaker condition $\|P_n - P\|_{\mathcal{C}} \to 0$ in probability as $n \to \infty$.
  • A class is called a universal Glivenko–Cantelli class if it is a GC class with respect to any probability measure $P$ on $(S, A)$.
  • A class is a weak uniform Glivenko–Cantelli class if the convergence occurs uniformly over all probability measures $P$ on $(S, A)$: for every $\varepsilon > 0$, $\sup_{P \in \mathcal{P}(S, A)} \Pr\bigl( \|P_n - P\|_{\mathcal{C}} > \varepsilon \bigr) \to 0$ as $n \to \infty$, where $\mathcal{P}(S, A)$ denotes the set of all probability measures on $(S, A)$.
  • A class is a (strong) uniform Glivenko–Cantelli class if it satisfies the stronger condition that for every $\varepsilon > 0$, $\sup_{P \in \mathcal{P}(S, A)} \Pr\bigl( \sup_{m \ge n} \|P_m - P\|_{\mathcal{C}} > \varepsilon \bigr) \to 0$ as $n \to \infty$.

Glivenko–Cantelli classes of functions (as well as their uniform and universal forms) are defined similarly, replacing all instances of $\mathcal{C}$ with $\mathcal{F}$.
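
For a finite class the supremum above is a maximum, so the random variable $\|P_n - P\|_{\mathcal{F}}$ can be computed directly. The following sketch (illustrative assumptions: $P$ is the uniform measure on $[0, 1]$ and $\mathcal{F} = \{x \mapsto \sin(k\pi x) : k = 1, \dots, 5\}$) shows it tending to zero, consistent with the fact that any finite class of $P$-integrable functions is a Glivenko–Cantelli class.

```python
# Illustrative computation of ||P_n - P||_F for a small finite function
# class (assumptions: P = Uniform(0, 1), F = {sin(k*pi*x) : k = 1..5}).
import numpy as np

rng = np.random.default_rng(3)

def gc_norm(n: int) -> float:
    """Return max_{f in F} |P_n f - P f| over the finite class above."""
    sample = rng.uniform(size=n)
    deviations = []
    for k in range(1, 6):
        P_n_f = np.mean(np.sin(k * np.pi * sample))
        P_f = (1.0 - np.cos(k * np.pi)) / (k * np.pi)  # integral of sin(k*pi*x) on [0, 1]
        deviations.append(abs(P_n_f - P_f))
    return max(deviations)

for n in (100, 1000, 10000):
    print(n, gc_norm(n))  # tends to 0: a finite class of integrable functions is GC
```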

The weak and strong versions of the various Glivenko-Cantelli properties often coincide under certain regularity conditions. The following definition commonly appears in such regularity conditions:

  • A class of functions $\mathcal{F}$ is image-admissible Suslin if there exists a Suslin space $\Omega$ and a surjection $T : \Omega \to \mathcal{F}$ such that the map $(x, y) \mapsto [T(y)](x)$ is measurable on $S \times \Omega$.
  • A class of measurable sets $\mathcal{C}$ is image-admissible Suslin if the class of functions $\{ \mathbf{1}_C : C \in \mathcal{C} \}$ is image-admissible Suslin, where $\mathbf{1}_C$ denotes the indicator function of the set $C$.


Theorems

The following two theorems give sufficient conditions for the weak and strong versions of the Glivenko-Cantelli property to be equivalent.

Theorem (Talagrand, 1987)[6]

Let $\mathcal{F}$ be a class of functions that is $P$-integrable, and define $\mathcal{F}_0 = \{ f - P f : f \in \mathcal{F} \}$. Then the following are equivalent:
  • $\mathcal{F}$ is a weak Glivenko–Cantelli class and $\mathcal{F}_0$ is dominated by an integrable function;
  • $\mathcal{F}$ is a Glivenko–Cantelli class.


Theorem (Dudley, Giné, and Zinn, 1991)[7]

Suppose that a function class $\mathcal{F}$ is bounded. Also suppose that the set $\mathcal{F}_0 = \{ f - \inf_x f : f \in \mathcal{F} \}$ is image-admissible Suslin. Then $\mathcal{F}$ is a weak uniform Glivenko–Cantelli class if and only if it is a strong uniform Glivenko–Cantelli class.

The following theorem is central to statistical learning of binary classification tasks.

Theorem (Vapnik and Chervonenkis, 1968)[8]

Under certain consistency conditions, a universally measurable class of sets $\mathcal{C}$ is a uniform Glivenko–Cantelli class if and only if it is a Vapnik–Chervonenkis class.

There exist a variety of consistency conditions for the equivalence of uniform Glivenko–Cantelli and Vapnik–Chervonenkis classes. In particular, either of the following conditions on a class $\mathcal{C}$ suffices:[9]

  • $\mathcal{C}$ is image-admissible Suslin.
  • $\mathcal{C}$ is universally separable: there exists a countable subset $\mathcal{C}_0$ of $\mathcal{C}$ such that each set $C \in \mathcal{C}$ can be written as the pointwise limit of a sequence of sets in $\mathcal{C}_0$.

Examples

  • Let $S = \mathbb{R}$ and $\mathcal{C} = \{ (-\infty, t] : t \in \mathbb{R} \}$. The classical Glivenko–Cantelli theorem implies that this class is a universal GC class. Furthermore, by Kolmogorov's theorem, $\sup_{P \in \mathcal{P}(S, A)} \|P_n - P\|_{\mathcal{C}} \sim n^{-1/2}$; that is, $\mathcal{C}$ is a uniform Glivenko–Cantelli class.
  • Let $P$ be a nonatomic probability measure on $S$ and $\mathcal{C}$ be the class of all finite subsets of $S$. Because $A_n = \{X_1, \dots, X_n\} \in \mathcal{C}$, $P(A_n) = 0$, and $P_n(A_n) = 1$, we have $\|P_n - P\|_{\mathcal{C}} = 1$, so $\mathcal{C}$ is not a GC class with respect to $P$ (see the numerical sketch after this list).
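
The second example can be echoed numerically. The sketch below assumes, purely for illustration, that $P$ is the uniform measure on $[0, 1]$, which is nonatomic.

```python
# Numerical echo of the second example above (illustrative assumption:
# P = Uniform(0, 1), a nonatomic measure). Taking C = {X_1, ..., X_n}
# gives P_n(C) = 1 while P(C) = 0, so ||P_n - P||_C = 1 for every n.
import numpy as np

rng = np.random.default_rng(4)
for n in (10, 100, 1000):
    sample = rng.uniform(size=n)                  # X_1, ..., X_n drawn from P
    A_n = set(sample.tolist())                    # the finite set {X_1, ..., X_n}, a member of C
    P_n_A = np.mean([x in A_n for x in sample])   # empirical measure P_n(A_n); always 1
    P_A = 0.0                                     # any finite set has P-measure 0
    print(n, abs(P_n_A - P_A))                    # stays at 1, so the class is not GC
```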

See also

References

  1. ^ Tucker, Howard G. (1959). "A Generalization of the Glivenko–Cantelli Theorem". The Annals of Mathematical Statistics. 30 (3): 828–830. doi:10.1214/aoms/1177706212. JSTOR 2237422.
  2. ^ van der Vaart, A. W. (1998). Asymptotic Statistics. Cambridge University Press. p. 279. ISBN 978-0-521-78450-4.
  3. ^ a b van der Vaart, A.W. (1998). Asymptotic Statistics. Cambridge University Press. ISBN 978-0-521-78450-4.
  4. ^ Glivenko, V. (1933). "Sulla determinazione empirica delle leggi di probabilità". Giorn. Ist. Ital. Attuari (in Italian). 4: 92–99.
  5. ^ Cantelli, F.P. (1933). "Sulla determinazione empirica delle leggi di probabilità". Giorn. Ist. Ital. Attuari. 4: 421–424.
  6. ^ Talagrand, M. (1987). "The Glivenko-Cantelli Problem". Annals of Probability. 15: 837–870. doi:10.1214/AOP/1176992069.
  7. ^ Dudley, Richard M.; Giné, Evarist; Zinn, Joel C. (1991). "Uniform and universal Glivenko-Cantelli classes". Journal of Theoretical Probability. 4: 485–510. doi:10.1007/BF01210321.
  8. ^ Vapnik, V.N.; Chervonenkis, A.Ya. (1971). "On the Uniform Convergence of Relative Frequencies of Events to Their Probabilities". Theory of Probability & Its Applications. 16 (2): 264–280. doi:10.1137/1116025.
  9. ^ Pestov, Vladimir (2011). "PAC learnability versus VC dimension: A footnote to a basic result of statistical learning". The 2011 International Joint Conference on Neural Networks. pp. 1141–1145. arXiv:1104.2097. doi:10.1109/IJCNN.2011.6033352.

Further reading