# Cramér–von Mises criterion

In statistics, the Cramér–von Mises criterion is used for judging the goodness of fit of a cumulative distribution function $F^{*}$ compared to a given empirical distribution function $F_{n}$, or for comparing two empirical distributions. It is also used as a component of other algorithms, such as minimum distance estimation. It is defined as

$\omega ^{2}=\int _{-\infty }^{\infty }[F_{n}(x)-F^{*}(x)]^{2}\,\mathrm {d} F^{*}(x)$

In one-sample applications, $F^{*}$ is the theoretical distribution and $F_{n}$ is the empirically observed distribution. Alternatively, the two distributions can both be empirically estimated; this is called the two-sample case.

The criterion is named after Harald Cramér and Richard Edler von Mises, who first proposed it in 1928–1930. The generalization to two samples is due to Anderson.

The Cramér–von Mises test is an alternative to the Kolmogorov–Smirnov test (1933).

## Cramér–von Mises test (one sample)

Let $x_{1},x_{2},\cdots ,x_{n}$  be the observed values, in increasing order. Then the statistic is

$T=n\omega ^{2}={\frac {1}{12n}}+\sum _{i=1}^{n}\left[{\frac {2i-1}{2n}}-F(x_{i})\right]^{2}.$

If this value is larger than the tabulated value, then the hypothesis that the data came from the distribution $F$  can be rejected.
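As a sketch of the computation (the function name and signature are illustrative, not from the source), the one-sample statistic can be evaluated directly from the sorted sample and a callable hypothesized CDF:

```python
def cramer_von_mises_T(sample, cdf):
    """One-sample Cramér–von Mises statistic T = n * omega^2.

    `cdf` is the hypothesized distribution function F, evaluated pointwise.
    """
    xs = sorted(sample)  # order statistics x_(1) <= ... <= x_(n)
    n = len(xs)
    return 1.0 / (12 * n) + sum(
        ((2 * i - 1) / (2 * n) - cdf(x)) ** 2
        for i, x in enumerate(xs, start=1)
    )
```

Note that if $F(x_{i})=(2i-1)/(2n)$ exactly for every $i$, each bracketed term vanishes and $T$ attains its minimum value $1/(12n)$. Recent versions of SciPy also provide `scipy.stats.cramervonmises` for this test.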

### Watson test

A modified version of the Cramér–von Mises test is the Watson test, which uses the statistic $U^{2}$, where

$U^{2}=T-n({\bar {F}}-{\tfrac {1}{2}})^{2},$

where

${\bar {F}}={\frac {1}{n}}\sum _{i=1}^{n}F(x_{i}).$
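A minimal sketch of Watson's statistic under the same assumptions as above (a callable CDF; the function name is illustrative):

```python
def watson_U2(sample, cdf):
    """Watson statistic U^2 = T - n * (F_bar - 1/2)^2 for a callable CDF F."""
    xs = sorted(sample)
    n = len(xs)
    F = [cdf(x) for x in xs]
    # One-sample Cramér–von Mises statistic T = n * omega^2
    T = 1.0 / (12 * n) + sum(
        ((2 * i - 1) / (2 * n) - F[i - 1]) ** 2 for i in range(1, n + 1)
    )
    F_bar = sum(F) / n  # mean of the F(x_i)
    return T - n * (F_bar - 0.5) ** 2
```

Unlike $T$, Watson's $U^{2}$ is invariant to rotations of the origin, which makes it suitable for observations on a circle.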

## Cramér–von Mises test (two samples)

Let $x_{1},x_{2},\cdots ,x_{N}$  and $y_{1},y_{2},\cdots ,y_{M}$  be the observed values in the first and second sample respectively, in increasing order. Let $r_{1},r_{2},\cdots ,r_{N}$  be the ranks of the x's in the combined sample, and let $s_{1},s_{2},\cdots ,s_{M}$  be the ranks of the y's in the combined sample. Anderson shows that

$T={\frac {NM}{N+M}}\omega ^{2}={\frac {U}{NM(N+M)}}-{\frac {4MN-1}{6(M+N)}}$

where U is defined as

$U=N\sum _{i=1}^{N}(r_{i}-i)^{2}+M\sum _{j=1}^{M}(s_{j}-j)^{2}$

If the value of T is larger than the tabulated value, the hypothesis that the two samples come from the same distribution can be rejected. (Some books give critical values for U directly, which is more convenient since it avoids computing T via the expression above; the conclusion is the same.)

The above assumes there are no duplicates in the $x$, $y$, and $r$ sequences, so each $x_{i}$ is unique and its rank is $i$ in the sorted list $x_{1},\dots,x_{N}$. If there are duplicates, and $x_{i}$ through $x_{j}$ are a run of identical values in the sorted combined sample, one common approach is the midrank method: assign each duplicate the rank $(i+j)/2$. In the expressions $(r_{i}-i)^{2}$ and $(s_{j}-j)^{2}$ above, ties can therefore affect all four quantities $r_{i}$, $i$, $s_{j}$, and $j$.
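The two-sample statistic, with midranks for ties, can be sketched as follows (the function names are illustrative; recent SciPy versions offer `scipy.stats.cramervonmises_2samp` as a tested alternative):

```python
def two_sample_cvm_T(x, y):
    """Two-sample Cramér–von Mises statistic T from ranks, using midranks for ties."""
    N, M = len(x), len(y)
    pooled = sorted(list(x) + list(y))

    def midrank(v):
        """Average 1-based position of all copies of v in the pooled sorted sample."""
        lo = pooled.index(v)             # first 0-based position of v
        hi = lo + pooled.count(v) - 1    # last 0-based position of v
        return (lo + hi) / 2 + 1

    r = [midrank(v) for v in sorted(x)]  # ranks of the x's in the combined sample
    s = [midrank(v) for v in sorted(y)]  # ranks of the y's in the combined sample
    U = N * sum((r[i] - (i + 1)) ** 2 for i in range(N)) \
        + M * sum((s[j] - (j + 1)) ** 2 for j in range(M))
    return U / (N * M * (N + M)) - (4 * M * N - 1) / (6 * (M + N))
```

For example, for the samples $x=(1,3)$ and $y=(2,4)$ the combined ranks are $r=(1,3)$ and $s=(2,4)$, giving $U=12$ and $T=12/16-15/24=0.125$.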