An invariant estimator is an intuitively appealing non-Bayesian estimator; it is also sometimes called an "equivariant estimator". In the estimation problem we have a random vector $X$ from a sample space $\mathcal{X}$ with density function $f(x \mid \theta)$, where $\theta$ is a parameter from the space $\Theta$. We want to estimate $\theta$ given a set of measurements $x$ from the distribution $f(x \mid \theta)$. The estimate, denoted by $a$, is a function of the measurements and lies in the space $A$. The quality of the result is defined by a loss function $L(\theta, a)$, which determines a risk function $R(\theta, \delta) = E[L(\theta, \delta(x)) \mid \theta]$.

Generally speaking, an invariant estimator is an estimator that obeys the following two rules:

1. Principle of Rational Invariance: The action taken in a decision problem should not depend on transformations applied to the measurements used

2. Invariance Principle: If two decision problems have the same formal structure (in terms of $\mathcal{X}$, $\Theta$, $f(x \mid \theta)$ and $L$), then the same decision rule should be used in each problem

To define an invariant estimator formally, we will first set out some definitions about groups of transformations:

Invariant Estimation Problem and Invariant Estimator

A group of transformations of $\mathcal{X}$, to be denoted by $G$, is a set of (measurable) 1:1 and onto transformations of $\mathcal{X}$ into itself, which satisfies the following conditions:

1. If $g_1 \in G$ and $g_2 \in G$ then $g_1 g_2 \in G$

2. If $g \in G$ then $g^{-1} \in G$ (where $g^{-1}(g(x)) = x$)

3. $e \in G$ (the identity transformation: $e(x) = x$)

$x_1$ and $x_2$ in $\mathcal{X}$ are equivalent if $x_1 = g(x_2)$ for some $g \in G$. All the equivalent points form an equivalence class; such an equivalence class is called an orbit (in $\mathcal{X}$). The $x_0$ orbit, $\mathcal{X}(x_0)$, is the set $\mathcal{X}(x_0) = \{g(x_0) : g \in G\}$. If $\mathcal{X}$ consists of a single orbit then $G$ is said to be transitive.
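As a concrete illustration (a minimal sketch of our own, not part of the original exposition), consider the translation group $G = \{g_c : g_c(x) = x + c,\ c \in \mathbb{R}\}$ acting on $\mathcal{X} = \mathbb{R}$. The snippet below checks the three group conditions and the fact that $\mathbb{R}$ is a single orbit, so this $G$ is transitive.

```python
import numpy as np

# Translations of the real line: g_c(x) = x + c (illustrative names only).
def g(c):
    return lambda x: x + c

x = 1.7
c1, c2 = 0.5, -2.0

# Condition 1 (closure): composing g_{c1} with g_{c2} gives the translation by c1 + c2.
assert np.isclose(g(c1)(g(c2)(x)), g(c1 + c2)(x))

# Condition 2 (inverse): g_{-c} undoes g_c, i.e. g_c^{-1} = g_{-c}.
assert np.isclose(g(-c1)(g(c1)(x)), x)

# Condition 3 (identity): g_0 leaves every point unchanged.
assert np.isclose(g(0.0)(x), x)

# Transitivity: for any x1, x2 there is a c with g_c(x1) = x2, namely c = x2 - x1,
# so the whole real line is one orbit.
x1, x2 = -3.2, 4.8
assert np.isclose(g(x2 - x1)(x1), x2)
print("closure, inverse, identity and transitivity all hold for translations")
```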

A family of densities $F$ is said to be invariant under the group $G$ if, for every $g \in G$ and $\theta \in \Theta$, there exists a unique $\theta^* \in \Theta$ such that $Y = g(X)$ has density $f(y \mid \theta^*)$. $\theta^*$ will be denoted $\bar{g}(\theta)$.

If $F$ is invariant under the group $G$, then the loss function $L(\theta, a)$ is said to be invariant under $G$ if for every $g \in G$ and $a \in A$ there exists an $a^* \in A$ such that $L(\theta, a) = L(\bar{g}(\theta), a^*)$ for all $\theta \in \Theta$. $a^*$ will be denoted $\tilde{g}(a)$.
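For a hedged illustration of these two definitions (our own sketch; the normal location family, the translation $c$ and the sample size are assumptions), take $X \sim N(\theta, 1)$ and $g_c(x) = x + c$: then $Y = g_c(X)$ has density $f(y \mid \theta + c)$, so $\bar{g}_c(\theta) = \theta + c$, and the squared-error loss satisfies $L(\theta, a) = L(\theta + c, a + c)$, so $\tilde{g}_c(a) = a + c$.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
theta, c = 1.5, 2.5

# Family invariance: if X ~ N(theta, 1) then Y = g_c(X) = X + c should have
# density f(y | theta*) with theta* = theta + c.
y = rng.normal(loc=theta, size=100_000) + c
ks = stats.kstest(y, stats.norm(loc=theta + c).cdf)
print(f"KS test of Y against N(theta + c, 1): p = {ks.pvalue:.3f}")
# A non-tiny p-value is consistent with Y having density f(y | theta + c).

# Loss invariance for squared error: L(theta, a) = (a - theta)^2 equals
# L(theta + c, a + c), so a* = a + c.
a = 0.7
assert np.isclose((a - theta) ** 2, ((a + c) - (theta + c)) ** 2)
print("squared-error loss is invariant under translations")
```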

$\bar{G} = \{\bar{g} : g \in G\}$ is a group of transformations from $\Theta$ to itself and $\tilde{G} = \{\tilde{g} : g \in G\}$ is a group of transformations from $A$ to itself.

An estimation problem is invariant under $G$ if there exist three such groups $G$, $\bar{G}$, $\tilde{G}$.

For an estimation problem that is invariant under $G$, an estimator $\delta(x)$ is an invariant estimator under $G$ if, for all $x \in \mathcal{X}$ and $g \in G$, $\delta(g(x)) = \tilde{g}(\delta(x))$.
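As a small sketch of this defining property (our own example; the normal data and the choice of estimator are assumptions), the sample mean is an invariant (equivariant) estimator under translations: shifting every measurement by $c$ shifts the estimate by the same $c$.

```python
import numpy as np

rng = np.random.default_rng(0)

def delta(x):
    """Candidate estimator: the sample mean."""
    return x.mean()

x = rng.normal(loc=2.0, scale=1.0, size=20)  # some measurements
c = 3.7                                      # an arbitrary translation

# Equivariance: delta(g_c(x)) should equal g-tilde_c(delta(x)) = delta(x) + c.
assert np.isclose(delta(x + c), delta(x) + c)
print("delta(x + c) = delta(x) + c: the sample mean is an invariant estimator")

# By contrast, an estimator such as x -> mean(x)**2 does not satisfy this.
print("mean(x + c)**2 =", (x + c).mean() ** 2, " vs  mean(x)**2 + c =", x.mean() ** 2 + c)
```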

Properties of Invariant Estimators

1. The risk function of an invariant estimator $\delta$ is constant on orbits of $\Theta$. Equivalently, $R(\theta, \delta) = R(\bar{g}(\theta), \delta)$ for all $\theta \in \Theta$ and $\bar{g} \in \bar{G}$.

2. The risk function of an invariant estimator with transitive $\bar{G}$ is constant.

For a given problem, the invariant estimator with the lowest risk is termed the "best invariant estimator". A best invariant estimator cannot always be achieved. A special case in which it can be achieved is the case when $\bar{G}$ is transitive.
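The constant-risk property can be checked numerically. The sketch below (our own; the normal location model, the sample size and the Monte Carlo budget are assumptions) estimates the squared-error risk of the sample mean at several values of $\theta$; since $\bar{G}$ is transitive for this location family, the estimates agree up to simulation noise.

```python
import numpy as np

rng = np.random.default_rng(1)

def risk(theta, n=10, reps=200_000):
    """Monte Carlo estimate of R(theta, mean) = E[(mean(X) - theta)^2],
    where X_1, ..., X_n are i.i.d. N(theta, 1)."""
    x = rng.normal(loc=theta, scale=1.0, size=(reps, n))
    return ((x.mean(axis=1) - theta) ** 2).mean()

for theta in (-5.0, 0.0, 3.0, 100.0):
    print(f"theta = {theta:7.1f}   estimated risk = {risk(theta):.4f}")
# Every estimate is close to 1/n = 0.1: the risk does not depend on theta.
```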

Location Parameter Problem Example

$\theta$ is a location parameter if the density of $X$ is $f(x - \theta)$. For $\Theta = A = \mathbb{R}^1$ and $L = L(a - \theta)$ the problem is invariant under $g = \bar{g} = \tilde{g} = \{g_c : g_c(x) = x + c,\ c \in \mathbb{R}\}$. An invariant estimator in this case must satisfy $\delta(x + c) = \delta(x) + c$ for all $c \in \mathbb{R}$, thus it is of the form $\delta(x) = x + K$ ($K \in \mathbb{R}$). $\bar{G}$ is transitive on $\Theta$, so we have here constant risk: $R(\theta, \delta) = R(0, \delta) = E[L(X + K) \mid \theta = 0]$. The best invariant estimator is the one that brings the risk $R(\theta, \delta)$ to a minimum.

In the case that $L$ is the squared error, $\delta(x) = x - E[X \mid \theta = 0]$.
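As a worked instance (our own, assuming a standard-exponential location family): if $f(x) = e^{-x}$ for $x \ge 0$, then $E[X \mid \theta = 0] = 1$, so the best invariant estimator under squared error is $\delta(x) = x - 1$. The sketch below recovers $K = -1$ by minimizing the constant risk $E[(X + K)^2 \mid \theta = 0]$ over a grid of $K$ values.

```python
import numpy as np

rng = np.random.default_rng(2)

# Location family built from the standard exponential: f(x) = exp(-x), x >= 0.
x0 = rng.exponential(scale=1.0, size=500_000)  # draws under theta = 0

# Risk of delta(x) = x + K under squared error is E[(X + K)^2 | theta = 0].
Ks = np.linspace(-2.0, 0.0, 201)
risks = [((x0 + K) ** 2).mean() for K in Ks]
K_best = Ks[int(np.argmin(risks))]

print(f"empirical best K ~ {K_best:.2f}   (theory: K = -E[X | theta = 0] = -1)")
```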

Pitman Estimator

Given the estimation problem: $X = (X_1, \dots, X_n)$ with density $f(x_1 - \theta, \dots, x_n - \theta)$ and loss $L(|a - \theta|)$. This problem is invariant under $G = \{g_c : g_c(x) = (x_1 + c, \dots, x_n + c),\ c \in \mathbb{R}^1\}$, $\bar{G} = \{g_c : g_c(\theta) = \theta + c,\ c \in \mathbb{R}^1\}$ and $\tilde{G} = \{g_c : g_c(a) = a + c,\ c \in \mathbb{R}^1\}$ (additive groups).

The best invariant estimator $\delta(x)$ is the one that minimizes $\frac{\int_{-\infty}^{\infty} L(\delta(x) - \theta)\, f(x_1 - \theta, \dots, x_n - \theta)\, d\theta}{\int_{-\infty}^{\infty} f(x_1 - \theta, \dots, x_n - \theta)\, d\theta}$ (Pitman's estimator, 1939).

For the squared error loss case we get that $\delta(x) = \frac{\int_{-\infty}^{\infty} \theta\, f(x_1 - \theta, \dots, x_n - \theta)\, d\theta}{\int_{-\infty}^{\infty} f(x_1 - \theta, \dots, x_n - \theta)\, d\theta}$.

If $x \sim N(\theta 1_n, I)$ (the normal case) then $\delta(x) = \bar{x}$.

If $x \sim C(\theta 1_n, \sigma^2 I)$ (the Cauchy case) then $\delta(x) \ne \bar{x}$ when $n > 1$; a closed-form expression for the Pitman estimator of the Cauchy location parameter is derived by Cohen Freue (2007).
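The following is a numerical sketch of the squared-error Pitman estimator (our own illustration; the sample sizes, the integration grid and the specific distributions are assumptions). It approximates the ratio of integrals above on a uniform grid of $\theta$ values: for a normal sample the result coincides with the sample mean, while for a Cauchy sample it generally does not.

```python
import numpy as np
from scipy import stats

def pitman(x, pdf, half_width=50.0, num=20_001):
    """Squared-error Pitman estimator: the ratio of the two theta-integrals,
    approximated on a uniform grid (the grid spacing cancels in the ratio)."""
    theta = np.linspace(np.median(x) - half_width, np.median(x) + half_width, num)
    # Joint density of the sample at each grid value: prod_i f(x_i - theta).
    f = np.prod(pdf(x[None, :] - theta[:, None]), axis=1)
    return (theta * f).sum() / f.sum()

rng = np.random.default_rng(3)
x_norm = rng.normal(loc=2.0, size=7)
x_cauchy = stats.cauchy.rvs(loc=2.0, size=7, random_state=rng)

print("normal sample:  Pitman =", pitman(x_norm, stats.norm.pdf),
      "  sample mean =", x_norm.mean())
print("cauchy sample:  Pitman =", pitman(x_cauchy, stats.cauchy.pdf),
      "  sample mean =", x_cauchy.mean())
# For the normal sample the two values agree (up to grid error); for the
# Cauchy sample the Pitman estimator generally differs from the sample mean.
```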

References

  • Berger, James O. (1980). Statistical Decision Theory and Bayesian Analysis. Springer Series in Statistics. ISBN 0-387-90471-9.
  • Cohen Freue, Gabriela V. (2007). "The Pitman estimator of the Cauchy location parameter". Journal of Statistical Planning and Inference 137, 1900–1913.