
Conditional expectation

In probability theory, the conditional expectation, conditional expected value, or conditional mean of a random variable is its expected value – the value it would take “on average” over an arbitrarily large number of occurrences – given that a certain set of "conditions" is known to occur. If the random variable can take on only a finite number of values, the “conditions” are that the variable can only take on a subset of those values. More formally, in the case when the random variable is defined over a discrete probability space, the "conditions" are a partition of this probability space.

With multiple random variables, for one random variable to be mean independent of all others both individually and collectively means that each conditional expectation equals the random variable's (unconditional) expected value. This always holds if the variables are independent, but mean independence is a weaker condition.

Depending on the nature of the conditioning, the conditional expectation can be either a random variable itself or a fixed value. With two random variables, if the expectation of a random variable $X$ is expressed conditional on another random variable $Y$ without a particular value of $Y$ being specified, then the expectation of $X$ conditional on $Y$, denoted $E(X \mid Y)$, is a function of the random variable $Y$ and hence is itself a random variable. Alternatively, if the expectation of $X$ is expressed conditional on the occurrence of a particular value of $Y$, denoted $y$, then the conditional expectation $E(X \mid Y = y)$ is a fixed value.

This concept generalizes to any probability space using measure theory.

In modern probability theory the concept of conditional probability is defined in terms of conditional expectation.

Examples

Example 1. Consider the roll of a fair die and let A = 1 if the number is even (i.e. 2, 4, or 6) and A = 0 otherwise. Furthermore, let B = 1 if the number is prime (i.e. 2, 3, or 5) and B = 0 otherwise.

Die roll  1  2  3  4  5  6
A         0  1  0  1  0  1
B         0  1  1  0  1  0

The unconditional expectation of A is $E(A) = \tfrac{0+1+0+1+0+1}{6} = \tfrac{1}{2}$. But the expectation of A conditional on B = 1 (i.e., conditional on the die roll being 2, 3, or 5) is $E(A \mid B = 1) = \tfrac{1+0+0}{3} = \tfrac{1}{3}$, and the expectation of A conditional on B = 0 (i.e., conditional on the die roll being 1, 4, or 6) is $E(A \mid B = 0) = \tfrac{0+1+1}{3} = \tfrac{2}{3}$. Likewise, the expectation of B conditional on A = 1 is $E(B \mid A = 1) = \tfrac{1+0+0}{3} = \tfrac{1}{3}$, and the expectation of B conditional on A = 0 is $E(B \mid A = 0) = \tfrac{0+1+1}{3} = \tfrac{2}{3}$.
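
These values can be checked directly by enumerating the six equally likely outcomes. The following Python sketch (the variable names are ours, not part of the example) reproduces the table above and the four conditional expectations.

```python
from fractions import Fraction

outcomes = [1, 2, 3, 4, 5, 6]                   # fair die, all outcomes equally likely
A = {w: int(w % 2 == 0) for w in outcomes}      # A = 1 if the roll is even
B = {w: int(w in (2, 3, 5)) for w in outcomes}  # B = 1 if the roll is prime

def expectation(f, event):
    """Average of f over the outcomes in `event` (all equally likely)."""
    return Fraction(sum(f[w] for w in event), len(event))

print(expectation(A, outcomes))                            # E[A]        = 1/2
print(expectation(A, [w for w in outcomes if B[w] == 1]))  # E[A | B=1]  = 1/3
print(expectation(A, [w for w in outcomes if B[w] == 0]))  # E[A | B=0]  = 2/3
print(expectation(B, [w for w in outcomes if A[w] == 1]))  # E[B | A=1]  = 1/3
print(expectation(B, [w for w in outcomes if A[w] == 0]))  # E[B | A=0]  = 2/3
```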

Example 2. Suppose we have daily rainfall data (mm of rain each day) collected by a weather station on every day of the ten-year (3652-day) period from Jan 1, 1990 to Dec 31, 1999. The unconditional expectation of rainfall for an unspecified day is the average of the rainfall amounts of those 3652 days. The conditional expectation of rainfall for an otherwise unspecified day known to be (conditional on being) in the month of March is the average of daily rainfall over all 310 days of the ten-year period that fall in March. And the conditional expectation of rainfall conditional on days dated March 2 is the average of the rainfall amounts that occurred on the ten days with that specific date.
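
The three averages are just group means over increasingly restrictive conditions. The sketch below illustrates the grouping logic only; the station's actual records are not available here, so a synthetic daily series stands in for them.

```python
import numpy as np
import pandas as pd

# Synthetic stand-in for the station's records: one rainfall amount (mm) per day.
rng = np.random.default_rng(0)
days = pd.date_range("1990-01-01", "1999-12-31", freq="D")   # 3652 days
rain = pd.Series(rng.gamma(shape=0.4, scale=5.0, size=len(days)), index=days)

print(rain.mean())                                            # unconditional mean over all 3652 days
print(rain[rain.index.month == 3].mean())                     # conditional on "the day is in March" (310 days)
print(rain[(rain.index.month == 3) & (rain.index.day == 2)].mean())  # conditional on "the day is March 2" (10 days)
```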

History

The related concept of conditional probability dates back at least to Laplace who calculated conditional distributions. It was Andrey Kolmogorov who in 1933 formalized it using the Radon–Nikodym theorem.[1] In works of Paul Halmos[2] and Joseph L. Doob[3] from 1953, conditional expectation was generalized to its modern definition using sub-σ-algebras.[4]

Classical definition

Conditional expectation with respect to an event

In classical probability theory the conditional expectation of $X$ given an event $H$ (which may be the event $Y = y$ for a random variable $Y$) is the average of $X$ over all outcomes in $H$, that is

$$E(X \mid H) = \frac{\sum_{\omega \in H} X(\omega)}{|H|}$$

where $|H|$ is the cardinality of $H$.

The sum above can be grouped by different values of $X(\omega)$, to get a sum over the range $\mathcal{X}$ of $X$:

$$E(X \mid H) = \sum_{x \in \mathcal{X}} x \, \frac{|\{\omega \in H \mid X(\omega) = x\}|}{|H|}.$$

In modern probability theory, when $H$ is an event with strictly positive probability, it is possible to give a similar formula. This is notably the case for a discrete random variable $Y$ and for $y$ in the range of $Y$ if the event $H$ is $Y = y$. Let $(\Omega, \mathcal{F}, P)$ be a probability space, let $X$ be a random variable on that probability space, and let $H \in \mathcal{F}$ be an event with strictly positive probability $P(H) > 0$. Then the conditional expectation of $X$ given the event $H$ is

$$E(X \mid H) = \int_{\mathcal{X}} x \, P(dx \mid H) = \frac{1}{P(H)} \int_H X \, dP$$

where $\mathcal{X}$ is the range of $X$ and $P(\cdot \mid H)$ is the probability measure defined, for each set $A$, as $P(A \mid H) = \frac{P(A \cap H)}{P(H)}$, the conditional probability of $A$ given $H$.
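
For a discrete distribution the formula reduces to a weighted average of $X$ over the outcomes in $H$, rescaled by $P(H)$. A minimal sketch follows; the loaded-die probabilities and the event chosen below are an arbitrary illustration, not taken from the article.

```python
from fractions import Fraction

# A loaded die: P(w) for w = 1..6 (the probabilities need not be uniform).
p = {1: Fraction(1, 12), 2: Fraction(1, 12), 3: Fraction(1, 6),
     4: Fraction(1, 6),  5: Fraction(1, 4),  6: Fraction(1, 4)}
X = {w: w for w in p}                  # X is the face value itself

H = {w for w in p if w % 2 == 0}       # the event "the roll is even"
P_H = sum(p[w] for w in H)             # P(H) > 0 is required

# E(X | H) = E(X · 1_H) / P(H), i.e. the average of X under P(· | H).
E_X_given_H = sum(X[w] * p[w] for w in H) / P_H
print(P_H, E_X_given_H)                # 1/2 and 14/3 for this distribution
```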

When $P(H) = 0$ (for instance if $Y$ is a continuous random variable and $H$ is the event $Y = y$, this is in general the case), the Borel–Kolmogorov paradox demonstrates the ambiguity of attempting to define the conditional probability knowing the event $H$. The above formula shows that this problem transposes to the conditional expectation. So instead one only defines the conditional expectation with respect to a σ-algebra or a random variable.

Conditional expectation with respect to a random variable

If Y is a discrete random variable on the same probability space $(\Omega, \mathcal{F}, P)$ having range $\mathcal{Y}$, then the conditional expectation of X with respect to Y is the function $e_X \colon \mathcal{Y} \to \mathbb{R}$ of the variable $y$ defined by

$$e_X(y) = E(X \mid Y = y).$$

There is a closely related function from $\Omega$ to $\mathbb{R}$ defined by

$$\omega \mapsto e_X(Y(\omega)).$$

This function, which is different from the previous one, is the conditional expectation of X with respect to the σ-algebra generated by Y. The two are related by

$$E(X \mid \sigma(Y)) = e_X \circ Y.$$

As mentioned above, if Y is a continuous random variable, it is not possible to define $e_X$ by this method. As explained in the Borel–Kolmogorov paradox, we have to specify what limiting procedure produces the set Y = y. If the event space $\Omega$ has a distance function, then one procedure for doing so is as follows. Define the set $H_\varepsilon(y) = \{\omega \mid \lVert Y(\omega) - y \rVert \le \varepsilon\}$. Assume that each $H_\varepsilon(y)$ is P-measurable and that $P(H_\varepsilon(y)) > 0$ for every $\varepsilon > 0$. Then conditional expectation with respect to $H_\varepsilon(y)$ is well-defined. Take the limit as $\varepsilon$ tends to 0 and define

$$e_X(y) = \lim_{\varepsilon \to 0} E\bigl(X \mid H_\varepsilon(y)\bigr).$$

Replacing this limiting process by the Radon–Nikodym derivative yields an analogous definition that works more generally.
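
Returning to the discrete case, both objects defined above can be written down explicitly on a finite probability space: $e_X$ is a function on the range of $Y$, while $E(X \mid \sigma(Y)) = e_X \circ Y$ is a random variable on $\Omega$. A minimal Python sketch; the outcomes, weights and the maps $X$ and $Y$ below are invented for illustration.

```python
from fractions import Fraction

# A finite probability space: outcomes with (not necessarily equal) weights.
P = {"a": Fraction(1, 4), "b": Fraction(1, 4), "c": Fraction(1, 2)}
X = {"a": 1, "b": 3, "c": 5}
Y = {"a": 0, "b": 0, "c": 1}          # Y groups outcomes a and b together

def e_X(y):
    """e_X(y) = E(X | Y = y), a function on the range of Y."""
    event = [w for w in P if Y[w] == y]
    return sum(X[w] * P[w] for w in event) / sum(P[w] for w in event)

# E(X | sigma(Y)) is a random variable on Omega: the composition e_X(Y(.)).
E_X_given_sigma_Y = {w: e_X(Y[w]) for w in P}

print({y: e_X(y) for y in set(Y.values())})   # {0: 2, 1: 5}
print(E_X_given_sigma_Y)                      # {'a': 2, 'b': 2, 'c': 5}
```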

Formal definition

Conditional expectation with respect to a sub-σ-algebra

 
Figure: conditional expectation with respect to a σ-algebra. In this example the probability space $(\Omega, \mathcal{F}, P)$ is the [0,1] interval with the Lebesgue measure. We consider the full σ-algebra $\mathcal{F}$ itself, the σ-algebra generated by the intervals with end-points 0, ¼, ½, ¾, 1, and the σ-algebra generated by the intervals with end-points 0, ½, 1. Here the conditional expectation is effectively the average over the minimal sets of the σ-algebra.

Consider the following:

  • $(\Omega, \mathcal{F}, P)$ is a probability space.
  • $X \colon \Omega \to \mathbb{R}^n$ is a random variable on that probability space with finite expectation.
  • $\mathcal{H} \subseteq \mathcal{F}$ is a sub-σ-algebra of $\mathcal{F}$.

Since $\mathcal{H}$ is a sub-σ-algebra of $\mathcal{F}$, the function $X \colon \Omega \to \mathbb{R}^n$ is usually not $\mathcal{H}$-measurable, thus the existence of the integrals of the form $\int_H X \, dP|_{\mathcal{H}}$, where $H \in \mathcal{H}$ and $P|_{\mathcal{H}}$ is the restriction of $P$ to $\mathcal{H}$, cannot be stated in general. However, the local averages $\int_H X \, dP$ can be recovered in $(\Omega, \mathcal{H}, P|_{\mathcal{H}})$ with the help of the conditional expectation. A conditional expectation of X given $\mathcal{H}$, denoted as $E(X \mid \mathcal{H})$, is any $\mathcal{H}$-measurable function $\Omega \to \mathbb{R}^n$ which satisfies:

$$\int_H E(X \mid \mathcal{H}) \, dP = \int_H X \, dP$$

for each $H \in \mathcal{H}$.[5]

The existence of $E(X \mid \mathcal{H})$ can be established by noting that $\mu^X(F) = \int_F X \, dP$ for $F \in \mathcal{F}$ is a finite measure on $(\Omega, \mathcal{F})$ that is absolutely continuous with respect to $P$. If $h$ is the natural injection from $\mathcal{H}$ to $\mathcal{F}$, then $\mu^X \circ h = \mu^X|_{\mathcal{H}}$ is the restriction of $\mu^X$ to $\mathcal{H}$ and $P \circ h = P|_{\mathcal{H}}$ is the restriction of $P$ to $\mathcal{H}$. Furthermore, $\mu^X \circ h$ is absolutely continuous with respect to $P \circ h$, because the condition

$$P \circ h(H) = 0$$

implies

$$\mu^X \circ h(H) = 0.$$

Thus, we have

$$E(X \mid \mathcal{H}) = \frac{d\mu^X|_{\mathcal{H}}}{dP|_{\mathcal{H}}} = \frac{d(\mu^X \circ h)}{d(P \circ h)}$$

where the derivatives are Radon–Nikodym derivatives of measures.
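
When the sub-σ-algebra is generated by a finite partition, as in the figure caption above, the defining property reduces to averaging X over each minimal set and holding that average constant on the set. The sketch below illustrates this on $\Omega = [0, 1]$ with a discretised Lebesgue measure; the choice $X(\omega) = \omega^2$ and the quarter/half partitions are illustrative assumptions.

```python
import numpy as np

# Omega = [0,1] with Lebesgue measure, discretised on a fine grid.
omega = np.linspace(0.0, 1.0, 100001)
X = omega**2                                     # an illustrative random variable

def cond_exp(values, cut_points):
    """E(X | H) for H generated by the partition defined by cut_points:
    on each cell the result is the cell average of X."""
    out = np.empty_like(values)
    for lo, hi in zip(cut_points[:-1], cut_points[1:]):
        cell = (omega >= lo) & (omega <= hi)
        out[cell] = values[cell].mean()          # constant on each minimal set
    return out

E_fine   = cond_exp(X, [0, 0.25, 0.5, 0.75, 1])  # quarters: 4 constant pieces
E_coarse = cond_exp(X, [0, 0.5, 1])              # halves: 2 constant pieces
print(E_fine[:3], E_coarse[:3], X.mean())        # the overall mean is 1/3
```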

Conditional expectation with respect to a random variable

Consider, in addition to the above,

  • A measurable space $(U, \Sigma)$, and
  • A random variable $Y \colon \Omega \to U$.

Let $g \colon U \to \mathbb{R}^n$ be a $\Sigma$-measurable function such that, for every $\Sigma$-measurable function $f \colon U \to \mathbb{R}$,

$$\int_\Omega f(Y(\omega)) \, g(Y(\omega)) \, dP(\omega) = \int_\Omega f(Y(\omega)) \, X(\omega) \, dP(\omega).$$

Then the random variable $g(Y)$, denoted as $E(X \mid Y)$, is a conditional expectation of X given $Y$.

This definition is equivalent to defining the conditional expectation with respect to the sub-σ-field of $\mathcal{F}$ (see above) defined by the pre-image of Σ by Y. If we define

$$\mathcal{H} = \sigma(Y) := Y^{-1}(\Sigma) := \{Y^{-1}(B) \mid B \in \Sigma\},$$

then

$$E(X \mid Y) = E(X \mid \mathcal{H}) = E(X \mid \sigma(Y)).$$

Discussion

  • This is not a constructive definition; we are merely given the required property that a conditional expectation must satisfy.
    • The definition of $E(X \mid Y)$ may resemble that of $E(X \mid Y = y)$ for an event $\{Y = y\}$, but these are very different objects. The former is a $\sigma(Y)$-measurable function $\Omega \to \mathbb{R}^n$, while the latter is an element of $\mathbb{R}^n$. Evaluating the former at an outcome $\omega$ with $Y(\omega) = y$ yields the latter.
    • Existence of a conditional expectation function may be proven by the Radon–Nikodym theorem. A sufficient condition is that the (unconditional) expected value for X exists.
    • Uniqueness can be shown to be almost sure: that is, versions of the same conditional expectation will only differ on a set of probability zero.
  • The σ-algebra $\mathcal{H}$ controls the "granularity" of the conditioning. A conditional expectation $E(X \mid \mathcal{H})$ over a finer (larger) σ-algebra $\mathcal{H}$ retains information about the probabilities of a larger class of events. A conditional expectation over a coarser (smaller) σ-algebra averages over more events.

Conditioning as factorization

In the definition of conditional expectation that we provided above, the fact that $Y$ is a real random element is irrelevant. Let $(U, \Sigma)$ be a measurable space, where $\Sigma$ is a σ-algebra on $U$. A $U$-valued random element is a measurable function $Y \colon \Omega \to U$, i.e. $Y^{-1}(B) \in \mathcal{F}$ for all $B \in \Sigma$. The distribution of $Y$ is the probability measure $P_Y \colon \Sigma \to [0, 1]$ defined as the pushforward measure $Y_* P$, that is, such that $P_Y(B) = P(Y^{-1}(B))$.

Theorem. If $X \colon \Omega \to \mathbb{R}$ is an integrable random variable, then there exists a unique integrable random element $E(X \mid Y) \colon U \to \mathbb{R}$, defined $P_Y$-almost surely, such that

$$\int_{Y^{-1}(B)} X \, dP = \int_B E(X \mid Y) \, dP_Y$$

for all $B \in \Sigma$.

Proof sketch. Let $\mu$ be such that $\mu(B) = \int_{Y^{-1}(B)} X \, dP$. Then $\mu$ is a signed measure which is absolutely continuous with respect to $P_Y$. Indeed $P_Y(B) = 0$ means exactly that $P(Y^{-1}(B)) = 0$, and since the integral of an integrable function on a set of probability 0 is 0, this proves absolute continuity. The Radon–Nikodym theorem then proves the existence of a density of $\mu$ with respect to $P_Y$. This density is $E(X \mid Y)$. ∎

Comparing with conditional expectation with respect to sub-σ-algebras, it holds that

$$E(X \mid Y) \circ Y = E\bigl(X \mid \sigma(Y)\bigr).$$

We can further interpret this equality by considering the abstract change of variables formula to transport the integral on the right hand side to an integral over Ω:

$$\int_B E(X \mid Y) \, dP_Y = \int_{Y^{-1}(B)} \bigl(E(X \mid Y) \circ Y\bigr) \, dP.$$

The equation means that the integrals of $X$ and the composition $E(X \mid Y) \circ Y$ over sets of the form $Y^{-1}(B)$, for $B \in \Sigma$, are identical.

This equation can be interpreted to say that the diagram relating $X$, $Y$, and $E(X \mid Y)$ is commutative on average.

Computation

When X and Y are both discrete random variables, then the conditional expectation of X given the event Y = y can be considered as a function of y for y in the range of Y:

$$E(X \mid Y = y) = \sum_{x \in \mathcal{X}} x \, P(X = x \mid Y = y) = \sum_{x \in \mathcal{X}} x \, \frac{P(X = x, Y = y)}{P(Y = y)},$$

where $\mathcal{X}$ is the range of X.
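
Given a joint probability mass function, the formula amounts to renormalising one column of the joint table by $P(Y = y)$ and averaging $x$ against it. A minimal sketch with an invented joint pmf (the numbers are illustrative only):

```python
import numpy as np

x_vals = np.array([0, 1, 2])
# Invented joint pmf p(x, y): rows indexed by x, columns by y = 0, 1; entries sum to 1.
joint = np.array([[0.10, 0.20],
                  [0.30, 0.10],
                  [0.20, 0.10]])

def cond_exp_discrete(y_index):
    """E(X | Y = y) = sum_x x * p(x, y) / P(Y = y)."""
    p_y = joint[:, y_index].sum()              # P(Y = y), assumed > 0
    return (x_vals * joint[:, y_index]).sum() / p_y

print(cond_exp_discrete(0))   # E(X | Y = 0) ≈ 1.167
print(cond_exp_discrete(1))   # E(X | Y = 1) = 0.75
```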

If X is a continuous random variable, while Y remains a discrete variable, the conditional expectation is

$$E(X \mid Y = y) = \int_{-\infty}^{\infty} x \, f_{X \mid Y}(x \mid y) \, dx$$

with $f_{X \mid Y}(x \mid y) = \frac{f_{X,Y}(x, y)}{P(Y = y)}$ (where fX,Y(x, y) gives the joint density of X and Y) being the conditional density of X given Y = y.

If both X and Y are continuous random variables, then the conditional expectation is

$$E(X \mid Y = y) = \int_{-\infty}^{\infty} x \, f_{X \mid Y}(x \mid y) \, dx$$

where $f_{X \mid Y}(x \mid y) = \frac{f_{X,Y}(x, y)}{f_Y(y)}$ (where fY(y) gives the density of Y).
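
For continuous variables the same recipe applies with the joint density: fix $y$, renormalise $f_{X,Y}(\cdot, y)$ by $f_Y(y)$, and integrate $x$ against the result. The numerical sketch below assumes a standard bivariate normal with correlation $\rho$ (not a distribution from the article), for which the closed form $E(X \mid Y = y) = \rho y$ is available as a check.

```python
import numpy as np

rho = 0.6  # correlation of the illustrative standard bivariate normal

def f_XY(x, y):
    """Joint density of a standard bivariate normal with correlation rho."""
    norm = 1.0 / (2 * np.pi * np.sqrt(1 - rho**2))
    return norm * np.exp(-(x**2 - 2 * rho * x * y + y**2) / (2 * (1 - rho**2)))

def cond_exp_continuous(y):
    """E(X | Y = y) = ∫ x f_{X,Y}(x, y) dx / ∫ f_{X,Y}(x, y) dx, via Riemann sums."""
    x = np.linspace(-10.0, 10.0, 20001)
    dx = x[1] - x[0]
    fx = f_XY(x, y)
    f_Y = fx.sum() * dx                        # numerical marginal density f_Y(y)
    return (x * fx).sum() * dx / f_Y

print(cond_exp_continuous(1.5), rho * 1.5)     # numeric vs. exact: both ≈ 0.9
```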

Basic properties

All the following formulas are to be understood in an almost sure sense. The σ-algebra $\mathcal{H}$ could be replaced by a random variable $Z$, i.e. $\mathcal{H} = \sigma(Z)$.

  • Pulling out independent factors:
    • If $X$ is independent of $\mathcal{H}$, then $E(X \mid \mathcal{H}) = E(X)$.
Proof

Let $B \in \mathcal{H}$. Then $X$ is independent of $1_B$, so we get that

$$E(X 1_B) = E(X) \, E(1_B) = E(X) \, P(B).$$

Thus the definition of conditional expectation is satisfied by the constant random variable $E(X)$, as desired.

    • If $X$ is independent of $\sigma(Y, \mathcal{H})$, then $E(XY \mid \mathcal{H}) = E(X) \, E(Y \mid \mathcal{H})$. Note that this is not necessarily the case if $X$ is only independent of $\mathcal{H}$ and of $Y$.
    • If $X, Y$ are independent, $\mathcal{G}, \mathcal{H}$ are independent, $X$ is independent of $\mathcal{H}$ and $Y$ is independent of $\mathcal{G}$, then $E(E(XY \mid \mathcal{G}) \mid \mathcal{H}) = E(X) \, E(Y) = E(E(XY \mid \mathcal{H}) \mid \mathcal{G})$.
  • Stability:
    • If $X$ is $\mathcal{H}$-measurable, then $E(X \mid \mathcal{H}) = X$.
    • If Z is a random variable, then $E(f(Z) \mid Z) = f(Z)$. In its simplest form, this says $E(Z \mid Z) = Z$.
  • Pulling out known factors:
    • If $X$ is $\mathcal{H}$-measurable, then $E(XY \mid \mathcal{H}) = X \, E(Y \mid \mathcal{H})$.
    • If Z is a random variable, then $E(f(Z) Y \mid Z) = f(Z) \, E(Y \mid Z)$.
  • Law of total expectation: $E(E(X \mid \mathcal{H})) = E(X)$ (a numerical illustration appears after this list).
  • Tower property:
    • For sub-σ-algebras $\mathcal{H}_1 \subseteq \mathcal{H}_2 \subseteq \mathcal{F}$ we have $E(E(X \mid \mathcal{H}_2) \mid \mathcal{H}_1) = E(X \mid \mathcal{H}_1)$.
      • A special case is when Z is a $\mathcal{H}_1$-measurable random variable. Then $\sigma(Z) \subseteq \mathcal{H}_1$ and thus $E(E(X \mid \mathcal{H}_1) \mid Z) = E(X \mid Z)$.
      • Doob martingale property: the above with $Z = E(X \mid \mathcal{H}_1)$ (which is $\mathcal{H}_1$-measurable), and using also $E(Z \mid Z) = Z$, gives $E(X \mid E(X \mid \mathcal{H}_1)) = E(X \mid \mathcal{H}_1)$.
    • For random variables $X, Y$ we have $E(E(X \mid Y) \mid f(Y)) = E(X \mid f(Y))$.
    • For random variables $X, Y, Z$ we have $E(E(X \mid Y, Z) \mid Y) = E(X \mid Y)$.
  • Linearity: we have $E(X_1 + X_2 \mid \mathcal{H}) = E(X_1 \mid \mathcal{H}) + E(X_2 \mid \mathcal{H})$ and $E(aX \mid \mathcal{H}) = a \, E(X \mid \mathcal{H})$ for $a \in \mathbb{R}$.
  • Positivity: If $X \ge 0$ then $E(X \mid \mathcal{H}) \ge 0$.
  • Monotonicity: If $X_1 \le X_2$ then $E(X_1 \mid \mathcal{H}) \le E(X_2 \mid \mathcal{H})$.
  • Monotone convergence: If $0 \le X_n \uparrow X$ then $E(X_n \mid \mathcal{H}) \uparrow E(X \mid \mathcal{H})$.
  • Dominated convergence: If $X_n \to X$ and $|X_n| \le Y$ with $Y \in L^1$, then $E(X_n \mid \mathcal{H}) \to E(X \mid \mathcal{H})$.
  • Fatou's lemma: If $X_n \ge 0$ then $E(\liminf_{n \to \infty} X_n \mid \mathcal{H}) \le \liminf_{n \to \infty} E(X_n \mid \mathcal{H})$.
  • Jensen's inequality: If $f$ is a convex function, then $f(E(X \mid \mathcal{H})) \le E(f(X) \mid \mathcal{H})$.
  • Conditional variance: Using the conditional expectation we can define, by analogy with the definition of the variance as the mean square deviation from the average, the conditional variance
    • Definition: $\operatorname{Var}(X \mid \mathcal{H}) = E\bigl((X - E(X \mid \mathcal{H}))^2 \mid \mathcal{H}\bigr)$
    • Algebraic formula for the variance: $\operatorname{Var}(X \mid \mathcal{H}) = E(X^2 \mid \mathcal{H}) - \bigl(E(X \mid \mathcal{H})\bigr)^2$
    • Law of total variance: $\operatorname{Var}(X) = E(\operatorname{Var}(X \mid \mathcal{H})) + \operatorname{Var}(E(X \mid \mathcal{H}))$.
  • Martingale convergence: For a random variable $X$ that has finite expectation, we have $E(X \mid \mathcal{H}_n) \to E(X \mid \mathcal{H})$, if either $\mathcal{H}_n$ is an increasing series of sub-σ-algebras and $\mathcal{H} = \sigma\bigl(\bigcup_{n} \mathcal{H}_n\bigr)$ or if $\mathcal{H}_n$ is a decreasing series of sub-σ-algebras and $\mathcal{H} = \bigcap_{n} \mathcal{H}_n$.
  • Conditional expectation as $L^2$-projection: If $X, Y$ are in the Hilbert space of square-integrable real random variables (real random variables with finite second moment) then
    • for $\mathcal{H}$-measurable $Y$, we have $E\bigl(Y \, (X - E(X \mid \mathcal{H}))\bigr) = 0$, i.e. the conditional expectation $E(X \mid \mathcal{H})$ is in the sense of the L2(P) scalar product the orthogonal projection from $X$ to the linear subspace of $\mathcal{H}$-measurable functions. (This allows one to define and prove the existence of the conditional expectation based on the Hilbert projection theorem.)
    • the mapping $X \mapsto E(X \mid \mathcal{H})$ is self-adjoint: $E(X \, E(Y \mid \mathcal{H})) = E(E(X \mid \mathcal{H}) \, E(Y \mid \mathcal{H})) = E(E(X \mid \mathcal{H}) \, Y)$
  • Conditioning is a contractive projection of Lp spaces $L^p(\Omega, \mathcal{F}, P) \to L^p(\Omega, \mathcal{H}, P)$. I.e., $E\bigl(|E(X \mid \mathcal{H})|^p\bigr) \le E(|X|^p)$ for any p ≥ 1.
  • Doob's conditional independence property:[6] If $X, Y$ are conditionally independent given $Z$, then $P(X \in B \mid Y, Z) = P(X \in B \mid Z)$ (equivalently, $E(1_{\{X \in B\}} \mid Y, Z) = E(1_{\{X \in B\}} \mid Z)$).
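
Several of the identities above can be sanity-checked by simulation when $\mathcal{H} = \sigma(Z)$ for a discrete $Z$. The sketch below, using an invented two-group mixture model, numerically verifies the law of total expectation and the law of total variance referenced in the list.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000

# Illustrative model: Z picks one of two groups, X is normal within each group.
Z = rng.integers(0, 2, size=n)                          # P(Z=0) = P(Z=1) = 1/2
means, sds = np.array([1.0, 4.0]), np.array([0.5, 2.0])
X = rng.normal(means[Z], sds[Z])

# E(X | Z) and Var(X | Z) as random variables: functions of Z, evaluated per sample.
group_mean = np.array([X[Z == k].mean() for k in (0, 1)])
group_var  = np.array([X[Z == k].var()  for k in (0, 1)])
E_X_given_Z   = group_mean[Z]
Var_X_given_Z = group_var[Z]

print(X.mean(), E_X_given_Z.mean())                        # law of total expectation
print(X.var(),  Var_X_given_Z.mean() + E_X_given_Z.var())  # law of total variance
```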


Notes

  1. ^ Kolmogorov, Andrey (1933). Grundbegriffe der Wahrscheinlichkeitsrechnung (in German). Berlin: Julius Springer. p. 46.
  2. ^ Oxtoby, J. C. (1953). "Review: Measure theory, by P. R. Halmos" (PDF). Bull. Amer. Math. Soc. 59 (1): 89–91. doi:10.1090/s0002-9904-1953-09662-8.
  3. ^ J. L. Doob (1953). Stochastic Processes. John Wiley & Sons. ISBN 0-471-52369-0.
  4. ^ Olav Kallenberg (2002). Foundations of Modern Probability (2nd ed.). New York: Springer. p. 573. ISBN 0-387-95313-2.
  5. ^ Billingsley, Patrick (1995). "Section 34. Conditional Expectation". Probability and Measure (3rd ed.). John Wiley & Sons. p. 445. ISBN 0-471-00710-2.
  6. ^ Kallenberg, Olav (2001). Foundations of Modern Probability (2nd ed.). York, PA, USA: Springer. p. 110. ISBN 0-387-95313-2.
