Expected value

In probability theory, the expected value of a random variable X, denoted E(X) or E[X],[1][2] is a generalization of the weighted average, and is intuitively the arithmetic mean of a large number of independent realizations of X. The expected value is also known as the expectation, mathematical expectation, mean, average, or first moment. Expected value is a key concept in economics, finance, and many other subjects.

By definition, the expected value of a constant random variable X = c is c.[3] The expected value of a random variable X with equiprobable outcomes x_1, ..., x_k is defined as the arithmetic mean of the terms x_i. If some of the probabilities Pr(X = x_i) of the individual outcomes x_i are unequal, then the expected value is defined to be the probability-weighted average of the x_i, that is, the sum of the products x_i · Pr(X = x_i).[4] The expected value of a general random variable involves integration in the sense of Lebesgue.

History

The idea of the expected value originated in the middle of the 17th century from the study of the so-called problem of points, which seeks to divide the stakes in a fair way between two players, who have to end their game before it is properly finished.[5] This problem had been debated for centuries, and many conflicting proposals and solutions had been suggested over the years, when it was posed to Blaise Pascal by French writer and amateur mathematician Chevalier de Méré in 1654. Méré claimed that this problem couldn't be solved, and that it showed just how flawed mathematics was when it came to its application to the real world. Pascal, being a mathematician, was provoked and determined to solve the problem once and for all.

He began to discuss the problem in a now famous series of letters to Pierre de Fermat. Soon enough, they both independently came up with a solution. They solved the problem in different computational ways, but their results were identical because their computations were based on the same fundamental principle. The principle is that the value of a future gain should be directly proportional to the chance of getting it. This principle seemed to have come naturally to both of them. They were very pleased by the fact that they had found essentially the same solution, and this in turn made them absolutely convinced that they had solved the problem conclusively; however, they did not publish their findings. They only informed a small circle of mutual scientific friends in Paris about it.[6]

Three years later, in 1657, the Dutch mathematician Christiaan Huygens, who had just visited Paris, published a treatise (see Huygens (1657)) "De ratiociniis in ludo aleæ" on probability theory. In this book, he considered the problem of points, and presented a solution based on the same principle as the solutions of Pascal and Fermat. Huygens also extended the concept of expectation by adding rules for how to calculate expectations in more complicated situations than the original problem (e.g., for three or more players). In this sense, this book can be seen as the first successful attempt at laying down the foundations of the theory of probability.

In the foreword to his book, Huygens wrote:

It should be said, also, that for some time some of the best mathematicians of France have occupied themselves with this kind of calculus so that no one should attribute to me the honour of the first invention. This does not belong to me. But these savants, although they put each other to the test by proposing to each other many questions difficult to solve, have hidden their methods. I have had therefore to examine and go deeply for myself into this matter by beginning with the elements, and it is impossible for me for this reason to affirm that I have even started from the same principle. But finally I have found that my answers in many cases do not differ from theirs.

— Edwards (2002)

Thus, Huygens learned about de Méré's Problem in 1655 during his visit to France; later on in 1656 from his correspondence with Carcavi, he learned that his method was essentially the same as Pascal's; so that before his book went to press in 1657, he knew about Pascal's priority in this subject.

Etymology

Neither Pascal nor Huygens used the term "expectation" in its modern sense. In particular, Huygens writes:[7]

That any one Chance or Expectation to win any thing is worth just such a Sum, as wou'd procure in the same Chance and Expectation at a fair Lay. ... If I expect a or b, and have an equal chance of gaining them, my Expectation is worth (a+b)/2.

More than a hundred years later, in 1814, Pierre-Simon Laplace published his tract "Théorie analytique des probabilités", where the concept of expected value was defined explicitly:[8]

… this advantage in the theory of chance is the product of the sum hoped for by the probability of obtaining it; it is the partial sum which ought to result when we do not wish to run the risks of the event in supposing that the division is made proportional to the probabilities. This division is the only equitable one when all strange circumstances are eliminated; because an equal degree of probability gives an equal right for the sum hoped for. We will call this advantage mathematical hope.

Notations

The use of the letter E to denote expected value goes back to W. A. Whitworth in 1901.[9] The symbol has become popular since then for English writers. In German, E stands for "Erwartungswert", in Spanish for "Esperanza matemática", and in French for "Espérance mathématique".[10]

Another popular notation is E_P[X], whereas ⟨X⟩ is commonly used in physics, and M(X) in Russian-language literature.

Definition

Finite case

Let X be a random variable with a finite number of finite outcomes x_1, x_2, ..., x_k occurring with probabilities p_1, p_2, ..., p_k, respectively. The expectation of X is defined as

E[X] = \sum_{i=1}^{k} x_i p_i = x_1 p_1 + x_2 p_2 + \cdots + x_k p_k. [4]

Since the sum of all probabilities p_i is 1 (p_1 + p_2 + \cdots + p_k = 1), the expected value is the weighted sum of the x_i values, with the p_i values being the weights.

If all outcomes x_i are equiprobable (that is, p_1 = p_2 = \cdots = p_k = 1/k), then the weighted average turns into the simple average. On the other hand, if the outcomes x_i are not equiprobable, then the simple average must be replaced with the weighted average, which takes into account the fact that some outcomes are more likely than others.

[Figure: An illustration of the convergence of the sequence of averages of rolls of a die to the expected value of 3.5 as the number of rolls (trials) grows.]

Examples

  • Let X represent the outcome of a roll of a fair six-sided die. More specifically, X will be the number of pips showing on the top face of the die after the toss. The possible values for X are 1, 2, 3, 4, 5, and 6, all of which are equally likely with a probability of 1/6. The expectation of X is
E[X] = 1 \cdot \tfrac{1}{6} + 2 \cdot \tfrac{1}{6} + 3 \cdot \tfrac{1}{6} + 4 \cdot \tfrac{1}{6} + 5 \cdot \tfrac{1}{6} + 6 \cdot \tfrac{1}{6} = 3.5.
If one rolls the die n times and computes the average (arithmetic mean) of the results, then as n grows, the average will almost surely converge to the expected value, a fact known as the strong law of large numbers.
  • The roulette game consists of a small ball and a wheel with 38 numbered pockets around the edge. As the wheel is spun, the ball bounces around randomly until it settles down in one of the pockets. Suppose random variable X represents the (monetary) outcome of a $1 bet on a single number ("straight up" bet). If the bet wins (which happens with probability 1/38 in American roulette), the payoff is $35; otherwise the player loses the bet. The expected profit from such a bet will be
E[\text{gain from } \$1 \text{ bet}] = -\$1 \cdot \tfrac{37}{38} + \$35 \cdot \tfrac{1}{38} = -\$\tfrac{1}{19}.
That is, the bet of $1 stands to lose about $0.0526 on average, so its expected value is approximately −$0.053. (Both examples are checked numerically in the sketch below.)
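The two expectations above are easy to reproduce numerically. The following Python sketch (not part of the original article; the variable names are illustrative) evaluates both sums exactly and illustrates the law-of-large-numbers convergence mentioned in the die example.

```python
# Minimal sketch: exact expectations for the die and roulette examples, plus a
# Monte Carlo illustration of the strong law of large numbers for the die.
import random

# Fair six-sided die: E[X] = sum of x * (1/6) over the six faces.
die_expectation = sum(x * (1 / 6) for x in range(1, 7))
print(die_expectation)  # 3.5

# American roulette "straight up" $1 bet: win $35 with prob. 1/38, lose $1 otherwise.
roulette_expectation = 35 * (1 / 38) + (-1) * (37 / 38)
print(round(roulette_expectation, 4))  # about -0.0526

# Law of large numbers: the running average of simulated rolls approaches 3.5.
random.seed(0)
rolls = [random.randint(1, 6) for _ in range(100_000)]
print(sum(rolls) / len(rolls))  # close to 3.5
```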

Countably infinite case

Intuitively, the expectation of a random variable taking values in a countable set of outcomes is defined analogously as the weighted sum of the outcome values, where the weights correspond to the probabilities of realizing that value. However, convergence issues associated with the infinite sum necessitate a more careful definition. A rigorous definition first defines expectation of a non-negative random variable, and then adapts it to general random variables.

Let X be a non-negative random variable with a countable set of outcomes x_1, x_2, ... occurring with probabilities p_1, p_2, ..., respectively. Analogous to the finite case, the expected value of X is then defined as the series

E[X] = \sum_{i=1}^{\infty} x_i p_i.

Note that since x_i p_i \ge 0, the infinite sum is well-defined and does not depend on the order in which it is computed. Unlike the finite case, the expectation here can be equal to infinity, if the infinite sum above increases without bound.

For a general (not necessarily non-negative) random variable X with a countable number of outcomes, set X^+ = \max(X, 0) and X^- = -\min(X, 0). By definition,

E[X] = E[X^+] - E[X^-].

As with non-negative random variables, E[X] can, once again, be finite or infinite. The third option here is that E[X] is no longer guaranteed to be well defined at all. The latter happens whenever E[X^+] = E[X^-] = \infty.

Examples

  • Suppose x_i = i and p_i = \frac{c}{i \cdot 2^i} for i = 1, 2, 3, \ldots, where c = \frac{1}{\ln 2} (with \ln being the natural logarithm) is the scale factor such that the probabilities sum to 1. Then, using the direct definition for non-negative random variables, we have
E[X] = \sum_{i=1}^{\infty} x_i p_i = \frac{c}{2} + \frac{c}{4} + \frac{c}{8} + \cdots = c = \frac{1}{\ln 2}.
  • An example where the expectation is infinite arises in the context of the St. Petersburg paradox. Let x_i = 2^i and p_i = \frac{1}{2^i} for i = 1, 2, 3, \ldots. Once again, since the random variable is non-negative, the expected value calculation gives
E[X] = \sum_{i=1}^{\infty} x_i p_i = 2 \cdot \tfrac{1}{2} + 4 \cdot \tfrac{1}{4} + 8 \cdot \tfrac{1}{8} + \cdots = 1 + 1 + 1 + \cdots = \infty.
(Partial sums of this series and the previous one are illustrated numerically in the sketch after this list.)
  • For an example where the expectation is not well-defined, suppose the random variable X takes values 1, −2, 3, −4, ... with respective probabilities c/1^2, c/2^2, c/3^2, c/4^2, ..., where c = 6/\pi^2 is a normalizing constant that ensures the probabilities sum up to one.
Then it follows that X^+ takes value 2k − 1 with probability c/(2k − 1)^2 for k = 1, 2, 3, ... and takes value 0 with remaining probability. Similarly, X^- takes value 2k with probability c/(2k)^2 for k = 1, 2, 3, ... and takes value 0 with remaining probability. Using the definition for non-negative random variables, one can show that both E[X^+] = \infty and E[X^-] = \infty (see Harmonic series). Hence, the expectation of X is not well-defined.
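As a quick numerical illustration of the first two examples (a sketch that assumes the series as reconstructed above), the partial sums of the first series settle near 1/ln 2 ≈ 1.44, while the St. Petersburg partial sums grow without bound.

```python
# Minimal sketch: partial sums of the two series above.
import math

# Finite expectation: x_i = i, p_i = c / (i * 2**i) with c = 1 / ln 2.
c = 1 / math.log(2)
partial_sum = sum(i * c / (i * 2**i) for i in range(1, 60))
print(partial_sum, 1 / math.log(2))  # both ≈ 1.4427

# St. Petersburg: x_i = 2**i, p_i = 2**(-i); every term contributes 1, so the
# partial sums equal n and diverge as n grows.
for n in (10, 100, 1000):
    print(n, sum((2**i) * (1 / 2**i) for i in range(1, n + 1)))
```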

Absolutely continuous case

If X is a random variable with a probability density function f(x), then the expected value is defined as the Lebesgue integral

E[X] = \int_{\mathbb{R}} x f(x)\, dx,

where the values on both sides are well defined or not well defined simultaneously.

Example. A random variable that has the Cauchy distribution[11] has a density function, but the expected value is undefined since the distribution has large "tails".
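For densities this definition can be evaluated by ordinary numerical integration. A minimal sketch (not from the article), assuming SciPy is available, computes E[X] = ∫ x f(x) dx for an exponential density and notes why the same integral fails for the Cauchy density.

```python
# Minimal sketch: E[X] = ∫ x f(x) dx for an exponential density with rate lam,
# whose mean is 1/lam; the analogous integral of |x| f(x) for the Cauchy density
# diverges, which is why the Cauchy expectation is undefined.
import math
from scipy.integrate import quad  # assumes SciPy is installed

lam = 2.0
f = lambda x: lam * math.exp(-lam * x)           # density of Exponential(lam) on [0, ∞)
mean, _err = quad(lambda x: x * f(x), 0, math.inf)
print(mean)  # ≈ 0.5 = 1/lam
```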

General case

In general, if X is a random variable defined on a probability space (\Omega, \Sigma, P), then the expected value of X, denoted by E[X], is defined as the Lebesgue integral

E[X] = \int_{\Omega} X(\omega)\, dP(\omega).

For multidimensional random variables, their expected value is defined per component. That is,

E[(X_1, X_2, \ldots, X_n)] = (E[X_1], E[X_2], \ldots, E[X_n]),

and, for a random matrix X with elements X_{ij}, (E[X])_{ij} = E[X_{ij}].
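A small illustration (a sketch, not from the article) of the componentwise convention, using NumPy and a sample-mean estimate of a random vector's expectation:

```python
# Minimal sketch: the expectation of a random vector is taken componentwise, so an
# elementwise sample mean estimates (E[X_1], E[X_2], E[X_3]).
import numpy as np

rng = np.random.default_rng(0)
samples = rng.normal(loc=[1.0, -2.0, 0.5], scale=1.0, size=(100_000, 3))
print(samples.mean(axis=0))  # ≈ [ 1.0, -2.0, 0.5 ]
```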

Basic properties

The basic properties below (and their names in bold) replicate or follow immediately from those of the Lebesgue integral. Note that the letters "a.s." stand for "almost surely"—a central property of the Lebesgue integral. Basically, one says that an inequality like X \ge 0 is true almost surely when the probability measure attributes zero mass to the complementary event \{X < 0\}.

  • For a general random variable X, define as before X^+ = \max(X, 0) and X^- = -\min(X, 0), and note that X = X^+ - X^-, with both X^+ and X^- nonnegative, then:
E[X] = \begin{cases} E[X^+] - E[X^-] & \text{if } E[X^+] < \infty \text{ and } E[X^-] < \infty \\ +\infty & \text{if } E[X^+] = \infty \text{ and } E[X^-] < \infty \\ -\infty & \text{if } E[X^+] < \infty \text{ and } E[X^-] = \infty \\ \text{undefined} & \text{if } E[X^+] = \infty \text{ and } E[X^-] = \infty \end{cases}
  • Let 1_A denote the indicator function of an event A, then E[1_A] = P(A).
  • Formulas in terms of CDF: If F(x) is the cumulative distribution function of the probability measure P and X is a random variable, then
E[X] = \int_{\bar{\mathbb{R}}} x\, dF(x),
where the values on both sides are well defined or not well defined simultaneously, and the integral is taken in the sense of Lebesgue-Stieltjes. Here, \bar{\mathbb{R}} = \mathbb{R} \cup \{-\infty, +\infty\} is the extended real line.
Additionally,
E[X] = \int_{0}^{\infty} (1 - F(x))\, dx - \int_{-\infty}^{0} F(x)\, dx,
with the integrals taken in the sense of Lebesgue.
  • Non-negativity: If X \ge 0 (a.s.), then E[X] \ge 0.
  • Linearity of expectation:[3] The expected value operator (or expectation operator) E[\cdot] is linear in the sense that, for any random variables X and Y, and a constant a,
E[X + Y] = E[X] + E[Y], \qquad E[aX] = a\,E[X],
whenever the right-hand side is well-defined. This means that the expected value of the sum of any finite number of random variables is the sum of the expected values of the individual random variables, and the expected value scales linearly with a multiplicative constant. This and several other properties below are illustrated in the simulation sketch after this list.
  • Monotonicity: If X \le Y (a.s.), and both E[X] and E[Y] exist, then E[X] \le E[Y].
Proof follows from the linearity and the non-negativity property for Z = Y - X, since Z \ge 0 (a.s.).
  • Non-multiplicativity: In general, the expected value is not multiplicative, i.e. E[XY] is not necessarily equal to E[X] \cdot E[Y]. If X and Y are independent, then one can show that E[XY] = E[X] E[Y]. If the random variables are dependent, then generally E[XY] \neq E[X] E[Y], although in special cases of dependency the equality may hold.
  • Law of the unconscious statistician: The expected value of a measurable function of X, g(X), given that X has a probability density function f(x), is given by the inner product of f and g:
E[g(X)] = \int_{\mathbb{R}} g(x) f(x)\, dx. [3]
This formula also holds in the multidimensional case, when g is a function of several random variables, and f is their joint density.[3][12]
  • Non-degeneracy: If E[|X|] = 0, then X = 0 (a.s.).
  • For a random variable X with well-defined expectation: |E[X]| \le E[|X|].
  • The following statements regarding a random variable X are equivalent:
    • E[X] exists and is finite.
    • Both E[X^+] and E[X^-] are finite.
    • E[|X|] is finite.
For the reasons above, the expressions "X is integrable" and "the expected value of X is finite" are used interchangeably throughout this article.
  • If E[X] < +\infty, then X < +\infty (a.s.). Similarly, if E[X] > -\infty, then X > -\infty (a.s.).
  • If |X| \le Y (a.s.) and E[Y] is finite, then E[X] exists and is finite.
  • If X = Y (a.s.), then E[X] = E[Y]. In other words, if X and Y are random variables that take different values with probability zero, then the expectation of X will equal the expectation of Y.
  • If X = c (a.s.) for some constant c, then E[X] = c. In particular, for a random variable X with well-defined expectation, E[E[X]] = E[X]. A well-defined expectation is a single constant, so the expectation of that constant is just the original expected value.
  • For a non-negative integer-valued random variable X: \Omega \to \{0, 1, 2, 3, \ldots\},
E[X] = \sum_{i=1}^{\infty} P(X \ge i).
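The following simulation sketch (not part of the article; the choice of die rolls and exponential draws is purely illustrative) checks three of the properties above empirically: the indicator identity E[1_A] = P(A), linearity, and the tail-sum formula for non-negative integer-valued random variables.

```python
# Minimal simulation sketch of three properties from the list above.
import random

random.seed(1)
N = 200_000
xs = [random.randint(1, 6) for _ in range(N)]      # fair die rolls
ys = [random.expovariate(1.0) for _ in range(N)]   # Exponential(1) draws

mean = lambda v: sum(v) / len(v)

# Indicator: E[1_{X >= 5}] should match P(X >= 5) = 1/3 for a fair die.
print(mean([1 if x >= 5 else 0 for x in xs]), 1 / 3)

# Linearity: E[2X + 3Y] = 2 E[X] + 3 E[Y] (no independence needed for linearity).
print(mean([2 * x + 3 * y for x, y in zip(xs, ys)]), 2 * mean(xs) + 3 * mean(ys))

# Tail-sum formula for the die: E[X] = sum over i >= 1 of P(X >= i) = 21/6 = 3.5.
tail_sum = sum(mean([1 if x >= i else 0 for x in xs]) for i in range(1, 7))
print(tail_sum, mean(xs))
```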

Uses and applications

The expectation of a random variable plays an important role in a variety of contexts. For example, in decision theory, an agent making an optimal choice in the context of incomplete information is often assumed to maximize the expected value of their utility function. For a different example, in statistics, where one seeks estimates for unknown parameters based on available data, the estimate itself is a random variable. In such settings, a desirable criterion for a "good" estimator is that it is unbiased; that is, the expected value of the estimate is equal to the true value of the underlying parameter.

It is possible to construct an expected value equal to the probability of an event, by taking the expectation of an indicator function that is one if the event has occurred and zero otherwise. This relationship can be used to translate properties of expected values into properties of probabilities, e.g. using the law of large numbers to justify estimating probabilities by frequencies.

The expected values of the powers of X are called the moments of X; the moments about the mean of X are expected values of powers of X − E[X]. The moments of some random variables can be used to specify their distributions, via their moment generating functions.

To empirically estimate the expected value of a random variable, one repeatedly measures observations of the variable and computes the arithmetic mean of the results. If the expected value exists, this procedure estimates the true expected value in an unbiased manner and has the property of minimizing the sum of the squares of the residuals (the sum of the squared differences between the observations and the estimate). The law of large numbers demonstrates (under fairly mild conditions) that, as the size of the sample gets larger, the variance of this estimate gets smaller.

This property is often exploited in a wide variety of applications, including general problems of statistical estimation and machine learning, to estimate (probabilistic) quantities of interest via Monte Carlo methods, since most quantities of interest can be written in terms of expectation, e.g. P(X \in \mathcal{A}) = E[1_{\mathcal{A}}(X)], where 1_{\mathcal{A}}(X) is the indicator function of the set \mathcal{A}.
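A minimal Monte Carlo sketch (illustrative only; the event A = [1, 2] and the standard normal X are arbitrary choices, not from the article) of estimating a probability as the expectation of an indicator:

```python
# Minimal sketch: P(X in A) = E[1_A(X)], estimated by the sample mean of the indicator.
import random

random.seed(2)
draws = [random.gauss(0.0, 1.0) for _ in range(500_000)]
estimate = sum(1 for x in draws if 1.0 <= x <= 2.0) / len(draws)
print(estimate)  # ≈ Φ(2) − Φ(1) ≈ 0.1359
```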

[Figure: The mass of probability distribution is balanced at the expected value, here a Beta(α,β) distribution with expected value α/(α+β).]

In classical mechanics, the center of mass is an analogous concept to expectation. For example, suppose X is a discrete random variable with values xi and corresponding probabilities pi. Now consider a weightless rod on which are placed weights, at locations xi along the rod and having masses pi (whose sum is one). The point at which the rod balances is E[X].

Expected values can also be used to compute the variance, by means of the computational formula for the variance

\operatorname{Var}(X) = E[X^2] - (E[X])^2.
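A quick numerical check of this identity (a sketch, not from the article), using Exponential(1) draws whose true mean and variance are both 1:

```python
# Minimal sketch: Var(X) = E[X^2] − (E[X])^2 on simulated Exponential(1) data.
import random

random.seed(3)
xs = [random.expovariate(1.0) for _ in range(300_000)]
mean = sum(xs) / len(xs)
second_moment = sum(x * x for x in xs) / len(xs)
print(second_moment - mean ** 2)  # ≈ 1.0, the variance of Exponential(1)
```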

A very important application of the expectation value is in the field of quantum mechanics. The expectation value of a quantum mechanical operator \hat{A} operating on a quantum state vector |\psi\rangle is written as \langle \hat{A} \rangle = \langle \psi | \hat{A} | \psi \rangle. The uncertainty in \hat{A} can be calculated using the formula (\Delta A)^2 = \langle \hat{A}^2 \rangle - \langle \hat{A} \rangle^2.
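A small NumPy sketch (illustrative only; the operator and state are arbitrary choices, not from the article) of the expectation value ⟨ψ|Â|ψ⟩ and the associated uncertainty for a two-level system:

```python
# Minimal sketch: <A> = <psi|A|psi> and ΔA = sqrt(<A^2> − <A>^2) for a 2-level system.
import numpy as np

A = np.array([[1.0, 0.0], [0.0, -1.0]])     # a Hermitian operator (Pauli-Z)
psi = np.array([1.0, 1.0]) / np.sqrt(2.0)   # a normalized state vector

expectation = psi.conj() @ A @ psi          # <A> = 0 for this state
second_moment = psi.conj() @ (A @ A) @ psi  # <A^2> = 1
uncertainty = np.sqrt(second_moment - expectation ** 2)
print(expectation, uncertainty)             # 0.0 and 1.0
```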

Interchanging limits and expectation

In general, it is not the case that E[X_n] \to E[X] even when X_n \to X pointwise. Thus, one cannot interchange limits and expectation without additional conditions on the random variables. To see this, let U be a random variable distributed uniformly on [0, 1]. For n \ge 1, define a sequence of random variables

X_n = n \cdot \mathbf{1}\{ U \in (0, \tfrac{1}{n}) \},

with \mathbf{1}\{A\} being the indicator function of the event A. Then, it follows that X_n \to 0 (a.s.). But, E[X_n] = n \cdot P\left(U \in (0, \tfrac{1}{n})\right) = n \cdot \tfrac{1}{n} = 1 for each n. Hence, \lim_{n \to \infty} E[X_n] = 1 \neq 0 = E\left[\lim_{n \to \infty} X_n\right].

Analogously, for a general sequence of random variables \{Y_n : n \ge 0\}, the expected value operator is not \sigma-additive, i.e.

E\left[\sum_{n=0}^{\infty} Y_n\right] \neq \sum_{n=0}^{\infty} E[Y_n].

An example is easily obtained by setting Y_0 = X_1 and Y_n = X_{n+1} - X_n for n \ge 1, where X_n is as in the previous example.
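A short numerical illustration (a sketch assuming the reconstruction of X_n above): for a fixed draw of U the realizations X_n(U) are eventually zero, yet E[X_n] = 1 for every n.

```python
# Minimal sketch: X_n = n * 1{U in (0, 1/n)} with U uniform on [0, 1].
import random

random.seed(4)
u = random.random()                                  # one realization of U
x_n = lambda n: n if 0 < u < 1 / n else 0
print([x_n(n) for n in (1, 10, 100, 10_000)])        # zero once 1/n drops below u

# Monte Carlo check that E[X_n] = n * P(U < 1/n) = 1 for each n.
draws = [random.random() for _ in range(200_000)]
for n in (1, 10, 100):
    print(n, sum(n for v in draws if 0 < v < 1 / n) / len(draws))  # ≈ 1
```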

A number of convergence results specify exact conditions which allow one to interchange limits and expectations, as specified below.

  • Monotone convergence theorem: Let \{X_n : n \ge 0\} be a sequence of random variables, with 0 \le X_n \le X_{n+1} (a.s.) for each n \ge 0. Furthermore, let X_n \to X pointwise. Then, the monotone convergence theorem states that \lim_n E[X_n] = E[X].
Using the monotone convergence theorem, one can show that expectation indeed satisfies countable additivity for non-negative random variables. In particular, let \{X_i\}_{i=0}^{\infty} be non-negative random variables. It follows from the monotone convergence theorem that
E\left[\sum_{i=0}^{\infty} X_i\right] = \sum_{i=0}^{\infty} E[X_i].
  • Fatou's lemma: Let \{X_n \ge 0 : n \ge 0\} be a sequence of non-negative random variables. Fatou's lemma states that
E\left[\liminf_n X_n\right] \le \liminf_n E[X_n].
Corollary. Let X_n \ge 0 with E[X_n] \le C for all n \ge 0. If X_n \to X (a.s.), then E[X] \le C.
Proof is by observing that X = \liminf_n X_n (a.s.) and applying Fatou's lemma.
  • Dominated convergence theorem: Let \{X_n : n \ge 0\} be a sequence of random variables. If X_n \to X pointwise (a.s.), |X_n| \le Y (a.s.) for every n, and E[Y] < \infty, then, according to the dominated convergence theorem,
    • E[|X|] \le E[Y] < \infty, so X is integrable;
    • \lim_n E[X_n] = E[X];
    • \lim_n E[|X_n - X|] = 0.
  • Uniform integrability: In some cases, the equality \lim_n E[X_n] = E[\lim_n X_n] holds when the sequence \{X_n\} is uniformly integrable.

Inequalities

There are a number of inequalities involving the expected values of functions of random variables. The following list includes some of the more basic ones.

  • Markov's inequality: For a nonnegative random variable X and a > 0, Markov's inequality states that
P(X \ge a) \le \frac{E[X]}{a}.
(This and the next inequality are checked numerically in the sketch after this list.)
  • Bienaymé-Chebyshev inequality: Let X be an arbitrary random variable with finite expected value \mu and finite variance \sigma^2 \neq 0. The Bienaymé-Chebyshev inequality states that, for any real number k > 0,
P(|X - \mu| \ge k\sigma) \le \frac{1}{k^2}.
  • Jensen's inequality: Let f: \mathbb{R} \to \mathbb{R} be a Borel convex function and X a random variable such that E[|X|] < \infty. Then
f(E[X]) \le E[f(X)].
(Note that the right-hand side is well defined even if E[f(X)] is non-finite. Indeed, as noted above, the finiteness of E[|X|] implies that X is finite a.s.; thus f(X) is defined a.s.).
  • Lyapunov's inequality:[13] Let 0 < s < t. Lyapunov's inequality states that
\left(E[|X|^s]\right)^{1/s} \le \left(E[|X|^t]\right)^{1/t}.
Proof. Applying Jensen's inequality to |X|^s and the convex function g(x) = |x|^{t/s}, obtain \left(E[|X|^s]\right)^{t/s} \le E[|X|^t]. Taking the t-th root of each side completes the proof.
  • Hölder's inequality: Let p and q satisfy 1 \le p \le \infty, 1 \le q \le \infty, and \frac{1}{p} + \frac{1}{q} = 1. Hölder's inequality states that
E[|XY|] \le \left(E[|X|^p]\right)^{1/p} \left(E[|Y|^q]\right)^{1/q}.
  • Minkowski inequality: Let p be a positive real number satisfying 1 \le p < \infty. Let, in addition, E[|X|^p] < \infty and E[|Y|^p] < \infty. Then, according to the Minkowski inequality, E[|X + Y|^p] < \infty and
\left(E[|X + Y|^p]\right)^{1/p} \le \left(E[|X|^p]\right)^{1/p} + \left(E[|Y|^p]\right)^{1/p}.
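The first two inequalities are easy to check empirically. A minimal simulation sketch (not from the article; Exponential(1) is an arbitrary choice) compares the observed probabilities with the Markov and Bienaymé-Chebyshev bounds.

```python
# Minimal sketch: empirical check of Markov's and Chebyshev's inequalities.
import random

random.seed(5)
xs = [random.expovariate(1.0) for _ in range(300_000)]   # Exponential(1): mean 1, variance 1
n = len(xs)
mu = sum(xs) / n
sigma = (sum((x - mu) ** 2 for x in xs) / n) ** 0.5

a = 3.0
markov_lhs = sum(1 for x in xs if x >= a) / n
print(markov_lhs, "<=", mu / a)            # ≈ 0.05 <= ≈ 0.33 (Markov bound E[X]/a)

k = 2.0
cheb_lhs = sum(1 for x in xs if abs(x - mu) >= k * sigma) / n
print(cheb_lhs, "<=", 1 / k ** 2)          # ≈ 0.05 <= 0.25 (Chebyshev bound 1/k^2)
```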

Expected values of common distributions

Distribution        Notation               Mean E(X)
Bernoulli           X ~ b(1, p)            p
Binomial            X ~ B(n, p)            np
Poisson             X ~ Po(λ)              λ
Geometric           X ~ Geometric(p)       1/p
Uniform             X ~ U(a, b)            (a + b)/2
Exponential         X ~ exp(λ)             1/λ
Normal              X ~ N(μ, σ²)           μ
Standard Normal     X ~ N(0, 1)            0
Pareto              X ~ Par(α, k)          αk/(α − 1) if α > 1
Cauchy              X ~ Cauchy(x₀, γ)      undefined
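The means in the table can be cross-checked with SciPy (a sketch, assuming SciPy is available; the parameterizations below follow SciPy's conventions, which differ in places from the notation above).

```python
# Minimal sketch: cross-checking the table's means with scipy.stats.
from scipy import stats

n, p, lam = 10, 0.3, 2.0
a, b = 1.0, 5.0
mu, sigma = 1.5, 2.0
alpha, k = 3.0, 2.0

print(stats.bernoulli(p).mean())                 # p
print(stats.binom(n, p).mean())                  # n * p
print(stats.poisson(lam).mean())                 # lambda
print(stats.geom(p).mean())                      # 1 / p
print(stats.uniform(loc=a, scale=b - a).mean())  # (a + b) / 2
print(stats.expon(scale=1 / lam).mean())         # 1 / lambda
print(stats.norm(mu, sigma).mean())              # mu
print(stats.norm(0, 1).mean())                   # 0
print(stats.pareto(alpha, scale=k).mean())       # alpha * k / (alpha - 1) for alpha > 1
print(stats.cauchy().mean())                     # nan: the Cauchy mean is undefined
```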

Relationship with characteristic function

The probability density function f_X of a scalar random variable X is related to its characteristic function \varphi_X by the inversion formula:

f_X(x) = \frac{1}{2\pi} \int_{\mathbb{R}} e^{-itx} \varphi_X(t)\, dt.

For the expected value of g(X) (where g: \mathbb{R} \to \mathbb{R} is a Borel function), we can use this inversion formula to obtain

E[g(X)] = \frac{1}{2\pi} \int_{\mathbb{R}} g(x) \left[ \int_{\mathbb{R}} e^{-itx} \varphi_X(t)\, dt \right] dx.

If E[g(X)] is finite, changing the order of integration, we get, in accordance with the Fubini–Tonelli theorem,

E[g(X)] = \frac{1}{2\pi} \int_{\mathbb{R}} G(t) \varphi_X(t)\, dt,

where

G(t) = \int_{\mathbb{R}} g(x) e^{-itx}\, dx

is the Fourier transform of g(x). The expression for E[g(X)] also follows directly from the Plancherel theorem.

See also

References

  1. ^ "List of Probability and Statistics Symbols". Math Vault. 2020-04-26. Retrieved 2020-09-11.
  2. ^ "Expectation | Mean | Average". www.probabilitycourse.com. Retrieved 2020-09-11.
  3. ^ a b c d Weisstein, Eric W. "Expectation Value". mathworld.wolfram.com. Retrieved 2020-09-11.
  4. ^ a b "Expected Value | Brilliant Math & Science Wiki". brilliant.org. Retrieved 2020-08-21.
  5. ^ History of Probability and Statistics and Their Applications before 1750. Wiley Series in Probability and Statistics. 1990. doi:10.1002/0471725161. ISBN 9780471725169.
  6. ^ Ore, Oystein (1960). "Ore, Pascal and the Invention of Probability Theory". The American Mathematical Monthly. 67 (5): 409–419. doi:10.2307/2309286. JSTOR 2309286.
  7. ^ Huygens, Christian. "The Value of Chances in Games of Fortune. English Translation" (PDF).
  8. ^ Laplace, Pierre Simon, marquis de (1952) [1951]. A philosophical essay on probabilities. Dover Publications. OCLC 475539.
  9. ^ Whitworth, W.A. (1901) Choice and Chance with One Thousand Exercises. Fifth edition. Deighton Bell, Cambridge. [Reprinted by Hafner Publishing Co., New York, 1959.]
  10. ^ "Earliest uses of symbols in probability and statistics".
  11. ^ Richard W Hamming (1991). "Example 8.7–1 The Cauchy distribution". The art of probability for scientists and engineers. Addison-Wesley. p. 290 ff. ISBN 0-201-40686-1. Sampling from the Cauchy distribution and averaging gets you nowhere — one sample has the same distribution as the average of 1000 samples!
  12. ^ Papoulis, A. (1984), Probability, Random Variables, and Stochastic Processes, New York: McGraw–Hill, pp. 139–152
  13. ^ Agahi, Hamzeh; Mohammadpour, Adel; Mesiar, Radko (November 2015). "Generalizations of some probability inequalities and $L^{p}$ convergence of random variables for any monotone measure". Brazilian Journal of Probability and Statistics. 29 (4): 878–896. doi:10.1214/14-BJPS251. ISSN 0103-0752.

Literature

  • Edwards, A.W.F. (2002). Pascal's arithmetical triangle: the story of a mathematical idea (2nd ed.). JHU Press. ISBN 0-8018-6946-3.
  • Huygens, Christiaan (1657). De ratiociniis in ludo aleæ (English translation, published in 1714).