# Expected value


In probability theory, the expected value of a random variable is, intuitively, the long-run average value of repetitions of the experiment it represents. For example, the expected value of rolling a fair six-sided die is 3.5, because the average of the numbers that come up becomes arbitrarily close to 3.5 over an extremely large number of rolls. More formally, the law of large numbers states that the arithmetic mean of the values almost surely converges to the expected value as the number of repetitions approaches infinity. The expected value is also known as the expectation, mathematical expectation, EV, average, mean value, mean, or first moment.

More practically, the expected value of a discrete random variable is the probability-weighted average of all possible values. In other words, each possible value the random variable can assume is multiplied by its probability of occurring, and the resulting products are summed to produce the expected value. The same principle applies to an absolutely continuous random variable, except that an integral of the variable with respect to its probability density replaces the sum. The formal definition subsumes both of these and also works for distributions which are neither discrete nor absolutely continuous; the expected value of a random variable is the integral of the random variable with respect to its probability measure.[1][2]

The expected value does not exist for random variables with certain heavy-tailed distributions, such as the Cauchy distribution.[3] For such random variables, the long tails of the distribution prevent the sum or integral from converging.

The expected value is a key aspect of how one characterizes a probability distribution; it is one type of location parameter. By contrast, the variance is a measure of dispersion of the possible values of the random variable around the expected value. The variance itself is defined in terms of two expectations: it is the expected value of the squared deviation of the variable's value from the variable's expected value.

The expected value plays important roles in a variety of contexts. In regression analysis, one desires a formula in terms of observed data that will give a "good" estimate of the parameter giving the effect of some explanatory variable upon a dependent variable. The formula will give different estimates using different samples of data, so the estimate it gives is itself a random variable. A formula is typically considered good in this context if it is an unbiased estimator—that is, if the expected value of the estimate (the average value it would give over an arbitrarily large number of separate samples) can be shown to equal the true value of the desired parameter.

In decision theory, and in particular in choice under uncertainty, an agent is described as making an optimal choice in the context of incomplete information. For risk neutral agents, the choice involves using the expected values of uncertain quantities, while for risk averse agents it involves maximizing the expected value of some objective function such as a von Neumann–Morgenstern utility function. One example of using expected value in reaching optimal decisions is the Gordon–Loeb model of information security investment. According to the model, one can conclude that the amount a firm spends to protect information should generally be only a small fraction of the expected loss (i.e., the expected value of the loss resulting from a cyber/information security breach).[4]

## Definition

### Finite case

Let ${\displaystyle X}$  be a random variable with a finite number of outcomes ${\displaystyle x_{1}}$ , ${\displaystyle x_{2}}$ , ..., ${\displaystyle x_{k}}$  occurring with probabilities ${\displaystyle p_{1}}$ , ${\displaystyle p_{2}}$ , ..., ${\displaystyle p_{k}}$ , respectively. The expectation of ${\displaystyle X}$  is defined as

${\displaystyle \operatorname {E} [X]=x_{1}p_{1}+x_{2}p_{2}+\cdots +x_{k}p_{k}.}$

Since all probabilities ${\displaystyle p_{i}}$  add up to 1 (${\displaystyle p_{1}+p_{2}+\ldots +p_{k}=1}$ ), the expected value is the weighted average of the ${\displaystyle x_{i}}$  values, with the ${\displaystyle p_{i}}$ ’s being the weights.

If all outcomes ${\displaystyle x_{i}}$  are equiprobable (that is, ${\displaystyle p_{1}=p_{2}=\ldots =p_{k}}$ ), then the weighted average turns into the simple average. This is intuitive: the expected value of a random variable is the average of all values it can take; thus the expected value is what one expects to happen on average. If the outcomes ${\displaystyle x_{i}}$  are not equally probable, then the simple average must be replaced with the weighted average, which takes into account the fact that some outcomes are more likely than others. The intuition, however, remains the same: the expected value of ${\displaystyle X}$  is what one expects to happen on average.

*Figure: An illustration of the convergence of sequence averages of rolls of a die to the expected value of 3.5 as the number of rolls (trials) grows.*

#### Examples

• Let ${\displaystyle X}$  represent the outcome of a roll of a fair six-sided die. More specifically, ${\displaystyle X}$  will be the number of pips showing on the top face of the die after the toss. The possible values for ${\displaystyle X}$  are 1, 2, 3, 4, 5, and 6, all equally likely (each having the probability of 1/6). The expectation of ${\displaystyle X}$  is
${\displaystyle \operatorname {E} [X]=1\cdot {\frac {1}{6}}+2\cdot {\frac {1}{6}}+3\cdot {\frac {1}{6}}+4\cdot {\frac {1}{6}}+5\cdot {\frac {1}{6}}+6\cdot {\frac {1}{6}}=3.5.}$
If one rolls the die ${\displaystyle n}$  times and computes the average (arithmetic mean) of the results, then as ${\displaystyle n}$  grows, the average will almost surely converge to the expected value, a fact known as the strong law of large numbers. One example sequence of ten rolls of the die is 2, 3, 1, 2, 5, 6, 2, 2, 2, 6, whose average is 3.1, at a distance of 0.4 from the expected value of 3.5. The convergence is relatively slow: the probability that the average falls within the range 3.5 ± 0.1 is 21.6% for ten rolls, 46.1% for a hundred rolls and 93.7% for a thousand rolls. See the figure for an illustration of the averages of longer sequences of rolls of the die and how they converge to the expected value of 3.5. More generally, the rate of convergence can be roughly quantified by e.g. Chebyshev's inequality and the Berry–Esseen theorem.
• The roulette game consists of a small ball and a wheel with 38 numbered pockets around the edge. As the wheel is spun, the ball bounces around randomly until it settles down in one of the pockets. Suppose the random variable ${\displaystyle X}$  represents the (monetary) outcome of a $1 bet on a single number ("straight up" bet). If the bet wins (which happens with probability 1/38 in American roulette), the payoff is $35; otherwise the player loses the bet. The expected profit from such a bet will be
${\displaystyle \operatorname {E} [\,{\text{gain from }}\$1{\text{ bet}}\,]=-\$1\cdot {\frac {37}{38}}+\$35\cdot {\frac {1}{38}}=-\$0.0526.}$
That is, the bet of $1 stands to lose $0.0526 on average, so its expected value is −$0.0526. (Both this and the die example are revisited in the simulation sketch below.)
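
As a rough check on both examples, here is a minimal Python sketch (illustrative only; the function names and the seeds are choices made for this sketch, not part of the article):

```python
import random

def average_die_rolls(n_rolls: int, seed: int = 0) -> float:
    """Average of n_rolls of a fair six-sided die."""
    rng = random.Random(seed)
    return sum(rng.randint(1, 6) for _ in range(n_rolls)) / n_rolls

def roulette_expected_gain() -> float:
    """Exact expected gain of a $1 straight-up bet in American roulette."""
    return 35 * (1 / 38) - 1 * (37 / 38)

# The running average drifts toward the expected value 3.5 as n grows,
# as the strong law of large numbers predicts.
for n in (10, 100, 1_000, 100_000):
    print(n, average_die_rolls(n, seed=n))

print(roulette_expected_gain())  # -0.05263... = -2/38
```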

### Countably infinite case

Let ${\displaystyle X}$  be a random variable with a countable set of outcomes ${\displaystyle x_{1}}$ , ${\displaystyle x_{2}}$ , ... occurring with probabilities ${\displaystyle p_{1}}$ , ${\displaystyle p_{2}}$ , ..., respectively. The expected value of ${\displaystyle X}$  is defined as the infinite sum

${\displaystyle \operatorname {E} [X]=\sum _{i=1}^{\infty }x_{i}\,p_{i},}$

provided that this series converges absolutely. If the series does not converge absolutely, we say that the expected value of ${\displaystyle X}$  does not exist.[5]

#### Example

• Suppose ${\displaystyle x_{i}=i}$  and ${\displaystyle p_{i}={\frac {k}{i2^{i}}},}$  for ${\displaystyle i=1,2,3,\ldots }$ , where ${\displaystyle k={\frac {1}{\ln 2}}}$  (with ${\displaystyle \ln }$  being the natural logarithm) is the scale factor such that the probabilities sum to 1. Then
${\displaystyle \operatorname {E} [X]=1\left({\frac {k}{2}}\right)+2\left({\frac {k}{8}}\right)+3\left({\frac {k}{24}}\right)+\dots ={\frac {k}{2}}+{\frac {k}{4}}+{\frac {k}{8}}+\dots =k.}$
Since this series converges absolutely, the expected value of ${\displaystyle X}$  is ${\displaystyle k}$ .
• For an example that is not absolutely convergent, suppose random variable ${\displaystyle X}$  takes values 1, −2, 3, −4, ..., with respective probabilities ${\displaystyle {\frac {c}{1^{2}}},{\frac {c}{2^{2}}},{\frac {c}{3^{2}}},{\frac {c}{4^{2}}}}$ , ..., where ${\displaystyle c={\frac {6}{\pi ^{2}}}}$  is a normalizing constant that ensures the probabilities sum up to one. Then the infinite sum
${\displaystyle \sum _{i=1}^{\infty }x_{i}\,p_{i}=c\,{\bigg (}1-{\frac {1}{2}}+{\frac {1}{3}}-{\frac {1}{4}}+\dotsb {\bigg )}}$
converges and its sum is equal to ${\displaystyle {\frac {6\ln 2}{\pi ^{2}}}\approx 0.421383}$ . However it would be incorrect to claim that the expected value of ${\displaystyle X}$  is equal to this number—in fact ${\displaystyle \operatorname {E} [X]}$  does not exist, as this series does not converge absolutely (see Alternating harmonic series). This is because an expected value calculation must not depend on the order in which the possible outcomes are presented, whereas in a conditionally convergent series such as this one, different orderings give different sums, both finite and infinite (via the Riemann rearrangement theorem).
• An example that diverges arises in the context of the St. Petersburg paradox. Let ${\displaystyle x_{i}=2^{i}}$  and ${\displaystyle p_{i}={\frac {1}{2^{i}}}}$  for ${\displaystyle i=1,2,3,\ldots }$ . The expected value calculation gives
${\displaystyle \sum _{i=1}^{\infty }x_{i}\,p_{i}=2\cdot {\frac {1}{2}}+4\cdot {\frac {1}{4}}+8\cdot {\frac {1}{8}}+16\cdot {\frac {1}{16}}+\cdots =1+1+1+1+\cdots \,.}$
Since the partial sums grow without bound, the series diverges and the expected value is infinite. (A numerical sketch of this and of the first example appears below.)
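
A short numerical sketch contrasts the absolutely convergent first example with the divergent St. Petersburg series (the truncation points are arbitrary choices for illustration):

```python
import math

# First example: x_i = i, p_i = k / (i * 2**i) with k = 1 / ln 2.
# The series sum_i x_i * p_i converges absolutely to k ≈ 1.4427.
k = 1 / math.log(2)
partial_sum = sum(i * k / (i * 2**i) for i in range(1, 60))
print(partial_sum, k)  # the partial sum agrees with k to machine precision

# St. Petersburg example: x_i = 2**i, p_i = 2**-i; every term equals 1,
# so the partial sums grow without bound and no expected value exists.
print(sum(2**i * 2**-i for i in range(1, 21)))  # 20.0
```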

### Absolutely continuous case

If the probability distribution of ${\displaystyle X}$  admits a probability density function ${\displaystyle f(x)}$ , then the expected value can be expressed through the following Lebesgue integral:

${\displaystyle \operatorname {E} [X]=\int _{-\infty }^{\infty }xf(x)\,dx.}$
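
For instance, this integral can be approximated numerically; the sketch below uses an exponential density as a hypothetical example (its exact mean is ${\displaystyle 1/\lambda }$ ):

```python
import math

def expected_value(pdf, lo: float, hi: float, steps: int = 100_000) -> float:
    """Approximate E[X] = ∫ x f(x) dx by the midpoint rule on [lo, hi]."""
    dx = (hi - lo) / steps
    mids = (lo + (i + 0.5) * dx for i in range(steps))
    return sum(x * pdf(x) for x in mids) * dx

lam = 2.0
exp_pdf = lambda x: lam * math.exp(-lam * x)   # density of Exp(lam) on x >= 0

print(expected_value(exp_pdf, 0.0, 40.0))      # ≈ 0.5 = 1/lam
```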

### General case

In general, if ${\displaystyle X}$  is a random variable defined on a probability space ${\displaystyle (\Omega ,\Sigma ,\operatorname {P} )}$ , then the expected value of ${\displaystyle X}$ , denoted by ${\displaystyle \operatorname {E} [X]}$ , ${\displaystyle \langle X\rangle }$ , or ${\displaystyle {\bar {X}}}$ , is defined as the Lebesgue integral

${\displaystyle \operatorname {E} [X]=\int _{\Omega }X(\omega )\,d\operatorname {P} (\omega ).}$

If ${\displaystyle F_{X}(x)=\operatorname {P} (X\leq x)}$  is the cumulative distribution function of ${\displaystyle X}$ , then

${\displaystyle \operatorname {E} [X]=\int _{-\infty }^{+\infty }x\,dF_{X}(x),}$

where the integral is interpreted in the sense of Lebesgue–Stieltjes.

An example of a distribution for which there is no expected value is the Cauchy distribution.

For multidimensional random variables, their expected value is defined per component, i.e.

${\displaystyle \operatorname {E} [(X_{1},\ldots ,X_{n})]=(\operatorname {E} [X_{1}],\ldots ,\operatorname {E} [X_{n}])}$

and, for a random matrix ${\displaystyle X}$  with elements ${\displaystyle X_{ij}}$ ,

${\displaystyle (\operatorname {E} [X])_{ij}=\operatorname {E} [X_{ij}]}$ .

## Basic properties

### ${\displaystyle \operatorname {E} (X)=\operatorname {E} (X_{+})-\operatorname {E} (X_{-})}$

If ${\displaystyle X}$  is a random variable, then ${\displaystyle \operatorname {E} (X)=\operatorname {E} (X_{+})-\operatorname {E} (X_{-})}$ , where

${\displaystyle X_{+}(\omega )={\begin{cases}X(\omega )&{\text{if}}\ X(\omega )\geq 0,\\0&{\text{if}}\ X(\omega )<0,\end{cases}}}$

and

${\displaystyle X_{-}(\omega )={\begin{cases}-X(\omega )&{\text{if}}\ X(\omega )<0,\\0&{\text{if}}\ X(\omega )\geq 0.\end{cases}}}$

This property is part of the definition of the Lebesgue integral.

### ${\displaystyle \operatorname {E} (X)}$  exists if and only if ${\displaystyle \operatorname {E} |X|}$  does

The following statements regarding a random variable ${\displaystyle X}$  are equivalent:

• ${\displaystyle \operatorname {E} (X)}$  exists.
• Both ${\displaystyle \operatorname {E} (X_{+})}$  and ${\displaystyle \operatorname {E} (X_{-})}$  exist.
• ${\displaystyle \operatorname {E} |X|}$  exists.

This equivalence follows from the definition of the Lebesgue integral and the measurability of ${\displaystyle X}$ .

For the reasons above, the expressions "${\displaystyle X}$  is integrable" and "the expected value of ${\displaystyle X}$  exists" are used interchangeably when speaking of a random variable throughout this article.

### Expected value of a constant is constant

If ${\displaystyle c}$  is a constant random variable, then ${\textstyle \operatorname {E} [c]=c}$ . This implies that for any random variable ${\displaystyle X}$ , ${\displaystyle \operatorname {E} [\operatorname {E} [X]]=\operatorname {E} [X]}$ .

### Linearity

The expected value operator (or expectation operator) ${\displaystyle \operatorname {E} [\cdot ]}$  is linear in the sense that

{\displaystyle {\begin{aligned}\operatorname {E} [X+Y]&=\operatorname {E} [X]+\operatorname {E} [Y],\\[6pt]\operatorname {E} [aX]&=a\operatorname {E} [X],\end{aligned}}}

where ${\displaystyle X}$  and ${\displaystyle Y}$  are (arbitrary) random variables, and ${\displaystyle a}$  is a scalar.
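Linearity carries over verbatim to sample means, which makes it easy to illustrate empirically; the following sketch uses arbitrary distributions and a scalar chosen for illustration:

```python
import random

rng = random.Random(0)
xs = [rng.uniform(0, 10) for _ in range(50_000)]
ys = [rng.gauss(5, 2) for _ in range(50_000)]
mean = lambda v: sum(v) / len(v)
a = 2.5

# Sample-mean analogue of E[X + Y] = E[X] + E[Y] and E[aX] = a E[X];
# the two sides agree up to floating-point rounding.
print(mean([x + y for x, y in zip(xs, ys)]), mean(xs) + mean(ys))
print(mean([a * x for x in xs]), a * mean(xs))
```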

### Monotonicity

If ${\displaystyle X\leq Y}$  (a.s.), then ${\displaystyle \operatorname {E} (X)\leq \operatorname {E} (Y)}$ .

### If ${\displaystyle \operatorname {E} |X|<\infty }$  then ${\displaystyle X\neq \pm \infty }$  (a.s.)

Proof. Let ${\displaystyle \Omega _{\infty }=\{\omega \in \Omega \mid X(\omega )=\pm \infty \}}$ . We have ${\displaystyle |XI_{\Omega _{\infty }}|\leq |X|}$ , and the random variable ${\displaystyle |XI_{\Omega _{\infty }}|:\Omega \to \{0,+\infty \}}$  takes on only two values. By monotonicity,

${\displaystyle \infty >\operatorname {E} |X|\geq \operatorname {E} |XI_{\Omega _{\infty }}|=\infty \cdot \operatorname {P} (\Omega _{\infty }),}$

which is only possible when ${\displaystyle \operatorname {P} (\Omega _{\infty })=0.}$

### ${\displaystyle |\operatorname {E} (X)|\leq \operatorname {E} |X|}$

For an arbitrary random variable ${\displaystyle X}$ , if ${\displaystyle \operatorname {E} |X|<\infty }$ , then ${\displaystyle |\operatorname {E} (X)|\leq \operatorname {E} |X|}$ .

Proof. Since ${\displaystyle \operatorname {E} |X|<\infty }$ , we conclude that ${\displaystyle \operatorname {E} (X)}$  exists, and

{\displaystyle {\begin{aligned}|\operatorname {E} (X)|&={\Bigl |}\operatorname {E} (X_{+})-\operatorname {E} (X_{-}){\Bigr |}\leq {\Bigl |}\operatorname {E} (X_{+}){\Bigr |}+{\Bigl |}\operatorname {E} (X_{-}){\Bigr |}\\[5pt]&=\operatorname {E} (X_{+})+\operatorname {E} (X_{-})=\operatorname {E} (X_{+}+X_{-})\\[5pt]&=\operatorname {E} |X|.\end{aligned}}}

Note that this result can also be proved based on Jensen's inequality.

### Non-multiplicativity

In general, the expected value operator is not multiplicative, i.e. ${\displaystyle \operatorname {E} [XY]}$  is not necessarily equal to ${\displaystyle \operatorname {E} [X]\cdot \operatorname {E} [Y]}$ . The amount by which multiplicativity fails is called the covariance:

${\displaystyle \operatorname {Cov} (X,Y)=\operatorname {E} [XY]-\operatorname {E} [X]\operatorname {E} [Y].}$

If, however, the random variables ${\displaystyle X}$  and ${\displaystyle Y}$  are independent, then ${\displaystyle \operatorname {E} [XY]=\operatorname {E} [X]\operatorname {E} [Y]}$ , and ${\displaystyle \operatorname {Cov} (X,Y)=0}$ .
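
The following sketch illustrates both cases with hypothetical Gaussian inputs: an independent pair has sample covariance near zero, while a dependent pair does not:

```python
import random

rng = random.Random(1)
n = 200_000
xs = [rng.gauss(0, 1) for _ in range(n)]
ys = [rng.gauss(0, 1) for _ in range(n)]      # independent of xs
zs = [x + y for x, y in zip(xs, ys)]           # dependent on xs

mean = lambda v: sum(v) / len(v)
cov = lambda u, v: mean([a * b for a, b in zip(u, v)]) - mean(u) * mean(v)

print(cov(xs, ys))  # ≈ 0: E[XY] ≈ E[X]E[Y] for independent X, Y
print(cov(xs, zs))  # ≈ 1: equals Var(X) here, so multiplicativity fails
```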

## Inequalities

### Cauchy–Bunyakovsky–Schwarz inequality

The Cauchy–Bunyakovsky–Schwarz inequality states that

${\displaystyle (\operatorname {E} [XY])^{2}\leq \operatorname {E} [X^{2}]\cdot \operatorname {E} [Y^{2}].}$

### Markov's inequality

For a nonnegative random variable ${\displaystyle X}$  and ${\displaystyle a>0}$ , Markov's inequality states that

${\displaystyle \operatorname {P} (X\geq a)\leq {\frac {\operatorname {E} [X]}{a}}.}$

#### Corollary: if ${\displaystyle \operatorname {E} |X|=0}$ , then ${\displaystyle X=0}$  (a.s.)

For non-negative random variables, this follows directly from Markov's inequality. In the general case, since ${\displaystyle \operatorname {E} [X_{+}]\geq 0}$ , ${\displaystyle \operatorname {E} [X_{-}]\geq 0}$ , and

${\displaystyle \operatorname {E} [X_{+}]+\operatorname {E} [X_{-}]=\operatorname {E} |X|=0,}$

we conclude that ${\displaystyle X_{+}=X_{-}=0}$  (a.s.), and therefore ${\displaystyle X=X_{+}-X_{-}=0}$  (a.s.).

### Bienaymé–Chebyshev inequality

Let ${\displaystyle X}$  be an arbitrary random variable with finite expected value ${\displaystyle \operatorname {E} [X]}$  and finite nonzero variance ${\displaystyle \operatorname {Var} [X]}$ , and let ${\displaystyle \sigma ={\sqrt {\operatorname {Var} [X]}}}$  denote its standard deviation. The Bienaymé–Chebyshev inequality states that, for any real number ${\displaystyle k>0}$ ,

${\displaystyle \operatorname {P} {\Bigl (}{\Bigl |}X-\operatorname {E} [X]{\Bigr |}\geq k\sigma {\Bigr )}\leq {\frac {1}{k^{2}}}.}$
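
Both this and Markov's inequality are easy to check empirically; the sketch below uses a hypothetical Exp(1) sample, for which ${\displaystyle \operatorname {E} [X]=\sigma =1}$ , and arbitrary thresholds:

```python
import random

rng = random.Random(2)
sample = [rng.expovariate(1.0) for _ in range(100_000)]
n = len(sample)

a, k = 3.0, 2.0
# Markov: P(X >= a) <= E[X]/a for nonnegative X.
print(sum(x >= a for x in sample) / n, "<=", 1.0 / a)
# Bienaymé–Chebyshev: P(|X - E[X]| >= k*sigma) <= 1/k**2.
print(sum(abs(x - 1.0) >= k for x in sample) / n, "<=", 1.0 / k**2)
```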

### Jensen's inequality

Let ${\displaystyle f:{\mathbb {R} }\to {\mathbb {R} }}$  be a Borel convex function and ${\displaystyle X}$  a random variable such that ${\displaystyle \operatorname {E} |X|<\infty }$ . Jensen's inequality states that

${\displaystyle f(\operatorname {E} (X))\leq \operatorname {E} (f(X)).}$

This implies that ${\displaystyle |\operatorname {E} (X)|\leq \operatorname {E} |X|}$  since the absolute value function is convex.
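
A one-line empirical illustration with the convex function ${\displaystyle f(x)=x^{2}}$  (the uniform sample is an arbitrary choice):

```python
import random

rng = random.Random(7)
sample = [rng.uniform(-1.0, 2.0) for _ in range(100_000)]
mean = lambda v: sum(v) / len(v)

# Jensen with f(x) = x**2: f(E[X]) <= E[f(X)], i.e. the square of the
# mean never exceeds the mean of the squares.
print(mean(sample) ** 2, "<=", mean([x * x for x in sample]))
```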

### Lyapunov’s inequality

Let ${\displaystyle 0<s<t}$ . Lyapunov’s inequality states that

${\displaystyle {\Bigl (}\operatorname {E} |X|^{s}{\Bigr )}^{1/s}\leq \left(\operatorname {E} |X|^{t}\right)^{1/t}.}$

Proof. Applying Jensen's inequality to ${\displaystyle |X|^{s}}$  and the convex function ${\displaystyle g(x)=|x|^{t/s}}$ , we obtain ${\displaystyle {\Bigl (}\operatorname {E} |X|^{s}{\Bigr )}^{t/s}\leq \operatorname {E} {\bigl (}|X|^{s}{\bigr )}^{t/s}=\operatorname {E} |X|^{t}}$ . Taking the ${\displaystyle t}$ th root of each side completes the proof.

Corollary.

${\displaystyle \operatorname {E} |X|\leq {\Bigl (}\operatorname {E} |X|^{2}{\Bigr )}^{1/2}\leq \ldots \leq {\Bigl (}\operatorname {E} |X|^{n}{\Bigr )}^{1/n}\leq \ldots }$

### Hölder’s inequality

Let ${\displaystyle p}$  and ${\displaystyle q}$  satisfy ${\displaystyle 1\leq p\leq \infty }$ , ${\displaystyle 1\leq q\leq \infty }$ , and ${\displaystyle 1/p+1/q=1}$ . Hölder’s inequality states that

${\displaystyle \operatorname {E} |XY|\leq (\operatorname {E} |X|^{p})^{1/p}(\operatorname {E} |Y|^{q})^{1/q}.}$

### Minkowski inequality

Let ${\displaystyle p}$  satisfy ${\displaystyle 1\leq p\leq \infty }$ . Let, in addition, ${\displaystyle \operatorname {E} |X|^{p}<\infty }$  and ${\displaystyle \operatorname {E} |Y|^{p}<\infty }$ . Then, according to the Minkowski inequality, ${\displaystyle \operatorname {E} |X+Y|^{p}<\infty }$  and

${\displaystyle {\Bigl (}\operatorname {E} |X+Y|^{p}{\Bigr )}^{1/p}\leq {\Bigl (}\operatorname {E} |X|^{p}{\Bigr )}^{1/p}+{\Bigl (}\operatorname {E} |Y|^{p}{\Bigr )}^{1/p}.}$

## Taking limits under the ${\displaystyle \operatorname {E} }$  sign

### Dominated Convergence Theorem

Let ${\displaystyle \{X_{n}\}_{n}}$  be a sequence of random variables such that ${\displaystyle X_{n}\to X}$  pointwise (a.s.) and ${\displaystyle |X_{n}|\leq Y}$  (a.s.) for some ${\displaystyle Y}$  with ${\displaystyle \operatorname {E} [Y]<\infty }$ . Then, according to the dominated convergence theorem, ${\displaystyle \operatorname {E} |X|<\infty }$  and ${\displaystyle \operatorname {E} [X_{n}]\to \operatorname {E} [X]}$ .

## Relationship with characteristic function

The probability density function ${\displaystyle f_{X}}$  of a scalar random variable ${\displaystyle X}$  is related to its characteristic function ${\displaystyle \varphi _{X}}$  by the inversion formula:

${\displaystyle f_{X}(x)={\frac {1}{2\pi }}\int _{\mathbb {R} }e^{-itx}\varphi _{X}(t)\,dt.}$

For the expected value of ${\displaystyle g(X)}$  (where ${\displaystyle g:{\mathbb {R} }\to {\mathbb {R} }}$  is a Borel function), we can use this inversion formula to obtain

${\displaystyle \operatorname {E} [g(X)]={\frac {1}{2\pi }}\int _{\mathbb {R} }g(x)\left[{\int _{\mathbb {R} }e^{-itx}\varphi _{X}(t)\,dt}\right]dx.}$

If ${\displaystyle \operatorname {E} [g(X)]}$  exists, then changing the order of integration, which the Fubini–Tonelli theorem justifies, gives

${\displaystyle \operatorname {E} [g(X)]={\frac {1}{2\pi }}\int _{\mathbb {R} }G(t)\varphi _{X}(t)\,dt,}$

where

${\displaystyle G(t)=\int _{\mathbb {R} }g(x)e^{-itx}\,dx}$

is the Fourier transform of ${\displaystyle g(x).}$  The expression for ${\displaystyle \operatorname {E} [g(X)]}$  also follows directly from the Plancherel theorem.

## Uses and applications

It is possible to construct an expected value equal to the probability of an event by taking the expectation of an indicator function that is one if the event has occurred and zero otherwise. This relationship can be used to translate properties of expected values into properties of probabilities, e.g. using the law of large numbers to justify estimating probabilities by frequencies.

The expected values of the powers of X are called the moments of X; the moments about the mean of X are expected values of powers of X − E[X]. The moments of some random variables can be used to specify their distributions, via their moment generating functions.

To empirically estimate the expected value of a random variable, one repeatedly measures observations of the variable and computes the arithmetic mean of the results. If the expected value exists, this procedure estimates the true expected value in an unbiased manner and has the property of minimizing the sum of the squares of the residuals (the sum of the squared differences between the observations and the estimate). The law of large numbers demonstrates (under fairly mild conditions) that, as the size of the sample gets larger, the variance of this estimate gets smaller.

This property is often exploited in a wide variety of applications, including general problems of statistical estimation and machine learning, to estimate (probabilistic) quantities of interest via Monte Carlo methods, since most quantities of interest can be written in terms of expectation, e.g. ${\displaystyle \operatorname {P} ({X\in {\mathcal {A}}})=\operatorname {E} [I_{\mathcal {A}}(X)]}$  where ${\displaystyle I_{\mathcal {A}}(X)}$  is the indicator function for set ${\displaystyle {\mathcal {A}}}$ , i.e. ${\displaystyle X\in {\mathcal {A}}\rightarrow I_{\mathcal {A}}(X)=1,X\not \in {\mathcal {A}}\rightarrow I_{\mathcal {A}}(X)=0}$ .
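
For instance, here is a minimal Monte Carlo sketch of this identity; the standard normal distribution and the set ${\displaystyle {\mathcal {A}}=[1,\infty )}$  are hypothetical choices made for illustration:

```python
import random

rng = random.Random(3)
n = 100_000

# Estimate P(X ∈ A) = E[I_A(X)] by averaging the indicator over samples;
# for X ~ N(0, 1) and A = [1, ∞) the true value is about 0.1587.
estimate = sum(rng.gauss(0, 1) >= 1.0 for _ in range(n)) / n
print(estimate)
```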

*Figure: The mass of a probability distribution is balanced at the expected value; here, a Beta(α,β) distribution with expected value α/(α+β).*

In classical mechanics, the center of mass is an analogous concept to expectation. For example, suppose X is a discrete random variable with values xi and corresponding probabilities pi. Now consider a weightless rod on which are placed weights, at locations xi along the rod and having masses pi (whose sum is one). The point at which the rod balances is E[X].

Expected values can also be used to compute the variance, by means of the computational formula for the variance

${\displaystyle \operatorname {Var} (X)=\operatorname {E} [X^{2}]-(\operatorname {E} [X])^{2}.}$
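For the fair die above, for example, ${\displaystyle \operatorname {E} [X^{2}]={\tfrac {1}{6}}(1+4+9+16+25+36)={\tfrac {91}{6}}}$ , so ${\displaystyle \operatorname {Var} (X)={\tfrac {91}{6}}-\left({\tfrac {7}{2}}\right)^{2}={\tfrac {35}{12}}\approx 2.92}$ .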

A very important application of the expectation value is in the field of quantum mechanics. The expectation value of a quantum mechanical operator ${\displaystyle {\hat {A}}}$  operating on a quantum state vector ${\displaystyle |\psi \rangle }$  is written as ${\displaystyle \langle {\hat {A}}\rangle =\langle \psi |{\hat {A}}|\psi \rangle }$ . The uncertainty in ${\displaystyle {\hat {A}}}$  can be calculated using the formula ${\displaystyle (\Delta A)^{2}=\langle {\hat {A}}^{2}\rangle -\langle {\hat {A}}\rangle ^{2}}$ .

## The law of the unconscious statistician

The expected value of a measurable function of ${\displaystyle X}$ , ${\displaystyle g(X)}$ , given that ${\displaystyle X}$  has a probability density function ${\displaystyle f(x)}$ , is given by the inner product of ${\displaystyle f}$  and ${\displaystyle g}$ :

${\displaystyle \operatorname {E} [g(X)]=\int _{-\infty }^{\infty }g(x)f(x)\,dx.}$

This is sometimes called the law of the unconscious statistician. The formula also holds in the multidimensional case, when ${\displaystyle g}$  is a function of several random variables and ${\displaystyle f}$  is their joint density.[6][7]
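
As a numerical illustration, with the hypothetical choices ${\displaystyle g(x)=x^{2}}$  and ${\displaystyle X\sim \operatorname {Exp} (1)}$ , for which the exact answer is ${\displaystyle \operatorname {E} [X^{2}]=2}$ :

```python
import math
import random

g = lambda x: x * x
f = lambda x: math.exp(-x)   # density of Exp(1) on x >= 0

# Law of the unconscious statistician: E[g(X)] = ∫ g(x) f(x) dx ...
dx = 0.001
lotus = sum(g(i * dx) * f(i * dx) for i in range(1, 40_000)) * dx

# ... which matches a plain sample average of g(X).
rng = random.Random(4)
sample_avg = sum(g(rng.expovariate(1.0)) for _ in range(200_000)) / 200_000

print(lotus, sample_avg)  # both ≈ 2.0
```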

## Alternative formula for expected value

### Formula for non-negative random variables

#### Finite and countably infinite case

For a non-negative integer-valued random variable ${\displaystyle X:\Omega \to \{0,1,2,3,\ldots \}}$ ,

${\displaystyle \operatorname {E} [X]=\sum _{i=1}^{\infty }\operatorname {P} (X\geq i).}$

Proof.

${\displaystyle \sum _{i=1}^{\infty }\operatorname {P} (X\geq i)=\sum _{i=1}^{\infty }\sum _{j=i}^{\infty }\operatorname {P} (X=j).}$

If

${\displaystyle M={\begin{bmatrix}\operatorname {P} (X=1)&\operatorname {P} (X=2)&\operatorname {P} (X=3)&\ldots &\operatorname {P} (X=n)&\ldots \\&\operatorname {P} (X=2)&\operatorname {P} (X=3)&\ldots &\operatorname {P} (X=n)&\ldots \\&&\operatorname {P} (X=3)&\ldots &\operatorname {P} (X=n)&\ldots \\\cdots &\cdots &\cdots &\cdots &\cdots &\cdots \\&&&&\operatorname {P} (X=n)&\cdots \\\cdots &\cdots &\cdots &\cdots &\cdots &\cdots \end{bmatrix}}}$

is an infinite upper triangular matrix, the double sum ${\displaystyle \sum _{i=1}^{\infty }\sum _{j=i}^{\infty }\operatorname {P} (X=j)}$  is the sum of ${\displaystyle M}$ 's elements if the summation is done row by row. Switching the summation order from row-by-row to column-by-column, we have

{\displaystyle {\begin{aligned}\sum _{i=1}^{\infty }\sum _{j=i}^{\infty }\operatorname {P} (X=j)&=\sum _{j=1}^{\infty }\sum _{i=1}^{j}\operatorname {P} (X=j)\\&=\sum _{j=1}^{\infty }j\operatorname {P} (X=j)\\&=\operatorname {E} [X].\end{aligned}}}

##### Example

In a coin tossing experiment, let the probability of heads be ${\displaystyle p}$ . Including the final attempt, how many tosses can we expect until the first head?

Solution. If ${\displaystyle N}$  is the random variable counting the number of coin tosses up to and including the first head, then, for ${\displaystyle i\geq 1}$ ,

{\displaystyle {\begin{aligned}\operatorname {P} (N\geq i)&=1-\operatorname {P} (N\leq i-1)\\[1pt]&=1-\sum \limits _{j=0}^{i-1}\operatorname {P} (N=j)\\[1pt]&=1-\sum \limits _{j=1}^{i-1}(1-p)^{j-1}p\\[1pt]&=1-{\frac {1-(1-p)^{i-1}}{p}}\cdot p\\[1pt]&=(1-p)^{i-1},\end{aligned}}}

where we used the summation formula for a geometric series. We now compute

{\displaystyle {\begin{aligned}\operatorname {E} [N]&=\sum \limits _{i=1}^{\infty }\operatorname {P} (N\geq i)\\&=\sum \limits _{i=1}^{\infty }(1-p)^{i-1}\\&={\frac {1}{p}}.\end{aligned}}}
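
The result ${\displaystyle \operatorname {E} [N]=1/p}$  is easy to confirm by simulation; in the sketch below, the value ${\displaystyle p=0.3}$  is an arbitrary choice:

```python
import random

def tosses_until_first_head(p: float, rng: random.Random) -> int:
    """Number of tosses up to and including the first head."""
    n = 1
    while rng.random() >= p:   # tails with probability 1 - p
        n += 1
    return n

rng = random.Random(5)
p = 0.3
trials = [tosses_until_first_head(p, rng) for _ in range(100_000)]
print(sum(trials) / len(trials), 1 / p)  # both ≈ 3.33
```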

#### General case

If ${\displaystyle X}$  is a non-negative real-valued random variable, then

${\displaystyle \operatorname {E} [X]=\int _{0}^{\infty }\operatorname {P} (X\geq x)\,dx=\int _{0}^{\infty }\operatorname {P} (X>x)\,dx.}$

Proof. For every ${\displaystyle \omega \in \Omega }$ ,

${\displaystyle X(\omega )=\int \limits _{0}^{X(\omega )}dx=\int \limits _{0}^{\infty }I_{(0,X(\omega ))}(x)\,dx=\int \limits _{0}^{\infty }I_{(0,X(\omega )]}(x)\,dx,}$

where ${\displaystyle I_{(0,X(\omega ))}}$  and ${\displaystyle I_{(0,X(\omega )]}}$  are the indicator functions of ${\displaystyle (0,X(\omega ))}$  and ${\displaystyle (0,X(\omega )]}$ , respectively. Substituting this into the definition of ${\displaystyle \operatorname {E} [X]}$ , we obtain

{\displaystyle {\begin{aligned}\operatorname {E} [X]&=\int \limits _{\Omega }Xd\operatorname {P} \\&=\int \limits _{\Omega }\int \limits _{0}^{\infty }I_{(0,X(\omega )]}(x)\,dx\,d\operatorname {P} (\omega ).\end{aligned}}}

Since ${\displaystyle X(\omega )\geq 0}$  and ${\displaystyle I_{(0,X(\omega )]}(x)\geq 0}$ , this integral converges absolutely, thus meeting the requirements of the Fubini–Tonelli theorem. Changing the order of integration gives us

{\displaystyle {\begin{aligned}&\int \limits _{0}^{\infty }\int \limits _{\Omega }I_{(0,X(\omega )]}(x)\,d\operatorname {P} (\omega )\,dx\\&=\int \limits _{0}^{\infty }\operatorname {P} (X\geq x)\,dx,\end{aligned}}}

and similarly,

${\displaystyle \operatorname {E} [X]=\int \limits _{0}^{\infty }\operatorname {P} (X>x)\,dx.}$
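
A quick empirical check of this tail formula, here against a hypothetical Exp(1) sample whose exact mean is 1 (the integration range and step are choices made for this sketch):

```python
import bisect
import random

rng = random.Random(6)
sample = sorted(rng.expovariate(1.0) for _ in range(100_000))
n = len(sample)

def tail_prob(x: float) -> float:
    """Empirical P(X > x) from the sorted sample."""
    return (n - bisect.bisect_right(sample, x)) / n

# Integrate the empirical tail probability up to x = 20 (the tail beyond
# is negligible here) and compare with the plain sample mean.
dx = 0.01
tail_integral = sum(tail_prob(i * dx) for i in range(2_000)) * dx
print(tail_integral, sum(sample) / n)  # both ≈ 1.0 = E[X]
```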

### Formula for non-positive random variables

If ${\displaystyle X}$  is a non-positive random variable, using the previous technique, one can show that

${\displaystyle \operatorname {E} [X]=-\int _{-\infty }^{0}\operatorname {P} (X\leq x)\,dx=-\int _{-\infty }^{0}\operatorname {P} (X<x)\,dx.}$

If, in addition, ${\displaystyle X}$  is integer-valued, i.e. ${\displaystyle X:\Omega \to \{\ldots ,-3,-2,-1,0\}}$ , then

${\displaystyle \operatorname {E} [X]=-\sum _{i=-1}^{-\infty }\operatorname {P} (X\leq i).}$

### General case

If ${\displaystyle X}$  can be both positive and negative, then ${\displaystyle \operatorname {E} [X]=\operatorname {E} [X_{+}]-\operatorname {E} [X_{-}]}$ , and the above results may be applied to ${\displaystyle X_{+}}$  and ${\displaystyle X_{-}}$  separately.

## History

The idea of the expected value originated in the middle of the 17th century from the study of the so-called problem of points, which seeks to divide the stakes in a fair way between two players who have to end their game before it is properly finished. This problem had been debated for centuries, and many conflicting proposals and solutions had been suggested over the years, when it was posed in 1654 to Blaise Pascal by French writer and amateur mathematician Chevalier de Méré. De Méré claimed that this problem could not be solved and that it showed just how flawed mathematics was when it came to its application to the real world. Pascal, being a mathematician, was provoked and determined to solve the problem once and for all. He began to discuss the problem in a now famous series of letters to Pierre de Fermat. Soon enough they both independently came up with a solution. They solved the problem in different computational ways but their results were identical because their computations were based on the same fundamental principle. The principle is that the value of a future gain should be directly proportional to the chance of getting it. This principle seemed to have come naturally to both of them. They were very pleased by the fact that they had found essentially the same solution and this in turn made them absolutely convinced they had solved the problem conclusively. However, they did not publish their findings. They only informed a small circle of mutual scientific friends in Paris about it.[8]

Three years later, in 1657, the Dutch mathematician Christiaan Huygens, who had just visited Paris, published a treatise (see Huygens (1657)), "De ratiociniis in ludo aleæ", on probability theory. In this book he considered the problem of points and presented a solution based on the same principle as the solutions of Pascal and Fermat. Huygens also extended the concept of expectation by adding rules for how to calculate expectations in more complicated situations than the original problem (e.g., for three or more players). In this sense this book can be seen as the first successful attempt at laying down the foundations of the theory of probability.

In the foreword to his book, Huygens wrote: "It should be said, also, that for some time some of the best mathematicians of France have occupied themselves with this kind of calculus so that no one should attribute to me the honour of the first invention. This does not belong to me. But these savants, although they put each other to the test by proposing to each other many questions difficult to solve, have hidden their methods. I have had therefore to examine and go deeply for myself into this matter by beginning with the elements, and it is impossible for me for this reason to affirm that I have even started from the same principle. But finally I have found that my answers in many cases do not differ from theirs." (cited by Edwards (2002)). Thus, Huygens learned about de Méré's problem in 1655 during his visit to France; later, in 1656, he learned from his correspondence with Carcavi that his method was essentially the same as Pascal's, so that he knew about Pascal's priority in this subject before his book went to press in 1657.

Neither Pascal nor Huygens used the term "expectation" in its modern sense. In particular, Huygens writes: "That my Chance or Expectation to win any thing is worth just such a Sum, as wou'd procure me in the same Chance and Expectation at a fair Lay. ... If I expect a or b, and have an equal Chance of gaining them, my Expectation is worth (a+b)/2." More than a hundred years later, in 1814, Pierre-Simon Laplace published his tract "Théorie analytique des probabilités", where the concept of expected value was defined explicitly:

… this advantage in the theory of chance is the product of the sum hoped for by the probability of obtaining it; it is the partial sum which ought to result when we do not wish to run the risks of the event in supposing that the division is made proportional to the probabilities. This division is the only equitable one when all strange circumstances are eliminated; because an equal degree of probability gives an equal right for the sum hoped for. We will call this advantage mathematical hope.

The use of the letter E to denote expected value goes back to W.A. Whitworth in 1901,[9] who used a script E. The symbol has become popular since for English writers it meant "Expectation", for Germans "Erwartungswert", for Spanish "Esperanza matemática" and for French "Espérance mathématique".[10]

## Notes

1. ^ Sheldon M Ross (2007). "§2.4 Expectation of a random variable". Introduction to probability models (9th ed.). Academic Press. p. 38 ff. ISBN 0-12-598062-0.
2. ^ Richard W Hamming (1991). "§2.5 Random variables, mean and the expected value". The art of probability for scientists and engineers. Addison–Wesley. p. 64 ff. ISBN 0-201-40686-1.
3. ^ Richard W Hamming (1991). "Example 8.7–1 The Cauchy distribution". The art of probability for scientists and engineers. Addison-Wesley. p. 290 ff. ISBN 0-201-40686-1. Sampling from the Cauchy distribution and averaging gets you nowhere — one sample has the same distribution as the average of 1000 samples!
4. ^ Gordon, Lawrence; Loeb, Martin (November 2002). "The Economics of Information Security Investment". ACM Transactions on Information and System Security. 5 (4): 438–457. doi:10.1145/581271.581274.
5. ^ Leonid Koralov, Yakov G. Sinai "Theory of Probability and Random Processes" (Springer 2007), Def. 1.23 on page 9.
6. ^ Expectation Value, retrieved August 8, 2017
7. ^ Papoulis, A. (1984), Probability, Random Variables, and Stochastic Processes, New York: McGraw–Hill, pp. 139–152
8. ^ "Ore, Pascal and the Invention of Probability Theory". The American Mathematical Monthly. 67 (5): 409–419. 1960. doi:10.2307/2309286.
9. ^ Whitworth, W.A. (1901) Choice and Chance with One Thousand Exercises. Fifth edition. Deighton Bell, Cambridge. [Reprinted by Hafner Publishing Co., New York, 1959.]
10. ^

## Literature

• Edwards, A.W.F (2002). Pascal's arithmetical triangle: the story of a mathematical idea (2nd ed.). JHU Press. ISBN 0-8018-6946-3.
• Huygens, Christiaan (1657). De ratiociniis in ludo aleæ (English translation, published in 1714).