Law of total probability

In probability theory, the law (or formula) of total probability is a fundamental rule relating marginal probabilities to conditional probabilities. It expresses the total probability of an outcome that can be realized via several distinct events, hence the name.

Statement

The law of total probability is[1] a theorem that, in its discrete case, states that if $\{B_n : n = 1, 2, 3, \ldots\}$ is a finite or countably infinite partition of a sample space (in other words, a set of pairwise disjoint events whose union is the entire sample space) and each event $B_n$ is measurable, then for any event $A$ of the same probability space:

$$P(A) = \sum_n P(A \cap B_n)$$

or, alternatively,[1]

$$P(A) = \sum_n P(A \mid B_n) P(B_n),$$

where, for any $n$ for which $P(B_n) = 0$, these terms are simply omitted from the summation, because $P(A \mid B_n)$ is finite.

The summation can be interpreted as a weighted average, and consequently the marginal probability, $P(A)$, is sometimes called "average probability";[2] "overall probability" is sometimes used in less formal writings.[3]
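
For instance, a minimal sketch in Python (with hypothetical partition weights and conditional probabilities; any weights summing to 1 would do) computes the marginal probability as exactly this weighted average:

    # Discrete law of total probability: P(A) = sum_n P(A | B_n) * P(B_n)
    p_B = [0.5, 0.3, 0.2]            # hypothetical P(B_n): partition weights summing to 1
    p_A_given_B = [0.9, 0.5, 0.1]    # hypothetical P(A | B_n) for each part
    p_A = sum(pa * pb for pa, pb in zip(p_A_given_B, p_B))
    print(p_A)                       # 0.45 + 0.15 + 0.02 = 0.62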

The law of total probability can also be stated for conditional probabilities:

$$P(A \mid C) = \sum_n P(A \mid C \cap B_n) P(B_n \mid C)$$

Taking the $B_n$ as above, and assuming $C$ is an event independent of any of the $B_n$ (so that $P(B_n \mid C) = P(B_n)$):

$$P(A \mid C) = \sum_n P(A \mid C \cap B_n) P(B_n)$$
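
The first identity follows in one line from the definition of conditional probability, since the $B_n$ partition the sample space and therefore split $A \cap C$ into disjoint pieces:

$$P(A \mid C) = \frac{P(A \cap C)}{P(C)} = \sum_n \frac{P(A \cap C \cap B_n)}{P(C)} = \sum_n P(A \mid C \cap B_n) \, \frac{P(C \cap B_n)}{P(C)} = \sum_n P(A \mid C \cap B_n) \, P(B_n \mid C).$$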

Informal formulation

The above mathematical statement might be interpreted as follows: given an event $A$, with known conditional probabilities given any of the $B_n$ events, each of which has a known probability itself, what is the total probability that $A$ will happen? The answer to this question is given by $P(A)$.

Continuous case

The law of total probability extends to the case of conditioning on events generated by continuous random variables. Let $(\Omega, \mathcal{F}, P)$ be a probability space. Suppose $X$ is a random variable with distribution function $F_X$, and $A$ an event on $(\Omega, \mathcal{F}, P)$. Then the law of total probability states

$$P(A) = \int_{-\infty}^{\infty} P(A \mid X = x) \, dF_X(x).$$

If $X$ admits a density function $f_X$, then the result is

$$P(A) = \int_{-\infty}^{\infty} P(A \mid X = x) \, f_X(x) \, dx.$$

Moreover, for the specific case where $A = \{X \in B\}$ for a Borel set $B$, this yields

$$P(X \in B) = \int_B f_X(x) \, dx.$$
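
As a numerical illustration of the density form, the following sketch assumes a hypothetical model ($X$ standard normal, with $P(A \mid X = x)$ taken to be a logistic function of $x$) and checks a Riemann-sum approximation of the integral against a Monte Carlo estimate:

    import math, random

    def f_X(x):                       # standard normal density
        return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

    def p_A_given(x):                 # assumed conditional probability P(A | X = x)
        return 1 / (1 + math.exp(-x))

    # P(A) = integral of P(A | X = x) f_X(x) dx, approximated by a Riemann sum
    dx = 0.001
    p_A = sum(p_A_given(i * dx) * f_X(i * dx) * dx for i in range(-8000, 8000))

    # Monte Carlo check: draw X, then decide whether A occurs given X
    random.seed(0)
    n = 100_000
    hits = sum(random.random() < p_A_given(random.gauss(0, 1)) for _ in range(n))
    print(p_A, hits / n)              # both close to 0.5, by symmetry of this model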

Example

Suppose that two factories supply light bulbs to the market. Factory X's bulbs work for over 5000 hours in 99% of cases, whereas factory Y's bulbs work for over 5000 hours in 95% of cases. It is known that factory X supplies 60% of the total bulbs available and Y supplies 40% of the total bulbs available. What is the chance that a purchased bulb will work for longer than 5000 hours?

Applying the law of total probability, we have:

$$P(A) = P(A \mid B_X) \cdot P(B_X) + P(A \mid B_Y) \cdot P(B_Y) = \frac{99}{100} \cdot \frac{6}{10} + \frac{95}{100} \cdot \frac{4}{10} = \frac{594}{1000} + \frac{380}{1000} = \frac{974}{1000},$$

where

  • $P(B_X) = \frac{6}{10}$ is the probability that the purchased bulb was manufactured by factory X;
  • $P(B_Y) = \frac{4}{10}$ is the probability that the purchased bulb was manufactured by factory Y;
  • $P(A \mid B_X) = \frac{99}{100}$ is the probability that a bulb manufactured by X will work for over 5000 hours;
  • $P(A \mid B_Y) = \frac{95}{100}$ is the probability that a bulb manufactured by Y will work for over 5000 hours.

Thus each purchased light bulb has a 97.4% chance of working for more than 5000 hours.
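
The arithmetic is easy to confirm; this short Python sketch just restates the example's numbers:

    # Light-bulb example: P(A) = P(A | B_X) P(B_X) + P(A | B_Y) P(B_Y)
    p_X, p_Y = 0.60, 0.40            # market shares of factories X and Y
    p_work_X, p_work_Y = 0.99, 0.95  # P(lasts > 5000 h | factory)
    print(p_work_X * p_X + p_work_Y * p_Y)  # 0.974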

Other names

The term law of total probability is sometimes taken to mean the law of alternatives, which is a special case of the law of total probability applying to discrete random variables.[citation needed] One author uses the terminology of the "Rule of Average Conditional Probabilities",[4] while another refers to it as the "continuous law of alternatives" in the continuous case.[5] This result is given by Grimmett and Welsh[6] as the partition theorem, a name that they also give to the related law of total expectation.

Notes

  1. Zwillinger, D.; Kokoska, S. (2000). CRC Standard Probability and Statistics Tables and Formulae. CRC Press. p. 31. ISBN 1-58488-059-7.
  2. Pfeiffer, Paul E. (1978). Concepts of Probability Theory. Courier Dover Publications. pp. 47–48. ISBN 978-0-486-63677-1.
  3. Rumsey, Deborah (2006). Probability for Dummies. For Dummies. p. 58. ISBN 978-0-471-75141-0.
  4. Pitman, Jim (1993). Probability. Springer. p. 41. ISBN 0-387-97974-3.
  5. Baclawski, Kenneth (2008). Introduction to Probability with R. CRC Press. p. 179. ISBN 978-1-4200-6521-3.
  6. Grimmett, Geoffrey; Welsh, Dominic (1986). Probability: An Introduction. Oxford Science Publications. Theorem 1B.
