
Binomial distribution

In probability theory and statistics, the binomial distribution with parameters n and p is the discrete probability distribution of the number of successes in a sequence of n independent experiments, each asking a yes–no question, and each with its own Boolean-valued outcome (a random variable carrying a single bit of information): success/yes/true/one (with probability p) or failure/no/false/zero (with probability q = 1 − p). A single success/failure experiment is also called a Bernoulli trial or Bernoulli experiment, and a sequence of outcomes is called a Bernoulli process; for a single trial, i.e., n = 1, the binomial distribution is a Bernoulli distribution. The binomial distribution is the basis for the popular binomial test of statistical significance.

Binomial distribution
  Notation: B(n, p)
  Parameters: n ∈ ℕ₀ (number of trials); p ∈ [0, 1] (success probability in each trial)
  Support: k ∈ {0, …, n} (number of successes)
  PMF: \binom{n}{k} p^{k} (1-p)^{n-k}
  CDF: I_{1-p}(n - k, k + 1)
  Mean: np
  Median: ⌊np⌋ or ⌈np⌉
  Mode: ⌊(n + 1)p⌋ or ⌈(n + 1)p⌉ − 1
  Variance: np(1 − p)
  Skewness: \frac{1 - 2p}{\sqrt{np(1-p)}}
  Ex. kurtosis: \frac{1 - 6p(1-p)}{np(1-p)}
  Entropy: \frac{1}{2} \log_2\left(2 \pi e\, np(1-p)\right) + O\!\left(\tfrac{1}{n}\right) in shannons; for nats, use the natural log in the log.
  Fisher information: \frac{n}{p(1-p)} (for fixed n)

[Figure: probability mass function and cumulative distribution function of the binomial distribution]
[Figure: binomial distribution for p = 0.5, with n and k as in Pascal's triangle. The probability that a ball in a Galton box with 8 layers (n = 8) ends up in the central bin (k = 4) is \binom{8}{4} / 2^{8} = 70/256.]

The binomial distribution is frequently used to model the number of successes in a sample of size n drawn with replacement from a population of size N. If the sampling is carried out without replacement, the draws are not independent and so the resulting distribution is a hypergeometric distribution, not a binomial one. However, for N much larger than n, the binomial distribution remains a good approximation, and is widely used.



Probability mass function

In general, if the random variable X follows the binomial distribution with parameters n ∈ ℕ and p ∈ [0,1], we write X ~ B(n, p). The probability of getting exactly k successes in n trials is given by the probability mass function:

f(k; n, p) = \Pr(X = k) = \binom{n}{k} p^{k} (1-p)^{n-k}

for k = 0, 1, 2, ..., n, where

\binom{n}{k} = \frac{n!}{k!\,(n-k)!}

is the binomial coefficient, hence the name of the distribution. The formula can be understood as follows: k successes occur with probability p^{k} and n − k failures occur with probability (1 − p)^{n−k}. However, the k successes can occur anywhere among the n trials, and there are \binom{n}{k} different ways of distributing k successes in a sequence of n trials.
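
As an illustrative sketch (not part of the original text), the mass function can be evaluated directly with Python's standard library; the helper name binomial_pmf is chosen here for clarity:

```python
from math import comb

def binomial_pmf(k: int, n: int, p: float) -> float:
    """Pr(X = k) for X ~ B(n, p): C(n, k) * p^k * (1-p)^(n-k)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

n, p = 10, 0.3
# The probabilities over the whole support {0, ..., n} sum to 1.
assert abs(sum(binomial_pmf(k, n, p) for k in range(n + 1)) - 1.0) < 1e-12
print(binomial_pmf(4, n, p))  # Pr(X = 4), about 0.2001
```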

In creating reference tables for binomial distribution probability, usually the table is filled in up to n/2 values. This is because for k > n/2, the probability can be calculated by its complement as

f(k; n, p) = f(n - k; n, 1 - p).

Looking at the expression f(k; n, p) as a function of k, there is a k value that maximizes it. This k value can be found by calculating

\frac{f(k+1; n, p)}{f(k; n, p)} = \frac{(n-k)p}{(k+1)(1-p)}

and comparing it to 1. There is always an integer M that satisfies

(n + 1)p - 1 \le M < (n + 1)p.

f(k; n, p) is monotone increasing for k < M and monotone decreasing for k > M, with the exception of the case where (n + 1)p is an integer. In this case, there are two values for which f is maximal: (n + 1)p and (n + 1)p − 1. M is the most probable (most likely) outcome of the Bernoulli trials and is called the mode. Note that the probability of it occurring can be fairly small.

Cumulative distribution function

The cumulative distribution function can be expressed as:

F(k; n, p) = \Pr(X \le k) = \sum_{i=0}^{\lfloor k \rfloor} \binom{n}{i} p^{i} (1-p)^{n-i},

where ⌊k⌋ is the "floor" under k, i.e. the greatest integer less than or equal to k.

It can also be represented in terms of the regularized incomplete beta function, as follows:[1]

F(k; n, p) = I_{1-p}(n - k, k + 1) = (n - k) \binom{n}{k} \int_{0}^{1-p} t^{\,n-k-1} (1 - t)^{k} \, dt.

Some closed-form bounds for the cumulative distribution function are given below.
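
As a quick numerical check (an illustration, assuming SciPy is available), the direct sum and the regularized incomplete beta representation above agree:

```python
from math import comb, floor
from scipy.special import betainc  # regularized incomplete beta I_x(a, b)

def binomial_cdf(k: float, n: int, p: float) -> float:
    """F(k; n, p) computed as the direct sum over the support."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(floor(k) + 1))

n, p, k = 20, 0.35, 7
assert abs(binomial_cdf(k, n, p) - betainc(n - k, k + 1, 1 - p)) < 1e-12
```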


Example

Suppose a biased coin comes up heads with probability 0.3 when tossed. What is the probability of achieving 0, 1, ..., 6 heads after six tosses?

\Pr(k \text{ heads}) = \binom{6}{k} (0.3)^{k} (0.7)^{6-k}, \qquad k = 0, 1, \ldots, 6,

giving approximately 0.1176, 0.3025, 0.3241, 0.1852, 0.0595, 0.0102 and 0.0007 for k = 0 through 6 respectively.


Expected value and variance

If X ~ B(n, p), that is, X is a binomially distributed random variable, n being the total number of experiments and p the probability of each experiment yielding a successful result, then the expected value of X is:[3]

\operatorname{E}[X] = np.

For example, if n = 100, and p = 1/4, then the average number of successful results will be 25.

Proof: We calculate the mean, μ, directly from its definition

\mu = \sum_{k=0}^{n} k \binom{n}{k} p^{k} (1-p)^{n-k},

and, using the identity k \binom{n}{k} = n \binom{n-1}{k-1} together with the binomial theorem:

\mu = np \sum_{k=1}^{n} \binom{n-1}{k-1} p^{k-1} (1-p)^{n-k} = np \sum_{j=0}^{n-1} \binom{n-1}{j} p^{j} (1-p)^{(n-1)-j} = np \,\bigl(p + (1-p)\bigr)^{n-1} = np.

It is also possible to deduce the mean from the equation X = X_1 + \cdots + X_n, whereby all X_i are Bernoulli-distributed random variables with \operatorname{E}[X_i] = p (X_i = 1 if the ith experiment succeeds and X_i = 0 otherwise). We get:

\operatorname{E}[X] = \operatorname{E}[X_1] + \cdots + \operatorname{E}[X_n] = np.


The variance is:

\operatorname{Var}(X) = np(1 - p).

Proof: Let X = X_1 + \cdots + X_n, where all X_i are independently Bernoulli-distributed random variables. Since \operatorname{Var}(X_i) = p(1 - p), we get:

\operatorname{Var}(X) = \operatorname{Var}(X_1) + \cdots + \operatorname{Var}(X_n) = n \operatorname{Var}(X_1) = np(1 - p).

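Both moments can be spot-checked against the definitions; this is a minimal sketch, with illustrative parameter values:

```python
from math import comb

def pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

n, p = 12, 0.4
mean = sum(k * pmf(k, n, p) for k in range(n + 1))
var = sum((k - mean) ** 2 * pmf(k, n, p) for k in range(n + 1))
assert abs(mean - n * p) < 1e-12           # E[X] = np = 4.8
assert abs(var - n * p * (1 - p)) < 1e-12  # Var(X) = np(1 - p) = 2.88
```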

Mode

Usually the mode of a binomial B(n, p) distribution is equal to ⌊(n + 1)p⌋, where ⌊·⌋ is the floor function. However, when (n + 1)p is an integer and p is neither 0 nor 1, then the distribution has two modes: (n + 1)p and (n + 1)p − 1. When p is equal to 0 or 1, the mode will be 0 and n correspondingly. These cases can be summarized as follows:

\text{mode} = \begin{cases} \lfloor (n+1)p \rfloor & \text{if } (n+1)p \text{ is 0 or a noninteger}, \\ (n+1)p \ \text{and} \ (n+1)p - 1 & \text{if } (n+1)p \in \{1, \dots, n\}, \\ n & \text{if } (n+1)p = n + 1. \end{cases}

Proof: Let

f(k) = \binom{n}{k} p^{k} q^{n-k}.

For p = 0, only f(0) has a nonzero value, with f(0) = 1. For p = 1, we find f(n) = 1 and f(k) = 0 for k ≠ n. This proves that the mode is 0 for p = 0 and n for p = 1.

Let 0 < p < 1. We find

\frac{f(k+1)}{f(k)} = \frac{(n-k)p}{(k+1)(1-p)}.

From this follows

k > (n+1)p - 1 \Rightarrow f(k+1) < f(k), \qquad k = (n+1)p - 1 \Rightarrow f(k+1) = f(k), \qquad k < (n+1)p - 1 \Rightarrow f(k+1) > f(k).

So when (n + 1)p − 1 is an integer, then (n + 1)p − 1 and (n + 1)p are both modes. In the case that (n + 1)p − 1 is not an integer, then only ⌊(n + 1)p − 1⌋ + 1 = ⌊(n + 1)p⌋ is a mode.[4]
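
The closed form can be compared with a brute-force argmax of the mass function; a small sketch (parameter values are illustrative, and the two-mode case where (n + 1)p is an integer is deliberately avoided):

```python
from math import comb, floor

def pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

for n, p in [(10, 0.3), (7, 0.45), (9, 0.42)]:
    argmax = max(range(n + 1), key=lambda k: pmf(k, n, p))
    assert argmax == floor((n + 1) * p)  # mode = floor((n + 1)p)
```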


Median

In general, there is no single formula to find the median of a binomial distribution, and it may even be non-unique. However, several special results have been established (a numerical spot-check follows the list):

  • If np is an integer, then the mean, median, and mode coincide and equal np.[5][6]
  • Any median m must lie within the interval ⌊np⌋ ≤ m ≤ ⌈np⌉.[7]
  • A median m cannot lie too far away from the mean: |m − np| ≤ min{ ln 2, max{p, 1 − p} }.[8]
  • The median is unique and equal to m = round(np) in cases when either p ≤ 1 − ln 2 or p ≥ ln 2 or |m − np| ≤ min{p, 1 − p} (except for the case when p = 1/2 and n is odd).[7][8]
  • When p = 1/2 and n is odd, any number m in the interval 1/2(n − 1) ≤ m ≤ 1/2(n + 1) is a median of the binomial distribution. If p = 1/2 and n is even, then m = n/2 is the unique median.
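
A numerical spot-check of the statements above (a sketch, assuming SciPy is available):

```python
from math import ceil, floor
from scipy.stats import binom

for n, p in [(10, 0.3), (25, 0.61), (14, 0.5)]:
    m = binom(n, p).median()
    assert floor(n * p) <= m <= ceil(n * p)  # median lies in [floor(np), ceil(np)]
```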

Covariance between two binomials

If two binomially distributed random variables X and Y are observed together, estimating their covariance can be useful. The covariance is

\operatorname{Cov}(X, Y) = \operatorname{E}[XY] - \mu_X \mu_Y.

In the case n = 1 (the case of Bernoulli trials) XY is non-zero only when both X and Y are one, and μX and μY are equal to the two probabilities. Defining pB as the probability of both happening at the same time, this gives

\operatorname{Cov}(X, Y) = p_B - p_X p_Y,

and for n independent pairwise trials

\operatorname{Cov}(X, Y)_n = n (p_B - p_X p_Y).

If X and Y are the same variable, this reduces to the variance formula given above.
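
A simulation sketch of the n-trial formula (the joint probabilities below are assumptions chosen for illustration; the three-way split encodes pB = Pr(both succeed)):

```python
import random

def sample_pair(n, pX, pY, pB, rng):
    """One draw of (X, Y): n paired Bernoulli trials with joint success prob pB."""
    x = y = 0
    for _ in range(n):
        u = rng.random()
        if u < pB:               # both components succeed
            x += 1; y += 1
        elif u < pX:             # only the first succeeds
            x += 1
        elif u < pX + pY - pB:   # only the second succeeds
            y += 1
    return x, y

n, pX, pY, pB = 10, 0.5, 0.4, 0.3
rng = random.Random(1)
pairs = [sample_pair(n, pX, pY, pB, rng) for _ in range(100_000)]
mx = sum(x for x, _ in pairs) / len(pairs)
my = sum(y for _, y in pairs) / len(pairs)
cov = sum((x - mx) * (y - my) for x, y in pairs) / len(pairs)
print(cov, n * (pB - pX * pY))  # both close to 10 * (0.3 - 0.2) = 1.0
```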

Related distributions

Sums of binomials

If X ~ B(np) and Y ~ B(mp) are independent binomial variables with the same probability p, then X + Y is again a binomial variable; its distribution is Z=X+Y ~ B(n+mp):


However, if X and Y do not have the same probability p, then the variance of the sum will be smaller than the variance of a binomial variable distributed as B(n + m, \bar{p}), where \bar{p} = \frac{n p_X + m p_Y}{n + m} is the average of the two success probabilities weighted by n and m.
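
The convolution identity can be verified directly; a minimal sketch with illustrative parameters:

```python
from math import comb

def pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

n, m, p = 6, 9, 0.3
for k in range(n + m + 1):
    conv = sum(pmf(i, n, p) * pmf(k - i, m, p)
               for i in range(max(0, k - m), min(n, k) + 1))
    assert abs(conv - pmf(k, n + m, p)) < 1e-12  # X + Y ~ B(n + m, p)
```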

Ratio of two binomial distributions

This result was first derived by Katz et al. in 1978.[9]

Let X ~ B(n, p1) and Y ~ B(m, p2) be independent, with p1 and p2 the probabilities of success in the two binomial distributions. Let T = (X/n)/(Y/m).

Then log(T) is approximately normally distributed with mean log(p1/p2) and variance ((1/p1) - 1)/n + ((1/p2) - 1)/m.
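
A simulation sketch of this normal approximation (assuming NumPy is available; parameters are illustrative):

```python
import numpy as np

n, m, p1, p2 = 500, 600, 0.4, 0.25
rng = np.random.default_rng(7)
x = rng.binomial(n, p1, size=100_000)
y = rng.binomial(m, p2, size=100_000)
log_t = np.log((x / n) / (y / m))
print(log_t.mean(), np.log(p1 / p2))             # both approximately 0.470
print(log_t.var(), (1/p1 - 1)/n + (1/p2 - 1)/m)  # both approximately 0.0080
```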

Conditional binomials

If X ~ B(np) and, conditional on X, Y ~ B(Xq), then Y is a simple binomial variable with distribution Y ~ B(npq).

For example, imagine throwing n balls into a basket UX and taking the balls that hit and throwing them into another basket UY. If p is the probability of hitting UX, then X ~ B(n, p) is the number of balls that hit UX. If q is the probability of hitting UY, then the number of balls that hit UY is Y ~ B(X, q), and therefore Y ~ B(n, pq).


Proof: Since X \sim B(n, p) and Y \mid X \sim B(X, q), by the law of total probability,

\Pr[Y = m] = \sum_{k=m}^{n} \Pr[Y = m \mid X = k] \Pr[X = k] = \sum_{k=m}^{n} \binom{n}{k} \binom{k}{m} p^{k} q^{m} (1-p)^{n-k} (1-q)^{k-m}.

Since \tbinom{n}{k} \tbinom{k}{m} = \tbinom{n}{m} \tbinom{n-m}{k-m}, the equation above can be expressed as

\Pr[Y = m] = \sum_{k=m}^{n} \binom{n}{m} \binom{n-m}{k-m} p^{k} q^{m} (1-p)^{n-k} (1-q)^{k-m}.

Factoring p^{k} = p^{m} p^{k-m} and pulling all the terms that don't depend on k out of the sum now yields

\Pr[Y = m] = \binom{n}{m} (pq)^{m} \left( \sum_{k=m}^{n} \binom{n-m}{k-m} \bigl(p(1-q)\bigr)^{k-m} (1-p)^{n-k} \right).

After substituting i = k - m in the expression above, we get

\Pr[Y = m] = \binom{n}{m} (pq)^{m} \left( \sum_{i=0}^{n-m} \binom{n-m}{i} (p - pq)^{i} (1-p)^{n-m-i} \right).

Notice that the sum (in the parentheses) above equals (p - pq + 1 - p)^{n-m} by the binomial theorem. Substituting this in finally yields

\Pr[Y = m] = \binom{n}{m} (pq)^{m} (1 - pq)^{n-m},

and thus Y \sim B(n, pq), as desired.
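
The two-stage experiment is easy to simulate; a sketch (assuming NumPy is available) comparing the empirical distribution of Y with the claimed B(n, pq):

```python
from math import comb
import numpy as np

n, p, q = 12, 0.6, 0.5
rng = np.random.default_rng(0)
x = rng.binomial(n, p, size=200_000)   # balls that hit the first basket
y = rng.binomial(x, q)                 # Y | X ~ B(X, q), vectorized over x
for k in range(n + 1):
    exact = comb(n, k) * (p * q)**k * (1 - p * q)**(n - k)
    print(k, round(float((y == k).mean()), 4), round(exact, 4))
```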

Bernoulli distribution

The Bernoulli distribution is a special case of the binomial distribution, where n = 1. Symbolically, X ~ B(1, p) has the same meaning as X ~ B(p). Conversely, any binomial distribution, B(np), is the distribution of the sum of n Bernoulli trials, B(p), each with the same probability p.[10]

Poisson binomial distribution

The binomial distribution is a special case of the Poisson binomial distribution, or general binomial distribution, which is the distribution of a sum of n independent non-identical Bernoulli trials B(pi).[11]

Normal approximation

[Figure: binomial probability mass function and normal probability density function approximation for n = 6 and p = 0.5]

If n is large enough, then the skew of the distribution is not too great. In this case a reasonable approximation to B(n, p) is given by the normal distribution

\mathcal{N}\bigl(np,\, np(1-p)\bigr),

and this basic approximation can be improved in a simple way by using a suitable continuity correction. The basic approximation generally improves as n increases (at least 20) and is better when p is not near to 0 or 1.[12] Various rules of thumb may be used to decide whether n is large enough, and p is far enough from the extremes of zero or one:

  • One rule[12] is that for n > 5 the normal approximation is adequate if the absolute value of the skewness is strictly less than 1/3; that is, if

\frac{|1 - 2p|}{\sqrt{np(1-p)}} = \frac{1}{\sqrt{n}} \left| \sqrt{\frac{1-p}{p}} - \sqrt{\frac{p}{1-p}} \right| < \frac{1}{3}.
  • A stronger rule states that the normal approximation is appropriate only if everything within 3 standard deviations of its mean is within the range of possible values; that is, only if

\mu \pm 3\sigma = np \pm 3\sqrt{np(1-p)} \in (0, n).
This 3-standard-deviation rule is equivalent to the following conditions, which also imply the first rule above.

The rule np \pm 3\sqrt{np(1-p)} \in (0, n) is totally equivalent to requiring that

np - 3\sqrt{np(1-p)} > 0 \quad \text{and} \quad np + 3\sqrt{np(1-p)} < n.

Moving terms around yields:

np > 3\sqrt{np(1-p)} \quad \text{and} \quad n(1-p) > 3\sqrt{np(1-p)}.

Since 0 < p < 1, we can square both sides and divide by the respective factors np^{2} and n(1-p)^{2}, to obtain the desired conditions:

n > 9 \left( \frac{1-p}{p} \right) \quad \text{and} \quad n > 9 \left( \frac{p}{1-p} \right).

Notice that these conditions automatically imply that n > 9. On the other hand, applying the square root again and dividing by 3 gives

\frac{\sqrt{n}}{3} > \sqrt{\frac{1-p}{p}} > 0 \quad \text{and} \quad \frac{\sqrt{n}}{3} > \sqrt{\frac{p}{1-p}} > 0.

Subtracting the second set of inequalities from the first one yields

\frac{\sqrt{n}}{3} > \sqrt{\frac{1-p}{p}} - \sqrt{\frac{p}{1-p}} > -\frac{\sqrt{n}}{3},

and so the desired first rule is satisfied:

\left| \sqrt{\frac{1-p}{p}} - \sqrt{\frac{p}{1-p}} \right| < \frac{\sqrt{n}}{3}.

Since \sqrt{\tfrac{1-p}{p}} - \sqrt{\tfrac{p}{1-p}} = \tfrac{1-2p}{\sqrt{p(1-p)}}, this is exactly the skewness condition given above.

  • Another commonly used rule is that both values np and n(1 − p) must be greater than or equal to 5. However, the specific number varies from source to source, and depends on how good an approximation one wants. In particular, if one uses 9 instead of 5, the rule implies the results stated in the previous paragraphs.

Assume that both values np and n(1 − p) are greater than 9. Since 0 < p < 1, we have that 9(1 − p) < 9 < np and 9p < 9 < n(1 − p), that is,

np > 9(1 - p) \quad \text{and} \quad n(1 - p) > 9p.

We only have to divide now by the respective factors p and 1 − p, to deduce the alternative form of the 3-standard-deviation rule:

n > 9 \left( \frac{1 - p}{p} \right) \quad \text{and} \quad n > 9 \left( \frac{p}{1 - p} \right).

The following is an example of applying a continuity correction. Suppose one wishes to calculate Pr(X ≤ 8) for a binomial random variable X. If Y has a distribution given by the normal approximation, then Pr(X ≤ 8) is approximated by Pr(Y ≤ 8.5). The addition of 0.5 is the continuity correction; the uncorrected normal approximation gives considerably less accurate results.
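
A sketch of this computation using only the standard library (n = 20, p = 0.5, k = 8 are illustrative values):

```python
from math import comb, erf, sqrt

def binom_cdf(k, n, p):
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def normal_cdf(x, mu, sigma):
    return 0.5 * (1 + erf((x - mu) / (sigma * sqrt(2))))

n, p, k = 20, 0.5, 8
mu, sigma = n * p, sqrt(n * p * (1 - p))
print(binom_cdf(k, n, p))              # exact:       about 0.2517
print(normal_cdf(k + 0.5, mu, sigma))  # corrected:   about 0.2512
print(normal_cdf(k, mu, sigma))        # uncorrected: about 0.1855
```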

This approximation, known as the de Moivre–Laplace theorem, is a huge time-saver when undertaking calculations by hand (exact calculations with large n are very onerous); historically, it was the first use of the normal distribution, introduced in Abraham de Moivre's book The Doctrine of Chances in 1738. Nowadays, it can be seen as a consequence of the central limit theorem since B(n, p) is a sum of n independent, identically distributed Bernoulli variables with parameter p. This fact is the basis of a hypothesis test, a "proportion z-test", for the value of p using x/n, the sample proportion and estimator of p, in a common test statistic.[13]

For example, suppose one randomly samples n people out of a large population and asks them whether they agree with a certain statement. The proportion of people who agree will of course depend on the sample. If groups of n people were sampled repeatedly and truly randomly, the proportions would follow an approximate normal distribution with mean equal to the true proportion p of agreement in the population and with standard deviation

\sqrt{\frac{p(1-p)}{n}}.

Poisson approximation

The binomial distribution converges towards the Poisson distribution as the number of trials goes to infinity while the product np remains fixed or at least p tends to zero. Therefore, the Poisson distribution with parameter λ = np can be used as an approximation to B(n, p) of the binomial distribution if n is sufficiently large and p is sufficiently small. According to two rules of thumb, this approximation is good if n ≥ 20 and p ≤ 0.05, or if n ≥ 100 and np ≤ 10.[14]
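
A side-by-side sketch for one parameter choice inside the rule of thumb (n = 100, p = 0.05, so λ = np = 5):

```python
from math import comb, exp, factorial

n, p = 100, 0.05
lam = n * p
for k in range(11):
    binom_pk = comb(n, k) * p**k * (1 - p)**(n - k)
    poisson_pk = exp(-lam) * lam**k / factorial(k)
    print(k, round(binom_pk, 4), round(poisson_pk, 4))  # columns closely agree
```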

Concerning the accuracy of Poisson approximation, see Novak,[15] ch. 4, and references therein.

Limiting distributions

As n approaches ∞ while p remains fixed, the distribution of

\frac{X - np}{\sqrt{np(1-p)}}

approaches the normal distribution with expected value 0 and variance 1. This result is sometimes loosely stated by saying that the distribution of X is asymptotically normal with expected value np and variance np(1 − p). This result is a specific case of the central limit theorem.

Beta distribution

Beta distributions provide a family of conjugate prior probability distributions for binomial distributions in Bayesian inference:[16]

P(p; \alpha, \beta) = \frac{p^{\alpha - 1} (1 - p)^{\beta - 1}}{\mathrm{B}(\alpha, \beta)}.

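Conjugacy means the posterior is again a Beta distribution: a Beta(α, β) prior combined with k observed successes in n trials gives a Beta(α + k, β + n − k) posterior. A minimal sketch (the prior pseudo-counts are assumptions chosen for illustration):

```python
a, b = 2.0, 2.0                      # Beta(2, 2) prior over p (illustrative choice)
n, k = 20, 14                        # observed: 14 successes in 20 trials
a_post, b_post = a + k, b + n - k    # Beta(16, 8) posterior
print(a_post / (a_post + b_post))    # posterior mean of p, about 0.667
```
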
Confidence intervals

Even for quite large values of n, the actual distribution of the mean is significantly nonnormal.[17] Because of this problem several methods to estimate confidence intervals have been proposed.

In the equations for confidence intervals below, the variables have the following meaning:

  • n1 is the number of successes out of n, the total number of trials
  • \hat{p} = n_1 / n is the proportion of successes
  • z is the 1 - \tfrac{1}{2}\alpha quantile of a standard normal distribution (i.e., the probit) corresponding to the target error rate \alpha. For example, for a 95% confidence level the error \alpha = 0.05, so 1 - \tfrac{1}{2}\alpha = 0.975 and z = 1.96.

Wald method

\hat{p} \pm z \sqrt{\frac{\hat{p}(1 - \hat{p})}{n}}

A continuity correction of 0.5/n may be added.

Agresti–Coull method[18]

Here the estimate of p is modified to

\tilde{p} = \frac{n_1 + \frac{1}{2} z^2}{n + z^2},

and the confidence interval for p is

\tilde{p} \pm z \sqrt{\frac{\tilde{p}(1 - \tilde{p})}{n + z^2}}.

ArcSine method[19]

The confidence interval is

\sin^2 \left( \arcsin\left(\sqrt{\hat{p}}\right) \pm \frac{z}{2\sqrt{n}} \right).

Wilson (score) method[20]

The notation in the formula below differs from the previous formulas in two respects:

  • Firstly, zx has a slightly different interpretation in the formula below: it has its ordinary meaning of 'the xth quantile of the standard normal distribution', rather than being a shorthand for 'the (1 − x)-th quantile'.
  • Secondly, this formula does not use a plus-minus to define the two bounds. Instead, one may use z = z_{\alpha/2} to get the lower bound, or use z = z_{1-\alpha/2} to get the upper bound. For example: for a 95% confidence level the error \alpha = 0.05, so one gets the lower bound by using z_{0.025} = -1.96, and one gets the upper bound by using z_{0.975} = 1.96.

\frac{\hat{p} + \frac{z^2}{2n} + z \sqrt{\frac{\hat{p}(1 - \hat{p})}{n} + \frac{z^2}{4n^2}}}{1 + \frac{z^2}{n}}

Comparison

The exact (Clopper–Pearson) method is the most conservative.[17]

The Wald method, although commonly recommended in textbooks, is the most biased.
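
The interval formulas above are short enough to compare directly; a sketch (k = 7 successes out of n = 50 trials and the 95% level are illustrative; note that the Agresti–Coull and Wilson intervals share the same center):

```python
from math import asin, sin, sqrt

def wald(k, n, z=1.96):
    ph = k / n
    h = z * sqrt(ph * (1 - ph) / n)
    return ph - h, ph + h

def agresti_coull(k, n, z=1.96):
    pt = (k + z**2 / 2) / (n + z**2)
    h = z * sqrt(pt * (1 - pt) / (n + z**2))
    return pt - h, pt + h

def arcsine(k, n, z=1.96):
    ph = k / n
    return (sin(asin(sqrt(ph)) - z / (2 * sqrt(n))) ** 2,
            sin(asin(sqrt(ph)) + z / (2 * sqrt(n))) ** 2)

def wilson(k, n, z=1.96):
    ph = k / n
    center = (ph + z**2 / (2 * n)) / (1 + z**2 / n)
    h = (z * sqrt(ph * (1 - ph) / n + z**2 / (4 * n**2))) / (1 + z**2 / n)
    return center - h, center + h

print(wald(7, 50))           # about (0.0438, 0.2362)
print(agresti_coull(7, 50))  # about (0.0664, 0.2650)
print(arcsine(7, 50))        # about (0.0588, 0.2488)
print(wilson(7, 50))         # about (0.0695, 0.2619)
```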

Generating binomial random variates

Methods for random number generation where the marginal distribution is a binomial distribution are well-established.[22][23]

One way to generate random samples from a binomial distribution is to use an inversion algorithm. To do so, one must calculate the probability P(X = k) for all values k from 0 through n. (These probabilities should sum to a value close to one, in order to encompass the entire sample space.) Then by using a pseudorandom number generator to generate samples uniformly between 0 and 1, one can transform these uniform samples into discrete numbers by using the probabilities calculated in the first step.
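
A minimal sketch of this inversion algorithm in Python (the function name is illustrative):

```python
import random
from math import comb

def binomial_inversion(n: int, p: float, rng: random.Random) -> int:
    """Draw one sample from B(n, p) by inverting the CDF."""
    u = rng.random()          # uniform sample on [0, 1)
    cumulative = 0.0
    for k in range(n + 1):
        cumulative += comb(n, k) * p**k * (1 - p)**(n - k)
        if u < cumulative:
            return k
    return n                  # guard against floating-point round-off

rng = random.Random(42)
draws = [binomial_inversion(10, 0.3, rng) for _ in range(100_000)]
print(sum(draws) / len(draws))  # sample mean, close to np = 3.0
```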

Tail bounds

For k ≤ np, upper bounds for the lower tail of the distribution function can be derived. Recall that F(k; n, p) = \Pr(X \le k), the probability that there are at most k successes.

Hoeffding's inequality yields the bound

F(k; n, p) \le \exp\left(-2 n \left(p - \frac{k}{n}\right)^{2}\right),

and Chernoff's inequality can be used to derive the bound

F(k; n, p) \le \exp\left(-\frac{(np - k)^{2}}{2np}\right).

Moreover, these bounds are reasonably tight when p = 1/2, since the following expression holds for all k ≥ 3n/8:[24]

F\!\left(k; n, \tfrac{1}{2}\right) \ge \frac{1}{15} \exp\left(-16 n \left(\frac{1}{2} - \frac{k}{n}\right)^{2}\right).

However, the bounds do not work well for extreme values of p. In particular, as p → 1, the value F(k; n, p) goes to zero (for fixed k, n with k < n) while the upper bound above goes to a positive constant. In this case a better bound is given by[25]

F(k; n, p) \le \exp\left(-n D\!\left(\frac{k}{n} \,\Big\|\, p\right)\right) \qquad \text{if } 0 < \frac{k}{n} < p,

where D(a \| p) is the relative entropy between an a-coin and a p-coin (i.e. between the Bernoulli(a) and Bernoulli(p) distributions):

D(a \| p) = a \log\frac{a}{p} + (1 - a) \log\frac{1 - a}{1 - p}.

Asymptotically, this bound is reasonably tight; see [25] for details. An equivalent formulation of the bound is

\Pr(X \ge k) = F(n - k; n, 1 - p) \le \exp\left(-n D\!\left(\frac{k}{n} \,\Big\|\, p\right)\right) \qquad \text{if } p < \frac{k}{n} < 1.

Both these bounds are derived directly from the Chernoff bound. It can also be shown that

F(k; n, p) \ge \frac{1}{(n + 1)^{2}} \exp\left(-n D\!\left(\frac{k}{n} \,\Big\|\, p\right)\right).

This is proved using the method of types (see for example chapter 11 of Elements of Information Theory by Cover and Thomas [26]).

We can also change the (n + 1)^{2} factor in the denominator to \sqrt{8 n \tfrac{k}{n} (1 - \tfrac{k}{n})}, by approximating the binomial coefficient with Stirling's formula:[27]

F(k; n, p) \ge \frac{1}{\sqrt{8 n \frac{k}{n} \left(1 - \frac{k}{n}\right)}} \exp\left(-n D\!\left(\frac{k}{n} \,\Big\|\, p\right)\right).

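The bounds above can be compared numerically; a sketch with illustrative values k ≤ np (every bound should lie above the exact tail probability):

```python
from math import comb, exp, log

def lower_tail(k, n, p):
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def kl(a, p):
    """Relative entropy D(a || p) between Bernoulli(a) and Bernoulli(p), in nats."""
    return a * log(a / p) + (1 - a) * log((1 - a) / (1 - p))

n, p, k = 100, 0.5, 30
a = k / n
print(lower_tail(k, n, p))                     # exact:            about 3.9e-05
print(exp(-2 * n * (p - a) ** 2))              # Hoeffding:        about 3.4e-04
print(exp(-((n * p - k) ** 2) / (2 * n * p)))  # Chernoff:         about 1.8e-02
print(exp(-n * kl(a, p)))                      # relative entropy: about 2.7e-04
```
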
History

This distribution was derived by James Bernoulli. He considered the case where p = r/(r + s), with p the probability of success and r and s positive integers. Blaise Pascal had earlier considered the case where p = 1/2.

See also


References

  1. ^ Wadsworth, G. P. (1960). Introduction to Probability and Random Variables. New York: McGraw-Hill. p. 52.
  2. ^ Hamilton Institute. "The Binomial Distribution" October 20, 2010.
  3. ^ See Proof Wiki
  4. ^ See also the answer to the question "finding mode in Binomial distribution"
  5. ^ Neumann, P. (1966). "Über den Median der Binomial- und Poissonverteilung". Wissenschaftliche Zeitschrift der Technischen Universität Dresden (in German). 19: 29–33.
  6. ^ Lord, Nick. (July 2010). "Binomial averages when the mean is an integer", The Mathematical Gazette 94, 331-332.
  7. ^ a b Kaas, R.; Buhrman, J.M. (1980). "Mean, Median and Mode in Binomial Distributions". Statistica Neerlandica. 34 (1): 13–18. doi:10.1111/j.1467-9574.1980.tb00681.x.
  8. ^ a b Hamza, K. (1995). "The smallest uniform upper bound on the distance between the mean and the median of the binomial and Poisson distributions". Statistics & Probability Letters. 23: 21–25. doi:10.1016/0167-7152(94)00090-U.
  9. ^ Katz D. et al. (1978). Obtaining confidence intervals for the risk ratio in cohort studies. Biometrics 34: 469–474.
  10. ^ Taboga, Marco. "Lectures on Probability Theory and Mathematical Statistics". Retrieved 18 December 2017.
  11. ^ Wang, Y. H. (1993). "On the number of successes in independent trials" (PDF). Statistica Sinica. 3 (2): 295–312. Archived from the original (PDF) on 2016-03-03.
  12. ^ a b Box, Hunter and Hunter (1978). Statistics for experimenters. Wiley. p. 130.
  13. ^ NIST/SEMATECH, "7.2.4. Does the proportion of defectives meet requirements?" e-Handbook of Statistical Methods.
  14. ^ a b NIST/SEMATECH, "Counts Control Charts", e-Handbook of Statistical Methods.
  15. ^ Novak S.Y. (2011) Extreme value methods with applications to finance. London: CRC/Chapman & Hall/Taylor & Francis. ISBN 978-1-4398-3574-6.
  16. ^ MacKay, David (2003). Information Theory, Inference and Learning Algorithms. Cambridge University Press; First Edition. ISBN 978-0521642989.
  17. ^ a b Brown, Lawrence D.; Cai, T. Tony; DasGupta, Anirban (2001), "Interval Estimation for a Binomial Proportion", Statistical Science, 16 (2): 101–133, doi:10.1214/ss/1009213286, retrieved 2015-01-05
  18. ^ Agresti, Alan; Coull, Brent A. (May 1998), "Approximate is better than 'exact' for interval estimation of binomial proportions" (PDF), The American Statistician, 52 (2): 119–126, doi:10.2307/2685469, retrieved 2015-01-05
  19. ^ Pires, M. A. Confidence intervals for a binomial proportion: comparison of methods and software evaluation.
  20. ^ Wilson, Edwin B. (June 1927), "Probable inference, the law of succession, and statistical inference" (PDF), Journal of the American Statistical Association, 22 (158): 209–212, doi:10.2307/2276774, archived from the original (PDF) on 2015-01-13, retrieved 2015-01-05
  21. ^ "Confidence intervals". Engineering Statistics Handbook. NIST/Sematech. 2012. Retrieved 2017-07-23.
  22. ^ Devroye, Luc (1986) Non-Uniform Random Variate Generation, New York: Springer-Verlag. (See especially Chapter X, Discrete Univariate Distributions)
  23. ^ Kachitvichyanukul, V.; Schmeiser, B. W. (1988). "Binomial random variate generation". Communications of the ACM. 31 (2): 216–222. doi:10.1145/42372.42381.
  24. ^ Matoušek, J.; Vondrák, J.: The Probabilistic Method (lecture notes) [1].
  25. ^ a b R. Arratia and L. Gordon: Tutorial on large deviations for the binomial distribution, Bulletin of Mathematical Biology 51(1) (1989), 125–131 [2].
  26. ^ Theorem 11.1.3 in Cover, T.; Thomas, J. (2006). Elements of Information Theory (2nd ed.). Wiley. p. 350.
  27. ^ "Sharper Lower Bounds for Binomial/Chernoff Tails".
  28. ^ Mandelbrot, B. B., Fisher, A. J., & Calvet, L. E. (1997). A multifractal model of asset returns. 3.2 The Binomial Measure is the Simplest Example of a Multifractal

External links