# Bernoulli process

In probability and statistics, a Bernoulli process is a finite or infinite sequence of binary random variables, so it is a discrete-time stochastic process that takes only two values, canonically 0 and 1. The component Bernoulli variables $X_i$ are identically distributed and independent. Prosaically, a Bernoulli process is repeated coin flipping, possibly with an unfair coin (but with consistent unfairness). Every variable $X_i$ in the sequence is associated with a Bernoulli trial or experiment. They all have the same Bernoulli distribution. Much of what can be said about the Bernoulli process can also be generalized to more than two outcomes (such as the process for a six-sided die); this generalization is known as the Bernoulli scheme.

The problem of determining the process, given only a limited sample of the Bernoulli trials, may be called the problem of checking whether a coin is fair.

## Definition

A Bernoulli process is a finite or infinite sequence of independent random variables $X_1, X_2, X_3, \ldots$, such that

• For each i, the value of $X_i$ is either 0 or 1;
• For all values of i, the probability that $X_i = 1$ is the same number p.

In other words, a Bernoulli process is a sequence of independent identically distributed Bernoulli trials.

Independence of the trials implies that the process is memoryless. Given that the probability p is known, past outcomes provide no information about future outcomes. (If p is unknown, however, the past informs about the future indirectly, through inferences about p.)

If the process is infinite, then from any point the future trials constitute a Bernoulli process identical to the whole process; this is known as the fresh-start property.
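As a concrete illustration, here is a minimal Python sketch that simulates a finite stretch of a Bernoulli process; the function name and parameters are illustrative choices, not a standard API:

```python
import random

def bernoulli_process(p, n, seed=None):
    """Simulate n trials of a Bernoulli process with success probability p."""
    rng = random.Random(seed)
    # Each X_i is 1 (heads/success) with probability p, else 0.
    return [1 if rng.random() < p else 0 for _ in range(n)]

print(bernoulli_process(p=0.3, n=20, seed=42))
```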

### Interpretation

The two possible values of each $X_i$ are often called "success" and "failure". Thus, when expressed as a number 0 or 1, the outcome may be called the number of successes on the ith "trial".

Two other common interpretations of the values are true or false and yes or no. Under any interpretation of the two values, the individual variables $X_i$ may be called Bernoulli trials with parameter p.

In many applications time passes between trials as the index i increases. In effect, the trials $X_1, X_2, \ldots, X_i, \ldots$ happen at "points in time" 1, 2, ..., i, .... That passage of time and the associated notions of "past" and "future" are not necessary, however. Most generally, any $X_i$ and $X_j$ in the process are simply two from a set of random variables indexed by {1, 2, ..., n} or by {1, 2, 3, ...}, the finite and infinite cases respectively.

Several random variables and probability distributions beside the Bernoullis may be derived from the Bernoulli process:

• The number of successes in the first n trials, which has a binomial distribution B(n, p);
• The number of failures needed to get r successes, which has a negative binomial distribution NB(r, p);
• The number of failures needed to get one success, which has a geometric distribution NB(1, p), a special case of the negative binomial distribution.

The negative binomial variables may be interpreted as random waiting times, as illustrated in the sketch below.
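For instance, the waiting time for the r-th success can be sampled directly from simulated Bernoulli trials; a minimal sketch, with a hypothetical helper name:

```python
import random

def failures_before_r_successes(p, r, rng):
    """Draw one sample of the NB(r, p) waiting time:
    the number of failures seen before the r-th success."""
    failures, successes = 0, 0
    while successes < r:
        if rng.random() < p:
            successes += 1
        else:
            failures += 1
    return failures

rng = random.Random(1)
samples = [failures_before_r_successes(0.3, 2, rng) for _ in range(100_000)]
print(sum(samples) / len(samples))  # close to r*(1-p)/p = 2*0.7/0.3 ≈ 4.67
```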


## Formal definition

The Bernoulli process can be formalized in the language of probability spaces as a random sequence of independent realisations of a random variable that can take the values heads or tails. The state space for an individual value is denoted by $2=\{H,T\}$.

Specifically, one considers the countably infinite direct product of copies of $2=\{H,T\}$. It is common to examine either the one-sided set $\Omega=2^\mathbb{N}=\{H,T\}^\mathbb{N}$ or the two-sided set $\Omega=2^\mathbb{Z}$. There is a natural topology on this space, called the product topology. The basic open sets in this topology are determined by finite sequences of coin flips, that is, finite-length strings of H and T, with the rest of the (infinitely long) sequence taken as "don't care". These sets of sequences are referred to as cylinder sets in the product topology. The collection of all such sets generates a sigma algebra, specifically, a Borel algebra. The resulting measurable space is commonly written as $(\Omega, \mathcal{F})$, where $\mathcal{F}$ is the sigma algebra generated by the cylinder sets.

If the chances of flipping heads or tails are given by the probabilities $\{p,1-p\}$, then one can define a natural measure on the product space, given by $P=\{p, 1-p\}^\mathbb{N}$ (or by $P=\{p, 1-p\}^\mathbb{Z}$ for the two-sided process). Given a cylinder set, that is, a specific sequence of coin flip results $[\omega_1, \omega_2, \cdots, \omega_n]$ at times $1, 2, \cdots, n$, the probability of observing this particular sequence is given by

$P([\omega_1, \omega_2,\cdots ,\omega_n])= p^k (1-p)^{n-k}$

where k is the number of times that H appears in the sequence, and n-k is the number of times that T appears in the sequence. There are several different kinds of notations for the above; a common one is to write

$P(X_1=\omega_1, X_2=\omega_2,\cdots, X_n=\omega_n)= p^k (1-p)^{n-k}$

where each $X_i$ is a binary-valued random variable. It is common to write $x_i$ for $\omega_i$. This probability P is commonly called the Bernoulli measure.[1]
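As a small illustration, the following sketch computes this cylinder-set probability; the function name cylinder_probability is our own:

```python
def cylinder_probability(flips, p):
    """Probability of a specific finite sequence of flips,
    given as a string over {'H', 'T'}, with P(H) = p."""
    k = flips.count('H')            # number of heads
    n = len(flips)
    return p ** k * (1 - p) ** (n - k)

print(cylinder_probability('HHTH', 0.5))   # 0.0625 = (1/2)**4
print(cylinder_probability('HHTH', 0.75))  # 0.75**3 * 0.25**1 ≈ 0.105
```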

Note that the probability of any specific, infinitely long sequence of coin flips is exactly zero; this is because $\lim_{n\to\infty}p^n=0$ for any $0\le p<1$. One says that any given infinite sequence has measure zero. Nevertheless, one can still say that some classes of infinite sequences of coin flips are far more likely than others; this is made precise by the asymptotic equipartition property.

To conclude the formal definition, a Bernoulli process is then given by the probability triple $(\Omega, \mathcal{F}, P)$, as defined above.


## Binomial distribution

By definition, the expectation value of any single coin flip is p: identifying heads with 1 and tails with 0, one writes

$\operatorname{E}[X_i]=p$

for any one given random variable $X_i$ out of the infinite sequence of Bernoulli trials that compose the Bernoulli process. The law of large numbers states that the average of a long run of flips converges to this value p.
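As a hedged sanity check, the following sketch estimates this average empirically; the names are illustrative:

```python
import random

rng = random.Random(0)
p, n = 0.3, 100_000
heads = sum(rng.random() < p for _ in range(n))
print(heads / n)  # the sample mean; approaches p = 0.3 as n grows
```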

One is often interested in knowing how often one will observe H in a sequence of n coin flips. This is given by simply counting: Given n successive coin flips, that is, given the set of all possible strings of length n, the number N(k,n) of such strings that contain k occurrences of H is given by the binomial coefficient

$N(k,n) = {n \choose k}=\frac{n!}{k! (n-k)!}$

If the probability of flipping heads is given by p, then the total probability of seeing a string of length n with k heads is

$P(k,n)={n\choose k}\, p^k (1-p)^{n-k}$

This probability is known as the binomial distribution.
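Python's standard math.comb makes this easy to compute; a brief sketch:

```python
from math import comb

def binomial_pmf(k, n, p):
    """P(k, n): probability of exactly k heads in n flips."""
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

print(binomial_pmf(3, 10, 0.3))                          # ≈ 0.2668
print(sum(binomial_pmf(k, 10, 0.3) for k in range(11)))  # sums to 1.0
```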

Of particular interest is the value of P(k,n) for very long sequences of coin flips, that is, in the limit $n\to\infty$. In this case, one may make use of Stirling's approximation to the factorial, and write

$n! = \sqrt{2\pi n} \;n^n e^{-n} \left(1 + \mathcal{O}\left(\frac{1}{n}\right)\right)$

Inserting this into the expression for P(k,n), one obtains the Gaussian distribution; this is the content of the central limit theorem, and the Bernoulli process is its simplest instance.
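To see the Gaussian emerge, the following sketch compares the exact binomial probability with a normal density of mean np and variance np(1 − p), which follow from the binomial moments; the function names are illustrative:

```python
from math import comb, exp, pi, sqrt

def binomial_pmf(k, n, p):
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

def gaussian_pmf(k, n, p):
    # Normal density with mean n*p and variance n*p*(1-p),
    # evaluated at the integer k.
    mu, var = n * p, n * p * (1 - p)
    return exp(-(k - mu) ** 2 / (2 * var)) / sqrt(2 * pi * var)

n, p = 1000, 0.3
for k in (280, 300, 320):
    print(k, binomial_pmf(k, n, p), gaussian_pmf(k, n, p))
```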

The combination of the law of large numbers and the central limit theorem leads to an interesting and perhaps surprising result: the asymptotic equipartition property. Put informally, over many coin flips one will observe H a fraction p of the time, and this corresponds exactly to the peak of the Gaussian. The asymptotic equipartition property essentially states that this peak becomes infinitely sharp, with infinite fall-off on either side. That is, the set of all possible infinitely long strings of H and T occurring in the Bernoulli process is partitioned into two: a set of "typical" strings having total measure 1, and a set of atypical strings having measure 0. This partitioning reflects the Kolmogorov 0-1 law.

The size of this typical set is also interesting, and can be explicitly determined: its logarithm is exactly the entropy of the Bernoulli process. Once again, consider the set of all strings of length n. The size of this set is $2^n$. Of these, only a certain subset is likely; the size of that subset is $2^{nH}$ for $H\le 1$. By using Stirling's approximation, inserting it into the expression for P(k,n), solving for the location and width of the peak, and finally taking $n\to\infty$, one finds that

$H=-p\log_2 p - (1-p)\log_2(1-p)$

This value is the entropy of the Bernoulli process. Here, H stands for entropy; it should not be confused with the same symbol H standing for heads.
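A direct transcription of this formula into Python, as a brief sketch:

```python
from math import log2

def bernoulli_entropy(p):
    """Entropy, in bits per flip, of a Bernoulli(p) process."""
    if p in (0.0, 1.0):
        return 0.0  # a deterministic coin carries no entropy
    return -p * log2(p) - (1 - p) * log2(1 - p)

print(bernoulli_entropy(0.5))  # 1.0: a fair coin is maximally random
print(bernoulli_entropy(0.1))  # ≈ 0.469 bits per flip
```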

John von Neumann posed a curious question about the Bernoulli process: is it ever possible that a given process is isomorphic to another, in the sense of the isomorphism of dynamical systems? The question long defied analysis, but was finally and completely answered with the Ornstein isomorphism theorem. This breakthrough resulted in the understanding that the Bernoulli process is unique and universal; in a certain sense, it is the single most random process possible; nothing is 'more' random than the Bernoulli process (although one must be careful with this informal statement: the Bernoulli shift is in fact mixing, a strictly stronger property than ergodicity, yet mixing alone does not imply independence, since many purely deterministic, non-random systems are also mixing).


## As a dynamical system

The Bernoulli process can also be understood to be a dynamical system, specifically, a measure-preserving dynamical system. This arises because there is a natural translation symmetry on the (two-sided) product space $\Omega=2^\mathbb{Z}$ given by the shift operator

$TX_i = X_{i+1}$

The measure is translation-invariant; that is, given any cylinder set $\sigma\subset\Omega$, one has

$P(T\sigma)=P(\sigma)$

and thus the Bernoulli measure is a Haar measure.
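As a finite sanity check of this invariance, one can enumerate all strings of a small fixed length and compare the measure of a cylinder set with that of its shifted copy; the helper measure below is a hypothetical illustration, not a standard API:

```python
from itertools import product

def measure(constraints, n, p):
    """Bernoulli(p) measure of the set of length-n strings that match
    the given position -> outcome constraints (a cylinder set)."""
    total = 0.0
    for s in product('HT', repeat=n):
        if all(s[i] == c for i, c in constraints.items()):
            k = s.count('H')
            total += p ** k * (1 - p) ** (n - k)
    return total

p = 0.3
cyl = {0: 'H', 1: 'T'}                        # H at time 0, T at time 1
shifted = {i + 1: c for i, c in cyl.items()}  # same pattern, one step later
print(measure(cyl, 4, p), measure(shifted, 4, p))  # both equal p*(1-p) = 0.21
```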

The shift operator should be understood to be an operator acting on the sigma algebra $\mathcal{F}$ of the measurable space $(\Omega, \mathcal{F})$, so that one has

$T:\mathcal{F}\to\mathcal{F}$

In this guise, the shift operator is known as the transfer operator or the Ruelle-Frobenius-Perron operator. It is interesting to consider the eigenfunctions of this operator, and how they differ when the operator is restricted to different function spaces. When restricted to the standard topology of the real numbers, the eigenfunctions are, curiously, the Bernoulli polynomials![2][3] This coincidence of naming was presumably not known to Bernoulli.


## Bernoulli sequence

The term Bernoulli sequence is often used informally to refer to a realization of a Bernoulli process. However, the term has an entirely different formal definition as given below.

Consider a Bernoulli process formally defined as in the preceding section. For every infinite sequence x of coin flips, there is a sequence of integers

$\mathbb{Z}^x = \{n\in \mathbb{Z} : X_n(x) = 1 \} \,$

called the Bernoulli sequence associated with the Bernoulli process. For example, if x represents a sequence of coin flips, then the associated Bernoulli sequence is the list of natural numbers or time-points for which the coin toss outcome is heads.

So defined, a Bernoulli sequence $\mathbb{Z}^x$ is also a random subset of the index set, the natural numbers $\mathbb{N}$.
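A small sketch of this extraction; indices start at 0 here, by programming convention:

```python
def bernoulli_sequence(flips):
    """Indices at which the outcome is 1 -- the Bernoulli sequence."""
    return [i for i, x in enumerate(flips) if x == 1]

print(bernoulli_sequence([1, 0, 0, 1, 1, 0, 1]))  # [0, 3, 4, 6]
```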

Almost all Bernoulli sequences $\mathbb{Z}^x$ are ergodic sequences.


## Randomness extraction

From any Bernoulli process one may derive a Bernoulli process with p = 1/2 by the von Neumann extractor, the earliest randomness extractor, which actually extracts uniform randomness.

Represent the observed process as a sequence of zeroes and ones, or bits, and group the input stream into non-overlapping pairs of successive bits, such as (11)(00)(10)... . Then for each pair,

• if the bits are equal, discard;
• if the bits are not equal, output the first bit.

This table summarizes the computation.

| input | output  |
|-------|---------|
| 00    | discard |
| 01    | 0       |
| 10    | 1       |
| 11    | discard |

In the output stream 0 and 1 are equally likely, since 10 and 01 are equally likely in the original, both having probability p(1 − p) = pq (writing q = 1 − p). This extraction of uniform randomness does not require the input trials to be independent, only uncorrelated. More generally, it works for any exchangeable sequence of bits: all sequences that are finite rearrangements of one another are equally likely.
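Here is a minimal Python sketch of the extractor described above; the function name von_neumann_extractor is our own choice:

```python
def von_neumann_extractor(bits):
    """Turn a biased i.i.d. bit stream into unbiased bits:
    discard equal pairs, emit the first bit of each unequal pair."""
    return [a for a, b in zip(bits[0::2], bits[1::2]) if a != b]

biased = [1, 1, 0, 0, 1, 0, 0, 1, 1, 0]
print(von_neumann_extractor(biased))  # [1, 0, 1]
```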

The von Neumann extractor uses two input bits to produce either zero or one output bits, so the output is shorter than the input by a factor of at least 2. On average the computation discards proportion $p^2 + (1-p)^2$ of the input pairs, that is, proportion $p^2 + q^2$, which is near one when p is near zero or one.

The proportion of input pairs discarded is at least 1/2, with the minimum attained at p = 1/2 for the original process. In that case the output stream is, on average, 1/4 the length of the input.


## References

1. ^ Achim Klenke, Probability Theory, (2006) Springer-Verlag, ISBN 978-1-84800-047-6, doi:10.1007/978-1-84800-048-3
2. ^ Pierre Gaspard, "r-adic one-dimensional maps and the Euler summation formula", Journal of Physics A, 25 (letter) L483-L485 (1992).
3. ^ Dean J. Driebe, Fully Chaotic Maps and Broken Time Symmetry, (1999) Kluwer Academic Publishers, Dordrecht Netherlands ISBN 0-7923-5564-4

## Further reading

• Carl W. Helstrom, Probability and Stochastic Processes for Engineers, (1984) Macmillan Publishing Company, New York ISBN 0-02-353560-1.
• Dimitri P. Bertsekas and John N. Tsitsiklis, Introduction to Probability, (2002) Athena Scientific, Massachusetts ISBN 1-886529-40-X