# Experiment (probability theory)

In probability theory, an experiment or trial (see below) is any procedure that can be infinitely repeated and has a well-defined set of possible outcomes, known as the sample space.[1] An experiment is said to be random if it has more than one possible outcome, and deterministic if it has only one. A random experiment that has exactly two (mutually exclusive) possible outcomes is known as a Bernoulli trial.[2]

When an experiment is conducted, one (and only one) outcome results, although this outcome may be included in any number of events, all of which would be said to have occurred on that trial. After conducting many trials of the same experiment and pooling the results, an experimenter can begin to assess the empirical probabilities of the various outcomes and events that can occur in the experiment and apply the methods of statistical analysis.

## Experiments and trials

Random experiments are often conducted repeatedly, so that the collective results may be subjected to statistical analysis. A fixed number of repetitions of the same experiment can be thought of as a composed experiment, in which case the individual repetitions are called trials. For example, if one were to toss the same coin one hundred times and record each result, each toss would be considered a trial within the experiment composed of all hundred tosses.[3]
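A composed experiment of this kind is easy to simulate. The sketch below (illustrative only; the function name and the choice of a fair coin are assumptions, not from the source) records one hundred tosses and tallies the results:

```python
import random

def run_trials(n_trials, seed=0):
    """Perform n_trials coin tosses (trials of one composed experiment)
    and return the counts of heads and tails."""
    rng = random.Random(seed)  # fixed seed so the run is reproducible
    results = [rng.choice("HT") for _ in range(n_trials)]
    return results.count("H"), results.count("T")

heads, tails = run_trials(100)
```

Each call to `rng.choice` is one trial; the pair of counts summarizes the composed experiment of all hundred tosses.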

## Mathematical description

A random experiment is described or modelled by a mathematical construct known as a probability space. A probability space is constructed and defined with a specific kind of experiment or trial in mind.

A mathematical description of an experiment consists of three parts:

1. A sample space, Ω (or S), which is the set of all possible outcomes.
2. A set of events $\mathcal{F}$, where each event is a set containing zero or more outcomes.
3. The assignment of probabilities to the events, that is, a function P mapping from events to probabilities.

An outcome is the result of a single execution of the model. Since individual outcomes might be of little practical use, more complicated events are used to characterize groups of outcomes. The collection of all such events is a sigma-algebra $\mathcal{F}$. Finally, there is a need to specify each event's likelihood of happening; this is done using the probability measure function, P.
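The three parts can be made concrete for a small experiment. The sketch below (an illustrative model, not a general construction; the die example and helper names are assumptions) builds the sample space, the event set, and the probability measure for one roll of a fair die, taking $\mathcal{F}$ to be the full power set of Ω, which is a sigma-algebra for a finite sample space:

```python
from itertools import combinations
from fractions import Fraction

# 1. Sample space: the six faces of a die.
omega = frozenset(range(1, 7))

# 2. Event set F: here the power set of omega (every subset is an event).
def power_set(s):
    items = list(s)
    return [frozenset(c)
            for r in range(len(items) + 1)
            for c in combinations(items, r)]

events = power_set(omega)

# 3. Probability measure P: each outcome equally likely,
#    so P(A) = |A| / |omega|.
def P(event):
    return Fraction(len(event), len(omega))
```

For example, the event "the roll is even" is `frozenset({2, 4, 6})` and `P` assigns it 1/2.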

Once an experiment is designed and established, it is assumed that “nature” makes its move and selects a single outcome, ω, from the sample space Ω. All the events in $\mathcal{F}$ that contain the selected outcome ω (recall that each event is a subset of Ω) are said to “have occurred”. The probability function P is defined in such a way that, if the experiment were to be repeated an infinite number of times, the relative frequencies of occurrence of each of the events would approach agreement with the values P assigns them.
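This frequency interpretation can be checked empirically. The sketch below (a simulation under assumed names; the die and the event "roll is even" are illustrative choices) repeats an experiment many times and compares the relative frequency of an event with the value P would assign it:

```python
import random

def relative_frequency(event, sample_space, n, seed=1):
    """Repeat the experiment n times and return the fraction of
    trials on which the given event occurred."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n) if rng.choice(sample_space) in event)
    return hits / n

die = [1, 2, 3, 4, 5, 6]
evens = {2, 4, 6}          # the event "the roll is even", P = 1/2

freq = relative_frequency(evens, die, 100_000)
```

With 100,000 repetitions, `freq` lands close to the assigned probability 0.5, and it tends closer still as n grows.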

As a simple experiment, we may flip a coin twice. The sample space (where the order of the two flips is relevant) is {(H, T), (T, H), (T, T), (H, H)}, where "H" means "heads" and "T" means "tails". Note that each of (H, T), (T, H), (T, T), and (H, H) is a possible outcome of the experiment. We may define an event that occurs when a "heads" occurs in either of the two flips. This event contains all of the outcomes except (T, T).
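The two-flip experiment above can be written out directly. The sketch below (variable names are illustrative assumptions) enumerates the ordered sample space, forms the event "at least one heads", and assigns it a probability by counting equally likely outcomes:

```python
from itertools import product
from fractions import Fraction

# Ordered sample space of two coin flips: (H,H), (H,T), (T,H), (T,T).
omega = set(product("HT", repeat=2))

# Event: a "heads" occurs on either flip, i.e. every outcome except (T,T).
at_least_one_head = {outcome for outcome in omega if "H" in outcome}

# With all four outcomes equally likely, P(event) = |event| / |omega|.
p = Fraction(len(at_least_one_head), len(omega))  # 3/4
```

The event contains three of the four outcomes, so its probability is 3/4.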