
In information theory, the cross entropy between two probability distributions $p$ and $q$ over the same underlying set of events measures the average number of bits needed to identify an event drawn from the set if a coding scheme used for the set is optimized for an estimated probability distribution $q$, rather than the true distribution $p$.



The cross entropy for the distributions $p$ and $q$ over a given set is defined as follows:

$$H(p, q) = \operatorname{E}_p[-\log q],$$

where $\operatorname{E}_p[\cdot]$ denotes the expected value with respect to the distribution $p$.
The definition may be formulated using the Kullback–Leibler divergence $D_{\mathrm{KL}}(p \parallel q)$ of $q$ from $p$ (also known as the relative entropy of $p$ with respect to $q$ — note the reversal of emphasis):

$$H(p, q) = H(p) + D_{\mathrm{KL}}(p \parallel q),$$

where $H(p)$ is the entropy of $p$.

For discrete probability distributions $p$ and $q$ with the same support $\mathcal{X}$, this means

$$H(p, q) = -\sum_{x \in \mathcal{X}} p(x) \log q(x).$$
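
As a concrete illustration of the discrete formula and of the decomposition into entropy plus KL divergence, here is a small sketch in Python with NumPy (the two distributions are arbitrary made-up values, and natural logarithms are used, so the result is in nats rather than bits):

```python
import numpy as np

# Two illustrative discrete distributions over the same three-element support.
p = np.array([0.5, 0.25, 0.25])   # "true" distribution p
q = np.array([0.4, 0.4, 0.2])     # estimated distribution q

# Cross entropy H(p, q) = -sum_x p(x) log q(x), here in nats (natural log).
cross_entropy = -np.sum(p * np.log(q))

# Decomposition H(p, q) = H(p) + D_KL(p || q).
entropy_p = -np.sum(p * np.log(p))
kl_divergence = np.sum(p * np.log(p / q))

print(cross_entropy)               # about 1.09 nats for these values
print(entropy_p + kl_divergence)   # same value up to floating-point rounding
```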

The situation for continuous distributions is analogous. We have to assume that $p$ and $q$ are absolutely continuous with respect to some reference measure $r$ (usually $r$ is a Lebesgue measure on a Borel σ-algebra). Let $P$ and $Q$ be probability density functions of $p$ and $q$ with respect to $r$. Then

$$-\int_{\mathcal{X}} P(x) \log Q(x) \, \mathrm{d}r(x) = \operatorname{E}_p[-\log Q],$$

and therefore

$$H(p, q) = -\int_{\mathcal{X}} P(x) \log Q(x) \, \mathrm{d}r(x).$$
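
For the continuous case, here is a sketch with two Gaussian densities (the parameters are illustrative, the reference measure is Lebesgue measure, and natural logarithms are used): the integral is approximated on a grid and compared against the known closed form for two Gaussians, $H(p,q) = \tfrac{1}{2}\ln(2\pi\sigma_q^2) + \frac{\sigma_p^2 + (\mu_p - \mu_q)^2}{2\sigma_q^2}$.

```python
import numpy as np

def gaussian_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

mu_p, sigma_p = 0.0, 1.0   # "true" density P (illustrative parameters)
mu_q, sigma_q = 1.0, 2.0   # assumed density Q (illustrative parameters)

# Grid approximation of H(p, q) = -integral of P(x) log Q(x) dx over a wide interval.
x = np.linspace(-20.0, 20.0, 200_001)
dx = x[1] - x[0]
P = gaussian_pdf(x, mu_p, sigma_p)
Q = gaussian_pdf(x, mu_q, sigma_q)
h_numeric = -np.sum(P * np.log(Q)) * dx

# Closed form for two Gaussians, in nats.
h_closed = 0.5 * np.log(2 * np.pi * sigma_q**2) \
    + (sigma_p**2 + (mu_p - mu_q)**2) / (2 * sigma_q**2)

print(h_numeric, h_closed)   # the two values agree closely
```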

NB: The notation $H(p, q)$ is also used for a different concept, the joint entropy of $p$ and $q$.


In information theory, the Kraft–McMillan theorem establishes that any directly decodable coding scheme for coding a message to identify one value $x_i$ out of a set of possibilities $\{x_1, \ldots, x_n\}$ can be seen as representing an implicit probability distribution $q(x_i) = \left(\tfrac{1}{2}\right)^{\ell_i}$ over $\{x_1, \ldots, x_n\}$, where $\ell_i$ is the length of the code for $x_i$ in bits. Therefore, cross entropy can be interpreted as the expected message length per datum when a wrong distribution $q$ is assumed while the data actually follows a distribution $p$. That is why the expectation is taken over the true probability distribution $p$ and not $q$. Indeed the expected message length under the true distribution $p$ is

$$\operatorname{E}_p[\ell] = -\operatorname{E}_p\left[\log_2 q(x)\right] = -\sum_{x_i} p(x_i) \log_2 q(x_i) = H(p, q).$$
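
A sketch of this coding interpretation (the four symbols, code lengths, and probabilities below are made-up illustrative values): a uniquely decodable code with length $\ell_i$ for symbol $x_i$ corresponds to the implicit distribution $q(x_i) = (1/2)^{\ell_i}$, and the expected message length under the true distribution $p$ is exactly $H(p, q)$ measured in bits.

```python
import numpy as np

# Illustrative code lengths (in bits) for four symbols; they satisfy Kraft's inequality with equality.
code_lengths = np.array([1, 2, 3, 3])
q = 2.0 ** (-code_lengths)        # implicit distribution q(x_i) = (1/2)^{l_i}; sums to 1 here

# Illustrative "true" distribution p over the same four symbols.
p = np.array([0.4, 0.3, 0.2, 0.1])

expected_length = np.sum(p * code_lengths)      # expected bits per symbol under p
cross_entropy_bits = -np.sum(p * np.log2(q))    # H(p, q) in bits

print(expected_length, cross_entropy_bits)      # identical, since l_i = -log2 q(x_i)
```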

There are many situations where cross-entropy needs to be measured but the true distribution $p$ is unknown. An example is language modeling, where a model is created based on a training set $T$, and then its cross-entropy is measured on a test set to assess how accurate the model is in predicting the test data. In this example, $p$ is the true distribution of words in any corpus, and $q$ is the distribution of words as predicted by the model. Since the true distribution is unknown, cross-entropy cannot be directly calculated. In these cases, an estimate of cross-entropy is calculated using the following formula:

$$H(T, q) = -\sum_{i=1}^{N} \frac{1}{N} \log_2 q(x_i),$$

where $N$ is the size of the test set, and $q(x)$ is the probability of event $x$ estimated from the training set. The sum is calculated over the $N$ events of the test set. This is a Monte Carlo estimate of the true cross entropy, where the test set is treated as samples from $p(x)$.
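
A minimal sketch of this estimator (the toy vocabulary, model probabilities, and test sentence below are hypothetical): the estimate is simply the average of $-\log_2 q(x)$ over the test items, giving bits per word.

```python
import numpy as np

# Hypothetical model distribution q over a tiny vocabulary (as if estimated from a training set).
q = {"the": 0.5, "cat": 0.2, "dog": 0.2, "sat": 0.1}

# Hypothetical test set, treated as samples drawn from the unknown true distribution p.
test_set = ["the", "cat", "sat", "the", "dog", "the", "cat", "the"]

N = len(test_set)
cross_entropy_estimate = -sum(np.log2(q[w]) for w in test_set) / N   # bits per word

print(cross_entropy_estimate)
```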

Relation to log-likelihood

In classification problems we want to estimate the probability of different outcomes. If the estimated probability of outcome $i$ is $q_i$, while the frequency (empirical probability) of outcome $i$ in the training set is $p_i$, and there are $N$ samples in the training set, then the likelihood of the training set is

$$\prod_i q_i^{N p_i},$$

so the log-likelihood, divided by $N$, is

$$\frac{1}{N} \log \prod_i q_i^{N p_i} = \sum_i p_i \log q_i = -H(p, q),$$
so that maximizing the likelihood is the same as minimizing the cross entropy.
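
A small numerical sketch of this equivalence (the class counts and estimated probabilities are illustrative): the per-sample log-likelihood of the training set equals $-H(p, q)$, where $p$ is the empirical label distribution.

```python
import numpy as np

# Illustrative training set: counts of three outcomes, N samples in total.
counts = np.array([60, 30, 10])
N = counts.sum()
p = counts / N                     # empirical frequencies p_i

# Illustrative estimated probabilities q_i for the same outcomes.
q = np.array([0.5, 0.3, 0.2])

# log of the likelihood  prod_i q_i^{N p_i},  divided by N.
log_likelihood_per_sample = np.sum(N * p * np.log(q)) / N

cross_entropy = -np.sum(p * np.log(q))

print(log_likelihood_per_sample, -cross_entropy)   # equal: maximizing one minimizes the other
```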

Cross-entropy minimization

Cross-entropy minimization is frequently used in optimization and rare-event probability estimation; see the cross-entropy method.

When comparing a distribution $q$ against a fixed reference distribution $p$, cross entropy and KL divergence are identical up to an additive constant (since $p$ is fixed): both take on their minimal values when $p = q$, which is $0$ for KL divergence and $H(p)$ for cross entropy.[1] In the engineering literature, the principle of minimising KL divergence (Kullback's "Principle of Minimum Discrimination Information") is often called the Principle of Minimum Cross-Entropy (MCE), or Minxent.

However, as discussed in the article Kullback–Leibler divergence, sometimes the distribution $q$ is the fixed prior reference distribution, and the distribution $p$ is optimised to be as close to $q$ as possible, subject to some constraint. In this case the two minimisations are not equivalent. This has led to some ambiguity in the literature, with some authors attempting to resolve the inconsistency by redefining cross-entropy to be $D_{\mathrm{KL}}(p \parallel q)$, rather than $H(p, q)$.

Cross-entropy error function and logistic regression

Cross entropy can be used to define a loss function in machine learning and optimization. The true probability $p_i$ is the true label, and the given distribution $q_i$ is the predicted value of the current model.

More specifically, consider logistic regression, which (in its most basic form) deals with classifying a given set of data points into two possible classes generically labelled $0$ and $1$. The logistic regression model thus predicts an output $y \in \{0, 1\}$, given an input vector $\mathbf{x}$. The probability is modeled using the logistic function $g(z) = 1/(1 + e^{-z})$. Namely, the probability of finding the output $y = 1$ is given by

$$q_{y=1} = \hat{y} \equiv g(\mathbf{w} \cdot \mathbf{x}) = \frac{1}{1 + e^{-\mathbf{w} \cdot \mathbf{x}}},$$

where the vector of weights $\mathbf{w}$ is optimized through some appropriate algorithm such as gradient descent. Similarly, the complementary probability of finding the output $y = 0$ is simply given by

$$q_{y=0} = 1 - \hat{y}.$$
The true (observed) probabilities can be expressed similarly as $p_{y=1} = y$ and $p_{y=0} = 1 - y$.

Having set up our notation, $p \in \{y, 1 - y\}$ and $q \in \{\hat{y}, 1 - \hat{y}\}$, we can use cross entropy to get a measure of dissimilarity between $p$ and $q$:

$$H(p, q) = -\sum_i p_i \log q_i = -y \log \hat{y} - (1 - y) \log(1 - \hat{y}).$$
The typical cost function that one uses in logistic regression is computed by taking the average of all cross-entropies in the sample. For example, suppose we have $N$ samples, with each sample indexed by $n = 1, \ldots, N$. The loss function is then given by

$$J(\mathbf{w}) = \frac{1}{N} \sum_{n=1}^{N} H(p_n, q_n) = -\frac{1}{N} \sum_{n=1}^{N} \left[ y_n \log \hat{y}_n + (1 - y_n) \log(1 - \hat{y}_n) \right],$$

where $\hat{y}_n \equiv g(\mathbf{w} \cdot \mathbf{x}_n) = 1/(1 + e^{-\mathbf{w} \cdot \mathbf{x}_n})$, with $g(z)$ the logistic function as before.
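
A sketch of how this loss is typically computed in code (the data, the weight vector, and the small clipping constant eps are illustrative implementation choices, not part of the formula):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def binary_cross_entropy_loss(w, X, y, eps=1e-12):
    """Average cross entropy J(w) between labels y in {0, 1} and predictions y_hat = g(w . x)."""
    y_hat = sigmoid(X @ w)
    y_hat = np.clip(y_hat, eps, 1.0 - eps)   # keep log() finite; eps is an implementation detail
    return -np.mean(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat))

# Illustrative data: 4 samples with 2 features each, and binary labels.
X = np.array([[0.5, 1.0], [1.5, -0.5], [-1.0, 2.0], [2.0, 0.0]])
y = np.array([1.0, 0.0, 1.0, 0.0])
w = np.array([0.1, -0.2])                    # e.g. one iterate of gradient descent

print(binary_cross_entropy_loss(w, X, y))
```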

The logistic loss is sometimes called cross-entropy loss. It is also known as log loss (in this case, the binary label is often denoted by {−1, +1}).[2]

See also

References

  1. ^ Goodfellow, Ian; Bengio, Yoshua; Courville, Aaron (2016). Deep Learning. MIT Press.
  2. ^ Murphy, Kevin P. (2012). Machine Learning: A Probabilistic Perspective. MIT Press. ISBN 978-0262018029.

External links