In information theory, the cross-entropy between two probability distributions $p$ and $q$, over the same underlying set of events, measures the average number of bits needed to identify an event drawn from the set when the coding scheme used for the set is optimized for an estimated probability distribution $q$, rather than the true distribution $p$.

Definition

The cross-entropy of the distribution $q$ relative to a distribution $p$ over a given set is defined as follows:

$$H(p, q) = -\operatorname{E}_p[\log q],$$

where $\operatorname{E}_p[\cdot]$ is the expected value operator with respect to the distribution $p$.

The definition may be formulated using the Kullback–Leibler divergence $D_{\mathrm{KL}}(p \parallel q)$, the divergence of $p$ from $q$ (also known as the relative entropy of $p$ with respect to $q$):

$$H(p, q) = H(p) + D_{\mathrm{KL}}(p \parallel q),$$

where $H(p)$ is the entropy of $p$.

For discrete probability distributions $p$ and $q$ with the same support $\mathcal{X}$, this means

$$H(p, q) = -\sum_{x \in \mathcal{X}} p(x)\, \log q(x). \qquad \text{(Eq.1)}$$

The situation for continuous distributions is analogous. We have to assume that $p$ and $q$ are absolutely continuous with respect to some reference measure $r$ (usually $r$ is a Lebesgue measure on a Borel σ-algebra). Let $P$ and $Q$ be probability density functions of $p$ and $q$ with respect to $r$. Then

$$-\int_{\mathcal{X}} P(x)\, \log Q(x)\, \mathrm{d}r(x) = \operatorname{E}_p[-\log Q],$$

and therefore

$$H(p, q) = -\int_{\mathcal{X}} P(x)\, \log Q(x)\, \mathrm{d}r(x). \qquad \text{(Eq.2)}$$

NB: The notation $H(p, q)$ is also used for a different concept, the joint entropy of $p$ and $q$.
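For concreteness, Eq. 1 can be computed directly; the following is a minimal Python sketch, where the distributions `p` and `q` are hypothetical examples rather than anything from the text above:

```python
import math

def cross_entropy(p, q):
    """Discrete cross-entropy H(p, q) in bits (Eq. 1).

    p and q are probability sequences over the same support; terms with
    p(x) = 0 contribute nothing, by the convention 0 log 0 = 0.
    """
    return -sum(px * math.log2(qx) for px, qx in zip(p, q) if px > 0)

# Hypothetical distributions over a three-element support.
p = [0.5, 0.25, 0.25]    # true distribution
q = [0.25, 0.5, 0.25]    # assumed distribution

print(cross_entropy(p, p))   # 1.5  -> the entropy H(p)
print(cross_entropy(p, q))   # 1.75 -> H(p, q) >= H(p)
```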

Motivation

In information theory, the Kraft–McMillan theorem establishes that any uniquely decodable coding scheme for coding a message to identify one value $x_i$ out of a set of possibilities $\{x_1, \ldots, x_n\}$ can be seen as representing an implicit probability distribution $q(x_i) = \left(\tfrac{1}{2}\right)^{\ell_i}$ over $\{x_1, \ldots, x_n\}$, where $\ell_i$ is the length of the code for $x_i$ in bits. Therefore, cross-entropy can be interpreted as the expected message-length per datum when a wrong distribution $q$ is assumed while the data actually follows a distribution $p$. That is why the expectation is taken over the true probability distribution $p$ and not $q$. Indeed the expected message-length under the true distribution $p$ is

$$\operatorname{E}_p[\ell] = -\operatorname{E}_p\left[\frac{\ln q(x)}{\ln 2}\right] = -\operatorname{E}_p[\log_2 q(x)] = -\sum_{x_i} p(x_i)\, \log_2 q(x_i) = H(p, q).$$
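To make the coding interpretation concrete, here is a small sketch assuming the idealized code lengths $\ell_i = -\log_2 q(x_i)$ (fractional lengths are allowed purely for illustration; the distributions are hypothetical):

```python
import math

p = [0.5, 0.25, 0.25]    # true distribution of the source symbols
q = [0.25, 0.25, 0.5]    # distribution the code was optimized for

# Implicit code lengths l_i = -log2 q(x_i), in bits.
lengths = [-math.log2(qi) for qi in q]

# Expected message length when symbols are actually drawn from p: H(p, q).
expected_length = sum(pi * li for pi, li in zip(p, lengths))
print(lengths)           # [2.0, 2.0, 1.0]
print(expected_length)   # 1.75 bits = H(p, q)
```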

Estimation

There are many situations where cross-entropy needs to be measured but the distribution $p$ is unknown. An example is language modeling, where a model is created based on a training set $T$, and then its cross-entropy is measured on a test set to assess how accurate the model is in predicting the test data. In this example, $p$ is the true distribution of words in any corpus, and $q$ is the distribution of words as predicted by the model. Since the true distribution is unknown, cross-entropy cannot be directly calculated. In these cases, an estimate of cross-entropy is calculated using the following formula:

$$H(T, q) = -\sum_{i=1}^{N} \frac{1}{N} \log_2 q(x_i),$$

where $N$ is the size of the test set, and $q(x)$ is the probability of event $x$ estimated from the training set. In other words, $q(x_i)$ is the probability estimate of the model that the $i$-th word of the text is $x_i$. The sum is averaged over the $N$ words of the test set. This is a Monte Carlo estimate of the true cross-entropy, where the test set is treated as samples from $p(x)$.[citation needed]
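As a sketch of this estimator, with hypothetical per-word model probabilities standing in for a real language model:

```python
import math

# Hypothetical probabilities q(x_i) that a trained model assigns to each of
# the N words actually observed in the test set.
q_test = [0.1, 0.05, 0.2, 0.01, 0.15]

N = len(q_test)
estimate = -sum(math.log2(qi) for qi in q_test) / N
print(estimate)         # estimated cross-entropy in bits per word
print(2 ** estimate)    # the corresponding perplexity
```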

Relation to maximum likelihood

The cross-entropy arises in classification problems when a logarithm is introduced in the guise of the log-likelihood function.

This section concerns the estimation of the probabilities of different possible discrete outcomes. To this end, denote a parametrized family of distributions by $q_\theta$, with $\theta$ subject to the optimization effort. Consider a given finite sequence of $N$ values $x_i$ from a training set, obtained from conditionally independent sampling. The likelihood assigned to any considered parameter $\theta$ of the model is then given by the product over all probabilities $q_\theta(X = x_i)$. Repeated occurrences are possible, leading to equal factors in the product. If the count of occurrences of the value equal to $x_i$ (for some index $i$) is denoted by $\#x_i$, then the frequency of that value equals $\#x_i / N$. Denote the latter by $p(X = x_i)$, as it may be understood as an empirical approximation to the probability distribution underlying the scenario. Further denote by $PP := \mathrm{e}^{H(p, q_\theta)}$ the perplexity, which can be seen to equal $\prod_{x_i} q_\theta(X = x_i)^{-p(X = x_i)}$ by the calculation rules for the logarithm, where the product is taken over the values without double counting. So

$$\mathcal{L}(\theta; \mathbf{x}) = \prod_i q_\theta(X = x_i) = \prod_{x_i} q_\theta(X = x_i)^{\#x_i} = PP^{-N} = \mathrm{e}^{-N \cdot H(p, q_\theta)},$$

or

$$\log \mathcal{L}(\theta; \mathbf{x}) = -N \cdot H(p, q_\theta).$$

Since the logarithm is a monotonically increasing function, it does not affect extremization, so likelihood maximization amounts to minimization of the cross-entropy.
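A quick numerical check of the identity $\log \mathcal{L}(\theta; \mathbf{x}) = -N \cdot H(p, q_\theta)$, using a hypothetical sample and model distribution (natural logarithms, matching the convention $PP = \mathrm{e}^{H(p, q_\theta)}$ above):

```python
import math
from collections import Counter

sample = ["a", "b", "a", "c", "a", "b"]      # N = 6 training values
q_theta = {"a": 0.5, "b": 0.3, "c": 0.2}     # hypothetical model distribution

N = len(sample)
p_emp = {x: c / N for x, c in Counter(sample).items()}   # empirical distribution

log_likelihood = sum(math.log(q_theta[x]) for x in sample)
cross_entropy = -sum(p_emp[x] * math.log(q_theta[x]) for x in p_emp)  # in nats

print(log_likelihood, -N * cross_entropy)    # the two values agree
```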

Cross-entropy minimization

Cross-entropy minimization is frequently used in optimization and rare-event probability estimation. When comparing a distribution $q$ against a fixed reference distribution $p$, cross-entropy and KL divergence are identical up to an additive constant (since $p$ is fixed): according to Gibbs' inequality, both take on their minimal values when $p = q$, namely $0$ for KL divergence and $H(p)$ for cross-entropy. In the engineering literature, the principle of minimizing KL divergence (Kullback's "Principle of Minimum Discrimination Information") is often called the Principle of Minimum Cross-Entropy (MCE), or Minxent.

However, as discussed in the article Kullback–Leibler divergence, sometimes the distribution $q$ is the fixed prior reference distribution, and the distribution $p$ is optimized to be as close to $q$ as possible, subject to some constraint. In this case the two minimizations are not equivalent. This has led to some ambiguity in the literature, with some authors attempting to resolve the inconsistency by redefining cross-entropy to be $D_{\mathrm{KL}}(p \parallel q)$ rather than $H(p, q)$. In fact, cross-entropy is another name for relative entropy; see Cover and Thomas[1] and Good.[2] On the other hand, $H(p, q)$ does not agree with the literature and can be misleading.

Cross-entropy loss function and logistic regression

Cross-entropy can be used to define a loss function in machine learning and optimization. Mao, Mohri, and Zhong (2023) give an extensive analysis of the properties of the family of cross-entropy loss functions in machine learning, including theoretical learning guarantees and extensions to adversarial learning.[3] The true probability $p_i$ is the true label, and the given distribution $q_i$ is the predicted value of the current model. This is also known as the log loss (or logarithmic loss[4] or logistic loss);[5] the terms "log loss" and "cross-entropy loss" are used interchangeably.[6]

More specifically, consider a binary regression model which can be used to classify observations into two possible classes (often simply labelled $0$ and $1$). The output of the model for a given observation, given a vector of input features $x$, can be interpreted as a probability, which serves as the basis for classifying the observation. In logistic regression, the probability is modeled using the logistic function $g(z) = 1/(1 + e^{-z})$, where $z$ is some function of the input vector $x$, commonly just a linear function. The probability of the output $y = 1$ is given by

$$q_{y=1} = \hat{y} \equiv g(\mathbf{w} \cdot \mathbf{x}) = \frac{1}{1 + e^{-\mathbf{w} \cdot \mathbf{x}}},$$

where the vector of weights $\mathbf{w}$ is optimized through some appropriate algorithm such as gradient descent. Similarly, the complementary probability of finding the output $y = 0$ is simply given by

$$q_{y=0} = 1 - \hat{y}.$$

Having set up our notation, $p \in \{y, 1 - y\}$ and $q \in \{\hat{y}, 1 - \hat{y}\}$, we can use cross-entropy to get a measure of dissimilarity between $p$ and $q$:

$$H(p, q) = -\sum_i p_i \log q_i = -y \log \hat{y} - (1 - y) \log(1 - \hat{y}).$$

[Figure: loss functions for training a binary classifier, shown for the case where the target output is 1. The loss is zero when the output equals the target and increases as the output becomes increasingly incorrect.]

Logistic regression typically optimizes the log loss for all the observations on which it is trained, which is the same as optimizing the average cross-entropy in the sample. Other loss functions that penalize errors differently can also be used for training, resulting in models with different final test accuracy.[7] For example, suppose we have $N$ samples with each sample indexed by $n = 1, \ldots, N$. The average of the loss function is then given by:

$$J(\mathbf{w}) = \frac{1}{N} \sum_{n=1}^{N} H(p_n, q_n) = -\frac{1}{N} \sum_{n=1}^{N} \left[ y_n \log \hat{y}_n + (1 - y_n) \log(1 - \hat{y}_n) \right],$$

where $\hat{y}_n \equiv g(\mathbf{w} \cdot \mathbf{x}_n) = 1/(1 + e^{-\mathbf{w} \cdot \mathbf{x}_n})$, with $g(z)$ the logistic function as before.
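The average loss $J(\mathbf{w})$ can be written down directly; the following is a minimal NumPy sketch, where the data `X`, `y` and the weights `w` are illustrative placeholders:

```python
import numpy as np

def logistic(z):
    """The logistic function g(z) = 1 / (1 + e^{-z})."""
    return 1.0 / (1.0 + np.exp(-z))

def average_cross_entropy_loss(w, X, y):
    """J(w): average binary cross-entropy over the N samples (natural log)."""
    y_hat = logistic(X @ w)
    return -np.mean(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat))

# Illustrative data: N = 4 samples, an intercept column plus two features.
X = np.array([[1.0,  0.5,  1.2],
              [1.0, -1.0,  0.3],
              [1.0,  0.8, -0.7],
              [1.0, -0.2,  0.1]])
y = np.array([1.0, 0.0, 1.0, 0.0])
w = np.zeros(3)

print(average_cross_entropy_loss(w, X, y))  # ln 2 ≈ 0.693 when all y_hat = 0.5
```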

The logistic loss is sometimes called cross-entropy loss or log loss, as noted above. (In this case, the binary label is often denoted by $\{-1, +1\}$.[8])

Remark: The gradient of the cross-entropy loss for logistic regression is the same as the gradient of the squared-error loss for linear regression. That is, define

$$X^T = \begin{pmatrix} 1 & x_{11} & \dots & x_{1p} \\ 1 & x_{21} & \dots & x_{2p} \\ \vdots & \vdots & & \vdots \\ 1 & x_{n1} & \dots & x_{np} \end{pmatrix} \in \mathbb{R}^{n \times (p+1)},$$

$$\hat{y}_i = \hat{f}(x_{i1}, \dots, x_{ip}) = \frac{1}{1 + \exp(-\beta_0 - \beta_1 x_{i1} - \dots - \beta_p x_{ip})},$$

$$L(\overrightarrow{\beta}) = -\sum_{i=1}^{n} \left[ y_i \log \hat{y}_i + (1 - y_i) \log(1 - \hat{y}_i) \right].$$

Then we have the result

$$\frac{\partial}{\partial \overrightarrow{\beta}} L(\overrightarrow{\beta}) = X (\hat{Y} - Y),$$

where $\hat{Y}$ is the vector of predictions $\hat{y}_i$ and $Y$ the vector of labels $y_i$.

The proof is as follows. For any $\hat{y}_i$, write the exponent of the logistic function as $-\beta_0 + k_0$, where $k_0 := -\beta_1 x_{i1} - \dots - \beta_p x_{ip}$ collects the terms not involving $\beta_0$ (and define $k_1$ analogously for $\beta_1$). We have

$$\frac{\partial}{\partial \beta_0} \ln \frac{1}{1 + e^{-\beta_0 + k_0}} = \frac{e^{-\beta_0 + k_0}}{1 + e^{-\beta_0 + k_0}},$$

$$\frac{\partial}{\partial \beta_0} \ln \left( 1 - \frac{1}{1 + e^{-\beta_0 + k_0}} \right) = \frac{-1}{1 + e^{-\beta_0 + k_0}},$$

$$\frac{\partial}{\partial \beta_0} L(\overrightarrow{\beta}) = -\sum_{i=1}^{n} \left[ \frac{y_i \, e^{-\beta_0 + k_0}}{1 + e^{-\beta_0 + k_0}} - (1 - y_i) \frac{1}{1 + e^{-\beta_0 + k_0}} \right] = -\sum_{i=1}^{n} \left( y_i - \hat{y}_i \right) = \sum_{i=1}^{n} (\hat{y}_i - y_i),$$

$$\frac{\partial}{\partial \beta_1} \ln \frac{1}{1 + e^{-\beta_1 x_{i1} + k_1}} = \frac{x_{i1} e^{k_1}}{e^{\beta_1 x_{i1}} + e^{k_1}},$$

$$\frac{\partial}{\partial \beta_1} \ln \left[ 1 - \frac{1}{1 + e^{-\beta_1 x_{i1} + k_1}} \right] = \frac{-x_{i1} e^{\beta_1 x_{i1}}}{e^{\beta_1 x_{i1}} + e^{k_1}},$$

$$\frac{\partial}{\partial \beta_1} L(\overrightarrow{\beta}) = -\sum_{i=1}^{n} x_{i1} \left( y_i - \hat{y}_i \right) = \sum_{i=1}^{n} x_{i1} (\hat{y}_i - y_i).$$

In a similar way, we eventually obtain the desired result.
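The result can also be verified numerically; the sketch below compares the closed-form gradient with central finite differences on illustrative random data (here the variable `X` holds the $n \times (p+1)$ design matrix, written $X^T$ in the text above):

```python
import numpy as np

def loss(beta, X, y):
    """L(beta): total cross-entropy loss of logistic regression."""
    y_hat = 1.0 / (1.0 + np.exp(-X @ beta))
    return -np.sum(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat))

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(5), rng.normal(size=(5, 2))])  # intercept + p = 2 features
y = np.array([1.0, 0.0, 1.0, 1.0, 0.0])
beta = rng.normal(size=3)

# Closed-form gradient X(Y_hat - Y); with X holding the n-by-(p+1) design
# matrix, this is X.T @ (y_hat - y) in NumPy notation.
y_hat = 1.0 / (1.0 + np.exp(-X @ beta))
analytic = X.T @ (y_hat - y)

# Central finite differences, one coordinate of beta at a time.
eps = 1e-6
numeric = np.array([
    (loss(beta + eps * e, X, y) - loss(beta - eps * e, X, y)) / (2 * eps)
    for e in np.eye(3)
])

print(np.allclose(analytic, numeric, atol=1e-4))  # True
```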

Amended Cross-Entropy Cost: An Approach for Encouraging Diversity in Classification Ensemble

In some cases one would like to train an ensemble of models that have diversity, so that combining them yields the best results.[9][10] Assume a simple ensemble of $K$ classifiers combined by averaging. The amended cross-entropy cost is then

$$e^k = H(p, q^k) - \frac{\lambda}{K - 1} \sum_{j \neq k} H(q^j, q^k),$$

where $e^k$ is the cost function of the $k$-th classifier, $q^k$ is the output probability of the $k$-th classifier, $p$ is the true probability that we need to estimate, and $\lambda$ is a parameter between 0 and 1 that defines the diversity that we would like to establish. When $\lambda = 0$ we want each classifier to do its best regardless of the ensemble, and when $\lambda = 1$ we would like the classifiers to be as diverse as possible.
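A minimal sketch of this cost for discrete output distributions; the ensemble values are illustrative, and the normalization of the diversity term simply follows the formula above:

```python
import math

def cross_entropy(p, q):
    """H(p, q) in nats over a common discrete support."""
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q) if pi > 0)

def amended_cost(k, p, qs, lam):
    """e^k = H(p, q^k) - lam/(K-1) * sum over j != k of H(q^j, q^k)."""
    K = len(qs)
    diversity = sum(cross_entropy(qs[j], qs[k]) for j in range(K) if j != k)
    return cross_entropy(p, qs[k]) - (lam / (K - 1)) * diversity

# Illustrative ensemble of K = 3 classifiers on a two-class problem.
p = [1.0, 0.0]                                   # true (one-hot) distribution
qs = [[0.8, 0.2], [0.7, 0.3], [0.9, 0.1]]        # per-classifier output probabilities
print([round(amended_cost(k, p, qs, lam=0.5), 4) for k in range(3)])
```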

See also

References

  1. ^ Thomas M. Cover, Joy A. Thomas, Elements of Information Theory, 2nd edition, Wiley, p. 80.
  2. ^ I. J. Good, "Maximum entropy for hypothesis formulation, especially for multidimensional contingency tables", Annals of Mathematical Statistics, 1963.
  3. ^ Anqi Mao, Mehryar Mohri, Yutao Zhong, "Cross-entropy loss functions: Theoretical analysis and applications", ICML 2023. https://arxiv.org/pdf/2304.07288.pdf
  4. ^ George Cybenko, Dianne P. O'Leary, Jorma Rissanen, The Mathematics of Information Coding, Extraction and Distribution, 1999, p. 82.
  5. ^ Jason Brownlee, Probability for Machine Learning: Discover How To Harness Uncertainty With Python, 2019, p. 220: "Logistic loss refers to the loss function commonly used to optimize a logistic regression model. It may also be referred to as logarithmic loss (which is confusing) or simply log loss."
  6. ^ sklearn.metrics.log_loss
  7. ^ Noel, Mathew; Banerjee, Arindam; D, Geraldine Bessie Amali; Muthiah-Nakarajan, Venkataraman (17 March 2023). "Alternate loss functions for classification and robust regression can improve the accuracy of artificial neural networks". arXiv:2303.09935.
  8. ^ Murphy, Kevin (2012). Machine Learning: A Probabilistic Perspective. MIT Press. ISBN 978-0262018029.
  9. ^ Shoham, Ron; Permuter, Haim (2019). "Amended Cross-Entropy Cost: An Approach for Encouraging Diversity in Classification Ensemble (Brief Announcement)". Lecture Notes in Computer Science, vol. 11527, pp. 202–207. doi:10.1007/978-3-030-20951-3_18. ISBN 978-3-030-20950-6.
  10. ^ Shoham, Ron; Permuter, Haim (2020). "Amended Cross Entropy Cost: Framework For Explicit Diversity Encouragement". arXiv:2007.08140 [cs.LG].

Further reading