Information gain in decision trees
In information theory and machine learning, information gain is a synonym for Kullback–Leibler divergence; the amount of information gained about a random variable or signal from observing another random variable. However, in the context of decision trees, the term is sometimes used synonymously with mutual information, which is the conditional expected value of the Kullback–Leibler divergence of the univariate probability distribution of one variable from the conditional distribution of this variable given the other one.
The information gain of a random variable X obtained from an observation of a random variable A taking the value A = a is defined as

$$\mathrm{IG}_{X,A}(X, a) = D_{\text{KL}}\!\left( P_X(x \mid a) \,\big\|\, P_X(x \mid I) \right),$$

i.e. the Kullback–Leibler divergence of the prior distribution $P_X(x \mid I)$ for x from the posterior distribution $P_{X \mid A}(x \mid a)$ for x given a. The expected value of the information gain is the mutual information I(X; A) of X and A, i.e. the reduction in the entropy of X achieved by learning the state of the random variable A.
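As a concrete illustration of the definition above, the following sketch computes the Kullback–Leibler divergence for discrete distributions; the `prior` and `posterior` distributions are made-up examples, not taken from the article:

```python
import math

def kl_divergence(posterior, prior):
    """D_KL(posterior || prior) in bits, for discrete distributions
    given as dicts mapping outcome -> probability."""
    return sum(p * math.log2(p / prior[x])
               for x, p in posterior.items() if p > 0)

# Hypothetical example: observing A = a sharpens our belief about X.
prior = {"x1": 0.5, "x2": 0.5}          # P_X(x | I), before the observation
posterior = {"x1": 0.9, "x2": 0.1}      # P_X(x | a), after the observation

gain = kl_divergence(posterior, prior)  # information gained from A = a
```

Here the observation moves a uniform prior to a sharply peaked posterior, so the gain is strictly positive; if the posterior equalled the prior, the divergence (and hence the gain) would be zero.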
In machine learning, this concept can be used to define a preferred sequence of attributes to investigate so as to most rapidly narrow down the state of X. Such a sequence (which depends at each stage on the outcomes of investigating the previous attributes) is called a decision tree, and it is applied in the area of machine learning known as decision tree learning. Usually an attribute with high mutual information should be preferred to other attributes.
Let $T$ denote a set of training examples, each of the form $(\mathbf{x}, y) = (x_1, x_2, \ldots, x_k, y)$, where $x_a \in \mathrm{vals}(a)$ is the value of the $a$-th attribute or feature of example $\mathbf{x}$ and $y$ is the corresponding class label. The information gain for an attribute $a$ is defined in terms of Shannon entropy $H$ as follows. For a value $v$ taken by attribute $a$, let

$$S_a(v) = \{\, \mathbf{x} \in T \mid x_a = v \,\}$$

denote the set of training inputs of $T$ for which attribute $a$ is equal to $v$. Then the information gain of $T$ for attribute $a$ is the difference between the a priori Shannon entropy $H(T)$ of the training set and the conditional entropy $H(T \mid a)$:

$$\mathrm{IG}(T, a) = H(T) - H(T \mid a) = H(T) - \sum_{v \in \mathrm{vals}(a)} \frac{|S_a(v)|}{|T|}\, H\!\left(S_a(v)\right).$$
The mutual information is equal to the total entropy for an attribute if, for each of the attribute values, a unique classification can be made for the result attribute. In this case, the relative entropies subtracted from the total entropy are 0. In particular, the values $v \in \mathrm{vals}(a)$ define a partition of the training set data $T$ into mutually exclusive and all-inclusive subsets, inducing a categorical probability distribution $P_a(v)$ on the values $v \in \mathrm{vals}(a)$ of attribute $a$. The distribution is given by $P_a(v) := \frac{|S_a(v)|}{|T|}$. In this representation, the information gain of $T$ given $a$ can be defined as the difference between the unconditional Shannon entropy of $T$ and the expected entropy of $T$ conditioned on $a$, where the expectation value is taken with respect to the induced distribution on the values of $a$:

$$\mathrm{IG}(T, a) = H(T) - \sum_{v \in \mathrm{vals}(a)} P_a(v)\, H\!\left(S_a(v)\right) = H(T) - \mathbb{E}_{P_a}\!\left[ H\!\left(S_a(v)\right) \right].$$
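The entropy-based formula above translates directly into code. The following is a minimal sketch; the representation of examples as `(features, label)` pairs and the toy dataset are assumptions for illustration:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy H (in bits) of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(examples, attr):
    """IG(T, a) = H(T) - sum_v |S_a(v)|/|T| * H(S_a(v)).
    `examples` is a list of (features_dict, label) pairs."""
    labels = [y for _, y in examples]
    total = entropy(labels)
    n = len(examples)
    remainder = 0.0
    for v in {x[attr] for x, _ in examples}:       # each value v of attribute a
        subset = [y for x, y in examples if x[attr] == v]   # labels in S_a(v)
        remainder += len(subset) / n * entropy(subset)
    return total - remainder

# Hypothetical training set: a binary attribute that predicts the label exactly.
data = [({"a": "T"}, "yes"), ({"a": "T"}, "yes"),
        ({"a": "F"}, "no"),  ({"a": "F"}, "no")]
ig = information_gain(data, "a")
```

Because each subset induced by the attribute is pure, the subtracted conditional entropy is 0 and the gain equals the full entropy H(T) = 1 bit, matching the "unique classification" case described above.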
Although information gain is usually a good measure for deciding the relevance of an attribute, it is not perfect. A notable problem occurs when information gain is applied to attributes that can take on a large number of distinct values. For example, suppose that one is building a decision tree for some data describing the customers of a business. Information gain is often used to decide which of the attributes are the most relevant, so they can be tested near the root of the tree. One of the input attributes might be the customer's credit card number. This attribute has a high mutual information, because it uniquely identifies each customer, but we do not want to include it in the decision tree: deciding how to treat a customer based on their credit card number is unlikely to generalize to customers we haven't seen before (overfitting).
To counter this problem, Ross Quinlan proposed instead choosing the attribute with the highest information gain ratio from among the attributes whose information gain is average or higher. This biases the decision tree against considering attributes with a large number of distinct values, while not giving an unfair advantage to attributes with very low information value, since the information value is greater than or equal to the information gain.
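A sketch of the gain-ratio idea follows. The intrinsic (split) information used as the denominator, and the "ID-like" toy attribute, are illustrative assumptions rather than details taken from this article:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (in bits) of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def gain_ratio(examples, attr):
    """Information gain divided by the intrinsic value of the split,
    i.e. the entropy of the partition induced by the attribute itself."""
    labels = [y for _, y in examples]
    n = len(examples)
    remainder, partition_sizes = 0.0, []
    for v in {x[attr] for x, _ in examples}:
        subset = [y for x, y in examples if x[attr] == v]
        partition_sizes.append(len(subset))
        remainder += len(subset) / n * entropy(subset)
    ig = entropy(labels) - remainder
    # Intrinsic value: entropy of the attribute's own value distribution.
    iv = -sum((s / n) * math.log2(s / n) for s in partition_sizes)
    return ig / iv if iv > 0 else 0.0

# Hypothetical "ID-like" attribute: a unique value per example gives maximal
# information gain, but also a maximal denominator, so the ratio is penalized.
data = [({"id": i}, y) for i, y in enumerate(["yes", "no", "yes", "no"])]
ratio = gain_ratio(data, "id")
```

Here the raw information gain is 1 bit (each singleton subset is pure), but the intrinsic value of a four-way split is 2 bits, so the ratio drops to 0.5, reflecting the penalty on many-valued attributes.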
Let's use the following table as a dataset and use information gain to classify whether a patient is sick with a disease. Patients classified as True (T) are sick, and patients classified as False (F) are not. We are currently at the root node of the tree and must consider all possible splits using the data.
| Patient | Symptom A | Symptom B | Symptom C | Classification |
|---------|-----------|-----------|-----------|----------------|
Candidate splits are determined by looking at each variable that makes up a patient and the states it can take. In this example, every symptom is either True (T) or False (F).
| Split | Child nodes |
|-------|------------------------------|
| 1 | Symptom A = T, Symptom A = F |
| 2 | Symptom B = T, Symptom B = F |
| 3 | Symptom C = T, Symptom C = F |
Now for split #1, we determine the entropy before the split, which is found from the classification of each patient:

$$H(T) = -p_{\mathrm{T}} \log_2 p_{\mathrm{T}} - p_{\mathrm{F}} \log_2 p_{\mathrm{F}},$$

where $p_{\mathrm{T}}$ and $p_{\mathrm{F}}$ are the proportions of patients classified as sick and not sick, respectively.
The conditional entropy of split #1 is determined by finding the entropy of the classifications within each state of symptom A and combining them, weighted by how often each state occurs:

$$H(T \mid a) = \sum_{v \in \{\mathrm{T}, \mathrm{F}\}} \frac{|S_a(v)|}{|T|}\, H\!\left(S_a(v)\right).$$
Information gain can then be determined as the difference between the prior entropy and the conditional entropy: $\mathrm{IG}(T, a) = H(T) - H(T \mid a)$.
These steps are repeated for all candidate splits to get their information gain. All candidate splits for a node use the same value for $H(T)$, since the entropy before the split does not depend on the attribute being considered.
Candidate split #2 has the highest information gain, so it will be the most favorable split for the root node. Depending on the confidence of the child nodes' classifications, information gain can be applied recursively to the child nodes, but a child node cannot reuse the same candidate split.
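The full procedure — compute the prior entropy once, then the conditional entropy and gain for each candidate split, and pick the maximum — can be sketched as follows. Since the table's data rows are not reproduced here, the patient records below are purely illustrative (chosen so that Symptom B is the winning split, as in the text):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (in bits) of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(rows, attr):
    """Gain of splitting `rows` (dicts with a 'class' key) on `attr`."""
    labels = [r["class"] for r in rows]
    before = entropy(labels)  # H(T): the same value for every candidate split
    after = 0.0
    for v in {r[attr] for r in rows}:           # each state of the symptom
        subset = [r["class"] for r in rows if r[attr] == v]
        after += len(subset) / len(rows) * entropy(subset)
    return before - after

# Illustrative patient records (NOT the article's original table).
patients = [
    {"A": "T", "B": "T", "C": "T", "class": "T"},
    {"A": "F", "B": "T", "C": "F", "class": "T"},
    {"A": "T", "B": "F", "C": "T", "class": "F"},
    {"A": "F", "B": "F", "C": "T", "class": "F"},
]

gains = {s: information_gain(patients, s) for s in ("A", "B", "C")}
best = max(gains, key=gains.get)  # the most favorable split for the root node
```

With this toy data, splitting on Symptom B separates sick from healthy patients perfectly (gain of 1 bit), Symptom A is uninformative (gain of 0), and Symptom C falls in between.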