# Multi-label classification

In machine learning, multi-label classification and the strongly related problem of multi-output classification are variants of the classification problem where multiple labels may be assigned to each instance. Multi-label classification is a generalization of multiclass classification, which is the single-label problem of categorizing instances into precisely one of more than two classes; in the multi-label problem there is no constraint on how many of the classes the instance can be assigned to.

Formally, multi-label classification is the problem of finding a model that maps inputs x to binary vectors y, i.e. that assigns a value of 0 or 1 to each element (label) of y.

## Problem transformation methods

Several problem transformation methods exist for multi-label classification; the baseline approach, called the binary relevance method,[1][2] amounts to independently training one binary classifier for each label. Given an unseen sample, the combined model then predicts all labels for which the respective classifiers predict a positive result. Although this method of dividing the task into multiple binary tasks may superficially resemble the one-vs.-all (OvA) and one-vs.-rest (OvR) methods for multiclass classification, it is essentially different from both, because a single classifier under binary relevance deals with a single label, without regard to the other labels.
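Binary relevance can be sketched in a few lines. The following is a minimal illustration using scikit-learn and a toy dataset (the data, label matrix, and `predict` helper are invented for the example):

```python
# A minimal sketch of the binary relevance transformation: one independent
# binary classifier per label, trained without regard to the other labels.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy data: 4 samples, 2 features; each row of Y is a binary label vector.
X = np.array([[0.0, 1.0], [1.0, 0.0], [1.0, 1.0], [0.0, 0.0]])
Y = np.array([[1, 0], [0, 1], [1, 1], [0, 0]])

# Train one binary classifier per label column.
classifiers = [LogisticRegression().fit(X, Y[:, j]) for j in range(Y.shape[1])]

def predict(x):
    # The combined prediction is the vector of independent per-label decisions.
    return np.array([clf.predict(x.reshape(1, -1))[0] for clf in classifiers])

print(predict(np.array([1.0, 1.0])))
```

Any binary base learner could replace the logistic regression here; the transformation itself is agnostic to the choice of classifier.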

Various other transformations exist. Of these, the label powerset (LP) transformation creates one binary classifier for every label combination attested in the training set. The random k-labelsets (RAKEL) algorithm uses multiple LP classifiers, each trained on a random subset of the actual labels; prediction using this ensemble method proceeds by a voting scheme.[3]
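The label powerset transformation can likewise be sketched directly: each distinct label combination observed in training becomes one class of an ordinary multiclass problem. The dataset and decoding helper below are illustrative, not from any particular library's LP implementation:

```python
# A minimal sketch of the label powerset (LP) transformation: each label
# combination attested in the training set becomes one multiclass class.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
Y = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])  # binary label vectors

# Encode each distinct label vector as a class index.
combos = sorted({tuple(row) for row in Y.tolist()})
combo_to_class = {c: i for i, c in enumerate(combos)}
y_lp = np.array([combo_to_class[tuple(row)] for row in Y.tolist()])

clf = DecisionTreeClassifier().fit(X, y_lp)

def predict(x):
    # Decode the predicted class index back into a label vector.
    return np.array(combos[clf.predict(x.reshape(1, -1))[0]])

print(predict(np.array([1, 0])))
```

Note that LP can only ever predict label combinations seen during training, which is the limitation RAKEL's ensemble of random k-labelsets is designed to mitigate.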

Classifier chains are an alternative ensemble method[1][2][4] that has been applied, for instance, in HIV drug resistance prediction.[5][6] Bayesian networks have also been applied to guide the ordering of classifiers in classifier chains.[7]
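scikit-learn provides a `ClassifierChain` estimator, in which each link predicts one label using the input features plus the labels predicted by earlier links. A minimal sketch on toy data (the data and chain order are illustrative):

```python
# A minimal sketch of a classifier chain: each classifier in the chain sees
# the input features plus the predictions of all earlier classifiers.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.multioutput import ClassifierChain

X = np.array([[0.0, 1.0], [1.0, 0.0], [1.0, 1.0], [0.0, 0.0]])
Y = np.array([[1, 0], [0, 1], [1, 1], [0, 0]])

# order fixes which label is predicted first; ensembles of chains typically
# average over several random orders.
chain = ClassifierChain(LogisticRegression(), order=[0, 1]).fit(X, Y)
print(chain.predict(np.array([[1.0, 1.0]])))
```

Unlike binary relevance, the chain lets later classifiers exploit correlations with earlier labels, at the cost of sensitivity to the chosen label order.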

Some classification algorithms/models have been adapted to the multi-label task without requiring problem transformations. Examples include:

• k-nearest neighbors: the ML-kNN algorithm extends the k-NN classifier to multi-label data.[8]
• decision trees: "Clare" is an adapted C4.5 algorithm for multi-label classification; the modification involves the entropy calculations.[9] MMC, MMDT, and SSC (a refinement of MMDT) can classify multi-labeled data on the basis of multi-valued attributes without transforming the attributes into single values. They are also called multi-valued and multi-labeled decision tree classification methods.[10][11][12]
• kernel methods for vector output
• neural networks: BP-MLL is an adaptation of the popular back-propagation algorithm for multi-label learning.[13]

Based on learning paradigms, existing multi-label classification techniques can be classified into batch learning and online machine learning. Batch learning algorithms require all the data samples to be available beforehand: they train the model on the entire training set and then predict test samples using the learned relationship. Online learning algorithms, on the other hand, incrementally build their models in sequential iterations. In iteration t, an online algorithm receives a sample ${\displaystyle x_{t}}$ and predicts its label(s) ${\displaystyle {\hat {y}}_{t}}$ using the current model; the algorithm then receives ${\displaystyle y_{t}}$, the true label(s) of ${\displaystyle x_{t}}$, and updates its model based on the sample-label pair ${\displaystyle (x_{t},y_{t})}$. Recently, a new learning paradigm called the progressive learning technique has been developed.[14] The progressive learning technique is capable not only of learning from new samples but also of learning new labels introduced to the model, while retaining the knowledge learnt thus far.[15]
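The online update loop described above can be sketched as binary relevance with one incremental learner per label. This is a minimal illustration, assuming scikit-learn's `SGDClassifier` with `partial_fit` as the online learner and an invented three-sample stream:

```python
# A sketch of online multi-label learning: one online (SGD) binary learner
# per label, each updated one sample at a time via partial_fit.
import numpy as np
from sklearn.linear_model import SGDClassifier

n_labels = 2
learners = [SGDClassifier() for _ in range(n_labels)]

# A toy stream of (x_t, y_t) pairs arriving in sequential iterations.
stream = [
    (np.array([0.0, 1.0]), np.array([1, 0])),
    (np.array([1.0, 0.0]), np.array([0, 1])),
    (np.array([1.0, 1.0]), np.array([1, 1])),
]

for x_t, y_t in stream:
    x_t = x_t.reshape(1, -1)
    for j, learner in enumerate(learners):
        # Update learner j with the true value of label j; the full set of
        # classes must be declared on the first partial_fit call.
        learner.partial_fit(x_t, y_t[j:j + 1], classes=np.array([0, 1]))
```

In a real online setting the model would predict each label before seeing the true labels, and the prediction error would drive the update.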

## Statistics and evaluation metrics

The extent to which a dataset is multi-label can be captured in two statistics:

• Label cardinality is the average number of labels per example in the set: ${\displaystyle {\frac {1}{N}}\sum _{i=1}^{N}|Y_{i}|}$ ;
• Label density is the number of labels per sample divided by the total number of labels, averaged over the samples: ${\displaystyle {\frac {1}{N}}\sum _{i=1}^{N}{\frac {|Y_{i}|}{|L|}}}$  where ${\displaystyle L=\bigcup _{i=1}^{N}Y_{i}}$ .
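The two statistics above can be computed directly from a binary label-indicator matrix. The toy matrix below is illustrative, and all labels in L happen to occur, so dividing by the number of columns matches the definition of L as the union of observed label sets:

```python
# Label cardinality and label density for a toy label-indicator matrix:
# row i encodes the label set Y_i over |L| = 3 labels.
import numpy as np

Y = np.array([
    [1, 0, 0],
    [1, 1, 0],
    [0, 1, 1],
])  # 3 samples, |L| = 3

label_cardinality = Y.sum(axis=1).mean()               # mean of |Y_i|
label_density = (Y.sum(axis=1) / Y.shape[1]).mean()    # mean of |Y_i| / |L|

print(label_cardinality, label_density)
```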

Evaluation metrics for multi-label classification performance differ from those used in multi-class (or binary) classification, because each prediction is a set of labels rather than a single label. If T denotes the true set of labels for a given sample, and P the predicted set of labels, then the following metrics can be defined on that sample:

• Hamming loss: the fraction of wrong labels to the total number of labels, averaged over all samples, i.e. ${\displaystyle {\frac {1}{|N|\cdot |L|}}\sum _{i=1}^{|N|}\sum _{j=1}^{|L|}\operatorname {xor} (y_{i,j},z_{i,j})}$ , where ${\displaystyle y_{i,j}}$  is the target and ${\displaystyle z_{i,j}}$  is the prediction. This is a loss function, so the optimal value is zero.
• The closely related Jaccard index, also called intersection over union in the multi-label setting, is the number of correctly predicted labels divided by the number of labels in the union of the predicted and true label sets: ${\displaystyle {\frac {|T\cap P|}{|T\cup P|}}}$ .
• Precision, recall and ${\displaystyle F_{1}}$  score: precision is ${\displaystyle {\frac {|T\cap P|}{|P|}}}$ , recall is ${\displaystyle {\frac {|T\cap P|}{|T|}}}$ , and ${\displaystyle F_{1}}$  is their harmonic mean.[16]
• Exact match (also called subset accuracy): the strictest metric, indicating the percentage of samples whose labels are all classified correctly.
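For a single sample, all four metrics reduce to set operations on T and P. The label sets below are invented for illustration:

```python
# Per-sample multi-label metrics computed with set operations.
T = {"sports", "news"}                          # true label set
P = {"sports", "weather"}                       # predicted label set
L = {"sports", "news", "weather", "finance"}    # full label set

# Hamming loss: labels on which T and P disagree (symmetric difference),
# divided by the total number of labels.
hamming_loss = len(T ^ P) / len(L)
jaccard = len(T & P) / len(T | P)
precision = len(T & P) / len(P)
recall = len(T & P) / len(T)
f1 = 2 * precision * recall / (precision + recall)
exact_match = 1 if T == P else 0

print(hamming_loss, jaccard, precision, recall, f1, exact_match)
```

Dataset-level scores are obtained by averaging these quantities over all samples, as in the Hamming loss formula above.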

Cross-validation in multi-label settings is complicated by the fact that the ordinary (binary/multiclass) way of stratified sampling will not work; alternative ways of approximate stratified sampling have been suggested.[17]

## Implementations and datasets

Java implementations of multi-label algorithms are available in the Mulan and Meka software packages, both based on Weka.

The scikit-learn Python package implements several multi-label algorithms and metrics.

The binary relevance method, classifier chains, and other multi-label algorithms with many different base learners are implemented in the R package mlr.

A list of commonly used multi-label data-sets is available at the Mulan website.

## References

1. ^ a b Read, Jesse; Pfahringer, Bernhard; Holmes, Geoff; Frank, Eibe (2011). "Classifier Chains for Multi-label Classification". Machine Learning. Springer. 85 (3).
2. ^ a b Read, Jesse; Martino, Luca; Luengo, David (2014-03-01). "Efficient monte carlo methods for multi-dimensional learning with classifier chains". Pattern Recognition. Handwriting Recognition and other PR Applications. 47 (3): 1535–1546. doi:10.1016/j.patcog.2013.10.006.
3. ^ Tsoumakas, Grigorios; Vlahavas, Ioannis (2007). Random k-labelsets: An ensemble method for multilabel classification (PDF). ECML.
4. ^ Read, Jesse; Martino, Luca; Olmos, Pablo M.; Luengo, David (2015-06-01). "Scalable multi-output label prediction: From classifier chains to classifier trellises". Pattern Recognition. 48 (6): 2096–2109. doi:10.1016/j.patcog.2015.01.004.
5. ^ Heider, D; Senge, R; Cheng, W; Hüllermeier, E (2013). "Multilabel classification for exploiting cross-resistance information in HIV-1 drug resistance prediction". Bioinformatics. 29 (16): 1946–52. doi:10.1093/bioinformatics/btt331. PMID 23793752.
6. ^ Riemenschneider, M; Senge, R; Neumann, U; Hüllermeier, E; Heider, D (2016). "Exploiting HIV-1 protease and reverse transcriptase cross-resistance information for improved drug resistance prediction by means of multi-label classification". BioData Mining. 9: 10. doi:10.1186/s13040-016-0089-1. PMID 26933450.
7. ^ Soufan, Othman; Ba-Alawi, Wail; Afeef, Moataz; Essack, Magbubah; Kalnis, Panos; Bajic, Vladimir B. (2016-11-10). "DRABAL: novel method to mine large high-throughput screening assays using Bayesian active learning". Journal of Cheminformatics. 8: 64. doi:10.1186/s13321-016-0177-8. ISSN 1758-2946.
8. ^ Zhang, M.L.; Zhou, Z.H. (2007). "ML-KNN: A lazy learning approach to multi-label learning". Pattern Recognition. 40 (7): 2038–2048. doi:10.1016/j.patcog.2006.12.019.
9. ^ Madjarov, Gjorgji; Kocev, Dragi; Gjorgjevikj, Dejan; Džeroski, Sašo (2012). "An extensive experimental comparison of methods for multi-label learning". Pattern Recognition. 45 (9): 3084–3104. doi:10.1016/j.patcog.2012.03.004.
10. ^ Chen, Yen-Liang; Hsu, Chang-Ling; Chou, Shih-chieh (2003). "Constructing a multi-valued and multi-labeled decision tree". Expert Systems with Applications. 25 (2): 199–209. doi:10.1016/S0957-4174(03)00047-2.
11. ^ Chou, Shihchieh; Hsu, Chang-Ling (2005-05-01). "MMDT: a multi-valued and multi-labeled decision tree classifier for data mining". Expert Systems with Applications. 28 (4): 799–812. doi:10.1016/j.eswa.2004.12.035.
12. ^ Li, Hong; Guo, Yue-jian; Wu, Min; Li, Ping; Xiang, Yao (2010-12-01). "Combine multi-valued attribute decomposition with multi-label learning". Expert Systems with Applications. 37 (12): 8721–8728. doi:10.1016/j.eswa.2010.06.044.
13. ^ Zhang, M.L.; Zhou, Z.H. (2006). Multi-label neural networks with applications to functional genomics and text categorization (PDF). IEEE Transactions on Knowledge and Data Engineering. 18. pp. 1338–1351.
14. ^ Dave, Mihika; Tapiawala, Sahil; Meng Joo, Er; Venkatesan, Rajasekar (2016). "A Novel Progressive Multi-label Classifier for Class-incremental Data". arXiv preprint (cs.LG).
15. ^ Venkatesan, Rajasekar. "Progressive Learning Technique for Multi-label Classification".
16. ^ Godbole, Shantanu; Sarawagi, Sunita (2004). Discriminative methods for multi-labeled classification (PDF). Advances in Knowledge Discovery and Data Mining. pp. 22–30.
17. ^ Sechidis, Konstantinos; Tsoumakas, Grigorios; Vlahavas, Ioannis (2011). On the stratification of multi-label data (PDF). ECML PKDD. pp. 145–158.