Leakage (machine learning)

In statistics and machine learning, leakage (also known as data leakage or target leakage) is the use of information in the model training process which would not be expected to be available at prediction time, causing the predictive scores (metrics) to overestimate the model's utility when run in a production environment.[1]

Leakage is often subtle and indirect, making it hard to detect and eliminate. Leakage can cause a statistician or modeler to select a suboptimal model, which could be outperformed by a leakage-free model.[1]

Leakage modes

Leakage can occur at many steps in the machine learning process. Its causes can be classified into two sources of leakage for a model: features and training examples.[1]

Feature leakage

Feature or column-wise leakage is caused by the inclusion of columns that are one of the following: a duplicate of the label, a proxy for the label, or the label itself. These features, known as anachronisms, will not be available when the model is used for predictions, and result in leakage if included when the model is trained.[2]

Examples include using a "MonthlySalary" column when predicting "YearlySalary", "MinutesLate" when predicting "IsLate", or, more subtly, "NumOfLatePayments" when predicting "ShouldGiveLoan".
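
A minimal sketch of the effect on synthetic data, using scikit-learn (the lateness scenario and all variable names are illustrative, not from any real dataset): including the proxy feature "minutes_late" yields a near-perfect validation score that cannot be reproduced in production, where that value is unknown at prediction time.

  # Compare validation accuracy with and without a proxy for the label.
  import numpy as np
  from sklearn.ensemble import RandomForestClassifier
  from sklearn.model_selection import train_test_split

  rng = np.random.default_rng(0)
  n = 1000
  distance_km = rng.uniform(1, 50, n)            # legitimate predictor
  traffic = rng.uniform(0, 1, n)                 # legitimate predictor
  minutes_late = 5 * traffic + 0.2 * distance_km + rng.normal(0, 2, n)
  is_late = (minutes_late > 8).astype(int)       # label is derived from minutes_late

  X_leaky = np.column_stack([distance_km, traffic, minutes_late])
  X_clean = np.column_stack([distance_km, traffic])

  for name, X in [("with proxy feature", X_leaky), ("without", X_clean)]:
      X_tr, X_te, y_tr, y_te = train_test_split(X, is_late, random_state=0)
      model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
      print(name, round(model.score(X_te, y_te), 3))   # leaky model scores ~1.0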

Training example leakage

Row-wise leakage is caused by improper sharing of information between rows of data. Types of row-wise leakage include:

  • Premature featurization: fitting featurization steps such as scalers (e.g. MinMax) or n-gram vocabularies on the full dataset before the CV/train/test split; such transformers must be fitted on the training split only and then used to transform the test split (see the first sketch after this list)
  • Duplicate rows between train/validation/test (e.g. oversampling a dataset to pad its size before splitting; different rotations/augmentations of a single image; bootstrap sampling before splitting; or duplicating rows to upsample the minority class)
  • Non-i.i.d. data
    • Time leakage (e.g. splitting a time-series dataset randomly instead of placing newer data in the test set via a chronological train/test split or rolling-origin cross-validation; see the chronological-split sketch below)
    • Group leakage: not splitting on a grouping column (e.g. Andrew Ng's group had ~100,000 chest x-rays of ~30,000 patients, about three images per patient. The paper used random splitting instead of ensuring that all images of a patient were in the same split, so the model partially memorized the patients instead of learning to recognize pneumonia; the revised paper's scores dropped accordingly.[3][4] A group-aware split sketch follows below.)
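
A minimal sketch of the correct order of operations for featurization, using scikit-learn on synthetic data: the scaler is fitted on the training split only and merely applied to the test split.

  # Fit preprocessing on the training split only, then transform the test split.
  import numpy as np
  from sklearn.model_selection import train_test_split
  from sklearn.preprocessing import MinMaxScaler

  X = np.random.default_rng(0).normal(size=(100, 3))   # synthetic feature matrix
  X_train, X_test = train_test_split(X, random_state=0)

  # Leaky version: MinMaxScaler().fit(X) would use test-set statistics.
  scaler = MinMaxScaler().fit(X_train)      # statistics come from training data only
  X_train_scaled = scaler.transform(X_train)
  X_test_scaled = scaler.transform(X_test)  # test set is only transformed, never fitted

In scikit-learn, wrapping such transformers together with the estimator in a Pipeline enforces this order automatically, including inside cross-validation.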

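A minimal sketch of a group-aware split on synthetic data, using scikit-learn's GroupShuffleSplit so that all rows (here, images) belonging to one patient land in the same split:

  # Split by patient so that no patient appears in both train and test.
  import numpy as np
  from sklearn.model_selection import GroupShuffleSplit

  patient_id = np.array([0, 0, 0, 1, 1, 2, 2, 2, 3, 3, 4, 4])   # ~2-3 images each
  X = np.random.default_rng(0).normal(size=(len(patient_id), 5))  # synthetic features

  splitter = GroupShuffleSplit(n_splits=1, test_size=0.25, random_state=0)
  train_idx, test_idx = next(splitter.split(X, groups=patient_id))
  assert set(patient_id[train_idx]).isdisjoint(patient_id[test_idx])
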
For time-dependent datasets, the structure of the system being studied evolves over time (i.e. it is "non-stationary"). This can introduce systematic differences between the training and validation sets. For example, if a model for predicting stock values is trained on data for a certain five-year period, it is unrealistic to treat the subsequent five-year period as a draw from the same population. As another example, suppose a model is developed to predict an individual's risk of being diagnosed with a particular disease within the next year; if diagnostic practices or the underlying population change over time, training on historical records amounts to drawing the training and deployment data from different populations.
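
For such data, a chronological split keeps every evaluation fold strictly newer than the data used to fit the model. A minimal sketch with scikit-learn's TimeSeriesSplit on synthetic, time-ordered rows:

  # Rolling-origin evaluation: each test fold lies after its training fold.
  import numpy as np
  from sklearn.model_selection import TimeSeriesSplit

  X = np.arange(10).reshape(-1, 1)   # rows already sorted by time
  for train_idx, test_idx in TimeSeriesSplit(n_splits=3).split(X):
      print("train:", train_idx, "test:", test_idx)   # test indices are always later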

Detection

See also

References

  1. ^ a b c Shachar Kaufman; Saharon Rosset; Claudia Perlich (January 2011). "Leakage in Data Mining: Formulation, Detection, and Avoidance". Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining: 556–563. doi:10.1145/2020408.2020496. Retrieved 13 January 2020.
  2. ^ Soumen Chakrabarti (2008). "Chapter 9". Data Mining: Know It All. Morgan Kaufmann Publishers. p. 383. ISBN 978-0-12-374629-0. Anachronistic variables are a pernicious mining problem. However, they aren’t any problem at all at deployment time—unless someone expects the model to work! Anachronistic variables are out of place in time. Specifically, at data modeling time, they carry information back from the future to the past.
  3. ^ Guts, Yuriy (30 October 2018). Target Leakage in Machine Learning (Talk). AI Ukraine Conference. Ukraine. Lay summary (PDF).
  4. ^ Roberts, Nick (16 November 2017). "Replying to @AndrewYNg @pranavrajpurkar and 2 others". Twitter. Archived from the original on 10 June 2018. Retrieved 13 January 2020. Were you concerned that the network could memorize patient anatomy since patients cross train and validation? "ChestX-ray14 dataset contains 112,120 frontal-view X-ray images of 30,805 unique patients. We randomly split the entire dataset into 80% training, and 20% validation."