Mean absolute percentage error

The mean absolute percentage error (MAPE), also known as mean absolute percentage deviation (MAPD), is a measure of prediction accuracy of a forecasting method in statistics. It usually expresses the accuracy as a ratio defined by the formula:

$$\text{MAPE} = \frac{1}{n}\sum_{t=1}^{n}\left|\frac{A_t - F_t}{A_t}\right|$$

where $A_t$ is the actual value and $F_t$ is the forecast value. Their difference is divided by the actual value $A_t$. The absolute value of this ratio is summed for every forecasted point in time and divided by the number of fitted points $n$. The result is often multiplied by 100 to express the error as a percentage.
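For illustration, a minimal sketch of this formula in Python with NumPy; the function and variable names here are only illustrative, not part of any standard library:

```python
import numpy as np

def mape(actual, forecast):
    """Mean absolute percentage error of `forecast` against `actual`.

    Assumes no actual value is zero (see the Issues section below).
    """
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    return np.mean(np.abs((actual - forecast) / actual))

# Example: a forecast that is off by 10%, 20% and 5% on three points.
actual = np.array([100.0, 50.0, 200.0])
forecast = np.array([110.0, 40.0, 190.0])
print(mape(actual, forecast))  # 0.1166..., i.e. about 11.7% after multiplying by 100
```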

MAPE in regression problems

Mean absolute percentage error is commonly used as a loss function for regression problems and in model evaluation, because of its very intuitive interpretation in terms of relative error.

Definition

Consider a standard regression setting in which the data are fully described by a random pair $Z = (X, Y)$ with values in $\mathbb{R}^p \times \mathbb{R}$, and $n$ i.i.d. copies $(X_1, Y_1), \ldots, (X_n, Y_n)$ of $Z$. Regression models aim at finding a good model for the pair, that is a measurable function $g$ from $\mathbb{R}^p$ to $\mathbb{R}$ such that $g(X)$ is close to $Y$.

In the classical regression setting, the closeness of $g(X)$ to $Y$ is measured via the $L^2$ risk, also called the mean squared error (MSE). In the MAPE regression context,[1] the closeness of $g(X)$ to $Y$ is measured via the MAPE, and the aim of MAPE regressions is to find a model $g_{\text{MAPE}}$ such that:

$$g_{\text{MAPE}} \in \arg\min_{g \in \mathcal{G}} \mathbb{E}\left[\left|\frac{g(X) - Y}{Y}\right|\right]$$

where $\mathcal{G}$ is the class of models considered (e.g. linear models).

In practice

In practice, $g_{\text{MAPE}}$ can be estimated by the empirical risk minimization strategy, leading to

$$\widehat{g}_{\text{MAPE}} \in \arg\min_{g \in \mathcal{G}} \frac{1}{n} \sum_{i=1}^{n} \left|\frac{g(X_i) - Y_i}{Y_i}\right|$$

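The following is a minimal sketch of this empirical risk minimization for a linear model class, using scipy.optimize.minimize with the Nelder–Mead method; the data, function names and optimizer choice are illustrative assumptions (the empirical MAPE is not differentiable everywhere, and other solvers are equally reasonable):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Synthetic data: y depends linearly on x, with strictly positive targets.
X = rng.uniform(1.0, 10.0, size=(200, 2))
y = 3.0 + X @ np.array([2.0, 0.5]) + rng.normal(0.0, 0.5, size=200)

def empirical_mape(beta, X, y):
    """Empirical MAPE risk of the linear model x -> beta[0] + x @ beta[1:]."""
    pred = beta[0] + X @ beta[1:]
    return np.mean(np.abs((pred - y) / y))

# Minimize the empirical MAPE over the model coefficients.
beta0 = np.zeros(X.shape[1] + 1)
result = minimize(empirical_mape, beta0, args=(X, y), method="Nelder-Mead")
print(result.x)  # fitted intercept and slopes (MAPE-optimal, not least-squares)
```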
From a practical point of view, the use of the MAPE as a quality function for a regression model is equivalent to doing weighted mean absolute error (MAE) regression, also known as quantile regression. This property is trivial since

$$\left|\frac{g(X_i) - Y_i}{Y_i}\right| = \frac{1}{|Y_i|}\left|g(X_i) - Y_i\right|$$

As a consequence, the use of the MAPE is very easy in practice, for example using existing libraries for quantile regression that allow weights.
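As a sketch of this equivalence, assuming a scikit-learn version (1.0 or later) whose QuantileRegressor accepts sample_weight, median regression weighted by $1/|Y_i|$ minimizes the same empirical MAPE criterion as above:

```python
import numpy as np
from sklearn.linear_model import QuantileRegressor

rng = np.random.default_rng(0)
X = rng.uniform(1.0, 10.0, size=(200, 2))
y = 3.0 + X @ np.array([2.0, 0.5]) + rng.normal(0.0, 0.5, size=200)

# Median (0.5-quantile) regression weighted by 1/|y| is weighted MAE regression,
# and therefore minimizes the empirical MAPE over linear models.
model = QuantileRegressor(quantile=0.5, alpha=0.0, solver="highs")
model.fit(X, y, sample_weight=1.0 / np.abs(y))
print(model.intercept_, model.coef_)
```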

Consistency

The use of the MAPE as a loss function for regression analysis is feasible both from a practical and a theoretical point of view, since the existence of an optimal model and the consistency of empirical risk minimization can be proved.[1]

WMAPE

WMAPE (sometimes spelled wMAPE) stands for weighted mean absolute percentage error.[2] It is a measure used to evaluate the performance of regression or forecasting models. It is a variant of MAPE in which the absolute percent errors are combined as a weighted arithmetic mean. Most commonly the absolute percent errors are weighted by the actual values (e.g. in the case of sales forecasting, errors are weighted by sales volume).[3] Effectively, this overcomes the 'infinite error' issue.[4] Its formula is:[4]

$$\text{wMAPE} = \frac{\sum_{i=1}^{n} w_i \left|\frac{A_i - F_i}{A_i}\right|}{\sum_{i=1}^{n} w_i}$$

where $w_i$ is the weight, $A_i$ is the actual value and $F_i$ is the forecast or prediction. When the weights are taken to be the actual values themselves, $w_i = |A_i|$, this reduces to a much simpler formula:

$$\text{wMAPE} = \frac{\sum_{i=1}^{n} |A_i - F_i|}{\sum_{i=1}^{n} |A_i|}$$
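A minimal NumPy sketch of both forms, the general weighted mean and the actuals-weighted simplification; the function names are illustrative only:

```python
import numpy as np

def wmape_general(actual, forecast, weights):
    """Weighted arithmetic mean of the absolute percent errors."""
    actual, forecast, weights = map(np.asarray, (actual, forecast, weights))
    ape = np.abs((actual - forecast) / actual)
    return np.sum(weights * ape) / np.sum(weights)

def wmape(actual, forecast):
    """Simplified wMAPE obtained by weighting each error by |actual|."""
    actual, forecast = np.asarray(actual), np.asarray(forecast)
    return np.sum(np.abs(actual - forecast)) / np.sum(np.abs(actual))

actual = np.array([100.0, 50.0, 200.0])
forecast = np.array([110.0, 40.0, 190.0])
# Weighting by the actual values reproduces the simplified formula.
print(wmape_general(actual, forecast, np.abs(actual)))  # 0.0857...
print(wmape(actual, forecast))                          # 0.0857...
```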

Confusingly, sometimes when people refer to wMAPE they are talking about a different measure in which the numerator and denominator of the wMAPE formula above are each weighted again by another set of custom weights $w_i$. It would perhaps be more accurate to call this the double-weighted MAPE (wwMAPE). Its formula is:

$$\text{wwMAPE} = \frac{\sum_{i=1}^{n} w_i |A_i - F_i|}{\sum_{i=1}^{n} w_i |A_i|}$$
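A short sketch of this double-weighted variant, in the same style as the functions above (the name wwmape is only illustrative):

```python
import numpy as np

def wwmape(actual, forecast, weights):
    """Double-weighted MAPE: |A - F| and |A| are each scaled by custom weights."""
    actual, forecast, weights = map(np.asarray, (actual, forecast, weights))
    return np.sum(weights * np.abs(actual - forecast)) / np.sum(weights * np.abs(actual))

actual = np.array([100.0, 50.0, 200.0])
forecast = np.array([110.0, 40.0, 190.0])
weights = np.array([1.0, 2.0, 1.0])  # e.g. emphasize the second item
print(wwmape(actual, forecast, weights))  # 0.1
```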

Issues

Although the concept of MAPE sounds very simple and convincing, it has major drawbacks in practical application,[5] and there are many studies on shortcomings and misleading results from MAPE.[6][7]

  • It cannot be used if there are zero or close-to-zero values (which sometimes happens, for example in demand data) because there would be a division by zero or values of MAPE tending to infinity.[8]
  • For forecasts which are too low the percentage error cannot exceed 100%, but for forecasts which are too high there is no upper limit to the percentage error.
  • MAPE puts a heavier penalty on negative errors ($A_t < F_t$) than on positive errors.[9] As a consequence, when MAPE is used to compare the accuracy of prediction methods, it is biased in that it will systematically select a method whose forecasts are too low. This little-known but serious issue can be overcome by using an accuracy measure based on the logarithm of the accuracy ratio (the ratio of the predicted to actual value), given by $\log\left(\frac{F_t}{A_t}\right)$. This approach leads to superior statistical properties and to predictions which can be interpreted in terms of the geometric mean.[5]
  • People often think the MAPE will be optimized at the median. But, for example, a log-normal distribution has median $e^{\mu}$, whereas its MAPE is optimized at $e^{\mu - \sigma^2}$, as illustrated in the sketch below.
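A small simulation sketch of this last point (the parameter values and variable names are arbitrary): for log-normal data, the constant prediction minimizing the empirical MAPE sits near $e^{\mu-\sigma^2}$ rather than at the median $e^{\mu}$.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 0.0, 1.0
y = rng.lognormal(mean=mu, sigma=sigma, size=200_000)

# Evaluate the empirical MAPE of constant predictions g over a grid.
grid = np.linspace(0.05, 3.0, 600)
mape_values = [np.mean(np.abs(y - g) / y) for g in grid]
best = grid[int(np.argmin(mape_values))]

print("MAPE-optimal constant:", best)               # close to exp(mu - sigma^2) ~ 0.368
print("exp(mu - sigma^2):    ", np.exp(mu - sigma**2))
print("median exp(mu):       ", np.exp(mu))         # 1.0, clearly not MAPE-optimal
```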

To overcome these issues with MAPE, some other measures have been proposed in the literature, such as the mean absolute scaled error (MASE),[6] the symmetric mean absolute percentage error (sMAPE), and the mean arctangent absolute percentage error (MAAPE).[7]

References

  1. ^ a b de Myttenaere, A.; Golden, B.; Le Grand, B.; Rossi, F. (2015). "Mean absolute percentage error for regression models". Neurocomputing (2016). arXiv:1605.02541
  2. ^ "Understanding Forecast Accuracy: MAPE, WAPE, WMAPE". Baeldung. https://www.baeldung.com/cs/mape-vs-wape-vs-wmape
  3. ^ "WMAPE: Weighted Mean Absolute Percentage Error". https://ibf.org/knowledge/glossary/weighted-mean-absolute-percentage-error-wmape-299
  4. ^ a b "Statistical Forecast Errors".
  5. ^ a b Tofallis, C. (2015). "A Better Measure of Relative Prediction Accuracy for Model Selection and Model Estimation". Journal of the Operational Research Society. 66(8): 1352–1362.
  6. ^ Hyndman, Rob J.; Koehler, Anne B. (2006). "Another look at measures of forecast accuracy". International Journal of Forecasting. 22(4): 679–688. doi:10.1016/j.ijforecast.2006.03.001
  7. ^ a b Kim, Sungil; Kim, Heeyoung (2016). "A new metric of absolute percentage error for intermittent demand forecasts". International Journal of Forecasting. 32(3): 669–679. doi:10.1016/j.ijforecast.2015.12.003
  8. ^ Kim, Sungil; Kim, Heeyoung (1 July 2016). "A new metric of absolute percentage error for intermittent demand forecasts". International Journal of Forecasting. 32 (3): 669–679. doi:10.1016/j.ijforecast.2015.12.003.
  9. ^ Makridakis, Spyros (1993). "Accuracy measures: theoretical and practical concerns". International Journal of Forecasting. 9(4): 527–529. doi:10.1016/0169-2070(93)90079-3