
In statistics, a likelihood ratio test (LR test) is a statistical test used for comparing the goodness of fit of two statistical models — a null model against an alternative model. The test is based on the likelihood ratio, which expresses how many times more likely the data are under one model than the other. This likelihood ratio, or equivalently its logarithm, can then be used to compute a p-value, or compared to a critical value to decide whether or not to reject the null model.

When the logarithm of the likelihood ratio is used, the statistic is known as a log-likelihood ratio statistic, and the probability distribution of this test statistic, assuming that the null model is true, can be approximated using Wilks' theorem.

In the case of distinguishing between two models, each of which has no unknown parameters, use of the likelihood ratio test can be justified by the Neyman–Pearson lemma, which demonstrates that such a test has the highest power among all competitors.[1]


Definition

Simple hypotheses

A statistical model is often a parametrized family of probability density functions or probability mass functions $f(x \mid \theta)$. A simple-vs.-simple hypothesis test has completely specified models under both the null and alternative hypotheses, which for convenience are written in terms of fixed values of a notional parameter $\theta$:

$$H_0 : \theta = \theta_0, \qquad H_1 : \theta = \theta_1.$$

Note that in this special case, under either hypothesis, the distribution of the data is fully specified; there are no unknown parameters to estimate. The likelihood-ratio test is based on the likelihood ratio, which is often denoted by $\Lambda$ (the capital Greek letter lambda). The likelihood ratio is defined either as[2][3]

$$\Lambda(x) = \frac{\mathcal{L}(\theta_0 \mid x)}{\mathcal{L}(\theta_1 \mid x)}$$

or as

$$\Lambda(x) = \frac{\mathcal{L}(\theta_0 \mid x)}{\sup_{\theta \in \{\theta_0, \theta_1\}} \mathcal{L}(\theta \mid x)}$$

where $\mathcal{L}(\theta \mid x)$ is the likelihood function, and $\sup$ denotes the supremum.

These two definitions are not the same function, but they are monotone functions of each other, and so equivalent for present purposes. Some references use the reciprocal of the first definition above.[4] In the form stated here, the likelihood ratio is small if the alternative model fits the data better than the null model.

The likelihood ratio test provides the decision rule as follows:

If $\Lambda > c$, do not reject $H_0$;
If $\Lambda < c$, reject $H_0$;
If $\Lambda = c$, reject $H_0$ with probability $q$.

The values $c$ and $q$ are usually chosen to obtain a specified significance level $\alpha$, via the relation

$$q \cdot P(\Lambda = c \mid H_0) + P(\Lambda < c \mid H_0) = \alpha.$$

The Neyman–Pearson lemma states that this likelihood-ratio test is the most powerful among all level $\alpha$ tests for this problem.[1]
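As a minimal sketch of the simple-vs.-simple case, the ratio can be computed directly when both models are fully specified. The choice of normal models, the data, and the threshold `c` below are illustrative assumptions, not taken from the source; in practice `c` is derived from the desired significance level rather than set by hand.

```python
import math

def normal_pdf(x, mu, sigma=1.0):
    """Density of a Normal(mu, sigma^2) distribution at x."""
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))

def likelihood_ratio(data, theta0, theta1):
    """Lambda(x) = L(theta0 | x) / L(theta1 | x) for i.i.d. observations."""
    l0 = math.prod(normal_pdf(x, theta0) for x in data)
    l1 = math.prod(normal_pdf(x, theta1) for x in data)
    return l0 / l1

# Illustrative data and threshold c (hypothetical values).
data = [0.2, -0.1, 0.4, 1.3, 0.8]
lam = likelihood_ratio(data, theta0=0.0, theta1=1.0)
reject_null = lam < 0.5  # small Lambda favors the alternative
```

Data drawn near $\theta_1$ pull the ratio below 1, while data near $\theta_0$ push it above 1, matching the decision rule stated above.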

Composite hypotheses

A null hypothesis is often stated by saying the parameter $\theta$ is in a specified subset $\Theta_0$ of the parameter space $\Theta$. The alternative hypothesis is thus that $\theta$ is in the complement of $\Theta_0$, i.e. in $\Theta \setminus \Theta_0$, which is denoted by $\Theta_0^{\mathsf{c}}$:

$$H_0 : \theta \in \Theta_0, \qquad H_1 : \theta \in \Theta_0^{\mathsf{c}}.$$

The likelihood function is $\mathcal{L}(\theta \mid x) = f(x \mid \theta)$ (the probability density function or probability mass function), viewed as a function of the parameter $\theta$ with $x$ held fixed at the value that was actually observed, i.e. the data. The likelihood-ratio test statistic is[5]

$$\lambda_{\text{LR}} = -2 \ln \left[ \frac{\sup_{\theta \in \Theta_0} \mathcal{L}(\theta \mid x)}{\sup_{\theta \in \Theta} \mathcal{L}(\theta \mid x)} \right].$$

Here, the $\sup$ notation refers to the supremum.

A likelihood-ratio test is any test with critical region (or rejection region) of the form $\{x \mid \Lambda(x) \le c\}$, where $c$ is any number satisfying $0 \le c \le 1$. Many common test statistics, such as the Z-test, the F-test, Pearson's chi-squared test and the G-test, are tests for nested models and can be phrased as log-likelihood ratios or approximations thereof.
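The statistic above requires maximizing the likelihood over $\Theta_0$ and over all of $\Theta$. The following sketch (an illustrative assumption, not from the source) does both maximizations in closed form for $N(\mu, 1)$ data with $H_0 : \mu = \mu_0$, where the unrestricted maximum-likelihood estimate of $\mu$ is the sample mean:

```python
def lr_statistic_normal_mean(data, mu0=0.0):
    """lambda_LR = -2 ln( sup_{Theta0} L / sup_Theta L ) for i.i.d. N(mu, 1)
    data, testing H0: mu = mu0 against an unrestricted mean."""
    n = len(data)
    mu_hat = sum(data) / n  # unrestricted MLE of mu
    # Log-likelihoods up to a shared additive constant, which cancels
    # in the difference below.
    ll_null = -0.5 * sum((x - mu0) ** 2 for x in data)
    ll_alt = -0.5 * sum((x - mu_hat) ** 2 for x in data)
    return -2.0 * (ll_null - ll_alt)
```

For this particular model the statistic reduces algebraically to $n(\bar{x} - \mu_0)^2$, so its $\chi^2_1$ null distribution is exact rather than merely asymptotic.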

Interpretation

Being a function of the data $x$, the likelihood ratio is a statistic. The likelihood-ratio test rejects the null hypothesis if the value of this statistic is too small. How small is too small depends on the significance level of the test, i.e. on what probability of Type I error is considered tolerable (Type I errors consist of rejecting a null hypothesis that is true).

The numerator corresponds to the likelihood of the observed outcome under the null hypothesis. The denominator corresponds to the maximum likelihood of the observed outcome, with the parameters varying over the whole parameter space. Since the numerator can be no greater than the denominator, the likelihood ratio lies between 0 and 1. Low values of the likelihood ratio mean that the observed result was much less likely to occur under the null hypothesis than under the alternative. High values of the statistic mean that the observed outcome was nearly as likely to occur under the null hypothesis as under the alternative, so the null hypothesis cannot be rejected.

The likelihood-ratio test requires nested models – models in which the more complex one can be transformed into the simpler model by imposing a set of constraints on the parameters. If the models are not nested, then a generalization of the likelihood-ratio test can usually be used instead: the relative likelihood.

Asymptotic distribution: Wilks' theorem

If the distribution of the likelihood ratio corresponding to a particular null and alternative hypothesis can be explicitly determined, then it can directly be used to form decision regions (to accept or reject the null hypothesis). In most cases, however, the exact distribution of the likelihood ratio corresponding to specific hypotheses is very difficult to determine.

A convenient result by Samuel S. Wilks says that, as the sample size $n$ approaches $\infty$, the test statistic $\lambda_{\text{LR}}$ for a nested model will asymptotically be chi-squared distributed ($\chi^2$) with degrees of freedom equal to the difference in dimensionality of $\Theta$ and $\Theta_0$, when $H_0$ holds true.[6] This means that for a great variety of hypotheses, a practitioner can compute the likelihood ratio $\lambda_{\text{LR}}$ for the data and compare $\lambda_{\text{LR}}$ to the $\chi^2$ value corresponding to a desired statistical significance as an approximate statistical test. Other extensions exist.
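Wilks' approximation can be sketched as follows. For one degree of freedom the $\chi^2$ survival function has the closed form $P(\chi^2_1 > x) = \operatorname{erfc}(\sqrt{x/2})$, so no statistics library is needed; the restriction to one degree of freedom is this sketch's simplifying assumption.

```python
import math

def wilks_p_value_1df(lr_stat):
    """Approximate p-value via Wilks' theorem with 1 degree of freedom:
    under H0, lambda_LR is asymptotically chi-squared(1), and
    P(chi2_1 > x) = erfc(sqrt(x / 2))."""
    return math.erfc(math.sqrt(lr_stat / 2.0))

# Reject at level alpha = 0.05 when the statistic exceeds roughly 3.841,
# the 95th percentile of the chi-squared(1) distribution.
p = wilks_p_value_1df(3.841)
```

A statistic of 0 gives a p-value of 1 (the null fits as well as the alternative), and larger statistics give smaller p-values, mirroring the interpretation of the ratio given earlier.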



References

  • Casella, George; Berger, Roger L. (2001). Statistical Inference (Second ed.). ISBN 0-534-24312-6.
  • Cox, D. R.; Hinkley, D. V. (1974). Theoretical Statistics. Chapman and Hall. ISBN 0-412-12420-3.
  • Huelsenbeck, J. P.; Crandall, K. A. (1997). "Phylogeny Estimation and Hypothesis Testing Using Maximum Likelihood". Annual Review of Ecology and Systematics. 28: 437–466. doi:10.1146/annurev.ecolsys.28.1.437.
  • Mood, A.M.; Graybill, F.A. (1963). Introduction to the Theory of Statistics (2nd ed.). McGraw-Hill. ISBN 978-0070428638.
  • Neyman, Jerzy; Pearson, Egon S. (1933). "On the Problem of the Most Efficient Tests of Statistical Hypotheses" (PDF). Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences. 231 (694–706): 289–337. Bibcode:1933RSPTA.231..289N. doi:10.1098/rsta.1933.0009. JSTOR 91247.
  • Pinheiro, José C.; Bates, Douglas M. (2000), Mixed-Effects Models in S and S-PLUS, Springer-Verlag, pp. 82–93, ISBN 0-387-98957-9
  • Stuart, A.; Ord, K.; Arnold, S. (1999). Kendall's Advanced Theory of Statistics. Vol. 2A. Arnold.
  • Wilks, S. S. (1938). "The Large-Sample Distribution of the Likelihood Ratio for Testing Composite Hypotheses". The Annals of Mathematical Statistics. 9: 60–62. doi:10.1214/aoms/1177732360.
