# Parametric model

In statistics, a parametric model, parametric family, or finite-dimensional model is a particular class of statistical models. Specifically, a parametric model is a family of probability distributions that can be described using a finite number of parameters.

## Definition

A statistical model is a collection of probability distributions on some sample space. We assume that the collection, 𝒫, is indexed by some set Θ. The set Θ is called the parameter set or, more commonly, the parameter space. For each θ ∈ Θ, let Pθ denote the corresponding member of the collection; so Pθ is a cumulative distribution function. Then a statistical model can be written as

${\mathcal {P}}={\big \{}P_{\theta }\ {\big |}\ \theta \in \Theta {\big \}}.$

The model is a parametric model if Θ ⊆ ℝᵏ for some positive integer k.

When the model consists of absolutely continuous distributions, it is often specified in terms of corresponding probability density functions:

${\mathcal {P}}={\big \{}f_{\theta }\ {\big |}\ \theta \in \Theta {\big \}}.$
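As a concrete rendering of this definition (an illustrative sketch, not part of the original article), a parametric family can be coded as a map sending each parameter θ in the parameter space to its density f_θ. Here the normal family with θ = (μ, σ) serves as the example; the function name is made up for illustration:

```python
import math

def normal_pdf_family(theta):
    """Map a parameter theta = (mu, sigma) to the corresponding density f_theta."""
    mu, sigma = theta
    if sigma <= 0:
        # theta must lie in the parameter space Theta = R x (0, infinity)
        raise ValueError("sigma must be positive")

    def f(x):
        return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (math.sqrt(2 * math.pi) * sigma)

    return f

# f_theta for theta = (0, 1) is the standard normal density
f = normal_pdf_family((0.0, 1.0))
```

The family 𝒫 is then the image of the parameter space under this map: one distribution per admissible θ.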

## Examples

• The Poisson family of distributions is parametrized by a single number λ > 0:
${\mathcal {P}}={\Big \{}\ p_{\lambda }(j)={\tfrac {\lambda ^{j}}{j!}}e^{-\lambda },\ j=0,1,2,3,\dots \ {\Big |}\;\;\lambda >0\ {\Big \}},$
where pλ is the probability mass function. This family is an exponential family.
• The normal family is parametrized by θ = (μ, σ), where μ ∈ ℝ is a location parameter and σ > 0 is a scale parameter:
${\mathcal {P}}={\Big \{}\ f_{\theta }(x)={\tfrac {1}{{\sqrt {2\pi }}\sigma }}\exp \left(-{\tfrac {(x-\mu )^{2}}{2\sigma ^{2}}}\right)\ {\Big |}\;\;\mu \in \mathbb {R} ,\sigma >0\ {\Big \}}.$
This parametrized family is both an exponential family and a location-scale family.
• The Weibull translation model has a three-dimensional parameter θ = (λ, β, μ):
${\mathcal {P}}={\Big \{}\ f_{\theta }(x)={\tfrac {\beta }{\lambda }}\left({\tfrac {x-\mu }{\lambda }}\right)^{\beta -1}\!\exp \!{\big (}\!-\!{\big (}{\tfrac {x-\mu }{\lambda }}{\big )}^{\beta }{\big )}\,\mathbf {1} _{\{x>\mu \}}\ {\Big |}\;\;\lambda >0,\,\beta >0,\,\mu \in \mathbb {R} \ {\Big \}}.$
• The binomial model is parametrized by θ = (n, p), where n is a non-negative integer and p is a probability (i.e. 0 ≤ p ≤ 1):
${\mathcal {P}}={\Big \{}\ p_{\theta }(k)={\tfrac {n!}{k!(n-k)!}}\,p^{k}(1-p)^{n-k},\ k=0,1,2,\dots ,n\ {\Big |}\;\;n\in \mathbb {Z} _{\geq 0},\,0\leq p\leq 1\ {\Big \}}.$
This example illustrates the definition for a model with some discrete parameters.
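Each member of these families is a genuine probability distribution, so its mass function sums (or its density integrates) to 1 over the support. A quick numerical check in Python (an illustrative sketch, not from the original; function names are made up):

```python
import math

def poisson_pmf(lam, j):
    """p_lambda(j) for the Poisson family, lambda > 0."""
    return lam ** j / math.factorial(j) * math.exp(-lam)

def binomial_pmf(n, p, k):
    """p_theta(k) for the binomial family, theta = (n, p)."""
    return math.comb(n, k) * p ** k * (1 - p) ** (n - k)

def weibull_pdf(lam, beta, mu, x):
    """f_theta(x) for the Weibull translation model, theta = (lambda, beta, mu)."""
    if x <= mu:
        return 0.0  # the indicator 1_{x > mu} zeroes the density below mu
    z = (x - mu) / lam
    return (beta / lam) * z ** (beta - 1) * math.exp(-z ** beta)

# The Poisson mass function sums to ~1 (truncating the infinite sum),
# and the binomial mass function sums to exactly 1 over k = 0, ..., n.
total_poisson = sum(poisson_pmf(2.5, j) for j in range(100))
total_binomial = sum(binomial_pmf(10, 0.3, k) for k in range(11))
```

Varying the parameter sweeps out the whole family: for instance, `poisson_pmf(lam, j)` for each λ > 0 gives one member p_λ of the Poisson family.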

## General remarks

A parametric model is called identifiable if the mapping θ ↦ Pθ is invertible, i.e. there are no two different parameter values θ₁ and θ₂ such that Pθ₁ = Pθ₂.
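A standard way identifiability fails is an over-parametrized index. As an illustrative example (not from the original article): index the Bernoulli family by θ ∈ [−1, 1] via the success probability p = θ²; then θ and −θ index the same distribution, so the map θ ↦ Pθ is not invertible:

```python
def bernoulli_pmf(theta, k):
    """Bernoulli family indexed non-identifiably: success probability p = theta**2."""
    p = theta ** 2  # theta and -theta give the same p, hence the same distribution
    return p if k == 1 else 1.0 - p

# theta = 0.6 and theta = -0.6 index the *same* member of the family,
# so this parametrization is not identifiable.
same_distribution = all(
    bernoulli_pmf(0.6, k) == bernoulli_pmf(-0.6, k) for k in (0, 1)
)
```

Reparametrizing by p itself (p ∈ [0, 1]) restores identifiability, since distinct values of p give distinct distributions.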

## Comparisons with other classes of models

Parametric models are contrasted with semi-parametric, semi-nonparametric, and non-parametric models, all of which require an infinite set of "parameters" for their description. The distinction between these four classes is as follows:

• in a "parametric" model all the parameters are in finite-dimensional parameter spaces;
• a model is "non-parametric" if all the parameters are in infinite-dimensional parameter spaces;
• a "semi-parametric" model contains finite-dimensional parameters of interest and infinite-dimensional nuisance parameters;
• a "semi-nonparametric" model has both finite-dimensional and infinite-dimensional unknown parameters of interest.

Some statisticians believe that the concepts "parametric", "non-parametric", and "semi-parametric" are ambiguous. It can also be noted that the set of all probability measures has the cardinality of the continuum, and therefore it is possible to parametrize any model at all by a single number in the interval (0, 1). This difficulty can be avoided by considering only "smooth" parametric models.