# Logit-normal distribution

In probability theory, a logit-normal distribution is a probability distribution of a random variable whose logit has a normal distribution. If Y is a random variable with a normal distribution, and P is the standard logistic function, then X = P(Y) has a logit-normal distribution; likewise, if X is logit-normally distributed, then Y = logit(X)= log (X/(1-X)) is normally distributed. It is also known as the logistic normal distribution,[1] which often refers to a multinomial logit version (e.g.[2][3][4][5]).

| Property | Value |
| --- | --- |
| Notation | ${\displaystyle P({\mathcal {N}}(\mu ,\,\sigma ^{2}))}$ |
| Parameters | σ2 > 0 — squared scale (real), μ ∈ R — location |
| Support | x ∈ (0, 1) |
| PDF | ${\displaystyle {\frac {1}{\sigma {\sqrt {2\pi }}}}\,e^{-{\frac {(\operatorname {logit} (x)-\mu )^{2}}{2\sigma ^{2}}}}{\frac {1}{x(1-x)}}}$ |
| CDF | ${\displaystyle {\frac {1}{2}}{\Big [}1+\operatorname {erf} {\Big (}{\frac {\operatorname {logit} (x)-\mu }{\sqrt {2\sigma ^{2}}}}{\Big )}{\Big ]}}$ |
| Mean | no analytical solution |
| Median | ${\displaystyle P(\mu )\,}$ |
| Mode | no analytical solution |
| Variance | no analytical solution |
| MGF | no analytical solution |

A variable might be modeled as logit-normal if it is a proportion, which is bounded by zero and one, and where values of zero and one never occur.

## Characterization

### Probability density function

The probability density function (PDF) of a logit-normal distribution, for 0 < x < 1, is:

${\displaystyle f_{X}(x;\mu ,\sigma )={\frac {1}{\sigma {\sqrt {2\pi }}}}\,{\frac {1}{x(1-x)}}\,e^{-{\frac {(\operatorname {logit} (x)-\mu )^{2}}{2\sigma ^{2}}}}}$

where μ and σ are the mean and standard deviation of the variable’s logit (by definition, the variable’s logit is normally distributed).

The density is symmetric under a sign change of μ, in that f(x; μ, σ) = f(1 − x; −μ, σ), which follows from logit(1 − x) = −logit(x). Changing the sign of μ therefore reflects the density about x = 0.5 (the midpoint of the (0,1) interval), shifting the mode to the other side of 0.5.
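As a sanity check, the density above can be evaluated numerically and verified to integrate to one over (0, 1). A minimal sketch using SciPy; the parameter values μ = 1, σ = 0.5 and the function name are illustrative:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import logit

def logitnormal_pdf(x, mu, sigma):
    """PDF of the logit-normal distribution for 0 < x < 1."""
    return (1.0 / (sigma * np.sqrt(2.0 * np.pi))
            * 1.0 / (x * (1.0 - x))
            * np.exp(-(logit(x) - mu) ** 2 / (2.0 * sigma ** 2)))

# The density should integrate to 1 over (0, 1).
total, _ = quad(logitnormal_pdf, 0.0, 1.0, args=(1.0, 0.5))
```

The same function also illustrates the symmetry noted above: `logitnormal_pdf(x, mu, sigma)` equals `logitnormal_pdf(1 - x, -mu, sigma)`.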

*Figure: Plot of the logit-normal PDF for various combinations of μ (facets) and σ (colors).*

### Moments

The moments of the logit-normal distribution have no analytic solution. They can be estimated by numerical integration; however, numerical integration can be prohibitive when the values of ${\textstyle \mu ,\sigma ^{2}}$ are such that the density function diverges to infinity at the endpoints zero and one. An alternative is to use the observation that the logit-normal is a transformation of a normal random variable, which allows the ${\displaystyle n}$ -th moment to be approximated via the following quasi-Monte Carlo estimate ${\displaystyle E[X^{n}]\approx {\frac {1}{K-1}}\sum _{i=1}^{K-1}\left(P\left(\Phi _{\mu ,\sigma ^{2}}^{-1}(i/K)\right)\right)^{n},}$

where ${\textstyle P}$  is the standard logistic function, and ${\textstyle \Phi _{\mu ,\sigma ^{2}}^{-1}}$  is the inverse cumulative distribution function (quantile function) of a normal distribution with mean ${\textstyle \mu }$ and variance ${\textstyle \sigma ^{2}}$ , evaluated at the equally spaced probabilities ${\textstyle i/K}$ .
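The estimate above translates directly into code. A sketch using SciPy, where the function name and the choice K = 10000 are illustrative (`expit` is SciPy's name for the standard logistic function P, and `norm.ppf` is the normal quantile function):

```python
import numpy as np
from scipy.special import expit  # standard logistic function P
from scipy.stats import norm

def logitnormal_moment(n, mu, sigma, K=10000):
    """Quasi-Monte Carlo estimate of E[X^n] for X ~ logit-normal(mu, sigma^2),
    averaging P(Phi^{-1}(i/K))^n over i = 1, ..., K-1."""
    i = np.arange(1, K)
    y = norm.ppf(i / K, loc=mu, scale=sigma)  # normal quantiles at i/K
    return np.mean(expit(y) ** n)

# For mu = 0, sigma = 1 the distribution is symmetric about 0.5,
# so the first moment should be 0.5.
mean = logitnormal_moment(1, mu=0.0, sigma=1.0)
```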

### Mode or modes

Setting the derivative of the density to zero shows that the location x of a mode satisfies the following equation:

${\displaystyle \operatorname {logit} (x)=\sigma ^{2}(2x-1)+\mu .}$

For some values of the parameters this equation has multiple solutions, two of which are local maxima, i.e. the distribution is bimodal.
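Since this equation has no closed-form solution, modes can be located by numerical root-finding. A sketch using SciPy's `brentq`; the parameters μ = 0, σ² = 4 are illustrative values for which the distribution is bimodal, with x = 0.5 a local minimum between the two modes:

```python
from scipy.optimize import brentq
from scipy.special import logit

def mode_equation(x, mu, sigma2):
    """Stationarity condition: logit(x) - sigma^2 (2x - 1) - mu = 0."""
    return logit(x) - sigma2 * (2.0 * x - 1.0) - mu

mu, sigma2 = 0.0, 4.0
# One mode lies in (0, 0.5); by the symmetry f(x; 0, sigma) = f(1 - x; 0, sigma),
# the other mode is its mirror image.
mode_low = brentq(mode_equation, 1e-9, 0.49, args=(mu, sigma2))
mode_high = 1.0 - mode_low
```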

## Multivariate generalization

The logistic normal distribution is a generalization of the logit-normal distribution to D-dimensional probability vectors by taking a logistic transformation of a multivariate normal distribution.[6][7][8]

### Probability density function

${\displaystyle f_{X}(\mathbf {x} ;{\boldsymbol {\mu }},{\boldsymbol {\Sigma }})={\frac {1}{|2\pi {\boldsymbol {\Sigma }}|^{\frac {1}{2}}}}\,{\frac {1}{\prod \limits _{i=1}^{D}x_{i}}}\,e^{-{\frac {1}{2}}\left\{\log \left({\frac {\mathbf {x} _{-D}}{x_{D}}}\right)-{\boldsymbol {\mu }}\right\}^{\top }{\boldsymbol {\Sigma }}^{-1}\left\{\log \left({\frac {\mathbf {x} _{-D}}{x_{D}}}\right)-{\boldsymbol {\mu }}\right\}}\quad ,\quad \mathbf {x} \in {\mathcal {S}}^{D}\;\;,}$

where ${\displaystyle \mathbf {x} _{-D}}$  denotes a vector of the first (D-1) components of ${\displaystyle \mathbf {x} }$  and ${\displaystyle {\mathcal {S}}^{D}}$  denotes the simplex of D-dimensional probability vectors. This follows from applying the additive logistic transformation to map a multivariate normal random variable ${\displaystyle \mathbf {y} \sim {\mathcal {N}}\left({\boldsymbol {\mu }},{\boldsymbol {\Sigma }}\right)\;,\;\mathbf {y} \in \mathbb {R} ^{D-1}}$  to the simplex:

${\displaystyle \mathbf {x} =\left[{\frac {e^{y_{1}}}{1+\sum _{i=1}^{D-1}e^{y_{i}}}},\dots ,{\frac {e^{y_{D-1}}}{1+\sum _{i=1}^{D-1}e^{y_{i}}}},{\frac {1}{1+\sum _{i=1}^{D-1}e^{y_{i}}}}\right]^{\top }}$

*Figure: Gaussian density functions and corresponding logistic normal density functions after logistic transformation.*

The unique inverse mapping is given by:

${\displaystyle \mathbf {y} =\left[\log \left({\frac {x_{1}}{x_{D}}}\right),\dots ,\log \left({\frac {x_{D-1}}{x_{D}}}\right)\right]^{\top }}$ .
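The additive logistic transformation and its inverse can be sketched directly. A minimal NumPy version; the function names and the input vector y are illustrative:

```python
import numpy as np

def additive_logistic(y):
    """Map y in R^(D-1) to a D-dimensional probability vector on the simplex."""
    e = np.exp(y)
    denom = 1.0 + e.sum()
    return np.append(e / denom, 1.0 / denom)  # last component is 1/denom

def additive_logit(x):
    """Inverse map: log-ratios of the first D-1 components to the last."""
    return np.log(x[:-1] / x[-1])

y = np.array([0.5, -1.0, 2.0])
x = additive_logistic(y)    # components of x sum to 1
y_back = additive_logit(x)  # recovers y
```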

This is the case for a vector x whose components sum to one. For a vector x with sigmoidal elements, that is, when

${\displaystyle \mathbf {y} =\left[\log \left({\frac {x_{1}}{1-x_{1}}}\right),\dots ,\log \left({\frac {x_{D}}{1-x_{D}}}\right)\right]^{\top }}$

we have

${\displaystyle f_{X}(\mathbf {x} ;{\boldsymbol {\mu }},{\boldsymbol {\Sigma }})={\frac {1}{|2\pi {\boldsymbol {\Sigma }}|^{\frac {1}{2}}}}\,{\frac {1}{\prod \limits _{i=1}^{D}\left(x_{i}(1-x_{i})\right)}}\,e^{-{\frac {1}{2}}\left\{\log \left({\frac {\mathbf {x} }{1-\mathbf {x} }}\right)-{\boldsymbol {\mu }}\right\}^{\top }{\boldsymbol {\Sigma }}^{-1}\left\{\log \left({\frac {\mathbf {x} }{1-\mathbf {x} }}\right)-{\boldsymbol {\mu }}\right\}}}$

where the log and the division in the argument are taken element-wise. This is because the Jacobian matrix of the transformation is diagonal with elements ${\displaystyle {\frac {1}{x_{i}(1-x_{i})}}}$ .
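Equivalently, the density in this sigmoidal case is the multivariate normal density evaluated at the element-wise logit of x, multiplied by the Jacobian term. A sketch using SciPy; the function name, the point x, and the parameter values are illustrative:

```python
import numpy as np
from scipy.special import logit
from scipy.stats import multivariate_normal

def logistic_normal_pdf_sigmoid(x, mu, Sigma):
    """Density for x with sigmoidal elements: the normal density of the
    element-wise logit of x, times the Jacobian term 1 / prod(x_i (1 - x_i))."""
    y = logit(x)
    jacobian = 1.0 / np.prod(x * (1.0 - x))
    return multivariate_normal.pdf(y, mean=mu, cov=Sigma) * jacobian

x = np.array([0.3, 0.6])
mu = np.zeros(2)
Sigma = np.array([[1.0, 0.5], [0.5, 1.0]])
density = logistic_normal_pdf_sigmoid(x, mu, Sigma)
```

For D = 1 this reduces to the univariate logit-normal PDF given earlier.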

### Use in statistical analysis

The logistic normal distribution is a more flexible alternative to the Dirichlet distribution in that it can capture correlations between components of probability vectors. It also has the potential to simplify statistical analyses of compositional data by allowing one to answer questions about log-ratios of the components of the data vectors. One is often interested in ratios rather than absolute component values.

The probability simplex is a bounded space, making standard techniques that are typically applied to vectors in ${\displaystyle \mathbb {R} ^{n}}$  less meaningful. Aitchison described the problem of spurious negative correlations when applying such methods directly to simplicial vectors.[7] However, mapping compositional data in ${\displaystyle {\mathcal {S}}^{D}}$  through the inverse of the additive logistic transformation yields real-valued data in ${\displaystyle \mathbb {R} ^{D-1}}$ . Standard techniques can be applied to this representation of the data. This approach justifies use of the logistic normal distribution, which can thus be regarded as the "Gaussian of the simplex".

### Relationship with the Dirichlet distribution

*Figure: Logistic normal approximation to the Dirichlet distribution.*

The Dirichlet and logistic normal distributions are never exactly equal for any choice of parameters. However, Aitchison described a method for approximating a Dirichlet with a logistic normal such that their Kullback–Leibler divergence (KL) is minimized:

${\displaystyle K(p,q)=\int _{{\mathcal {S}}^{D}}p\left(\mathbf {x} \mid {\boldsymbol {\alpha }}\right)\log \left({\frac {p\left(\mathbf {x} \mid {\boldsymbol {\alpha }}\right)}{q\left(\mathbf {x} \mid {\boldsymbol {\mu }},{\boldsymbol {\Sigma }}\right)}}\right)\,d\mathbf {x} }$

This is minimized by:

${\displaystyle {\boldsymbol {\mu }}^{*}=\mathbf {E} _{p}\left[\log \left({\frac {\mathbf {x} _{-D}}{x_{D}}}\right)\right]\quad ,\quad {\boldsymbol {\Sigma }}^{*}={\textbf {Var}}_{p}\left[\log \left({\frac {\mathbf {x} _{-D}}{x_{D}}}\right)\right]}$

Using moment properties of the Dirichlet distribution, the solution can be written in terms of the digamma ${\displaystyle \psi }$  and trigamma ${\displaystyle \psi '}$  functions:

${\displaystyle \mu _{i}^{*}=\psi \left(\alpha _{i}\right)-\psi \left(\alpha _{D}\right)\quad ,\quad i=1,\ldots ,D-1}$
${\displaystyle \Sigma _{ii}^{*}=\psi '\left(\alpha _{i}\right)+\psi '\left(\alpha _{D}\right)\quad ,\quad i=1,\ldots ,D-1}$
${\displaystyle \Sigma _{ij}^{*}=\psi '\left(\alpha _{D}\right)\quad ,\quad i\neq j}$

This approximation is particularly accurate for large ${\displaystyle {\boldsymbol {\alpha }}}$ . In fact, one can show that for ${\displaystyle \alpha _{i}\rightarrow \infty ,i=1,\ldots ,D}$ , we have that ${\displaystyle p\left(\mathbf {x} \mid {\boldsymbol {\alpha }}\right)\rightarrow q\left(\mathbf {x} \mid {\boldsymbol {\mu }}^{*},{\boldsymbol {\Sigma }}^{*}\right)}$ .
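The closed-form solution above translates directly into code. A sketch using SciPy's digamma and trigamma (`polygamma(1, ·)`) functions; the function name and the Dirichlet parameter α = (2, 3, 4) are illustrative:

```python
import numpy as np
from scipy.special import digamma, polygamma

def dirichlet_to_logistic_normal(alpha):
    """KL-minimizing logistic normal parameters (mu*, Sigma*) for Dirichlet(alpha)."""
    alpha = np.asarray(alpha, dtype=float)
    D = len(alpha)
    mu = digamma(alpha[:-1]) - digamma(alpha[-1])
    trigamma = polygamma(1, alpha)
    # Off-diagonal entries are psi'(alpha_D); diagonal adds psi'(alpha_i).
    Sigma = np.full((D - 1, D - 1), trigamma[-1])
    Sigma[np.diag_indices(D - 1)] = trigamma[:-1] + trigamma[-1]
    return mu, Sigma

mu, Sigma = dirichlet_to_logistic_normal([2.0, 3.0, 4.0])
```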