# Q-function

In statistics, the Q-function is the tail distribution function of the standard normal distribution. In other words, $Q(x)$ is the probability that a normal (Gaussian) random variable will take a value more than $x$ standard deviations above its mean. Equivalently, $Q(x)$ is the probability that a standard normal random variable takes a value larger than $x$ .

If $Y$ is a Gaussian random variable with mean $\mu$ and variance $\sigma ^{2}$ , then $X={\frac {Y-\mu }{\sigma }}$ is standard normal and

$P(Y>y)=P(X>x)=Q(x)$ where $x={\frac {y-\mu }{\sigma }}$ .
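As a quick numerical illustration, the standardization identity above can be checked with Python's standard library; this is a sketch, and the values of $\mu$ , $\sigma$ , and $y$ are arbitrary choices:

```python
from statistics import NormalDist

def Q(x):
    """Tail probability of the standard normal: Q(x) = P(X > x)."""
    return 1.0 - NormalDist().cdf(x)

# Arbitrary example parameters (assumptions, not from the text).
mu, sigma, y = 3.0, 2.0, 5.0
x = (y - mu) / sigma  # standardized threshold

p_direct = 1.0 - NormalDist(mu, sigma).cdf(y)  # P(Y > y) computed directly
p_via_q = Q(x)                                 # Q((y - mu) / sigma)
```

The two probabilities agree up to floating-point error.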

Other definitions of the Q-function, all of which are simple transformations of the normal cumulative distribution function, are also used occasionally.

Because of its relation to the cumulative distribution function of the normal distribution, the Q-function can also be expressed in terms of the error function, which is an important function in applied mathematics and physics.

## Definition and basic properties

Formally, the Q-function is defined as

$Q(x)={\frac {1}{\sqrt {2\pi }}}\int _{x}^{\infty }\exp \left(-{\frac {u^{2}}{2}}\right)\,du.$

Thus,

$Q(x)=1-Q(-x)=1-\Phi (x),$

where $\Phi (x)$  is the cumulative distribution function of the standard normal distribution.

The Q-function can be expressed in terms of the error function, or the complementary error function, as

${\begin{aligned}Q(x)&={\frac {1}{2}}\left({\frac {2}{\sqrt {\pi }}}\int _{x/{\sqrt {2}}}^{\infty }\exp \left(-t^{2}\right)\,dt\right)\\&={\frac {1}{2}}-{\frac {1}{2}}\operatorname {erf} \left({\frac {x}{\sqrt {2}}}\right)\\&={\frac {1}{2}}\operatorname {erfc} \left({\frac {x}{\sqrt {2}}}\right).\end{aligned}}$
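These identities are easy to check numerically with `math.erf` and `math.erfc` from Python's standard library (a minimal sketch):

```python
import math

def q_via_erf(x):
    # Q(x) = 1/2 - (1/2) erf(x / sqrt(2))
    return 0.5 - 0.5 * math.erf(x / math.sqrt(2.0))

def q_via_erfc(x):
    # Q(x) = (1/2) erfc(x / sqrt(2))
    return 0.5 * math.erfc(x / math.sqrt(2.0))
```

Both forms agree, and they satisfy the reflection identity $Q(x)=1-Q(-x)$ .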

An alternative form of the Q-function known as Craig's formula, after its discoverer, is expressed as:

$Q(x)={\frac {1}{\pi }}\int _{0}^{\frac {\pi }{2}}\exp \left(-{\frac {x^{2}}{2\sin ^{2}\theta }}\right)d\theta .$

This expression is valid only for positive values of x, but it can be used in conjunction with Q(x) = 1 − Q(−x) to obtain Q(x) for negative values. This form is advantageous in that the range of integration is fixed and finite.
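Because the range of integration is finite, Craig's formula is straightforward to evaluate numerically. The sketch below uses a simple midpoint rule (the step count `n` is an arbitrary choice) and compares against the erfc form:

```python
import math

def q_erfc(x):
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def q_craig(x, n=100_000):
    # Midpoint-rule evaluation of Craig's formula over (0, pi/2);
    # the midpoints avoid evaluating the integrand at theta = 0,
    # where it vanishes for x > 0.
    h = (math.pi / 2.0) / n
    total = 0.0
    for k in range(n):
        theta = (k + 0.5) * h
        total += math.exp(-x * x / (2.0 * math.sin(theta) ** 2))
    return total * h / math.pi
```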

Craig's formula was later extended by Behnad (2020) for the Q-function of the sum of two non-negative variables, as follows:

$Q(x+y)={\frac {1}{\pi }}\int _{0}^{\frac {\pi }{2}}\exp \left(-{\frac {x^{2}}{2\sin ^{2}\theta }}-{\frac {y^{2}}{2\cos ^{2}\theta }}\right)d\theta ,\quad x,y\geqslant 0.$
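The extended formula can likewise be checked against $Q(x+y)$  computed via erfc; this is an illustrative sketch with arbitrary $x$  and $y$ :

```python
import math

def q_exact(x):
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def q_sum_craig(x, y, n=100_000):
    # Midpoint-rule evaluation of the extended Craig formula for Q(x + y),
    # valid for x, y >= 0; the integrand vanishes at both endpoints.
    h = (math.pi / 2.0) / n
    total = 0.0
    for k in range(n):
        t = (k + 0.5) * h
        s, c = math.sin(t), math.cos(t)
        total += math.exp(-x * x / (2.0 * s * s) - y * y / (2.0 * c * c))
    return total * h / math.pi
```

With $y=0$  the formula reduces to Craig's original expression.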

## Bounds and approximations

• The Q-function is not an elementary function. However, the bounds
$\left({\frac {x}{1+x^{2}}}\right)\phi (x)<Q(x)<{\frac {\phi (x)}{x}},\qquad x>0,$
where $\phi (x)$  is the density function of the standard normal distribution, become increasingly tight for large x, and are often useful.
Using the substitution $v=u^{2}/2$ , the upper bound is derived as follows:
$Q(x)=\int _{x}^{\infty }\phi (u)\,du<\int _{x}^{\infty }{\frac {u}{x}}\phi (u)\,du=\int _{\frac {x^{2}}{2}}^{\infty }{\frac {e^{-v}}{x{\sqrt {2\pi }}}}\,dv=-{\biggl .}{\frac {e^{-v}}{x{\sqrt {2\pi }}}}{\biggr |}_{\frac {x^{2}}{2}}^{\infty }={\frac {\phi (x)}{x}}.$
Similarly, using $\phi '(u)=-u\phi (u)$  and the quotient rule,
$\left(1+{\frac {1}{x^{2}}}\right)Q(x)=\int _{x}^{\infty }\left(1+{\frac {1}{x^{2}}}\right)\phi (u)\,du>\int _{x}^{\infty }\left(1+{\frac {1}{u^{2}}}\right)\phi (u)\,du=-{\biggl .}{\frac {\phi (u)}{u}}{\biggr |}_{x}^{\infty }={\frac {\phi (x)}{x}}.$
Solving for Q(x) provides the lower bound.
The geometric mean of the upper and lower bound gives a suitable approximation for $Q(x)$ :
$Q(x)\approx {\frac {\phi (x)}{\sqrt {1+x^{2}}}},\qquad x\geq 0.$
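The sandwich bounds and the geometric-mean approximation can be verified numerically; the sketch below uses `math.erfc` as ground truth, and the test points are arbitrary:

```python
import math

def phi(x):
    # Density of the standard normal distribution.
    return math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)

def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def lower_bound(x):
    # (x / (1 + x^2)) * phi(x)
    return (x / (1.0 + x * x)) * phi(x)

def upper_bound(x):
    # phi(x) / x
    return phi(x) / x

def geo_mean_approx(x):
    # Geometric mean of the two bounds: phi(x) / sqrt(1 + x^2).
    return phi(x) / math.sqrt(1.0 + x * x)
```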
• Tighter bounds and approximations of $Q(x)$  can also be obtained by optimizing the following expression 
${\tilde {Q}}(x)={\frac {\phi (x)}{(1-a)x+a{\sqrt {x^{2}+b}}}}.$
For $x\geq 0$ , the best upper bound is given by $a=0.344$  and $b=5.334$  with maximum absolute relative error of 0.44%. Likewise, the best approximation is given by $a=0.339$  and $b=5.510$  with maximum absolute relative error of 0.27%. Finally, the best lower bound is given by $a=1/\pi$  and $b=2\pi$  with maximum absolute relative error of 1.17%.
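These optimized bounds can be checked numerically; a sketch (the grid of test points is an arbitrary choice):

```python
import math

def phi(x):
    return math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)

def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def q_tilde(x, a, b):
    # phi(x) / ((1 - a) x + a sqrt(x^2 + b))
    return phi(x) / ((1.0 - a) * x + a * math.sqrt(x * x + b))

UPPER = (0.344, 5.334)                   # best upper bound
LOWER = (1.0 / math.pi, 2.0 * math.pi)   # best lower bound
```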
• The Chernoff bound of the Q-function is
$Q(x)\leq e^{-{\frac {x^{2}}{2}}},\qquad x>0.$
• Improved exponential bounds and a pure exponential approximation are 
$Q(x)\leq {\tfrac {1}{4}}e^{-x^{2}}+{\tfrac {1}{4}}e^{-{\frac {x^{2}}{2}}}\leq {\tfrac {1}{2}}e^{-{\frac {x^{2}}{2}}},\qquad x>0$
$Q(x)\approx {\frac {1}{12}}e^{-{\frac {x^{2}}{2}}}+{\frac {1}{4}}e^{-{\frac {2}{3}}x^{2}},\qquad x>0$
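The ordering of these exponential bounds, and the rough accuracy of the pure exponential approximation, can be confirmed numerically (an illustrative sketch; the tolerance in the check is a loose, arbitrary choice):

```python
import math

def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def one_term_bound(x):
    # (1/2) e^{-x^2/2}, the loosest bound in the chain above.
    return 0.5 * math.exp(-x * x / 2.0)

def two_term_bound(x):
    # (1/4) e^{-x^2} + (1/4) e^{-x^2/2}
    return 0.25 * math.exp(-x * x) + 0.25 * math.exp(-x * x / 2.0)

def exp_approx(x):
    # (1/12) e^{-x^2/2} + (1/4) e^{-2x^2/3}
    return math.exp(-x * x / 2.0) / 12.0 + math.exp(-2.0 * x * x / 3.0) / 4.0
```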
• The above were generalized by Tanash & Riihonen (2020), who showed that $Q(x)$  can be accurately approximated or bounded by
${\tilde {Q}}(x)=\sum _{n=1}^{N}a_{n}e^{-b_{n}x^{2}}.$
In particular, they presented a systematic methodology to solve the numerical coefficients $\{(a_{n},b_{n})\}_{n=1}^{N}$  that yield a minimax approximation or bound: $Q(x)\approx {\tilde {Q}}(x)$ , $Q(x)\leq {\tilde {Q}}(x)$ , or $Q(x)\geq {\tilde {Q}}(x)$  for $x\geq 0$ . With the example coefficients tabulated in the paper for $N=20$ , the relative and absolute approximation errors are less than $2.831\cdot 10^{-6}$  and $1.416\cdot 10^{-6}$ , respectively. The coefficients $\{(a_{n},b_{n})\}_{n=1}^{N}$  for many variations of the exponential approximations and bounds up to $N=25$  have been released to open access as a comprehensive dataset.
• Another approximation of $Q(x)$  for $x\in [0,\infty )$  is given by Karagiannidis & Lioumpas (2007) who showed for the appropriate choice of parameters $\{A,B\}$  that
$f(x;A,B)={\frac {\left(1-e^{-Ax}\right)e^{-x^{2}}}{B{\sqrt {\pi }}x}}\approx \operatorname {erfc} \left(x\right).$
The absolute error between $f(x;A,B)$  and $\operatorname {erfc} (x)$  over the range $[0,R]$  is minimized by evaluating
$\{A,B\}={\underset {\{A,B\}}{\arg \min }}{\frac {1}{R}}\int _{0}^{R}|f(x;A,B)-\operatorname {erfc} (x)|dx.$
Using $R=20$  and numerical integration, they found that the minimum error occurred at $\{A,B\}=\{1.98,1.135\},$  which gives a good approximation for all $x\geq 0.$
Substituting these values and using the relationship between $Q(x)$  and $\operatorname {erfc} (x)$  from above gives
$Q(x)\approx {\frac {\left(1-e^{-1.98{\frac {x}{\sqrt {2}}}}\right)e^{-{\frac {x^{2}}{2}}}}{1.135{\sqrt {2\pi }}\,x}},\qquad x\geq 0.$
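The resulting approximation can be checked numerically. The sketch below applies the erfc-level fit through the relation $Q(x)={\tfrac {1}{2}}\operatorname {erfc} (x/{\sqrt {2}})$  from above; the error tolerance in the check is an arbitrary choice:

```python
import math

def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def erfc_kl(x, A=1.98, B=1.135):
    # Karagiannidis-Lioumpas fit to erfc(x), for x > 0.
    return (1.0 - math.exp(-A * x)) * math.exp(-x * x) / (B * math.sqrt(math.pi) * x)

def q_kl(x):
    # Q(x) = erfc(x / sqrt(2)) / 2, applied to the fit above.
    return 0.5 * erfc_kl(x / math.sqrt(2.0))
```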
• A tighter and more tractable approximation of $Q(x)$  for positive arguments $x\in [0,\infty )$  is given by López-Benítez & Casadevall (2011) based on a second-order exponential function:
$Q(x)\approx e^{-ax^{2}-bx-c},\qquad x\geq 0.$
The fitting coefficients $(a,b,c)$  can be optimized over any desired range of arguments in order to minimize the sum of square errors ($a=0.3842$ , $b=0.7640$ , $c=0.6964$  for $x\in [0,20]$ ) or minimize the maximum absolute error ($a=0.4920$ , $b=0.2887$ , $c=1.1893$  for $x\in [0,20]$ ). This approximation offers some benefits such as a good trade-off between accuracy and analytical tractability (for example, the extension to any arbitrary power of $Q(x)$  is trivial and does not alter the algebraic form of the approximation).
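A sketch of this approximation, using the least-squares coefficients quoted above for $x\in [0,20]$  (the test points and tolerance are arbitrary choices):

```python
import math

def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def q_lbc(x, a=0.3842, b=0.7640, c=0.6964):
    # Second-order exponential fit e^{-a x^2 - b x - c}; defaults are the
    # sum-of-square-errors coefficients for x in [0, 20].
    return math.exp(-a * x * x - b * x - c)
```

Note that raising `q_lbc` to any power only rescales $(a,b,c)$ , leaving the algebraic form unchanged, which is the tractability benefit mentioned above.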

## Inverse Q

The inverse Q-function can be related to the inverse error functions:

$Q^{-1}(y)={\sqrt {2}}\ \mathrm {erf} ^{-1}(1-2y)={\sqrt {2}}\ \mathrm {erfc} ^{-1}(2y)$

The function $Q^{-1}(y)$  finds application in digital communications. Its value, usually expressed in dB, is generally called the Q-factor:

$\mathrm {Q{\text{-}}factor} =20\log _{10}\!\left(Q^{-1}(y)\right)\!~\mathrm {dB}$

where y is the bit-error rate (BER) of the digitally modulated signal under analysis. For instance, for QPSK in additive white Gaussian noise, the Q-factor defined above coincides with the value in dB of the signal to noise ratio that yields a bit error rate equal to y.
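Since $Q^{-1}(y)=\Phi ^{-1}(1-y)$ , the inverse Q-function and the Q-factor can be computed with the standard normal inverse CDF; a sketch using Python's standard library:

```python
import math
from statistics import NormalDist

def q_inv(y):
    # Q^{-1}(y) = Phi^{-1}(1 - y): inverse of the standard normal tail.
    return NormalDist().inv_cdf(1.0 - y)

def q_factor_db(ber):
    # Q-factor in dB for a given bit-error rate.
    return 20.0 * math.log10(q_inv(ber))
```

For example, a BER of $10^{-3}$  corresponds to a Q-factor of about 9.8 dB.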

## Values

The Q-function is well tabulated and can be computed directly in most mathematical software packages, such as R and those available in Python, MATLAB, and Mathematica.
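For reference, a few standard values can be reproduced with `math.erfc` alone:

```python
import math

def Q(x):
    # Q(x) = (1/2) erfc(x / sqrt(2)).
    return 0.5 * math.erfc(x / math.sqrt(2.0))

# Well-known reference values:
#   Q(0) = 0.5
#   Q(1) ~ 0.158655
#   Q(2) ~ 0.0227501
#   Q(3) ~ 0.00134990
#   Q(4) ~ 3.16712e-5
for x in (0.0, 1.0, 2.0, 3.0, 4.0):
    print(f"Q({x:.1f}) = {Q(x):.6g}")
```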

## Generalization to high dimensions

The Q-function can be generalized to higher dimensions:

$Q(\mathbf {x} )=\mathbb {P} (\mathbf {X} \geq \mathbf {x} ),$

where $\mathbf {X} \sim {\mathcal {N}}(\mathbf {0} ,\,\Sigma )$  follows the multivariate normal distribution with covariance $\Sigma$  and the threshold is of the form $\mathbf {x} =\gamma \Sigma \mathbf {l} ^{*}$  for some positive vector $\mathbf {l} ^{*}>\mathbf {0}$  and positive constant $\gamma >0$ . As in the one-dimensional case, there is no simple analytical formula for the Q-function. Nevertheless, it can be approximated arbitrarily well as $\gamma$  grows.
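In low dimensions, the multivariate tail probability can be estimated by Monte Carlo sampling. The sketch below handles the bivariate case with unit variances and correlation $\rho$ , sampling via a Cholesky factor; the covariance, thresholds, sample count, and seed are all arbitrary choices:

```python
import math
import random

def mvn_tail_mc(x1, x2, rho, n=200_000, seed=1):
    # Monte Carlo estimate of P(X1 >= x1, X2 >= x2) for a bivariate normal
    # with unit variances and correlation rho (Cholesky sampling:
    # X1 = Z1, X2 = rho*Z1 + sqrt(1 - rho^2)*Z2 with Z1, Z2 iid N(0, 1)).
    rng = random.Random(seed)
    s = math.sqrt(1.0 - rho * rho)
    hits = 0
    for _ in range(n):
        z1, z2 = rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)
        if z1 >= x1 and rho * z1 + s * z2 >= x2:
            hits += 1
    return hits / n
```

With $\rho =0$  the components are independent, so the estimate should be close to $Q(x_{1})Q(x_{2})$ ; positive correlation increases the joint tail probability.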