# Error function

Plot of the error function

In mathematics, the error function (also called the Gauss error function) is a special function (non-elementary) of sigmoid shape that occurs in probability, statistics, and partial differential equations describing diffusion. It is defined as:[1][2]

{\displaystyle {\begin{aligned}\operatorname {erf} (x)&={\frac {1}{\sqrt {\pi }}}\int _{-x}^{x}e^{-t^{2}}\,dt\\[5pt]&={\frac {2}{\sqrt {\pi }}}\int _{0}^{x}e^{-t^{2}}\,dt.\end{aligned}}}

In statistics, for nonnegative values of x, the error function has the following interpretation: for a random variable Y that is normally distributed with mean 0 and variance 0.5, erf(x) is the probability that Y falls in the range [−x, x].
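This interpretation is easy to verify numerically. The sketch below (a plain-Python midpoint-rule integration, not a reference implementation; the function name is ours) integrates the Normal(0, 0.5) density over [−x, x] and compares the result with the standard library's `math.erf`:

```python
import math

def prob_in_range(x, n=200_000):
    """P(-x <= Y <= x) for Y ~ Normal(mean=0, variance=0.5),
    computed by midpoint-rule integration of the Gaussian density."""
    var = 0.5
    norm = 1.0 / math.sqrt(2 * math.pi * var)
    h = 2 * x / n
    total = 0.0
    for i in range(n):
        t = -x + (i + 0.5) * h          # midpoint of the i-th subinterval
        total += norm * math.exp(-t * t / (2 * var))
    return total * h

# The integral agrees with erf(x) to high accuracy.
for x in (0.5, 1.0, 2.0):
    assert abs(prob_in_range(x) - math.erf(x)) < 1e-9
```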

There are several closely related functions, such as the complementary error function, the imaginary error function, and others.

## Name

The name "error function" and its abbreviation erf were proposed by J. W. L. Glaisher in 1871 on account of its connection with "the theory of Probability, and notably the theory of Errors."[3] The error function complement was also discussed by Glaisher in a separate publication in the same year.[4] For the "law of facility" of errors whose density is given by

${\displaystyle f(x)=\left({\frac {c}{\pi }}\right)^{\tfrac {1}{2}}e^{-cx^{2}}}$

(the normal distribution), Glaisher calculates the chance of an error lying between ${\displaystyle p}$  and ${\displaystyle q}$  as:

${\displaystyle \left({\frac {c}{\pi }}\right)^{\tfrac {1}{2}}\int _{p}^{q}e^{-cx^{2}}dx={\tfrac {1}{2}}\left(\operatorname {erf} (q{\sqrt {c}})-\operatorname {erf} (p{\sqrt {c}})\right).}$

## Applications

When the results of a series of measurements are described by a normal distribution with standard deviation ${\displaystyle \sigma }$  and expected value 0, then ${\displaystyle \textstyle \operatorname {erf} \left({\frac {a}{\sigma {\sqrt {2}}}}\right)}$  is the probability that the error of a single measurement lies between −a and +a, for positive a. This is useful, for example, in determining the bit error rate of a digital communication system.

The error and complementary error functions occur, for example, in solutions of the heat equation when boundary conditions are given by the Heaviside step function.

The error function and its approximations can be used to estimate results that hold with high probability. Given random variable ${\displaystyle X\sim \operatorname {Norm} [\mu ,\sigma ]}$  and constant ${\displaystyle L<\mu }$ :

${\displaystyle \Pr[X\leq L]={\frac {1}{2}}+{\frac {1}{2}}\operatorname {erf} \left({\frac {L-\mu }{{\sqrt {2}}\sigma }}\right)\approx A\exp \left(-B\left({\frac {L-\mu }{\sigma }}\right)^{2}\right)}$

where A and B are certain numeric constants. If L is sufficiently far from the mean, i.e. ${\displaystyle \mu -L\geq \sigma {\sqrt {\ln {k}}}}$ , then:

${\displaystyle \Pr[X\leq L]\leq A\exp(-B\ln {k})={\frac {A}{k^{B}}}}$

so the probability goes to 0 as ${\displaystyle k\to \infty }$ .

## Properties

Plots in the complex plane: the integrand exp(−z²) and erf(z)

The property ${\displaystyle \operatorname {erf} (-z)=-\operatorname {erf} (z)}$  means that the error function is an odd function. This directly results from the fact that the integrand ${\displaystyle e^{-t^{2}}}$  is an even function.

For any complex number z:

${\displaystyle \operatorname {erf} ({\overline {z}})={\overline {\operatorname {erf} (z)}}}$

where ${\displaystyle {\overline {z}}}$  is the complex conjugate of z.

The integrand f = exp(−z²) and f = erf(z) are shown in the complex z-plane in figures 2 and 3. The level Im(f) = 0 is shown with a thick green line. Negative integer values of Im(f) are shown with thick red lines, and positive integer values with thick blue lines. Intermediate levels Im(f) = constant are shown with thin green lines, and intermediate levels Re(f) = constant with thin red lines for negative values and thin blue lines for positive values.

The error function at +∞ is exactly 1 (see Gaussian integral). On the real axis, erf(z) approaches 1 as z → +∞ and −1 as z → −∞. On the imaginary axis, it tends to ±i∞.

### Taylor series

The error function is an entire function; it has no singularities (except that at infinity) and its Taylor expansion always converges.

The defining integral cannot be evaluated in closed form in terms of elementary functions, but by expanding the integrand ${\displaystyle e^{-t^{2}}}$ into its Maclaurin series and integrating term by term, one obtains the error function's Maclaurin series as:

${\displaystyle \operatorname {erf} (z)={\frac {2}{\sqrt {\pi }}}\sum _{n=0}^{\infty }{\frac {(-1)^{n}z^{2n+1}}{n!(2n+1)}}={\frac {2}{\sqrt {\pi }}}\left(z-{\frac {z^{3}}{3}}+{\frac {z^{5}}{10}}-{\frac {z^{7}}{42}}+{\frac {z^{9}}{216}}-\cdots \right)}$

which holds for every complex number z. The denominator terms are sequence A007680 in the OEIS.

For iterative calculation of the above series, the following alternative formulation may be useful:

${\displaystyle \operatorname {erf} (z)={\frac {2}{\sqrt {\pi }}}\sum _{n=0}^{\infty }\left(z\prod _{k=1}^{n}{\frac {-(2k-1)z^{2}}{k(2k+1)}}\right)={\frac {2}{\sqrt {\pi }}}\sum _{n=0}^{\infty }{\frac {z}{2n+1}}\prod _{k=1}^{n}{\frac {-z^{2}}{k}}}$

because ${\displaystyle {\frac {-(2k-1)z^{2}}{k(2k+1)}}}$  expresses the multiplier to turn the kth term into the (k + 1)th term (considering z as the first term).
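This term-by-term recurrence translates directly into code. The following Python sketch (function name and stopping rule are ours) accumulates each term from the previous one using the multiplier above, with `math.erf` only as a cross-check:

```python
import math

def erf_series(z, tol=1e-16, max_terms=200):
    """erf via its Maclaurin series, obtaining each term from the previous
    one with the multiplier -(2k-1)z^2 / (k(2k+1))."""
    term = z          # n = 0 term
    total = z
    k = 1
    while abs(term) > tol * max(1.0, abs(total)) and k < max_terms:
        term *= -(2 * k - 1) * z * z / (k * (2 * k + 1))
        total += term
        k += 1
    return 2.0 / math.sqrt(math.pi) * total

for x in (-2.0, -0.5, 0.0, 0.3, 1.0, 2.5):
    assert abs(erf_series(x) - math.erf(x)) < 1e-12
```

Note that for large |z| the alternating terms grow before they shrink, so cancellation limits the attainable accuracy; the series is best suited to moderate arguments.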

The imaginary error function has a very similar Maclaurin series, which is:

${\displaystyle \operatorname {erfi} (z)={\frac {2}{\sqrt {\pi }}}\sum _{n=0}^{\infty }{\frac {z^{2n+1}}{n!(2n+1)}}={\frac {2}{\sqrt {\pi }}}\left(z+{\frac {z^{3}}{3}}+{\frac {z^{5}}{10}}+{\frac {z^{7}}{42}}+{\frac {z^{9}}{216}}+\cdots \right)}$

which holds for every complex number z.

### Derivative and integral

The derivative of the error function follows immediately from its definition:

${\displaystyle {\frac {d}{dz}}\operatorname {erf} (z)={\frac {2}{\sqrt {\pi }}}e^{-z^{2}}.}$

From this, the derivative of the imaginary error function is also immediate:

${\displaystyle {\frac {d}{dz}}\operatorname {erfi} (z)={\frac {2}{\sqrt {\pi }}}e^{z^{2}}.}$

An antiderivative of the error function, obtainable by integration by parts, is

${\displaystyle z\operatorname {erf} (z)+{\frac {e^{-z^{2}}}{\sqrt {\pi }}}.}$

An antiderivative of the imaginary error function, also obtainable by integration by parts, is

${\displaystyle z\operatorname {erfi} (z)-{\frac {e^{z^{2}}}{\sqrt {\pi }}}.}$

Higher order derivatives are given by

${\displaystyle \operatorname {erf} ^{(k)}(z)={\frac {2(-1)^{k-1}}{\sqrt {\pi }}}{\mathit {H}}_{k-1}(z)e^{-z^{2}}={\frac {2}{\sqrt {\pi }}}{\frac {d^{k-1}}{dz^{k-1}}}\left(e^{-z^{2}}\right),\qquad k=1,2,\dots }$

where ${\displaystyle {\mathit {H}}}$  are the physicists' Hermite polynomials.[5]

### Bürmann series

An expansion,[6] which converges more rapidly for all real values of ${\displaystyle x}$  than a Taylor expansion, is obtained by using Hans Heinrich Bürmann's theorem:[7]

{\displaystyle {\begin{aligned}\operatorname {erf} (x)&={\frac {2}{\sqrt {\pi }}}\operatorname {sgn}(x){\sqrt {1-e^{-x^{2}}}}\left(1-{\frac {1}{12}}\left(1-e^{-x^{2}}\right)-{\frac {7}{480}}\left(1-e^{-x^{2}}\right)^{2}-{\frac {5}{896}}\left(1-e^{-x^{2}}\right)^{3}-{\frac {787}{276480}}\left(1-e^{-x^{2}}\right)^{4}-\cdots \right)\\[10pt]&={\frac {2}{\sqrt {\pi }}}\operatorname {sgn}(x){\sqrt {1-e^{-x^{2}}}}\left({\frac {\sqrt {\pi }}{2}}+\sum _{k=1}^{\infty }c_{k}e^{-kx^{2}}\right).\end{aligned}}}

By keeping only the first two coefficients and choosing ${\displaystyle c_{1}={\frac {31}{200}}}$  and ${\displaystyle c_{2}=-{\frac {341}{8000}},}$  the resulting approximation shows its largest relative error at ${\displaystyle x=\pm 1.3796,}$  where it is less than ${\displaystyle 3.6127\cdot 10^{-3}}$ :

${\displaystyle \operatorname {erf} (x)\approx {\frac {2}{\sqrt {\pi }}}\operatorname {sgn} (x){\sqrt {1-e^{-x^{2}}}}\left({\frac {\sqrt {\pi }}{2}}+{\frac {31}{200}}e^{-x^{2}}-{\frac {341}{8000}}e^{-2x^{2}}\right).}$
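As a quick check of the stated error bound, here is a direct Python transcription of this two-coefficient approximation (a sketch using the constants c1 = 31/200 and c2 = −341/8000 from above; the function name is ours):

```python
import math

def erf_burmann(x):
    """Two-coefficient Buermann approximation of erf."""
    s = math.copysign(1.0, x)
    u = 1.0 - math.exp(-x * x)
    return (2.0 / math.sqrt(math.pi)) * s * math.sqrt(u) * (
        math.sqrt(math.pi) / 2.0
        + (31.0 / 200.0) * math.exp(-x * x)
        - (341.0 / 8000.0) * math.exp(-2.0 * x * x)
    )

# Stated worst case: relative error below 3.6127e-3 near x = +/-1.3796.
x = 1.3796
assert abs(erf_burmann(x) - math.erf(x)) / math.erf(x) < 3.62e-3
```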

### Inverse functions

Inverse error function

Given complex number z, there is not a unique complex number w satisfying ${\displaystyle \operatorname {erf} (w)=z}$ , so a true inverse function would be multivalued. However, for −1 < x < 1, there is a unique real number denoted ${\displaystyle \operatorname {erf} ^{-1}(x)}$  satisfying

${\displaystyle \operatorname {erf} \left(\operatorname {erf} ^{-1}(x)\right)=x.}$

The inverse error function is usually defined with domain (−1,1), and it is restricted to this domain in many computer algebra systems. However, it can be extended to the disk |z| < 1 of the complex plane, using the Maclaurin series

${\displaystyle \operatorname {erf} ^{-1}(z)=\sum _{k=0}^{\infty }{\frac {c_{k}}{2k+1}}\left({\frac {\sqrt {\pi }}{2}}z\right)^{2k+1},}$

where c0 = 1 and

${\displaystyle c_{k}=\sum _{m=0}^{k-1}{\frac {c_{m}c_{k-1-m}}{(m+1)(2m+1)}}=\left\{1,1,{\frac {7}{6}},{\frac {127}{90}},{\frac {4369}{2520}},{\frac {34807}{16200}},\ldots \right\}.}$

So we have the series expansion (common factors have been canceled from numerators and denominators):

${\displaystyle \operatorname {erf} ^{-1}(z)={\tfrac {1}{2}}{\sqrt {\pi }}\left(z+{\frac {\pi }{12}}z^{3}+{\frac {7\pi ^{2}}{480}}z^{5}+{\frac {127\pi ^{3}}{40320}}z^{7}+{\frac {4369\pi ^{4}}{5806080}}z^{9}+{\frac {34807\pi ^{5}}{182476800}}z^{11}+\cdots \right).}$

(After cancellation the numerator/denominator fractions are entries / in the OEIS; without cancellation the numerator terms are given in entry .) The error function's value at ±∞ is equal to ±1.

For |z| < 1, we have ${\displaystyle \operatorname {erf} \left(\operatorname {erf} ^{-1}(z)\right)=z}$ .
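The coefficient recurrence and the series are straightforward to transcribe. The following Python sketch (the truncation at 60 terms is an ad hoc choice, adequate for moderate |z|; the function name is ours) evaluates erf⁻¹ inside the unit disk:

```python
import math

def erfinv_series(z, n_terms=60):
    """erf^-1 on |z| < 1 via its Maclaurin series; the coefficients c_k
    satisfy the convolution recurrence c_k = sum c_m c_{k-1-m}/((m+1)(2m+1))."""
    c = [1.0]
    for k in range(1, n_terms):
        c.append(sum(c[m] * c[k - 1 - m] / ((m + 1) * (2 * m + 1))
                     for m in range(k)))
    w = math.sqrt(math.pi) / 2.0 * z
    return sum(c[k] / (2 * k + 1) * w ** (2 * k + 1) for k in range(n_terms))

# Round trip through math.erf; erfinv(0.5) is about 0.4769363.
assert abs(math.erf(erfinv_series(0.5)) - 0.5) < 1e-9
assert abs(erfinv_series(0.5) - 0.4769363) < 1e-6
```

Convergence slows markedly as |z| approaches 1, where more terms (or a different method, such as Newton iteration on erf) are needed.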

The inverse complementary error function is defined as

${\displaystyle \operatorname {erfc} ^{-1}(1-z)=\operatorname {erf} ^{-1}(z).}$

For any real x, there is a unique real number ${\displaystyle \operatorname {erfi} ^{-1}(x)}$ satisfying ${\displaystyle \operatorname {erfi} \left(\operatorname {erfi} ^{-1}(x)\right)=x}$; this defines the inverse imaginary error function.[8]

For any real x, Newton's method can be used to compute ${\displaystyle \operatorname {erfi} ^{-1}(x)}$ , and for ${\displaystyle -1\leq x\leq 1}$ , the following Maclaurin series converges:

${\displaystyle \operatorname {erfi} ^{-1}(z)=\sum _{k=0}^{\infty }{\frac {(-1)^{k}c_{k}}{2k+1}}\left({\frac {\sqrt {\pi }}{2}}z\right)^{2k+1},}$

where ck is defined as above.

### Asymptotic expansion

A useful asymptotic expansion of the complementary error function (and therefore also of the error function) for large real x is

${\displaystyle \operatorname {erfc} (x)={\frac {e^{-x^{2}}}{x{\sqrt {\pi }}}}\left[1+\sum _{n=1}^{\infty }(-1)^{n}{\frac {1\cdot 3\cdot 5\cdots (2n-1)}{(2x^{2})^{n}}}\right]={\frac {e^{-x^{2}}}{x{\sqrt {\pi }}}}\sum _{n=0}^{\infty }(-1)^{n}{\frac {(2n-1)!!}{(2x^{2})^{n}}},}$

where (2n − 1)!! is the double factorial of (2n − 1), the product of all odd numbers up to (2n − 1). This series diverges for every finite x, and its meaning as an asymptotic expansion is that, for any ${\displaystyle N\in \mathbb {N} }$, one has

${\displaystyle \operatorname {erfc} (x)={\frac {e^{-x^{2}}}{x{\sqrt {\pi }}}}\sum _{n=0}^{N-1}(-1)^{n}{\frac {(2n-1)!!}{(2x^{2})^{n}}}+R_{N}(x)}$

where the remainder, in Landau notation, is

${\displaystyle R_{N}(x)=O\left(x^{1-2N}e^{-x^{2}}\right)}$

as ${\displaystyle x\to \infty .}$

Indeed, the exact value of the remainder is

${\displaystyle R_{N}(x):={\frac {(-1)^{N}}{\sqrt {\pi }}}2^{1-2N}{\frac {(2N)!}{N!}}\int _{x}^{\infty }t^{-2N}e^{-t^{2}}\,dt,}$

which follows easily by induction, writing

${\displaystyle e^{-t^{2}}=-(2t)^{-1}\left(e^{-t^{2}}\right)'}$

and integrating by parts.

For large enough values of x, only the first few terms of this asymptotic expansion are needed to obtain a good approximation of erfc(x) (while for not too large values of x, the above Taylor expansion at 0 provides a very fast convergence).
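A short Python sketch of the truncated expansion (the truncation point is an ad hoc choice, since adding terms beyond the smallest one only makes the divergent series worse) illustrates how few terms are needed at moderate x:

```python
import math

def erfc_asymptotic(x, n_terms):
    """Truncated asymptotic series for erfc at large x; each pass multiplies
    in the next factor -(2n-1)/(2x^2) to build (-1)^n (2n-1)!!/(2x^2)^n."""
    total = 1.0
    term = 1.0
    for n in range(1, n_terms):
        term *= -(2 * n - 1) / (2 * x * x)
        total += term
    return math.exp(-x * x) / (x * math.sqrt(math.pi)) * total

# Five terms already give roughly four correct digits at x = 3.
approx = erfc_asymptotic(3.0, 5)
assert abs(approx - math.erfc(3.0)) / math.erfc(3.0) < 1e-3
```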

### Continued fraction expansion

A continued fraction expansion of the complementary error function is:[9]

${\displaystyle \operatorname {erfc} (z)={\frac {z}{\sqrt {\pi }}}e^{-z^{2}}{\cfrac {1}{z^{2}+{\cfrac {a_{1}}{1+{\cfrac {a_{2}}{z^{2}+{\cfrac {a_{3}}{1+\dotsb }}}}}}}}\qquad a_{m}={\frac {m}{2}}.}$

### Integral of error function with Gaussian density function

${\displaystyle \int _{-\infty }^{\infty }\operatorname {erf} \left(ax+b\right){\frac {1}{\sqrt {2\pi \sigma ^{2}}}}e^{-{\frac {(x-\mu )^{2}}{2\sigma ^{2}}}}\,dx=\operatorname {erf} \left[{\frac {a\mu +b}{\sqrt {1+2a^{2}\sigma ^{2}}}}\right],\qquad a,b,\mu ,\sigma \in \mathbb {R} }$

### Factorial series

The inverse factorial series

{\displaystyle {\begin{aligned}\operatorname {erfc} z&={\frac {e^{-z^{2}}}{{\sqrt {\pi }}\,z}}\sum _{n=0}^{\infty }{\frac {(-1)^{n}Q_{n}}{{(z^{2}+1)}^{\bar {n}}}}\\&={\frac {e^{-z^{2}}}{{\sqrt {\pi }}\,z}}\left(1-{\frac {1}{2}}{\frac {1}{(z^{2}+1)}}+{\frac {1}{4}}{\frac {1}{(z^{2}+1)(z^{2}+2)}}-\cdots \right)\end{aligned}}}

converges for ${\displaystyle \operatorname {Re} (z^{2})>0.}$  Here

${\displaystyle Q_{n}{\stackrel {\text{def}}{=}}{\frac {1}{\Gamma (1/2)}}\int _{0}^{\infty }\tau (\tau -1)\cdots (\tau -n+1)\tau ^{-1/2}e^{-\tau }d\tau =\sum _{k=0}^{n}\left({\frac {1}{2}}\right)^{\bar {k}}s(n,k),}$

${\displaystyle z^{\bar {n}}}$  denotes the rising factorial, and ${\displaystyle s(n,k)}$  denotes a signed Stirling number of the first kind.[10][11]

## Numerical approximations

### Approximation with elementary functions

• Abramowitz and Stegun give several approximations of varying accuracy (equations 7.1.25–28). This allows one to choose the fastest approximation suitable for a given application. In order of increasing accuracy, they are:
${\displaystyle \operatorname {erf} (x)\approx 1-{\frac {1}{(1+a_{1}x+a_{2}x^{2}+a_{3}x^{3}+a_{4}x^{4})^{4}}},\qquad x\geq 0}$
(maximum error: 5×10−4)
where a1 = 0.278393, a2 = 0.230389, a3 = 0.000972, a4 = 0.078108
${\displaystyle \operatorname {erf} (x)\approx 1-(a_{1}t+a_{2}t^{2}+a_{3}t^{3})e^{-x^{2}},\quad t={\frac {1}{1+px}},\qquad x\geq 0}$     (maximum error: 2.5×10−5)
where p = 0.47047, a1 = 0.3480242, a2 = −0.0958798, a3 = 0.7478556
${\displaystyle \operatorname {erf} (x)\approx 1-{\frac {1}{(1+a_{1}x+a_{2}x^{2}+\cdots +a_{6}x^{6})^{16}}},\qquad x\geq 0}$     (maximum error: 3×10−7)
where a1 = 0.0705230784, a2 = 0.0422820123, a3 = 0.0092705272, a4 = 0.0001520143, a5 = 0.0002765672, a6 = 0.0000430638
${\displaystyle \operatorname {erf} (x)\approx 1-(a_{1}t+a_{2}t^{2}+\cdots +a_{5}t^{5})e^{-x^{2}},\quad t={\frac {1}{1+px}}}$     (maximum error: 1.5×10−7)
where p = 0.3275911, a1 = 0.254829592, a2 = −0.284496736, a3 = 1.421413741, a4 = −1.453152027, a5 = 1.061405429
All of these approximations are valid for x ≥ 0. To use these approximations for negative x, use the fact that erf(x) is an odd function, so erf(x) = −erf(−x).
• Exponential bounds and a pure exponential approximation for the complementary error function are given by [12]
{\displaystyle {\begin{aligned}\operatorname {erfc} (x)&\leq {\frac {1}{2}}e^{-2x^{2}}+{\frac {1}{2}}e^{-x^{2}}\leq e^{-x^{2}},\qquad x>0\\\operatorname {erfc} (x)&\approx {\frac {1}{6}}e^{-x^{2}}+{\frac {1}{2}}e^{-{\frac {4}{3}}x^{2}},\qquad x>0.\end{aligned}}}
• A tight approximation of the complementary error function for ${\displaystyle x\in [0,\infty )}$  is given by Karagiannidis & Lioumpas (2007)[13] who showed for the appropriate choice of parameters ${\displaystyle \{A,B\}}$  that
${\displaystyle \operatorname {erfc} \left(x\right)\approx {\frac {\left(1-e^{-Ax}\right)e^{-x^{2}}}{B{\sqrt {\pi }}x}}.}$
They determined ${\displaystyle \{A,B\}=\{1.98,1.135\},}$  which gave a good approximation for all ${\displaystyle x\geq 0.}$
• A single-term lower bound is[14]
${\displaystyle \operatorname {erfc} (x)\geq {\sqrt {\frac {2e}{\pi }}}{\frac {\sqrt {\beta -1}}{\beta }}e^{-\beta x^{2}},\qquad x\geq 0,\beta >1,}$
where the parameter β can be picked to minimize error on the desired interval of approximation.
• Another approximation is given by Sergei Winitzki using his "global Padé approximations":[15][16]:2–3
${\displaystyle \operatorname {erf} (x)\approx \operatorname {sgn} (x){\sqrt {1-\exp \left(-x^{2}{\frac {{\frac {4}{\pi }}+ax^{2}}{1+ax^{2}}}\right)}}}$
where
${\displaystyle a={\frac {8(\pi -3)}{3\pi (4-\pi )}}\approx 0.140012.}$
This is designed to be very accurate in a neighborhood of 0 and a neighborhood of infinity, and the relative error is less than 0.00035 for all real x. Using the alternate value a ≈ 0.147 reduces the maximum relative error to about 0.00013.[17]
This approximation can be inverted to obtain an approximation for the inverse error function:
${\displaystyle \operatorname {erf} ^{-1}(x)\approx \operatorname {sgn} (x){\sqrt {{\sqrt {\left({\frac {2}{\pi a}}+{\frac {\ln(1-x^{2})}{2}}\right)^{2}-{\frac {\ln(1-x^{2})}{a}}}}-\left({\frac {2}{\pi a}}+{\frac {\ln(1-x^{2})}{2}}\right)}}.}$
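Two of the approximations above are easy to transcribe and test: the last Abramowitz–Stegun formula (maximum error about 1.5×10⁻⁷, extended to negative x by oddness) and Winitzki's global Padé form together with its algebraic inverse. The sketch below is illustrative rather than a vetted implementation; all function names are ours:

```python
import math

def erf_as(x):
    """Last Abramowitz & Stegun rational approximation above
    (max error ~1.5e-7), extended to x < 0 by erf(-x) = -erf(x)."""
    a = (0.254829592, -0.284496736, 1.421413741, -1.453152027, 1.061405429)
    p = 0.3275911
    t = 1.0 / (1.0 + p * abs(x))
    poly = sum(c * t ** (i + 1) for i, c in enumerate(a))
    return math.copysign(1.0 - poly * math.exp(-x * x), x)

# Winitzki's constant a = 8(pi - 3) / (3 pi (4 - pi)) ~ 0.140012
a_w = 8 * (math.pi - 3) / (3 * math.pi * (4 - math.pi))

def erf_winitzki(x):
    """Winitzki's global Pade approximation (relative error < 0.00035)."""
    x2 = x * x
    t = x2 * (4 / math.pi + a_w * x2) / (1 + a_w * x2)
    return math.copysign(math.sqrt(1 - math.exp(-t)), x)

def erfinv_winitzki(x):
    """Algebraic inversion of erf_winitzki, as in the formula above."""
    l = math.log(1 - x * x)
    b = 2 / (math.pi * a_w) + l / 2
    return math.copysign(math.sqrt(math.sqrt(b * b - l / a_w) - b), x)

for x in (-2.0, -0.5, 0.25, 1.0, 3.0):
    assert abs(erf_as(x) - math.erf(x)) < 1.6e-7
    assert abs(erf_winitzki(x) - math.erf(x)) < 3.5e-4
    # the inverse exactly undoes the paired approximation
    assert abs(erfinv_winitzki(erf_winitzki(x)) - x) < 1e-9
```

The round-trip test checks only internal consistency of the Winitzki pair; the absolute accuracy of each half is bounded by the 0.00035 figure quoted above.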

### Polynomial

An approximation with a maximal error of ${\displaystyle 1.2\times 10^{-7}}$  for any real argument is:[18]

${\displaystyle \operatorname {erf} (x)={\begin{cases}1-\tau &x\geq 0\\\tau -1&x<0\end{cases}}}$

with

{\displaystyle {\begin{aligned}\tau &=t\cdot \exp \left(-x^{2}-1.26551223+1.00002368t+0.37409196t^{2}+0.09678418t^{3}-0.18628806t^{4}\right.\\&\left.\qquad \qquad \qquad +0.27886807t^{5}-1.13520398t^{6}+1.48851587t^{7}-0.82215223t^{8}+0.17087277t^{9}\right)\end{aligned}}}

and

${\displaystyle t={\frac {1}{1+0.5|x|}}.}$
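A direct transcription of this piecewise fit (a sketch; the function name is ours, and the coefficients are exactly those of τ above):

```python
import math

def erf_nr(x):
    """Piecewise approximation of erf above (max error ~1.2e-7)."""
    t = 1.0 / (1.0 + 0.5 * abs(x))
    tau = t * math.exp(-x * x - 1.26551223
                       + 1.00002368 * t + 0.37409196 * t**2 + 0.09678418 * t**3
                       - 0.18628806 * t**4 + 0.27886807 * t**5 - 1.13520398 * t**6
                       + 1.48851587 * t**7 - 0.82215223 * t**8 + 0.17087277 * t**9)
    return 1.0 - tau if x >= 0 else tau - 1.0

for x in (-4.0, -1.5, 0.0, 0.25, 1.0, 3.0):
    assert abs(erf_nr(x) - math.erf(x)) < 1.3e-7
```

For x ≥ 0 the quantity τ is itself an approximation of erfc(x), which is how the fit is usually stated.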

### Table of values

| x | erf(x) | 1 − erf(x) |
|------|-------------|-------------|
| 0 | 0 | 1 |
| 0.02 | 0.022564575 | 0.977435425 |
| 0.04 | 0.045111106 | 0.954888894 |
| 0.06 | 0.067621594 | 0.932378406 |
| 0.08 | 0.090078126 | 0.909921874 |
| 0.1 | 0.112462916 | 0.887537084 |
| 0.2 | 0.222702589 | 0.777297411 |
| 0.3 | 0.328626759 | 0.671373241 |
| 0.4 | 0.428392355 | 0.571607645 |
| 0.5 | 0.520499878 | 0.479500122 |
| 0.6 | 0.603856091 | 0.396143909 |
| 0.7 | 0.677801194 | 0.322198806 |
| 0.8 | 0.742100965 | 0.257899035 |
| 0.9 | 0.796908212 | 0.203091788 |
| 1 | 0.842700793 | 0.157299207 |
| 1.1 | 0.88020507 | 0.11979493 |
| 1.2 | 0.910313978 | 0.089686022 |
| 1.3 | 0.934007945 | 0.065992055 |
| 1.4 | 0.95228512 | 0.04771488 |
| 1.5 | 0.966105146 | 0.033894854 |
| 1.6 | 0.976348383 | 0.023651617 |
| 1.7 | 0.983790459 | 0.016209541 |
| 1.8 | 0.989090502 | 0.010909498 |
| 1.9 | 0.992790429 | 0.007209571 |
| 2 | 0.995322265 | 0.004677735 |
| 2.1 | 0.997020533 | 0.002979467 |
| 2.2 | 0.998137154 | 0.001862846 |
| 2.3 | 0.998856823 | 0.001143177 |
| 2.4 | 0.999311486 | 0.000688514 |
| 2.5 | 0.999593048 | 0.000406952 |
| 3 | 0.99997791 | 0.00002209 |
| 3.5 | 0.999999257 | 0.000000743 |

## Related functions

### Complementary error function

The complementary error function, denoted ${\displaystyle \mathrm {erfc} }$ , is defined as

{\displaystyle {\begin{aligned}\operatorname {erfc} (x)&=1-\operatorname {erf} (x)\\[5pt]&={\frac {2}{\sqrt {\pi }}}\int _{x}^{\infty }e^{-t^{2}}\,dt\\[5pt]&=e^{-x^{2}}\operatorname {erfcx} (x),\end{aligned}}}

which also defines ${\displaystyle \mathrm {erfcx} }$ , the scaled complementary error function[19] (which can be used instead of erfc to avoid arithmetic underflow[19][20]). Another form of ${\displaystyle \operatorname {erfc} (x)}$  for non-negative ${\displaystyle x}$  is known as Craig's formula, after its discoverer:[21]

${\displaystyle \operatorname {erfc} (x\mid x\geq 0)={\frac {2}{\pi }}\int _{0}^{\pi /2}\exp \left(-{\frac {x^{2}}{\sin ^{2}\theta }}\right)\,d\theta .}$

This expression is valid only for non-negative values of x, but it can be used in conjunction with erfc(x) = 2 − erfc(−x) to obtain erfc(x) for negative values. This form is advantageous in that the range of integration is fixed and finite.
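Because the integration range in Craig's formula is finite, it is convenient for simple quadrature. The sketch below approximates it with a midpoint rule (the step count is an ad hoc choice; the function name is ours) and compares with `math.erfc`:

```python
import math

def erfc_craig(x, n=20_000):
    """Midpoint-rule evaluation of Craig's integral, valid for x >= 0."""
    h = (math.pi / 2) / n
    total = 0.0
    for i in range(n):
        theta = (i + 0.5) * h            # interior points avoid theta = 0
        total += math.exp(-x * x / math.sin(theta) ** 2)
    return 2.0 / math.pi * total * h

for x in (0.5, 1.0, 2.0):
    assert abs(erfc_craig(x) - math.erfc(x)) < 1e-6
```

The midpoint nodes sidestep θ = 0, where the integrand's limiting value is 0 but the expression itself would divide by sin 0.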

### Imaginary error function

The imaginary error function, denoted erfi, is defined as

{\displaystyle {\begin{aligned}\operatorname {erfi} (x)&=-i\operatorname {erf} (ix)\\[5pt]&={\frac {2}{\sqrt {\pi }}}\int _{0}^{x}e^{t^{2}}\,dt\\[5pt]&={\frac {2}{\sqrt {\pi }}}e^{x^{2}}D(x),\end{aligned}}}

where D(x) is the Dawson function (which can be used instead of erfi to avoid arithmetic overflow[19]).

Despite the name "imaginary error function", ${\displaystyle \operatorname {erfi} (x)}$  is real when x is real.

When the error function is evaluated for arbitrary complex arguments z, the resulting complex error function is usually discussed in scaled form as the Faddeeva function:

${\displaystyle w(z)=e^{-z^{2}}\operatorname {erfc} (-iz)=\operatorname {erfcx} (-iz).}$

### Cumulative distribution function

The error function is essentially identical to the standard normal cumulative distribution function, denoted Φ (and sometimes called norm(x) in software environments), as the two differ only by scaling and translation. Indeed,

${\displaystyle \Phi (x)={\frac {1}{\sqrt {2\pi }}}\int _{-\infty }^{x}e^{\tfrac {-t^{2}}{2}}\,dt={\frac {1}{2}}\left[1+\operatorname {erf} \left({\frac {x}{\sqrt {2}}}\right)\right]={\frac {1}{2}}\operatorname {erfc} \left(-{\frac {x}{\sqrt {2}}}\right)}$

or rearranged for erf and erfc:

{\displaystyle {\begin{aligned}\operatorname {erf} (x)&=2\Phi \left(x{\sqrt {2}}\right)-1\\\operatorname {erfc} (x)&=2\Phi \left(-x{\sqrt {2}}\right)=2\left(1-\Phi \left(x{\sqrt {2}}\right)\right).\end{aligned}}}

Consequently, the error function is also closely related to the Q-function, which is the tail probability of the standard normal distribution. The Q-function can be expressed in terms of the error function as

${\displaystyle Q(x)={\frac {1}{2}}-{\frac {1}{2}}\operatorname {erf} \left({\frac {x}{\sqrt {2}}}\right)={\frac {1}{2}}\operatorname {erfc} \left({\frac {x}{\sqrt {2}}}\right).}$
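These scaling relations are one-liners in most languages. A Python sketch (function names are ours) that cross-checks them against the standard library's `statistics.NormalDist`:

```python
import math
from statistics import NormalDist

def phi(x):
    """Standard normal CDF via erf."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2)))

def q(x):
    """Gaussian tail probability Q(x) via erfc."""
    return 0.5 * math.erfc(x / math.sqrt(2))

nd = NormalDist()            # mean 0, standard deviation 1
for x in (-2.0, 0.0, 0.5, 1.0, 3.0):
    assert abs(phi(x) - nd.cdf(x)) < 1e-12
    assert abs(q(x) - (1.0 - nd.cdf(x))) < 1e-12
```

Using erfc for the tail, rather than 1 − Φ(x), avoids the cancellation that occurs when Φ(x) is close to 1.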

The inverse of ${\displaystyle \Phi }$  is known as the normal quantile function, or probit function and may be expressed in terms of the inverse error function as

${\displaystyle \operatorname {probit} (p)=\Phi ^{-1}(p)={\sqrt {2}}\operatorname {erf} ^{-1}(2p-1)=-{\sqrt {2}}\operatorname {erfc} ^{-1}(2p).}$

The standard normal cdf is used more often in probability and statistics, and the error function is used more often in other branches of mathematics.

The error function is a special case of the Mittag-Leffler function, and can also be expressed as a confluent hypergeometric function (Kummer's function):

${\displaystyle \operatorname {erf} (x)={\frac {2x}{\sqrt {\pi }}}M\left({\frac {1}{2}},{\frac {3}{2}},-x^{2}\right).}$

It has a simple expression in terms of the Fresnel integral.

In terms of the regularized gamma function P and the incomplete gamma function,

${\displaystyle \operatorname {erf} (x)=\operatorname {sgn}(x)P\left({\frac {1}{2}},x^{2}\right)={\frac {\operatorname {sgn}(x)}{\sqrt {\pi }}}\gamma \left({\frac {1}{2}},x^{2}\right).}$

${\displaystyle \operatorname {sgn}(x)}$  is the sign function.

### Generalized error functions

Graph of generalised error functions En(x):
grey curve: E1(x) = ${\displaystyle (1-e^{-x})/{\sqrt {\pi }}}$
red curve: E2(x) = erf(x)
green curve: E3(x)
blue curve: E4(x)
gold curve: E5(x).

Some authors discuss the more general functions:

${\displaystyle E_{n}(x)={\frac {n!}{\sqrt {\pi }}}\int _{0}^{x}e^{-t^{n}}\,dt={\frac {n!}{\sqrt {\pi }}}\sum _{p=0}^{\infty }(-1)^{p}{\frac {x^{np+1}}{(np+1)p!}}.}$

Notable cases are:

• E0(x) is a straight line through the origin: ${\displaystyle \textstyle E_{0}(x)={\dfrac {x}{e{\sqrt {\pi }}}}}$
• E2(x) is the error function, erf(x).

After division by n!, the En for odd n look similar (but not identical) to one another, and likewise the En for even n. All generalised error functions for n > 0 look similar on the positive x side of the graph.

These generalised functions can equivalently be expressed for x > 0 using the gamma function and incomplete gamma function:

${\displaystyle E_{n}(x)={\frac {1}{\sqrt {\pi }}}\Gamma (n)\left(\Gamma \left({\frac {1}{n}}\right)-\Gamma \left({\frac {1}{n}},x^{n}\right)\right),\quad \quad x>0.}$

Therefore, we can define the error function in terms of the incomplete gamma function:

${\displaystyle \operatorname {erf} (x)=1-{\frac {1}{\sqrt {\pi }}}\Gamma \left({\frac {1}{2}},x^{2}\right).}$
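The series definition can be checked against the special cases noted above. A Python sketch (the truncation at 80 terms is an ad hoc choice; the function name is ours):

```python
import math

def gen_erf(n, x, terms=80):
    """E_n(x) via the series n!/sqrt(pi) * sum (-1)^p x^(np+1)/((np+1) p!)."""
    s = 0.0
    for p in range(terms):
        s += (-1) ** p * x ** (n * p + 1) / ((n * p + 1) * math.factorial(p))
    return math.factorial(n) / math.sqrt(math.pi) * s

# E_2 recovers erf, and E_0 is the straight line x/(e*sqrt(pi)).
assert abs(gen_erf(2, 0.8) - math.erf(0.8)) < 1e-12
assert abs(gen_erf(0, 1.3) - 1.3 / (math.e * math.sqrt(math.pi))) < 1e-12
```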

### Iterated integrals of the complementary error function

The iterated integrals of the complementary error function are defined by[22]

{\displaystyle {\begin{aligned}\operatorname {i^{n}erfc} (z)&=\int _{z}^{\infty }\operatorname {i^{n-1}erfc} (\zeta )\,d\zeta \\\operatorname {i^{0}erfc} (z)&=\operatorname {erfc} (z)\\\operatorname {i^{1}erfc} (z)&=\operatorname {ierfc} (z)={\frac {1}{\sqrt {\pi }}}e^{-z^{2}}-z\operatorname {erfc} (z)\\\operatorname {i^{2}erfc} (z)&={\frac {1}{4}}\left[\operatorname {erfc} (z)-2z\operatorname {ierfc} (z)\right]\\\end{aligned}}}

The general recurrence formula is

${\displaystyle 2n\operatorname {i^{n}erfc} (z)=\operatorname {i^{n-2}erfc} (z)-2z\operatorname {i^{n-1}erfc} (z)}$

They have the power series

${\displaystyle i^{n}\operatorname {erfc} (z)=\sum _{j=0}^{\infty }{\frac {(-z)^{j}}{2^{n-j}j!\Gamma \left(1+{\frac {n-j}{2}}\right)}},}$

from which follow the symmetry properties

${\displaystyle i^{2m}\operatorname {erfc} (-z)=-i^{2m}\operatorname {erfc} (z)+\sum _{q=0}^{m}{\frac {z^{2q}}{2^{2(m-q)-1}(2q)!(m-q)!}}}$

and

${\displaystyle i^{2m+1}\operatorname {erfc} (-z)=i^{2m+1}\operatorname {erfc} (z)+\sum _{q=0}^{m}{\frac {z^{2q+1}}{2^{2(m-q)-1}(2q+1)!(m-q)!}}.}$
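The recurrence and its seed cases can be transcribed directly. The sketch below (recursive for clarity rather than efficiency; the function name is ours) checks the n = 2 case against the closed form given above:

```python
import math

def inerfc(n, z):
    """Iterated integral i^n erfc via the recurrence
    2n i^n erfc = i^(n-2) erfc - 2z i^(n-1) erfc,
    seeded with i^0 erfc = erfc and i^1 erfc = ierfc."""
    if n == 0:
        return math.erfc(z)
    if n == 1:
        return math.exp(-z * z) / math.sqrt(math.pi) - z * math.erfc(z)
    return (inerfc(n - 2, z) - 2 * z * inerfc(n - 1, z)) / (2 * n)

# n = 2 matches the closed form (erfc(z) - 2z ierfc(z)) / 4.
z = 0.7
closed = 0.25 * (math.erfc(z) - 2 * z * inerfc(1, z))
assert abs(inerfc(2, z) - closed) < 1e-15
```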

## References

1. ^ Andrews, Larry C. (1998). Special functions of mathematics for engineers. SPIE Press. p. 110. ISBN 9780819426161.
2. ^ Greene, William H.; Econometric Analysis (fifth edition), Prentice-Hall, 1993, p. 926, fn. 11
3. ^ Glaisher, James Whitbread Lee (July 1871). "On a class of definite integrals". London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science. 4. 42 (277): 294–302. doi:10.1080/14786447108640568. Retrieved 6 December 2017.
4. ^ Glaisher, James Whitbread Lee (September 1871). "On a class of definite integrals. Part II". London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science. 4. 42 (279): 421–436. doi:10.1080/14786447108640600. Retrieved 6 December 2017.
5. ^ Weisstein, Eric W. "Erf". MathWorld. Wolfram.
6. ^ Schöpf, H. M.; Supancic, P. H. (2014). "On Bürmann's Theorem and Its Application to Problems of Linear and Nonlinear Heat Transfer and Diffusion". The Mathematica Journal. 16. doi:10.3888/tmj.16-11.
7. ^ Weisstein, E. W. "Bürmann's Theorem". Wolfram MathWorld—A Wolfram Web Resource.
8. ^ Bergsma, Wicher (2006). "On a new correlation coefficient, its orthogonal decomposition and associated tests of independence". arXiv:math/0604627.
9. ^ Cuyt, Annie A. M.; Petersen, Vigdis B.; Verdonk, Brigitte; Waadeland, Haakon; Jones, William B. (2008). Handbook of Continued Fractions for Special Functions. Springer-Verlag. ISBN 978-1-4020-6948-2.
10. ^ Schlömilch, Oskar Xavier (1859). "Ueber facultätenreihen". Zeitschrift für Mathematik und Physik (in German). 4: 390–415. Retrieved 4 December 2017.
11. ^ Eq (3) on page 283 of Nielson, Niels (1906). Handbuch der theorie der gammafunktion (in German). Leipzig: B. G. Teubner. Retrieved 4 December 2017.
12. ^ Chiani, M.; Dardari, D.; Simon, M.K. (2003). "New Exponential Bounds and Approximations for the Computation of Error Probability in Fading Channels" (PDF). IEEE Transactions on Wireless Communications. 2 (4): 840–845. CiteSeerX 10.1.1.190.6761. doi:10.1109/TWC.2003.814350.
13. ^ Karagiannidis, G. K.; Lioumpas, A. S. (2007). "An Improved Approximation for the Gaussian Q-Function". IEEE Communications Letters. 11 (8): 644–646.
14. ^ Chang, Seok-Ho; Cosman, Pamela C.; Milstein, Laurence B. (November 2011). "Chernoff-Type Bounds for the Gaussian Error Function". IEEE Transactions on Communications. 59 (11): 2939–2944. doi:10.1109/TCOMM.2011.072011.100049.
15. ^ Winitzki, Serge (2003). "Uniform approximations for transcendental functions". Lecture Notes in Comput. Sci. 2667. Springer, Berlin. pp. 780–789. doi:10.1007/3-540-44839-X_82. ISBN 978-3-540-40155-1. (Sect. 3.1 "Error Function of Real Argument erf x")
16. ^ Zeng, Caibin; Chen, YangQuan (2015). "Global Padé approximations of the generalized Mittag-Leffler function and its inverse". Fractional Calculus and Applied Analysis. 18 (6): 1492–1506. arXiv:1310.5592. doi:10.1515/fca-2015-0086. Indeed, Winitzki [32] provided the so-called global Padé approximation
17. ^ Winitzki, Sergei (6 February 2008). "A handy approximation for the error function and its inverse" (PDF).
18. ^ Numerical Recipes in Fortran 77: The Art of Scientific Computing (ISBN 0-521-43064-X), 1992, page 214, Cambridge University Press.
19. ^ a b c Cody, W. J. (March 1993), "Algorithm 715: SPECFUN—A portable FORTRAN package of special function routines and test drivers" (PDF), ACM Trans. Math. Softw., 19 (1): 22–32, CiteSeerX 10.1.1.643.4394, doi:10.1145/151271.151273
20. ^ Zaghloul, M. R. (1 March 2007), "On the calculation of the Voigt line profile: a single proper integral with a damped sine integrand", Monthly Notices of the Royal Astronomical Society, 375 (3): 1043–1048, doi:10.1111/j.1365-2966.2006.11377.x
21. ^ John W. Craig, A new, simple and exact result for calculating the probability of error for two-dimensional signal constellations Archived 3 April 2012 at the Wayback Machine, Proceedings of the 1991 IEEE Military Communication Conference, vol. 2, pp. 571–575.
22. ^ Carslaw, H. S.; Jaeger, J. C. (1959), Conduction of Heat in Solids (2nd ed.), Oxford University Press, ISBN 978-0-19-853368-9, p 484