# Aitken's delta-squared process

In numerical analysis, Aitken's delta-squared process or Aitken extrapolation is a series acceleration method used for accelerating the rate of convergence of a sequence. It is named after Alexander Aitken, who introduced this method in 1926.[1] Its early form was known to Seki Kōwa (end of the 17th century), who applied it to the rectification of the circle, i.e. the calculation of π. It is most useful for accelerating the convergence of a sequence that is converging linearly.

## Definition

Given a sequence ${\displaystyle X={(x_{n})}_{n\in \mathbb {N} }}$ , one associates with this sequence the new sequence

${\displaystyle AX={\left({\frac {x_{n}\,x_{n+2}-x_{n+1}^{2}}{x_{n}+x_{n+2}-2\,x_{n+1}}}\right)}_{n\in \mathbb {N} },}$

which can, with improved numerical stability, also be written as

${\displaystyle (AX)_{n}=x_{n}-{\frac {(\Delta x_{n})^{2}}{\Delta ^{2}x_{n}}},}$

or equivalently as

${\displaystyle (AX)_{n}=x_{n+2}-{\frac {(\Delta x_{n+1})^{2}}{\Delta ^{2}x_{n}}}=x_{n+2}-{\frac {(x_{n+2}-x_{n+1})^{2}}{(x_{n+2}-x_{n+1})-(x_{n+1}-x_{n})}}}$

where

${\displaystyle \Delta x_{n}={(x_{n+1}-x_{n})},\ \Delta x_{n+1}={(x_{n+2}-x_{n+1})},}$

and

${\displaystyle \Delta ^{2}x_{n}=x_{n}-2x_{n+1}+x_{n+2}=\Delta x_{n+1}-\Delta x_{n},\ }$

for ${\displaystyle n=0,1,2,3,\dots \,}$

Clearly, ${\displaystyle AX}$  is ill-defined if the sequence of second differences ${\displaystyle \Delta ^{2}x}$  contains a zero element, or equivalently, if the sequence of first differences has a repeating term.

From a theoretical point of view, if that occurs only for a finite number of indices, one can simply consider the sequence ${\displaystyle AX}$  restricted to indices ${\displaystyle n>n_{0}}$  for a sufficiently large ${\displaystyle n_{0}}$ . From a practical point of view, one generally considers only the first few terms of the sequence, which usually provide the needed precision. Moreover, when computing the sequence numerically, one has to take care to stop the computation once rounding errors in the denominator become too large, since the Δ² operation may cancel too many significant digits. (For numerical calculation it is better to use ${\displaystyle \Delta x_{n+1}-\Delta x_{n}\ =(x_{n+2}-x_{n+1})-(x_{n+1}-x_{n})\ }$  rather than ${\displaystyle x_{n}-2x_{n+1}+x_{n+2}\ }$ .)
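
As an illustration, here is a minimal Python sketch of the transformation (the function name `aitken` and the cutoff `eps` are our own choices, not part of the original method):

```python
def aitken(x, eps=1e-16):
    """Aitken's delta-squared transformation of a sequence, given as a list.

    Uses the numerically stable form x[n] - (dx)^2 / d2x, where
    dx = x[n+1] - x[n] and d2x is computed as a difference of differences.
    """
    out = []
    for n in range(len(x) - 2):
        dx = x[n + 1] - x[n]                  # forward difference of x at n
        d2x = (x[n + 2] - x[n + 1]) - dx      # second difference, stable form
        if abs(d2x) < eps:                    # (AX)_n undefined; stop here
            break
        out.append(x[n] - dx * dx / d2x)
    return out
```

For example, `aitken([1.0, 1.5, 1.4166667, 1.4142157, 1.4142136])` approximately reproduces the Ax column of Example 1 below (the listed iterates are rounded to seven decimal places).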

## Properties

Aitken's delta-squared process is a method of acceleration of convergence, and a particular case of a nonlinear sequence transformation.

A sequence ${\displaystyle x}$  converges linearly to ${\displaystyle \ell }$  if there exists a number ${\displaystyle \mu \in (0,1)}$  such that

${\displaystyle \lim _{n\to \infty }{\frac {|x_{n+1}-\ell |}{|x_{n}-\ell |}}=\mu .}$

Aitken's method will accelerate the sequence ${\displaystyle x_{n}}$  if ${\displaystyle \lim _{n\to \infty }{\frac {(Ax)_{n}-\ell }{x_{n}-\ell }}=0.}$

${\displaystyle A}$  is not a linear operator, but a constant term drops out: ${\displaystyle A[x-\ell ]=Ax-\ell }$  if ${\displaystyle \ell }$  is a constant. This is clear from the expression of ${\displaystyle Ax}$  in terms of the finite difference operator ${\displaystyle \Delta }$ .

Although the new sequence does not in general converge quadratically, it can be shown that for a fixed point process, that is, for an iterated function sequence ${\displaystyle x_{n+1}=f(x_{n})}$  converging to a fixed point of some function ${\displaystyle f}$ , the accelerated convergence is quadratic. In this case, the technique is known as Steffensen's method.
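
A minimal sketch of Steffensen's method in Python (the function name `steffensen` and its parameters are illustrative choices, not a standard API); each cycle performs two applications of ${\displaystyle f}$ and one Aitken step, and the extrapolated value becomes the next iterate:

```python
import math

def steffensen(f, x0, tol=1e-10, max_iter=20, eps=1e-16):
    """Fixed-point iteration x -> f(x), accelerated by one Aitken step per cycle."""
    x = x0
    for _ in range(max_iter):
        x1 = f(x)
        x2 = f(x1)
        denom = (x2 - x1) - (x1 - x)            # second difference, stable form
        if abs(denom) < eps:                    # effectively converged already
            return x2
        x_next = x2 - (x2 - x1) ** 2 / denom    # Aitken step becomes new iterate
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

print(steffensen(math.cos, 1.0))  # ~0.7390851332, the fixed point of cos(x)
```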

Empirically, the A-operation eliminates the "most important error term". One can check this by considering a sequence of the form ${\displaystyle x_{n}=\ell +a^{n}+b^{n}}$ , where ${\displaystyle 0<b<a<1}$ : the sequence ${\displaystyle Ax}$  then goes to the limit ${\displaystyle \ell }$  like ${\displaystyle b^{n}}$  goes to zero.
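
This can be verified numerically with a short sketch (the values ${\displaystyle \ell =1}$ , ${\displaystyle a=0.5}$ , ${\displaystyle b=0.1}$  are illustrative choices):

```python
l, a, b = 1.0, 0.5, 0.1   # illustrative values satisfying 0 < b < a < 1
x = [l + a**n + b**n for n in range(12)]
ax = [x[n] - (x[n+1] - x[n])**2 / (x[n+2] - 2*x[n+1] + x[n]) for n in range(10)]
for n in range(10):
    # |x_n - l| / a^n tends to 1, while |Ax_n - l| / b^n tends to a constant:
    # the a^n error term has been eliminated, leaving an error of order b^n.
    print(n, abs(x[n] - l) / a**n, abs(ax[n] - l) / b**n)
```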

Geometrically, the graph of an exponential function ${\displaystyle f(t)}$  that satisfies ${\displaystyle f(n)=x_{n}}$ , ${\displaystyle f(n+1)=x_{n+1}}$  and ${\displaystyle f(n+2)=x_{n+2}}$  has a horizontal asymptote at ${\displaystyle {\frac {x_{n}x_{n+2}-x_{n+1}^{2}}{x_{n}-2x_{n+1}+x_{n+2}}}}$  (if ${\displaystyle x_{n}-2x_{n+1}+x_{n+2}\neq 0}$ ).
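
To see why, write ${\displaystyle f(t)=\ell +c\,q^{t}}$  with ${\displaystyle 0<|q|<1}$ , so that the asymptote is ${\displaystyle \ell }$ . The three interpolation conditions give ${\displaystyle \Delta x_{n}=c\,q^{n}(q-1)}$  and ${\displaystyle \Delta x_{n+1}=c\,q^{n+1}(q-1)}$ , hence ${\displaystyle q=\Delta x_{n+1}/\Delta x_{n}}$  and ${\displaystyle c\,q^{n}=\Delta x_{n}/(q-1)=(\Delta x_{n})^{2}/\Delta ^{2}x_{n}}$ , so ${\displaystyle \ell =x_{n}-c\,q^{n}=(AX)_{n}}$ , in agreement with the closed form above.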

One can also show that if the sequence ${\displaystyle x}$  converges to its limit ${\displaystyle \ell }$  with order strictly greater than 1, then ${\displaystyle Ax}$  does not have a better rate of convergence. (In practice, one rarely has, say, quadratic convergence, which would mean over 30 correct decimal places after 5 iterations and over 100 after 7, starting from 1 correct digit; usually no acceleration is needed in that case.)

In practice, ${\displaystyle Ax}$  converges much faster to the limit than ${\displaystyle x}$  does, as demonstrated by the example calculations below. Usually, it is much cheaper to calculate ${\displaystyle Ax}$  (involving only calculation of differences, one multiplication and one division) than to calculate many more terms of the sequence ${\displaystyle x}$ . Care must be taken, however, to avoid introducing errors due to insufficient precision when calculating the differences in the numerator and denominator of the expression.

## Example calculations

Example 1: The value of ${\displaystyle {\sqrt {2}}\approx 1.4142136}$  can be approximated by assuming an initial value for ${\displaystyle a_{0}}$  and iterating the following:

${\displaystyle a_{n+1}={\frac {a_{n}+{\frac {2}{a_{n}}}}{2}}.}$

Starting with ${\displaystyle a_{0}=1:}$

| n | x = iterated value | Ax |
|---|---|---|
| 0 | 1 | 1.4285714 |
| 1 | 1.5 | 1.4141414 |
| 2 | 1.4166667 | 1.4142136 |
| 3 | 1.4142157 | -- |
| 4 | 1.4142136 | -- |

It is worth noting here that Aitken's method does not save two iteration steps: computation of the first three Ax values required the first five x values. Also, the second Ax value is decidedly inferior to the 4th x value, mostly because Aitken's process assumes linear, rather than quadratic, convergence[citation needed].
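
The table can be reproduced with a short Python sketch (illustrative, using the direct form of the transformation):

```python
x = [1.0]
for _ in range(4):                          # iterates x_1 through x_4
    x.append((x[-1] + 2.0 / x[-1]) / 2.0)   # the iteration from Example 1
ax = [x[n] - (x[n+1] - x[n])**2 / (x[n+2] - 2*x[n+1] + x[n]) for n in range(3)]
for n in range(5):
    print(n, f"{x[n]:.7f}", f"{ax[n]:.7f}" if n < 3 else "--")
```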

Example 2: The value of ${\displaystyle {\frac {\pi }{4}}}$  may be calculated as an infinite sum:

${\displaystyle {\frac {\pi }{4}}=\sum _{n=0}^{\infty }{\frac {(-1)^{n}}{2n+1}}\approx 0.785398}$

| n | term | x = partial sum | Ax |
|---|---|---|---|
| 0 | 1 | 1 | 0.79166667 |
| 1 | −0.33333333 | 0.66666667 | 0.78333333 |
| 2 | 0.2 | 0.86666667 | 0.78630952 |
| 3 | −0.14285714 | 0.72380952 | 0.78492063 |
| 4 | 0.11111111 | 0.83492063 | 0.78567821 |
| 5 | −9.0909091×10⁻² | 0.74401154 | 0.78522034 |
| 6 | 7.6923077×10⁻² | 0.82093462 | 0.78551795 |
| 7 | −6.6666667×10⁻² | 0.75426795 | -- |
| 8 | 5.8823529×10⁻² | 0.81309148 | -- |

In this example, Aitken's method is applied to a sublinearly converging series and accelerates its convergence considerably. The accelerated sequence is still sublinear, but much faster than the original: the first Ax value, whose computation required the first three x values, is closer to the limit than the eighth x value.
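
This table, too, can be reproduced with a short sketch (again illustrative, using the direct form of the transformation):

```python
import math

partial, s = [], 0.0
for n in range(9):
    s += (-1) ** n / (2 * n + 1)   # partial sums of the Leibniz series for pi/4
    partial.append(s)
ax = [partial[n] - (partial[n+1] - partial[n])**2
      / (partial[n+2] - 2*partial[n+1] + partial[n]) for n in range(7)]
print(ax[0])                                                   # 0.79166666...
print(abs(ax[0] - math.pi/4) < abs(partial[8] - math.pi/4))    # True
```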

## Example pseudocode for Aitken extrapolation

The following is an example of using Aitken extrapolation to help find the limit of the sequence ${\displaystyle x_{n+1}=f(x_{n})}$  given an initial value ${\displaystyle x_{0}}$ ; the limit of the sequence is assumed to be a fixed point ${\displaystyle \alpha =f(\alpha )}$ . For instance, we could take ${\displaystyle x_{n+1}={\frac {1}{2}}(x_{n}+{\frac {2}{x_{n}}})}$  with ${\displaystyle x_{0}=1}$ , so that ${\displaystyle f(x)={\frac {1}{2}}(x+{\frac {2}{x}})}$ , which has the fixed point ${\displaystyle {\sqrt {2}}}$  (see Methods of computing square roots).

This pseudocode also computes the Aitken approximation to ${\displaystyle f'(\alpha )}$ . The Aitken extrapolates are denoted by `aitkenX`. During the computation of each extrapolate, we must check whether the denominator has become too small, which can happen once the iterates already agree to high accuracy, since otherwise a large amount of rounding error could be introduced. This small threshold is denoted by `epsilon`.

```
%These choices depend on the problem being solved
x0 = 1                      %The initial value
f(x) = (1/2)*(x + 2/x)      %The function that finds the next element in the sequence
tolerance = 10^-10          %10 digit accuracy is desired
epsilon = 10^-16            %Don't want to divide by a number smaller than this

maxIterations = 20          %Don't allow the iterations to continue indefinitely
haveWeFoundSolution = false %Were we able to find the solution to within the desired tolerance? Not yet.

for i = 1 : maxIterations
    x1 = f(x0)
    x2 = f(x1)

    if (x1 ~= x0)
        lambda = absoluteValue((x2 - x1)/(x1 - x0))  %OPTIONAL: computes an approximation of |f'(fixedPoint)|, which is denoted by lambda
    end

    denominator = (x2 - x1) - (x1 - x0)

    if (absoluteValue(denominator) < epsilon)        %Don't want to divide by too small of a number
        print("WARNING: denominator is too small")
        break                                        %Leave the loop
    end

    aitkenX = x2 - ( (x2 - x1)^2 )/denominator

    if (absoluteValue(aitkenX - x2) < tolerance)     %If the result is within tolerance
        print("The fixed point is ", aitkenX)        %Display the result of the Aitken extrapolation
        haveWeFoundSolution = true
        break                                        %Done, so leave the loop
    end

    x0 = aitkenX                                     %Update x0 to start again
end

if (haveWeFoundSolution == false)   %If we weren't able to find a solution to within the desired tolerance
    print("Warning: Not able to find solution to within the desired tolerance of ", tolerance)
    print("The last computed extrapolate was ", aitkenX)
end
```
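
For comparison, a direct Python rendering of the pseudocode might look like the following (a sketch; the function name and defaults mirror the illustrative choices above):

```python
def aitken_extrapolate(f, x0, tolerance=1e-10, epsilon=1e-16, max_iterations=20):
    """Iterate x -> f(x), accelerating with Aitken's delta-squared step."""
    aitken_x = x0
    for _ in range(max_iterations):
        x1 = f(x0)
        x2 = f(x1)
        if x1 != x0:
            lam = abs((x2 - x1) / (x1 - x0))  # optional: approximates |f'(fixed point)|
        denominator = (x2 - x1) - (x1 - x0)
        if abs(denominator) < epsilon:        # avoid dividing by a tiny number
            print("WARNING: denominator is too small")
            break
        aitken_x = x2 - (x2 - x1) ** 2 / denominator
        if abs(aitken_x - x2) < tolerance:    # converged to the desired accuracy
            print("The fixed point is", aitken_x)
            return aitken_x
        x0 = aitken_x                         # restart from the extrapolated value
    print("Warning: not able to find solution to within the desired tolerance of", tolerance)
    print("The last computed extrapolate was", aitken_x)
    return aitken_x

aitken_extrapolate(lambda x: (x + 2 / x) / 2, 1.0)   # finds sqrt(2) ~ 1.41421356
```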