# AKS primality test

The AKS primality test (also known as Agrawal–Kayal–Saxena primality test and cyclotomic AKS test) is a deterministic primality-proving algorithm created and published by Manindra Agrawal, Neeraj Kayal, and Nitin Saxena, computer scientists at the Indian Institute of Technology Kanpur, on August 6, 2002, in a paper titled "PRIMES is in P".[1] The authors received many accolades, including the 2006 Gödel Prize and the 2006 Fulkerson Prize, for this work.

The algorithm determines whether a number is prime or composite within polynomial time.

## Importance

AKS is the first primality-proving algorithm to be simultaneously general, polynomial, deterministic, and unconditional. Earlier algorithms, developed over centuries, achieved at most three of these properties, but never all four.

• The AKS algorithm can be used to verify the primality of any general number given. Many fast primality tests are known that work only for numbers with certain properties. For example, the Lucas–Lehmer test for Mersenne numbers works only for Mersenne numbers, while Pépin's test can be applied to Fermat numbers only.
• The maximum running time of the algorithm can be expressed as a polynomial in the number of digits of the target number. ECPP and APR conclusively prove or disprove that a given number is prime, but are not known to have polynomial time bounds for all inputs.
• The algorithm is guaranteed to distinguish deterministically whether the target number is prime or composite. Randomized tests, such as Miller–Rabin and Baillie–PSW, can test any given number for primality in polynomial time, but are known to produce only a probabilistic result.
• The correctness of AKS is not conditional on any subsidiary unproven hypothesis. In contrast, the Miller test is fully deterministic and runs in polynomial time over all inputs, but its correctness depends on the truth of the yet-unproven generalized Riemann hypothesis.

## Concepts

The AKS primality test is based upon the following theorem: An integer n (≥ 2) is prime if and only if the polynomial congruence relation

$(x - a)^{n} \equiv (x^{n} - a) \pmod{n} \qquad (1)$

holds for all integers a coprime to n (or even just for some such integer a, in particular for a = 1).[1] Note that x is a free variable: it is never replaced by a number; instead, one expands $(x-a)^n$ and compares the coefficients of like powers of x.
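For small n this coefficient comparison can be carried out directly via the binomial theorem. The following Python sketch (the function name is illustrative; the expansion has n + 1 terms, so this takes exponential time in the number of digits and is not the AKS test itself) checks congruence (1):

```python
from math import comb

def fermat_poly_check(n, a=1):
    """Coefficient-wise check of (x - a)^n ≡ x^n - a (mod n), n >= 2."""
    # Binomial theorem: coefficient of x^k in (x - a)^n is C(n,k) * (-a)^(n-k).
    lhs = [comb(n, k) * (-a) ** (n - k) % n for k in range(n + 1)]
    # x^n - a has coefficient -a at x^0, 1 at x^n, and zero elsewhere.
    rhs = [(-a) % n] + [0] * (n - 1) + [1]
    return lhs == rhs

# Primes satisfy the congruence; composites fail it, e.g. for n = 9
# the coefficient of x^3 is C(9,3) = 84 ≡ 3 (mod 9), not 0.
```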

This theorem is a generalization to polynomials of Fermat's little theorem, and can easily be proven using the binomial theorem together with the following property of the binomial coefficient:

${n \choose k} \equiv 0 \pmod{n}$ for all $0 < k < n$ if and only if n is prime.

While the relation (1) constitutes a primality test in itself, verifying it takes exponential time. Therefore, to reduce the computational complexity, AKS makes use of the related congruence

$(x - a)^{n} \equiv (x^{n} - a) \pmod{(n,x^{r}-1)} \qquad (2)$

which is the same as:

$(x - a)^n - (x^n - a) = nf + (x^r - 1)g \qquad (3)$

for some polynomials f and g. This congruence can be checked in polynomial time with respect to the number of digits in n, because it is provable that r need only be logarithmic with respect to n. Note that all primes satisfy this relation (choosing g = 0 in (3) gives (1), which holds for n prime). However, some composite numbers also satisfy the relation. The proof of correctness for AKS consists of showing that there exists a suitably small r and suitably small set of integers A such that, if the congruence holds for all such a in A, then n must be prime.
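A sketch of why (2) is cheap to check, assuming only what the text states: raise x − a to the n-th power by repeated squaring, reducing exponents modulo r (since x^r ≡ 1 modulo x^r − 1) and coefficients modulo n, so that intermediate polynomials never exceed degree r − 1. The helper names below are illustrative:

```python
def polymul_mod(p, q, r, n):
    """Multiply coefficient lists p, q modulo (x^r - 1, n): exponents wrap mod r."""
    out = [0] * r
    for i, pi in enumerate(p):
        if pi:
            for j, qj in enumerate(q):
                out[(i + j) % r] = (out[(i + j) % r] + pi * qj) % n
    return out

def congruence_holds(n, r, a):
    """Check (x - a)^n ≡ x^n - a (mod x^r - 1, n) by square-and-multiply."""
    base = [(-a) % n, 1] + [0] * (r - 2)   # x - a as a coefficient list
    result = [1] + [0] * (r - 1)           # the constant polynomial 1
    e = n
    while e:
        if e & 1:
            result = polymul_mod(result, base, r, n)
        base = polymul_mod(base, base, r, n)
        e >>= 1
    rhs = [0] * r                          # x^n - a reduced mod (x^r - 1, n)
    rhs[n % r] = 1
    rhs[0] = (rhs[0] - a) % n
    return result == rhs
```

Each of the O(log n) squarings costs O(r²) coefficient operations, and r is polylogarithmic in n, so the whole check is polynomial in the number of digits.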


## History and running time

In the first version of the above-cited paper, the authors proved the asymptotic time complexity of the algorithm to be Õ$(\log^{12}(n))$. In other words, the algorithm takes less time than the twelfth power of the number of digits in n times a polylogarithmic (in the number of digits) factor. However, the upper bound proved in the paper was rather loose; indeed, a widely held conjecture about the distribution of the Sophie Germain primes would, if true, immediately cut the worst case down to Õ$(\log^6(n))$.

In the months following the discovery, new variants appeared (Lenstra 2002, Pomerance 2002, Berrizbeitia 2003, Cheng 2003, Bernstein 2003a/b, Lenstra and Pomerance 2003), which improved the speed of computation by orders of magnitude. Due to the existence of the many variants, Crandall and Papadopoulos refer to the "AKS-class" of algorithms in their scientific paper "On the implementation of AKS-class primality tests", published in March 2003.

In response to some of these variants, and to other feedback, the paper "PRIMES is in P" was updated with a new formulation of the AKS algorithm and of its proof of correctness. (This version was eventually published in Annals of Mathematics.) While the basic idea remained the same, r was chosen in a new manner, and the proof of correctness was more coherently organized. While the previous proof had relied on many different methods, the new version relied almost exclusively on the behavior of cyclotomic polynomials over finite fields. The new version also allowed for an improved bound on the time complexity, which can now be shown by simple methods to be Õ$(\log^{10.5}(n))$. Using additional results from sieve theory, this can be further reduced to Õ$(\log^{7.5}(n))$.

In 2005, Carl Pomerance and H. W. Lenstra, Jr. demonstrated a variant of AKS that runs in Õ$(\log^{6}(n))$ operations, where n is the number to be tested, a marked improvement over the initial Õ$(\log^{12}(n))$ bound of the original algorithm.[2] An updated version of the paper is also available.[3]

Agrawal, Kayal and Saxena suggest a variant of their algorithm which would run in Õ$(\log^{3}(n))$ if a certain conjecture made by Bhattacharjee and Pandey in 2001 is true (Agrawal's conjecture); however, a heuristic argument by Hendrik Lenstra and Carl Pomerance suggests that it is probably false.[1]


## Algorithm

The algorithm is as follows:[1]

Input: integer n > 1.
1. If n = ab for integers a > 1 and b > 1, output composite.
2. Find the smallest r such that $\mathrm{ord}_r(n) > (\log_2 n)^{2}$.
3. If 1 < gcd(a,n) < n for some a ≤ r, output composite.
4. If n ≤ r, output prime.
5. For a = 1 to $\lfloor \sqrt{\varphi(r)}\log_2(n) \rfloor$ do
if $(X+a)^{n} \not\equiv X^{n}+a \pmod{X^{r}-1,\, n}$, output composite;
6. Output prime.

Here $\mathrm{ord}_r(n)$ is the multiplicative order of n modulo r, $\log_2$ is the binary logarithm, and $\varphi(r)$ is Euler's totient function of r.
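The six steps can be transcribed almost literally into Python. This is an illustrative sketch that follows the textbook description (naive perfect-power test, naive totient, dense coefficient lists), not an optimized implementation:

```python
from math import gcd, isqrt, log2

def _polymul(p, q, r, n):
    """Multiply two coefficient lists modulo (X^r - 1, n)."""
    out = [0] * r
    for i, pi in enumerate(p):
        if pi:
            for j, qj in enumerate(q):
                out[(i + j) % r] = (out[(i + j) % r] + pi * qj) % n
    return out

def _congruence(n, r, a):
    """Step 5 test: (X + a)^n ≡ X^n + a (mod X^r - 1, n)."""
    base = [a % n, 1] + [0] * (r - 2)       # X + a
    result = [1] + [0] * (r - 1)            # the constant polynomial 1
    e = n
    while e:                                # square-and-multiply
        if e & 1:
            result = _polymul(result, base, r, n)
        base = _polymul(base, base, r, n)
        e >>= 1
    rhs = [0] * r                           # X^n + a reduced mod (X^r - 1, n)
    rhs[n % r] = 1
    rhs[0] = (rhs[0] + a) % n
    return result == rhs

def _order(n, r):
    """Multiplicative order of n mod r, or 0 if gcd(n, r) > 1."""
    if gcd(n, r) != 1:
        return 0
    k, v = 1, n % r
    while v != 1:
        v = v * n % r
        k += 1
    return k

def aks(n):
    if n < 2:
        return False
    # Step 1: reject perfect powers a**b with b > 1.
    for b in range(2, n.bit_length() + 1):
        root = round(n ** (1 / b))
        if any(c > 1 and c ** b == n for c in (root - 1, root, root + 1)):
            return False
    # Step 2: smallest r with ord_r(n) > (log2 n)^2.
    limit, r = log2(n) ** 2, 2
    while _order(n, r) <= limit:
        r += 1
    # Step 3: a nontrivial common factor with some a <= r means composite.
    if any(1 < gcd(a, n) < n for a in range(2, r + 1)):
        return False
    # Step 4: at this point n <= r implies prime.
    if n <= r:
        return True
    # Step 5: polynomial congruences for a = 1 .. floor(sqrt(phi(r)) * log2(n)).
    phi = sum(1 for k in range(1, r) if gcd(k, r) == 1)
    for a in range(1, int(phi ** 0.5 * log2(n)) + 1):
        if not _congruence(n, r, a):
            return False
    # Step 6.
    return True
```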

If n is a prime number, the algorithm will always return prime: since n is prime, steps 1 and 3 will never return composite. Step 5 will also never return composite, because (2) is true for all prime numbers n. Therefore, the algorithm will return prime either in step 4 or in step 6.

Conversely, if n is composite, the algorithm will always return composite: suppose instead that it returns prime, which can only happen in step 4 or step 6. In the first case, since n ≤ r, the composite n has a factor a ≤ r with 1 < gcd(a,n) < n, so step 3 would already have returned composite, a contradiction. The remaining possibility is that the algorithm returns prime in step 6. The authors' article[1] proves that this cannot happen: the congruences tested in step 5 are sufficient to guarantee that n is prime, so a composite n must fail one of them and be reported composite in step 5.


## References

1. Agrawal, Manindra; Kayal, Neeraj; Saxena, Nitin (2004). "PRIMES is in P". Annals of Mathematics 160 (2): 781–793. doi:10.4007/annals.2004.160.781. JSTOR 3597229.
2. H. W. Lenstra, Jr. and Carl Pomerance, "Primality testing with Gaussian periods", preliminary version, July 20, 2005.
3. H. W. Lenstra, Jr. and Carl Pomerance, "Primality testing with Gaussian periods", version of April 12, 2011.