# Asymmetric numeral systems


Asymmetric numeral systems (ANS)[1] is a family of entropy encoding methods introduced by Jarosław (Jarek) Duda[2] from Jagiellonian University, used in data compression since 2014[3] due to improved performance compared to previously used methods, being up to 30 times faster.[4] ANS combines the compression ratio of arithmetic coding (which uses a nearly accurate probability distribution), with a processing cost similar to that of Huffman coding. In the tabled ANS (tANS) variant, this is achieved by constructing a finite-state machine to operate on a large alphabet without using multiplication.

Among others, ANS is used in the Facebook Zstandard compressor[5][6] (also used e.g. in Linux kernel,[7] Android[8] operating system, was published as RFC 8478 for MIME[9] and HTTP[10]), in the Apple LZFSE compressor,[11] Google Draco 3D compressor[12](used e.g. in Pixar Universal Scene Description format[13]) and PIK image compressor,[14] in CRAM DNA compressor[15] from SAMtools utilities, Dropbox DivANS compressor,[16] and in JPEG XL[17] next generation image compression standard.

The basic idea is to encode information into a single natural number ${\displaystyle x}$. In the standard binary number system, we can add a bit ${\displaystyle s\in \{0,1\}}$ of information to ${\displaystyle x}$ by appending ${\displaystyle s}$ at the end of ${\displaystyle x}$ which gives us ${\displaystyle x'=2x+s}$. For an entropy coder, this is optimal if ${\displaystyle \Pr(0)=\Pr(1)=1/2}$. ANS generalizes this process for arbitrary sets of symbols ${\displaystyle s\in S}$ with an accompanying probability distribution ${\displaystyle (p_{s})_{s\in S}}$. In ANS, if ${\displaystyle x'}$ is the result of appending the information from ${\displaystyle s}$ to ${\displaystyle x}$, then ${\displaystyle x'\approx x\cdot p_{s}^{-1}}$. Equivalently, ${\displaystyle \log _{2}(x')\approx \log _{2}(x)+\log _{2}(1/p_{s})}$, where ${\displaystyle \log _{2}(x)}$ is the number of bits of information stored in the number ${\displaystyle x}$ and ${\displaystyle \log _{2}(1/p_{s})}$ is the number of bits contained in the symbol ${\displaystyle s}$.

For the encoding rule, the set of natural numbers is split into disjoint subsets corresponding to the different symbols – like into even and odd numbers, but with densities corresponding to the probability distribution of the symbols to encode. Then, to add the information from symbol ${\displaystyle s}$ to the information already stored in the current number ${\displaystyle x}$, we go to the number ${\displaystyle x'=C(x,s)\approx x/p_{s}}$, which is the position of the ${\displaystyle x}$-th appearance in the subset corresponding to ${\displaystyle s}$.

There are alternative ways to apply it in practice – direct mathematical formulas for encoding and decoding steps (uABS and rANS variants), or one can put the entire behavior into a table (tANS variant). Renormalization is used to prevent ${\displaystyle x}$ going to infinity – transferring accumulated bits to or from the bitstream.

## Entropy coding

Imagine you want to encode a sequence of 1,000 zeros and ones, which would take 1000 bits to store directly. However, if it is somehow known that it only contains 1 zero and 999 ones, it would be sufficient to encode the zero's position, which requires only ${\displaystyle \lceil \log _{2}(1000)\rceil =10}$  bits here instead of the original 1000 bits.

Generally, such length-${\displaystyle n}$ sequences containing ${\displaystyle pn}$ zeros and ${\displaystyle (1-p)n}$ ones, for some probability ${\displaystyle p\in (0,1)}$, are called combinations. Using Stirling's approximation, their asymptotic number is

${\displaystyle {n \choose pn}\approx 2^{nh(p)}{\text{ for large }}n{\text{ and }}h(p)=-p\log _{2}(p)-(1-p)\log _{2}(1-p),}$

called Shannon entropy.

Hence, to choose one such sequence we need approximately ${\displaystyle nh(p)}$  bits. It is still ${\displaystyle n}$  bits if ${\displaystyle p=1/2}$ , however, it can also be much smaller. For example we need only ${\displaystyle \approx n/2}$  bits for ${\displaystyle p=0.11}$ .
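The claim for ${\displaystyle p=0.11}$ can be checked numerically; a minimal sketch (the helper name `h` is ours):

```python
import math

def h(p):
    """Shannon entropy of a binary source, in bits per symbol."""
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

print(h(0.5))    # exactly 1 bit per symbol
print(h(0.11))   # ~0.4999, so about n/2 bits for a length-n sequence
```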

An entropy coder allows the encoding of a sequence of symbols using approximately the Shannon entropy bits per symbol. For example ANS could be directly used to enumerate combinations: assign a different natural number to every sequence of symbols having fixed proportions in a nearly optimal way.

In contrast to encoding combinations, this probability distribution usually varies in data compressors. For this purpose, Shannon entropy can be seen as a weighted average: a symbol of probability ${\displaystyle p}$  contains ${\displaystyle \log _{2}(1/p)}$  bits of information. ANS encodes information into a single natural number ${\displaystyle x}$ , interpreted as containing ${\displaystyle \log _{2}(x)}$  bits of information. Adding information from a symbol of probability ${\displaystyle p}$  increases this informational content to ${\displaystyle \log _{2}(x)+\log _{2}(1/p)=\log _{2}(x/p)}$ . Hence, the new number containing both information should be ${\displaystyle x'\approx x/p}$ .

## Basic concepts of ANS

Comparison of the concept of arithmetic coding (left) and ANS (right). Both can be seen as generalizations of standard numeral systems, which are optimal for a uniform probability distribution of digits, into systems optimized for some chosen probability distribution. Arithmetic or range coding corresponds to adding new information in the most significant position, while ANS generalizes adding information in the least significant position. Its coding rule is "x goes to the x-th appearance of the subset of natural numbers corresponding to the currently encoded symbol". In the presented example, the sequence (01111) is encoded into the natural number 18, which is smaller than the 47 obtained using the standard binary system, due to better agreement with the symbol frequencies of the sequence being encoded. The advantage of ANS is storing information in a single natural number, in contrast to the two numbers defining a range.

Imagine there is some information stored in a natural number ${\displaystyle x}$ , for example as bit sequence of its binary expansion. To add information from a binary variable ${\displaystyle s}$ , we can use coding function ${\displaystyle x'=C(x,s)=2x+s}$ , which shifts all bits one position up, and place the new bit in the least significant position. Now decoding function ${\displaystyle D(x')=(\lfloor x'/2\rfloor ,\mathrm {mod} (x',2))}$  allows one to retrieve the previous ${\displaystyle x}$  and this added bit: ${\displaystyle D(C(x,s))=(x,s),\ C(D(x'))=x'}$ . We can start with ${\displaystyle x=1}$  initial state, then use the ${\displaystyle C}$  function on the successive bits of a finite bit sequence to obtain a final ${\displaystyle x}$  number storing this entire sequence. Then using ${\displaystyle D}$  function multiple times until ${\displaystyle x=1}$  allows one to retrieve the bit sequence in reversed order.
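The coding and decoding functions above can be sketched directly; a minimal example (variable names are ours):

```python
def C(x, s):
    """Append bit s to state x in the least significant position."""
    return 2 * x + s

def D(x):
    """Retrieve the previous state and the appended bit."""
    return x // 2, x % 2

# Encode the bit sequence 1,0,1,1 starting from the initial state x = 1.
x = 1
for s in (1, 0, 1, 1):
    x = C(x, s)
# x is now 0b11011 = 27: the initial 1 followed by the encoded bits.

# Decode until the initial state; bits come back in reverse order.
bits = []
while x > 1:
    x, s = D(x)
    bits.append(s)
# bits == [1, 1, 0, 1], the encoded sequence reversed
```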

The above procedure is optimal for the uniform (symmetric) probability distribution of symbols ${\displaystyle \Pr(0)=\Pr(1)=1/2}$. ANS generalizes it to make it optimal for any chosen (asymmetric) probability distribution of symbols: ${\displaystyle \Pr(s)=p_{s}}$. While ${\displaystyle s}$ in the above example was choosing between even and odd ${\displaystyle C(x,s)}$, in ANS this even/odd division of natural numbers is replaced with a division into subsets having densities corresponding to the assumed probability distribution ${\displaystyle \{p_{s}\}_{s}}$: up to position ${\displaystyle x}$, there are approximately ${\displaystyle xp_{s}}$ occurrences of symbol ${\displaystyle s}$.

The coding function ${\displaystyle C(x,s)}$  returns the ${\displaystyle x}$ -th appearance from such subset corresponding to symbol ${\displaystyle s}$ . The density assumption is equivalent to condition ${\displaystyle x'=C(x,s)\approx x/p_{s}}$ . Assuming that a natural number ${\displaystyle x}$  contains ${\displaystyle \log _{2}(x)}$  bits information, ${\displaystyle \log _{2}(C(x,s))\approx \log _{2}(x)+\log _{2}(1/p_{s})}$ . Hence the symbol of probability ${\displaystyle p_{s}}$  is encoded as containing ${\displaystyle \approx \log _{2}(1/p_{s})}$  bits of information as it is required from entropy coders.

## Uniform binary variant (uABS)

Let us start with the binary alphabet and a probability distribution ${\displaystyle \Pr(1)=p}$ , ${\displaystyle \Pr(0)=1-p}$ . Up to position ${\displaystyle x}$  we want approximately ${\displaystyle p\cdot x}$  analogues of odd numbers (for ${\displaystyle s=1}$ ). We can choose this number of appearances as ${\displaystyle \lceil x\cdot p\rceil }$ , getting ${\displaystyle s=\lceil (x+1)\cdot p\rceil -\lceil x\cdot p\rceil }$ . This variant is called uABS and leads to the following decoding and encoding functions:[18]

Decoding:

s = ceil((x+1)*p) - ceil(x*p)  // 0 if fract(x*p) < 1-p, else 1
if s = 0 then new_x = x - ceil(x*p)   // D(x) = (new_x, 0)
if s = 1 then new_x = ceil(x*p)  // D(x) = (new_x, 1)


Encoding:

if s = 0 then new_x = ceil((x+1)/(1-p)) - 1 // C(x,0) = new_x
if s = 1 then new_x = floor(x/p)  // C(x,1) = new_x
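The formulas above can be made runnable; a sketch using exact rational arithmetic (the choice of `Fraction`, to sidestep floating-point rounding at subset boundaries, is ours):

```python
from fractions import Fraction
from math import ceil, floor

p = Fraction(3, 10)  # Pr(1) = 0.3

def C(x, s):
    """uABS encoding step: append symbol s to state x."""
    if s == 0:
        return ceil((x + 1) / (1 - p)) - 1
    return floor(x / p)

def D(x):
    """uABS decoding step: retrieve (previous state, symbol)."""
    s = ceil((x + 1) * p) - ceil(x * p)
    if s == 0:
        return x - ceil(x * p), 0
    return ceil(x * p), 1

# Encoding '0100' from x = 1 reproduces the worked example below: x = 14.
x = 1
for s in (0, 1, 0, 0):
    x = C(x, s)
```

Decoding inverts encoding exactly: `D(C(x, s)) == (x, s)` for every state and symbol.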


For ${\displaystyle p=1/2}$  it amounts to the standard binary system (with 0 and 1 inverted), for a different ${\displaystyle p}$  it becomes optimal for this given probability distribution. For example, for ${\displaystyle p=0.3}$  these formulas lead to a table for small values of ${\displaystyle x}$ :

| ${\displaystyle x'=C(x,s)}$ | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 | 18 | 19 | 20 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| ${\displaystyle s=0}$ |  | 0 | 1 |  | 2 | 3 |  | 4 | 5 | 6 |  | 7 | 8 |  | 9 | 10 |  | 11 | 12 | 13 |  |
| ${\displaystyle s=1}$ | 0 |  |  | 1 |  |  | 2 |  |  |  | 3 |  |  | 4 |  |  | 5 |  |  |  | 6 |

The symbol ${\displaystyle s=1}$  corresponds to a subset of natural numbers with density ${\displaystyle p=0.3}$ , which in this case are positions ${\displaystyle \{0,3,6,10,13,16,20,23,26,\ldots \}}$ . As ${\displaystyle 1/4<0.3<1/3}$ , these positions increase by 3 or 4. Because ${\displaystyle p=3/10}$  here, the pattern of symbols repeats every 10 positions.

The coding ${\displaystyle C(x,s)}$ can be found by taking the row corresponding to a given symbol ${\displaystyle s}$ and choosing the given ${\displaystyle x}$ in this row. Then the top row provides ${\displaystyle C(x,s)}$. For example, ${\displaystyle C(7,0)=11}$, reading from the middle row up to the top row.

Imagine we would like to encode the sequence '0100' starting from ${\displaystyle x=1}$ . First ${\displaystyle s=0}$  takes us to ${\displaystyle x=2}$ , then ${\displaystyle s=1}$  to ${\displaystyle x=6}$ , then ${\displaystyle s=0}$  to ${\displaystyle x=9}$ , then ${\displaystyle s=0}$  to ${\displaystyle x=14}$ . By using the decoding function ${\displaystyle D(x')}$  on this final ${\displaystyle x}$ , we can retrieve the symbol sequence. Using the table for this purpose, ${\displaystyle x}$  in the first row determines the column, then the non-empty row and the written value determine the corresponding ${\displaystyle s}$  and ${\displaystyle x}$ .

## Range variants (rANS) and streaming

The range variant also uses arithmetic formulas, but allows operation on a large alphabet. Intuitively, it divides the set of natural numbers into ranges of size ${\displaystyle 2^{n}}$, and splits each of them in an identical way into subranges of proportions given by the assumed probability distribution.

We start by quantizing the probability distribution to a denominator of ${\displaystyle 2^{n}}$, where ${\displaystyle n}$ is chosen (usually 8–12 bits): ${\displaystyle p_{s}\approx f[s]/2^{n}}$ for some natural numbers ${\displaystyle f[s]}$ (sizes of subranges).

Denote ${\displaystyle {\text{mask}}=2^{n}-1}$ , cumulative distribution function:

${\displaystyle \operatorname {CDF} [s]=\sum _{i<s}f[i]=f[0]+\cdots +f[s-1].}$

For ${\displaystyle y\in [0,2^{n}-1]}$  denote function (usually tabled)

symbol(y) = s such that CDF[s] <= y < CDF[s+1] .

Now coding function is:

C(x,s) = (floor(x / f[s]) << n) + (x % f[s]) + CDF[s]

Decoding:  s = symbol(x & mask)

D(x) = (f[s] * (x >> n) + (x & mask ) - CDF[s], s)

This way we can encode a sequence of symbols into a large natural number ${\displaystyle x}$. To avoid large-number arithmetic, in practice stream variants are used, which enforce ${\displaystyle x\in [L,b\cdot L-1]}$ by renormalization: sending the least significant bits of ${\displaystyle x}$ to or from the bitstream (usually ${\displaystyle L}$ and ${\displaystyle b}$ are powers of 2).

In the rANS variant, ${\displaystyle x}$ is for example 32-bit. For 16-bit renormalization, ${\displaystyle x\in [2^{16},2^{32}-1]}$, and the decoder refills the least significant bits from the bitstream when needed:

if(x < (1 << 16)) x = (x << 16) + read16bits()
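Combining the coding formulas with 16-bit renormalization, a minimal streaming rANS sketch (the 32-bit state, 8-bit quantization, and all names are our assumptions; in practice `symbol(y)` would be a table lookup rather than a linear search):

```python
N = 8                      # probabilities quantized as f[s] / 2**N
MASK = (1 << N) - 1
L = 1 << 16                # state kept in [2**16, 2**32 - 1]

def cdf_of(freqs):
    """cdf[s] = f[0] + ... + f[s-1]; freqs must sum to 2**N."""
    cdf = [0]
    for f in freqs:
        cdf.append(cdf[-1] + f)
    return cdf

def encode(symbols, freqs):
    cdf = cdf_of(freqs)
    x, words = L, []
    for s in reversed(symbols):           # encode backward, decode forward
        while x >= freqs[s] << (32 - N):  # renormalize: flush 16 bits
            words.append(x & 0xFFFF)
            x >>= 16
        x = ((x // freqs[s]) << N) + (x % freqs[s]) + cdf[s]
    return x, words

def decode(x, words, count, freqs):
    cdf = cdf_of(freqs)
    out = []
    for _ in range(count):
        y = x & MASK                      # slot within the current range
        s = next(i for i in range(len(freqs)) if cdf[i] <= y < cdf[i + 1])
        out.append(s)
        x = freqs[s] * (x >> N) + y - cdf[s]
        while x < L:                      # refill 16 bits when needed
            x = (x << 16) | words.pop()
    return out
```

Because the encoder runs backward and the decoder forward, the decoder consumes the flushed 16-bit words in reverse order of emission, which is why `words.pop()` is used.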

## Tabled variant (tANS)

Simple example of a 4-state ANS automaton for the probability distribution Pr(a) = 3/4, Pr(b) = 1/4. The symbol b contains −lg(1/4) = 2 bits of information, so it always produces two bits. In contrast, the symbol a contains −lg(3/4) ≈ 0.415 bits of information, hence it sometimes produces one bit (from states 6 and 7) and sometimes 0 bits (from states 4 and 5), only increasing the state, which acts as a buffer containing a fractional number of bits: lg(x). The number of states in practice is for example 2048, for a 256-symbol alphabet (to directly encode bytes).

The tANS variant puts the entire behavior (including renormalization) for ${\displaystyle x\in [L,2L-1]}$ into a table, yielding a finite-state machine that avoids the need for multiplication.

Finally, the step of the decoding loop can be written as:

t = decodingTable(x);
x = t.newX + readBits(t.nbBits); //state transition
writeSymbol(t.symbol); //decoded symbol


The step of the encoding loop:

s = ReadSymbol();
nbBits = (x + ns[s]) >> r;  // # of bits for renormalization
writeBits(x, nbBits);  // send the least significant bits to bitstream
x = encodingTable[start[s] + (x >> nbBits)];


A specific tANS coding is determined by assigning a symbol to every ${\displaystyle [L,2L-1]}$ position; the number of appearances of each symbol should be proportional to its assumed probability. For example, one could choose the "abdacdac" assignment for the probability distribution Pr(a) = 3/8, Pr(b) = 1/8, Pr(c) = 2/8, Pr(d) = 2/8. If symbols are assigned in ranges of lengths being powers of 2, we obtain Huffman coding. For example, the prefix code a → 0, b → 100, c → 101, d → 11 would be obtained for tANS with the "aaaabcdd" symbol assignment.
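For the "abdacdac" assignment, the table construction and the stack-like coding loops can be sketched as follows (L = 8 states; all names are ours, and bits are kept as a plain list for clarity):

```python
R = 3
L = 1 << R                          # 8 states, x in [8, 15]
spread = "abdacdac"                 # Pr(a)=3/8, Pr(b)=1/8, Pr(c)=Pr(d)=2/8
f = {s: spread.count(s) for s in spread}

enc = {}                            # (symbol, intermediate state xx) -> state
dec = {}                            # state -> (symbol, intermediate state xx)
nxt = dict(f)                       # enumerate appearances starting from f[s]
for x in range(L, 2 * L):
    s = spread[x - L]
    enc[(s, nxt[s])] = x
    dec[x] = (s, nxt[s])
    nxt[s] += 1

def encode(symbols, x=L):
    bits = []
    for s in symbols:
        nb = 0                      # shift until x >> nb lands in [f[s], 2 f[s])
        while (x >> nb) >= 2 * f[s]:
            nb += 1
        bits += [(x >> i) & 1 for i in range(nb)]  # emit nb low bits
        x = enc[(s, x >> nb)]
    return x, bits

def decode(x, bits, count):
    """Returns symbols in reverse encoding order (ANS acts as a stack)."""
    out = []
    for _ in range(count):
        s, xx = dec[x]
        out.append(s)
        nb = R + 1 - xx.bit_length()
        low = 0
        for _ in range(nb):
            low = (low << 1) | bits.pop()
        x = (xx << nb) | low
    return out
```

Note that `nb` is recovered on the decoder side purely from the bit length of the intermediate state, so no extra bookkeeping is transmitted.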

Example of generation of tANS tables for an alphabet of size m = 3 and L = 16 states, then applying them for stream decoding. First we approximate the probabilities using fractions with denominator equal to the number of states. Then we spread these symbols in a nearly uniform way; optionally, the details may depend on a cryptographic key for simultaneous encryption. Then we enumerate the appearances of each symbol, starting with the value equal to its number of appearances. Finally, we refill the least significant bits from the stream to return to the assumed range for x (renormalization).

## Remarks

As with Huffman coding, modifying the probability distribution of tANS is relatively costly, hence it is mainly used in static situations, usually with some Lempel–Ziv scheme (e.g. ZSTD, LZFSE). In this case, the file is divided into blocks; for each of them, symbol frequencies are counted independently, then after approximation (quantization) written in the block header and used as the static probability distribution for tANS.

In contrast, rANS is usually used as a faster replacement for range coding (e.g. CRAM, LZNA, Draco, AV1). It requires multiplication, but is more memory efficient and is appropriate for dynamically adapting probability distributions.

Encoding and decoding of ANS are performed in opposite directions, making it a stack for symbols. This inconvenience is usually resolved by encoding in backward direction, after which decoding can be done forward. For context-dependence, like Markov model, the encoder needs to use context from the perspective of later decoding. For adaptivity, the encoder should first go forward to find probabilities which will be used (predicted) by decoder and store them in a buffer, then encode in backward direction using the buffered probabilities.

The final state of encoding is required to start decoding, hence it needs to be stored in the compressed file. This cost can be compensated by storing some information in the initial state of encoder. For example, instead of starting with "10000" state, start with "1****" state, where "*" are some additional stored bits, which can be retrieved at the end of the decoding. Alternatively, this state can be used as a checksum by starting encoding with a fixed state, and testing if the final state of decoding is the expected one.

## References

1. ^ J. Duda, K. Tahboub, N. J. Gadgil, E. J. Delp, The use of asymmetric numeral systems as an accurate replacement for Huffman coding, Picture Coding Symposium, 2015.
2. ^ http://th.if.uj.edu.pl/~dudaj/
3. ^ Duda, Jarek (October 6, 2019). "List of compressors using ANS, implementations and other materials". Retrieved October 6, 2019.
4. ^ "Google Accused of Trying to Patent Public Domain Technology". Bleeping Computer. September 11, 2017.
5. ^ Smaller and faster data compression with Zstandard, Facebook, August 2016
6. ^ 5 ways Facebook improved compression at scale with Zstandard, Facebook, December 2018
7. ^ Zstd Compression For Btrfs & Squashfs Set For Linux 4.14, Already Used Within Facebook, Phoronix, September 2017
8. ^
9. ^ Zstandard Compression and The application/zstd Media Type (email standard)
10. ^
11. ^ Apple Open-Sources its New Compression Algorithm LZFSE, InfoQ, July 2016
12. ^ Google Draco 3D compression library
13. ^ Google and Pixar add Draco Compression to Universal Scene Description (USD) Format
14. ^ Google PIK: new lossy image format for the internet
15. ^ CRAM format specification (version 3.0)
16. ^ Building better compression together with DivANS
17. ^ Rhatushnyak, Alexander; Wassenberg, Jan; Sneyers, Jon; Alakuijala, Jyrki; Vandevenne, Lode; Versari, Luca; Obryk, Robert; Szabadka, Zoltan; Kliuchnikov, Evgenii; Comsa, Iulia-Maria; Potempa, Krzysztof; Bruse, Martin; Firsching, Moritz; Khasanova, Renata; Ruud van Asseldonk; Boukortt, Sami; Gomez, Sebastian; Fischbacher, Thomas (2019). "Committee Draft of JPEG XL Image Coding System". arXiv:1908.03565 [eess.IV].
18. ^ Data Compression Explained, Matt Mahoney