Non-adjacent form

The non-adjacent form (NAF) of a number is a unique signed-digit representation in which no two non-zero digits are adjacent. For example:

(0 1 1 1)2 = 4 + 2 + 1 = 7
(1 0 −1 1)2 = 8 − 2 + 1 = 7
(1 −1 1 1)2 = 8 − 4 + 2 + 1 = 7
(1 0 0 −1)2 = 8 − 1 = 7

All are valid signed-digit representations of 7, but only the final representation, (1 0 0 −1)2, is in non-adjacent form.
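The arithmetic above can be checked with a short helper (a sketch in Python; the function name is ours):

```python
def signed_digit_value(digits):
    """Evaluate a signed-digit string, most significant digit first."""
    value = 0
    for d in digits:
        value = 2 * value + d
    return value

# The four representations of 7 given above:
representations = [
    (0, 1, 1, 1),
    (1, 0, -1, 1),
    (1, -1, 1, 1),
    (1, 0, 0, -1),
]
```

Every entry of `representations` evaluates to 7, but only the last, (1 0 0 −1), has no two adjacent non-zero digits.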

The non-adjacent form is also known as "canonical signed digit" representation.

Properties

NAF gives every integer a unique representation, but its main benefit is that the Hamming weight of the representation is minimal: in an ordinary binary representation, half of all bits are non-zero on average, whereas in NAF only one-third of all digits are. This leads to efficient implementations of add/subtract networks (e.g. multiplication by a constant) in hardwired digital signal processing.[1]

Since no two non-zero digits can be adjacent, at most half of the digits are non-zero. This property led G.W. Reitwiesner[2] to introduce NAF for speeding up early multiplication algorithms, much like Booth encoding.

Because every non-zero digit must be followed by a 0, the NAF representation is only slightly longer than the binary one: a value that would normally be represented in binary with m bits takes at most m + 1 digits in NAF.

The properties of NAF make it useful in various algorithms, especially some in cryptography; e.g., for reducing the number of multiplications needed to perform an exponentiation. In exponentiation by squaring, the number of multiplications depends on the number of non-zero digits of the exponent. If the exponent is given in NAF, a digit value of 1 implies a multiplication by the base, and a digit value of −1 a multiplication by its reciprocal.
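As a sketch in Python (the function name is ours, and a prime modulus is assumed so that the reciprocal of the base exists), NAF-driven exponentiation by squaring might look like:

```python
def pow_naf(base, exp, mod):
    """Exponentiation by squaring driven by the NAF of the exponent.

    Sketch: assumes `mod` is prime so the modular inverse of `base` exists.
    """
    # Compute the NAF digits of the exponent, least significant first.
    digits = []
    e = exp
    while e > 0:
        if e % 2:
            d = 2 - (e % 4)   # +1 or -1
            e -= d
        else:
            d = 0
        digits.append(d)
        e //= 2

    inv = pow(base, -1, mod)  # reciprocal of the base modulo `mod`
    result = 1
    for d in reversed(digits):  # most significant digit first
        result = result * result % mod
        if d == 1:
            result = result * base % mod   # digit 1: multiply by the base
        elif d == -1:
            result = result * inv % mod    # digit -1: multiply by its reciprocal
    return result
```

Because the NAF exponent has the fewest non-zero digits of any signed-digit form, this minimizes the number of multiplications beyond the squarings.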

Other ways of encoding integers that avoid consecutive 1s include Booth encoding and Fibonacci coding.

Converting to NAF

There are several algorithms for obtaining the NAF representation of a value given in binary. One is the following method using repeated division; it works by choosing non-zero coefficients such that the resulting quotient is divisible by 2, and hence the next coefficient is zero.[3]

   Input     E = (em−1 em−2 ··· e1 e0)2
   Output    Z = (zm zm−1 ··· z1 z0)NAF
   i ← 0
   while E > 0 do
       if E is odd then
           zi ← 2 − (E mod 4)
           E ← E − zi
       else
           zi ← 0
       E ← E/2
       i ← i + 1
   return Z
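A direct Python transcription of this repeated-division method (a sketch; digits are returned least significant first) could be:

```python
def naf(e):
    """Return the NAF digits of a non-negative integer, least
    significant first, each digit in {-1, 0, 1}."""
    z = []
    while e > 0:
        if e % 2 == 1:
            zi = 2 - (e % 4)   # +1 if e = 1 (mod 4), -1 if e = 3 (mod 4)
            e -= zi            # makes e divisible by 4, so the next digit is 0
        else:
            zi = 0
        z.append(zi)
        e //= 2
    return z
```

For example, `naf(7)` yields the digits `[-1, 0, 0, 1]`, i.e. (1 0 0 −1)₂ = 8 − 1.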

A faster way is given by Prodinger,[4] where x is the input, np the bitmask of positions holding +1 digits, and nm the bitmask of positions holding −1 digits:

   Input   x
   Output  np, nm
   xh = x >> 1;
   x3 = x + xh;
   c = xh ^ x3;
   np = x3 & c;
   nm = xh & c;

which is used, for example, in A184616.
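The same five operations can be wrapped up and checked in Python (a sketch; the function name is ours):

```python
def naf_bits(x):
    """Prodinger's branch-free NAF: returns (np, nm) with x == np - nm,
    where np and nm mark the positions of the +1 and -1 digits."""
    xh = x >> 1
    x3 = x + xh
    c = xh ^ x3
    np = x3 & c   # bitmask of the +1 digit positions
    nm = xh & c   # bitmask of the -1 digit positions
    return np, nm
```

For x = 7 this yields np = 8 and nm = 1, i.e. 7 = 2³ − 2⁰, matching the NAF (1 0 0 −1).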

External links

  • Introduction to Canonical Signed Digit Representation
  • Fractions in the Canonical-Signed-Digit Number System. Conference on Information Sciences and Systems. The Johns Hopkins University. March 21–23, 2001. CiteSeerX 10.1.1.126.5477.

References

  1. ^ Hewlitt, R.M. (2000). "Canonical signed digit representation for FIR digital filters". Signal Processing Systems, 2000. SiPS 2000. 2000 IEEE Workshop on: 416–426. doi:10.1109/SIPS.2000.886740. ISBN 978-0-7803-6488-2. S2CID 122082511.
  2. ^ George W. Reitwiesner, Binary Arithmetic, Advances in Computers, 1960.
  3. ^ D. Hankerson, A. Menezes, and S.A. Vanstone, Guide to Elliptic Curve Cryptography, Springer-Verlag, 2004. p. 98.
  4. ^ Prodinger, Helmut. "On Binary Representations of Integers with Digits -1, 0, 1" (PDF). Integers. Retrieved 25 June 2021.