# Ones' complement

The ones' complement of a binary number is the value obtained by inverting all the bits in the binary representation of the number (swapping 0s for 1s and vice versa). The ones' complement of a number then behaves like the negative of the original number in some arithmetic operations: to within a constant of −1, it behaves like the arithmetic negative under binary addition. Unlike two's complement, however, ones' complement has not seen widespread use, because of issues such as this offset of −1, the fact that negating zero produces a distinct negative-zero bit pattern, and less straightforward arithmetic borrowing.

A ones' complement system or ones' complement arithmetic is a system in which negative numbers are represented by the bit complement of the binary representation of their magnitude. In such a system, a number is negated (converted from positive to negative or vice versa) by computing its ones' complement. An N-bit ones' complement numeral system can represent only integers in the range −(2^(N−1) − 1) to 2^(N−1) − 1, while two's complement can express −2^(N−1) to 2^(N−1) − 1.

The ones' complement binary numeral system is characterized by the bit complement of any integer value being the arithmetic negative of the value. That is, inverting all of the bits of a number (the logical complement) produces the same result as subtracting the value from 0.
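As an illustrative sketch (not tied to any particular machine), this bit inversion can be modeled in Python by XOR-ing against an all-ones mask; the 8-bit word size and the helper name are this sketch's own choices:

```python
# Minimal sketch of ones' complement negation on a fixed-width word.
# The word size (8 bits) is an arbitrary choice for illustration.
BITS = 8
MASK = (1 << BITS) - 1          # 1111 1111: the all-ones pattern

def ones_complement(pattern):
    """Invert every bit of an 8-bit pattern (the logical complement)."""
    return pattern ^ MASK

# +22 is 0001 0110; its complement 1110 1001 is the pattern for −22.
assert ones_complement(0b0001_0110) == 0b1110_1001

# Inverting the bits is the same as subtracting the pattern from all ones,
# which is the ones' complement representation of zero (−0).
assert all(ones_complement(x) == MASK - x for x in range(256))
```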

## History

The early days of digital computing were marked by many competing ideas about both hardware technology and number representation. One of the great debates concerned the format of negative numbers, with some of the era's most expert people holding strong and differing opinions. One camp supported two's complement, the system that is dominant today. Another supported ones' complement, where any positive value is made into its negative equivalent by inverting all of the bits in a word. A third group supported sign-magnitude ("sign & magnitude"), where a value is changed from positive to negative simply by toggling the word's sign (high-order) bit.

There were arguments for and against each of the systems. Sign-magnitude allowed easier tracing of memory dumps (a common practice in that era) because small numeric values use fewer 1 bits. Internally, these machines performed ones' complement arithmetic, so values had to be converted to ones' complement when transmitted from a register to the arithmetic unit, and then converted back to sign-magnitude when the result was transmitted back to the register. The electronics required more gates than the other systems, a key concern when the cost and packaging of discrete transistors were critical. IBM was one of the early supporters of sign-magnitude, with its 7090 (709x series) computers perhaps the best-known architecture to use it.

Ones' complement allowed for somewhat simpler hardware designs, as there was no need to convert values when they were passed to and from the math unit. But it shared an undesirable characteristic with sign-magnitude: the ability to represent negative zero (−0). Negative zero behaves exactly like positive zero: when used as an operand in any calculation, the result is the same whether the operand is positive or negative zero. The disadvantage is that the existence of two forms of the same value necessitates two comparisons, rather than one, when checking for equality with zero. Ones' complement subtraction can also result in an end-around borrow (described below). It can be argued that this makes the addition/subtraction logic more complicated, or that it makes it simpler, since a subtraction requires simply inverting the bits of the second operand as it is passed to the adder. The CDC 6000 series and UNIVAC 1100 series computers were based on ones' complement.

Two's complement is the easiest to implement in hardware, which may be the ultimate reason for its widespread popularity. Processors on the early mainframes often consisted of thousands of transistors, so eliminating a significant number of them was a significant cost savings. The architects of the early integrated-circuit CPUs (the Intel 8080, etc.) chose two's complement, and as IC technology advanced virtually all vendors adopted it. Intel, AMD, and Power Architecture chips are all two's complement.


## Number representation

Positive numbers are represented in the same simple binary form used by two's complement and sign-magnitude. Negative values are the bit complement of the corresponding positive value. The largest positive value has the sign (high-order) bit off (0) and all other bits on (1); the smallest negative value has the sign bit on (1) and all other bits off (0). The table below shows all possible values in a 4-bit system, from −7 to +7.

```
     +      −
0   0000   1111   — Note that +0 and −0 both test TRUE for zero and FALSE for non-zero.
1   0001   1110
2   0010   1101
3   0011   1100
4   0100   1011
5   0101   1010
6   0110   1001
7   0111   1000
```
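The decoding rule behind the table can be sketched in Python (a hypothetical helper, with a 4-bit word chosen to match the table):

```python
def decode(pattern, bits=4):
    """Interpret an N-bit pattern as a ones' complement integer."""
    sign_bit = 1 << (bits - 1)
    mask = (1 << bits) - 1
    if pattern & sign_bit:
        # Negative: the magnitude is the bit complement of the pattern.
        return -((~pattern) & mask)
    return pattern

assert decode(0b0111) == 7      # largest positive value
assert decode(0b1000) == -7     # smallest negative value
assert decode(0b1111) == 0      # negative zero decodes as zero
```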

## Basics

Adding two values is straightforward: align the values on the least significant bit and add, propagating any carry to the bit one position to the left. If the carry extends past the end of the word, it is said to have "wrapped around", a condition called an "end-around carry". When this occurs, the bit must be added back in at the least significant bit. This phenomenon does not occur in two's complement arithmetic.

```
  0001 0110     22
+ 0000 0011      3
===========   ====
  0001 1001     25
```
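This rule can be sketched in Python: a carry past the word is detected by comparing the raw sum against the word mask (an 8-bit word is assumed for illustration, and the helper name is this sketch's own):

```python
BITS = 8
MASK = (1 << BITS) - 1

def oc_add(a, b):
    """Add two 8-bit ones' complement patterns with end-around carry."""
    total = a + b
    if total > MASK:                 # a carry left the top bit...
        total = (total & MASK) + 1   # ...so add it back at the bottom
    return total & MASK

assert oc_add(0b0001_0110, 0b0000_0011) == 0b0001_1001   # 22 + 3 = 25
```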

Subtraction is similar, except that borrows, rather than carries, are propagated to the left. If the borrow extends past the end of the word it is said to have "wrapped around", a condition called an "end-around borrow". When this occurs, the bit must be subtracted from the right-most bit. This phenomenon does not occur in two's complement arithmetic.

```
  0000 0110      6
− 0001 0011     19
===========   ====
1 1111 0011    −12    — An end-around borrow is produced, and the sign bit of the intermediate result is 1.
− 0000 0001      1    — Subtract the end-around borrow from the result.
===========   ====
  1111 0010    −13    — The correct result (6 − 19 = −13).
```
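The borrow rule can be sketched the same way: a negative raw difference signals an end-around borrow (again assuming an 8-bit word and a helper name of this sketch's own):

```python
BITS = 8
MASK = (1 << BITS) - 1

def oc_sub(a, b):
    """Subtract 8-bit ones' complement patterns with end-around borrow."""
    diff = a - b
    if diff < 0:                    # a borrow left the top bit...
        diff = (diff & MASK) - 1    # ...so subtract it at the bottom
    return diff & MASK

assert oc_sub(0b0000_0110, 0b0001_0011) == 0b1111_0010   # 6 − 19 = −13
```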

It is easy to demonstrate that the bit complement of a positive value behaves as the negative of that value: adding a value produces the same result as subtracting its complement. The computation of 19 + 3 produces the same result as 19 − (−3).

```
  0001 0011     19
+ 0000 0011      3
===========   ====
  0001 0110     22
```

Subtract −3 from 19.

```
  0001 0011     19
− 1111 1100     −3
===========   ====
1 0001 0111     23    — An end-around borrow is produced.
− 0000 0001      1    — Subtract the end-around borrow from the result.
===========   ====
  0001 0110     22    — The correct result (19 − (−3) = 22).
```
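Combining the adder and subtractor sketches from above into one self-contained snippet, the equivalence can be checked exhaustively rather than for a single pair (8-bit word; the helper names are this sketch's own):

```python
MASK = (1 << 8) - 1

def oc_add(a, b):                        # end-around carry adder
    t = a + b
    return (t & MASK) + 1 if t > MASK else t

def oc_sub(a, b):                        # end-around borrow subtractor
    d = a - b
    return (d & MASK) - 1 if d < 0 else d

# Adding b gives the same result as subtracting the complement of b.
assert oc_add(19, 3) == oc_sub(19, 3 ^ MASK) == 22

# The two agree everywhere except in how they represent zero: when the
# operands are exact complements (raw sum == MASK), the adder yields −0
# (all ones) where the subtractor yields +0.
assert all(oc_add(a, b) == oc_sub(a, b ^ MASK)
           for a in range(256) for b in range(256) if a + b != MASK)
```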

## Negative zero

Negative zero is the condition where all bits in a signed word are 1. This follows the ones' complement rules that a value is negative when the left-most bit is 1, and that a negative number is the bit complement of the number's magnitude. The value also behaves as zero in computation: adding or subtracting negative zero to or from another value produces the original value.

```
  0001 0110     22
+ 1111 1111     −0
===========   ====
1 0001 0101     21    — An end-around carry is produced.
+ 0000 0001      1
===========   ====
  0001 0110     22    — The correct result (22 + (−0) = 22).
```

Subtracting negative zero:

```
  0001 0110     22
− 1111 1111     −0
===========   ====
1 0001 0111     23    — An end-around borrow is produced.
− 0000 0001      1
===========   ====
  0001 0110     22    — The correct result (22 − (−0) = 22).
```

Negative zero is easily produced in a ones' complement adder: simply add the positive and negative of the same magnitude.

```
  0001 0110     22
+ 1110 1001    −22
===========   ====
  1111 1111     −0    — Negative zero.
```
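This is easy to check exhaustively in a sketch: for every 8-bit pattern, adding the pattern to its complement yields the all-ones word (the helper name is this sketch's own):

```python
MASK = (1 << 8) - 1

def oc_add(a, b):
    """8-bit ones' complement addition with end-around carry."""
    t = a + b
    return (t & MASK) + 1 if t > MASK else t

# x plus its complement always sums to all ones, i.e. negative zero;
# the raw sum is exactly MASK, so no end-around carry ever fires here.
assert all(oc_add(x, x ^ MASK) == MASK for x in range(256))
```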

Although the arithmetic always produces correct results, a side effect of negative zero is that software must test for both representations of zero.


## Avoiding negative zero

The generation of negative zero becomes a non-issue if addition is achieved with a complementing subtractor. The first operand is passed to the subtractor unmodified, the second operand is complemented, and the subtraction generates the correct result while avoiding negative zero. The previous example added 22 and −22 and produced −0.

```
  0001 0110     22         0001 0110     22                  1110 1001   −22         1110 1001   −22
+ 1110 1001    −22       − 0001 0110     22                + 0001 0110    22       − 1110 1001   −22
===========   ====  but  ===========   ====   likewise,    ===========   ===  but  ===========   ===
  1111 1111     −0         0000 0000      0                  1111 1111    −0         0000 0000     0
```
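A sketch of a complementing subtractor in Python (8-bit word, hypothetical helper names): addition is performed by subtracting the bit complement of the second operand, so adding a value to its own negative yields +0 rather than −0:

```python
MASK = (1 << 8) - 1

def oc_sub(a, b):
    """8-bit ones' complement subtraction with end-around borrow."""
    d = a - b
    return (d & MASK) - 1 if d < 0 else d

def oc_add_via_sub(a, b):
    """Add by complementing the second operand and subtracting."""
    return oc_sub(a, b ^ MASK)

assert oc_add_via_sub(0b0001_0110, 0b1110_1001) == 0   # 22 + (−22) = +0
assert oc_sub(0b0001_0110, 0b0001_0110) == 0           # 22 − 22 = +0
```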

The interesting "corner cases" are when one or both operands are zero and/or negative zero.

```
  0001 0010     18         0001 0010     18
− 0000 0000      0       − 1111 1111     −0
===========   ====       ===========   ====
  0001 0010     18       1 0001 0011     19
                         − 0000 0001      1
                         ===========   ====
                           0001 0010     18
```

Subtracting +0 is trivial (as shown above). If the second operand is negative zero, it is inverted and the original value of the first operand is the result, so subtracting −0 is also trivial. The result can be only one of two cases. In case 1, operand 1 is −0, so the result is produced simply by subtracting 1 from 1 at every bit position. In case 2, the subtraction generates a value that is 1 larger than operand 1 together with an end-around borrow; completing the borrow restores the value of operand 1.

The only really interesting cases are those where both operands are plus or minus zero. Consider these examples:

```
  0000 0000      0         0000 0000      0         1111 1111     −0         1111 1111     −0
+ 0000 0000      0       + 1111 1111     −0       + 0000 0000      0       + 1111 1111     −0
===========   ====       ===========   ====       ===========   ====       ===========   ====
  0000 0000      0         1111 1111     −0         1111 1111     −0       1 1111 1110     −1
                                                                           + 0000 0001      1
                                                                           ==================
                                                                             1111 1111     −0
```
```
  0000 0000      0         0000 0000      0         1111 1111     −0         1111 1111     −0
− 1111 1111     −0       − 0000 0000      0       − 1111 1111     −0       − 0000 0000      0
===========   ====       ===========   ====       ===========   ====       ===========   ====
1 0000 0001      1         0000 0000      0         0000 0000      0         1111 1111     −0
− 0000 0001      1
===========   ====
  0000 0000      0
```

These examples show that, of the four possible operand combinations when adding only ±0, an adder produces −0 in three of them, while a complementing subtractor produces −0 only when both operands are −0.
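The tally can be verified with a short sketch that enumerates the four ±0 combinations for both circuits (8-bit word, hypothetical helper names):

```python
MASK = (1 << 8) - 1
POS0, NEG0 = 0, MASK        # the two representations of zero

def oc_add(a, b):           # plain adder with end-around carry
    t = a + b
    return (t & MASK) + 1 if t > MASK else t

def oc_sub(a, b):           # subtractor with end-around borrow
    d = a - b
    return (d & MASK) - 1 if d < 0 else d

def oc_add_via_sub(a, b):   # complementing subtractor used as an adder
    return oc_sub(a, b ^ MASK)

zeros = [(a, b) for a in (POS0, NEG0) for b in (POS0, NEG0)]
assert sum(oc_add(a, b) == NEG0 for a, b in zeros) == 3
assert sum(oc_add_via_sub(a, b) == NEG0 for a, b in zeros) == 1
```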

