Talk:Endianness/Archive 9


Honeywell example

The Honeywell 316 example appears inconsistent with Honeywell's own documentation [1], which illustrates that all data types (fixed-point 32-bit, double-precision integer 32-bit, single-precision float 32-bit, double-precision float 48-bit) put the most significant 16-bit word first in memory (pages 2-4 and 2-5), meaning the example (0x0A0B0C0D as of 2021-01-14) would really be {0B,0A,0D,0C}. Additionally, the bus width is 16 bits, and 16-bit words appear to be the minimum addressable size; the only way to read what their documentation calls "halfwords" (8 bits) is to load a 16-bit word into a register and then use one of the halfword instructions (page 3-7). So from the memory point of view, the example would really be {0x0A0B,0x0C0D} (though it may be possible to have 8-bit addressability of the data via magnetic tape output?). So if there exist Honeywell Series 16 machines matching the current example, it's not the Honeywell 316, whose endianness swaps groups of 16 bits within each 32 bits, rather than groups of 8 bits within each 16 bits. Piken (talk) 07:41, 15 January 2021 (UTC)
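To make the two layouts in the comment above concrete, here is a minimal C sketch (purely illustrative, not taken from any Honeywell manual): it prints the word-level view {0x0A0B,0x0C0D} and the byte sequence {0B,0A,0D,0C} that results if each word's low byte is emitted first.

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint32_t value = 0x0A0B0C0D;

    /* Word-level view: two 16-bit words, most significant word first,
       as the manual's figures describe. */
    uint16_t words[2] = { (uint16_t)(value >> 16), (uint16_t)value };

    /* Byte sequence if each word's low byte were emitted first,
       e.g. by a byte-oriented peripheral (an assumption, not something
       the manual states). */
    uint8_t bytes[4] = { (uint8_t)words[0], (uint8_t)(words[0] >> 8),
                         (uint8_t)words[1], (uint8_t)(words[1] >> 8) };

    printf("words: %04X %04X\n", words[0], words[1]);   /* 0A0B 0C0D */
    printf("bytes: %02X %02X %02X %02X\n",
           bytes[0], bytes[1], bytes[2], bytes[3]);     /* 0B 0A 0D 0C */
    return 0;
}
```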

I see nothing in the documentation to indicate that 8-bit bytes are directly addressable, so any byte-order conventions would be imposed by peripherals (which is where the magnetic tape would come in; unfortunately, the Bitsavers documentation only seems to discuss a 7-track tape drive, in which tape frames don't correspond to 8-bit bytes) or by software rather than CPU hardware. Guy Harris (talk) 09:47, 15 January 2021 (UTC)
Endianness doesn't happen only in 8-bit units. If six-bit units come out in one order or another, that would also be endianness. It's slightly easier when the word size is a multiple of six, though. I don't remember now how the 36-bit PDP-10 writes 9-track tape. Gah4 (talk) 15:58, 18 January 2021 (UTC)
@Gah4 and Guy Harris: The documentation doesn't say, either, that the first word must be at the lower address in memory. Rather, it seems to say that what is called the first word is the most significant word and what is called the second word is the least significant word. Now, concerning the optional "high-speed arithmetic unit":
  • page 4-1 for double-precision load (DLD) and store (DST) says that the A-Register corresponds to the effective address (EA) and the B-Register to EA+1 (with 16-bit word addressing);
  • page 4-5 for Normalize (NRM), corresponding to a left shift, shows a figure where the most significant bit of the B-Register goes to the least significant bit of the A-Register, which implies that the A-Register (thus EA) corresponds to the most significant word and the B-Register (thus EA+1) corresponds to the least significant word. This contradicts John Savard's page on floating-point formats[2] (final comment, which also deals with integers), which currently says: "As with many other computers, such as the Honeywell 316, a 32-bit integer was stored with its least significant 16-bit word first, in the lower memory address, so that addition could begin while the more significant words of the operands were being fetched." Or was this just a choice made by some software without the high-speed arithmetic unit? — Vincent Lefèvre (talk) 11:12, 8 June 2021 (UTC)
This example was added in 927086025 and 927090340 by Artoria2e5. Any comment? — Vincent Lefèvre (talk) 20:43, 8 June 2021 (UTC)


Calculation order

As for the calculation order section: modern high-performance processors usually have divide, and simple low-performance ones usually do not. In earlier years, this was less true. Even the low-end S/360 models, with 8-bit memory and ALU, still have fixed-point, and optionally floating-point, divide. However, the hardware to do it isn't all that hard. Well, S/360 requires operands to be appropriately aligned, so all you need to do is invert the low address bits to count backwards. S/370 allows them to be unaligned, so it is harder. Also, the 8087, for use with the 16-bit-bus 8086 and the 8-bit-bus 8088, has to be able to find its bytes in memory. In any case, if a processor does division, then little-endian doesn't help much. Gah4 (talk) 22:25, 8 June 2021 (UTC)

You need to take into account the fact that additions, subtractions and multiplications occur much more often than divisions, so it is better to favor these three basic operations. Moreover, division is a slow operation, so any win due to the endianness choice would not be very noticeable for this operation. — Vincent Lefèvre (talk) 22:32, 8 June 2021 (UTC)
Well, mostly it is the need for hardware to do it. My favorite is always the 6502, which on subroutine call (JSR) pushes one less than the address of the next instruction on the stack, just because that is what is in the register at the time; RTS then increments it in time. As above, if you want to decrease through an 8-byte-aligned object, just invert the three low bits. And the hardware is there, no matter how often you do divide. But yes, especially in microcode, you might do some slower operations on divide, if it didn't need extra hardware. In the case of S/360, they did things because they were right, not because they were easy. Many previous processors were designed to make things easy, as were many of the early microprocessors. For S/360, the architecture was designed and then machines were built, not the other way around. Gah4 (talk) 11:45, 9 June 2021 (UTC)
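A small C sketch of the bit-inversion trick described above: for an operand aligned on an 8-byte boundary, complementing the three low address bits turns an incrementing counter into a decrementing byte address, so hardware with only an up-counter can still scan the operand from its high-address end.

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    /* Stand-in for an 8-byte-aligned operand in memory. */
    uint8_t word[8] = {0, 1, 2, 3, 4, 5, 6, 7};

    for (int i = 0; i < 8; i++) {
        /* i ^ 7 complements the three low bits: 0->7, 1->6, ..., 7->0 */
        printf("%d ", word[i ^ 7]);
    }
    printf("\n");   /* prints: 7 6 5 4 3 2 1 0 */
    return 0;
}
```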

Citation requested, asking for examples of middle-endian architectures

Well, first, it says possible, and it is possible even if there aren't any examples. But there are, such as the VAX floating-point format I noted. As the section indicates, any ordering that isn't big or little is middle. Except that the note about VAX was moved, so it isn't an example anymore. In F-float, the sign bit is half a bit away from the middle. Others aren't so close, but still qualify according to the explanation. Gah4 (talk) 01:11, 8 June 2021 (UTC)

@Gah4 and Guy Harris: This section was not clear, and it looked like it was requesting an example similar to the MDY dates, as there were already big-little and little-big examples in the following sections, which were not subsections of this one (so I was a bit confused). IIRC, when I saw mentions of other kinds of endianness in the past, it was always something similar to the PDP-11, i.e. mixing big-endian and little-endian (in one way or the other), never something like MDY dates. So this should be clarified and sourced. A reference to the floating-point section could be added. BTW, I don't think that "Intel IA-32 segment descriptors" should be regarded as endianness, since from the description the data appear to be discontinuous, so this is a very special case.
I don't have the time to check, but for the Honeywell Series 16, I found the programmers' reference manual, so perhaps the corresponding subsection could be re-added and sourced if it is correct.
https://www.ai.univ-paris8.fr/public-html/fb/public_html/Cours/Exposes0405-1/little-big-endian.pdf (in French) says that middle-endian is used to represent compact decimal numbers, but I don't know what that means; perhaps packed BCD, considering the order of the nibbles in a byte and the order of the bytes in memory.
Vincent Lefèvre (talk) 09:08, 8 June 2021 (UTC)
Two bytes can be in one of two orders. Four bytes can be in 24 different orders, only two of which are big- and little-endian. Of the 22 so-called middle-endian orders, only two make some sense: big-endian 16-bit words in little-endian order, and little-endian 16-bit words in big-endian order. There are some more interesting ways to put eight bytes (or eight anythings) together. In any case, VAX floating point is an instance of a real architecture using an unusual order. Gah4 (talk) 13:13, 8 June 2021 (UTC)
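A C sketch of those two orders, using the article's 0x0A0B0C0D example (the labels here are descriptive, not official names):

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint32_t v = 0x0A0B0C0D;
    uint8_t b[4] = { (v >> 24) & 0xFF, (v >> 16) & 0xFF,
                     (v >> 8) & 0xFF, v & 0xFF };   /* 0A 0B 0C 0D */

    /* Little-endian 16-bit words in big-endian word order (PDP-11 style):
       most significant word first, bytes swapped within each word. */
    printf("PDP-endian:      %02X %02X %02X %02X\n", b[1], b[0], b[3], b[2]);

    /* Big-endian 16-bit words in little-endian word order (the order the
       article attributes to the Honeywell Series 16): least significant
       word first, normal byte order within each word. */
    printf("Honeywell-style: %02X %02X %02X %02X\n", b[2], b[3], b[0], b[1]);
    return 0;
}
```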
But some architectures may have formats with 3 words (such as Honeywell Series 16, whose double-precision floating-point format consists of three 16-bit words). Some processors have 48-bit integers. — Vincent Lefèvre (talk) 13:54, 8 June 2021 (UTC)
See the earlier discussion above at #Honeywell example; I based my comments on various Series 16 manuals at Bitsavers (Honeywell bought Computer Control Company in 1966).
The Series 16 minicomputers were word-addressable, so either software, or DMA peripheral controllers dealing with bytes, would have to be involved in the definition of the byte order.
I requested citations for the claims in the Series 16 section in this edit from July 2020. Guy Harris (talk) 20:14, 8 June 2021 (UTC)
The French document you point to says that "Some minicomputers use this format to represent compact decimals", but I'm not sure what minicomputers those would be - the example they give is just showing the octets of a 4-octet 0x04030201 in a middle-endian form with two little-endian 16-bit words, the first containing the high-order bits and the second containing the low-order bits, i.e. PDP-endian.
For bytes containing two digits, the PDP-11's Commercial Instruction Set (CIS) packed decimal format puts the high-order digit in the upper 4 bits of a byte and the low-order digit in the lower 4 bits, and the byte with the lowest address has the two highest-order digits, so it's consistently big-endian (even though the PDP-11 is otherwise little-endian, at least for integers and addresses). See page 3-7 of the KE44-A CISP Technical Manual, for the KE44-A add-on processor for the PDP-11/44. I suspect that's a feature of the CIS, which the F-11, used in the PDP-11/23, also offered as an option, not just of the KE44-A.
The VAX packed decimal format is the same. See page 18 of the VAX Architecture Reference Manual.
At least as I read the packed decimal example on page 2-5 of the ECLIPSE Programmer's Reference Manual, the 16-bit Data General Eclipse line was consistently big-endian.
At least as I read the packed decimal examples on pages 2-17 and 2-18 of the Eclipse 32-Bit Systems Principles of Operation, the 32-bit Data General Eclipse line was also consistently big-endian.
At least as I read pages 3-23 and 3-24 of the HP 3000 Computer Systems Machine Instruction Set Reference, the (16-bit) HP 3000 was also consistently big-endian (the 32-bit version used PA-RISC, which I think had no decimal assist instructions that would impose anything resembling a digit order).
So I'm not sure what US minicomputers were middle-endian for packed decimal. Might a non-US (Western Europe, Eastern Europe, Japan, etc.) minicomputer line have done so? Guy Harris (talk) 05:49, 9 June 2021 (UTC)
What about the Intel 8086 and later? Intel Microprocessors: 8008 to 8086 by Stephen P. Morse et al. says "In the 8086, correction operations are provided to allow arithmetic to be performed directly on unpacked representations of decimal digits (e.g., ASCII) or on packed decimal representations." But it does not seem to give details about such a packed decimal representation. However, if I understand DAA -- Decimal Adjust AL after Addition correctly, the least significant digit must be stored in the low-order nibble of the byte to make it work correctly, since the "AL := AL + 6;" may generate a carry from the low-order nibble to the high-order nibble. However, I don't think that this has any consequence from a programming point of view, even on such a little-endian machine. One would actually need to find a machine that is nibble-addressable. — Vincent Lefèvre (talk) 08:04, 9 June 2021 (UTC)
Almost. In the case of 8086 DAA, it works byte by byte, so the programmer can store the bytes in either order. If you print out a byte, especially as a hexadecimal value, it is MSD on the left. (I forget if there is an unpack in 8086.) So if you print out bytes in the wrong order, it is middle-endian. That is, if you print the digits of a byte in a different order than you print the bytes. Then there is the VMS DUMP command, which does a hexadecimal dump of a file, printing ASCII data left to right and hex data right to left. But you will really confuse people if you print decimal data in little-endian order. Gah4 (talk) 11:53, 9 June 2021 (UTC)
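A simplified C model of the DAA adjustment discussed above, showing why the less significant digit must sit in the low-order nibble: the "+ 6" correction on the low digit can carry into the high nibble. (This is a sketch: the flags register is reduced to two booleans and the auxiliary carry is recomputed by hand; real DAA also consults and updates the other flags.)

```c
#include <stdio.h>
#include <stdint.h>

/* Simplified DAA: adjust the binary sum of two packed-BCD bytes. */
uint8_t daa(uint8_t al, int aux_carry, int *carry_out) {
    *carry_out = 0;
    if ((al & 0x0F) > 9 || aux_carry)
        al += 0x06;          /* fix the low digit; may ripple upward */
    if ((al & 0xF0) > 0x90) {
        al += 0x60;          /* fix the high digit */
        *carry_out = 1;      /* decimal carry out of the byte */
    }
    return al;
}

int main(void) {
    uint8_t a = 0x19, b = 0x28;   /* packed BCD for 19 and 28 */
    /* Auxiliary carry of the binary add: did the low nibbles overflow? */
    int aux = ((a & 0x0F) + (b & 0x0F)) > 0x0F;
    int cy;
    uint8_t sum = daa((uint8_t)(a + b), aux, &cy);
    printf("%02X carry=%d\n", sum, cy);   /* prints: 47 carry=0 */
    return 0;
}
```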

Performance

The article states that a system will perform equally well regardless of endianness, but the piece argues this based on internal consistency, which is no guarantee of performance (i.e., would engineering for the other endianness affect speed?). I suggest that the comment be removed or changed to say "code correctness is guaranteed" or "internal decoding is guaranteed to be consistent" rather than making any statement about an unspecified performance metric. 204.48.95.191 (talk) 21:27, 9 September 2021 (UTC)

For the simplest processors, such as the 6502, it is significant. That is especially true for any processor without multiply and divide. For general-register-style machines, it pretty much doesn't matter, as the registers can be arranged as appropriate. But processors like the 6502 have to work with 16-bit (2-byte) addresses and process them accordingly. For anything past the early 8-bit processors, I pretty much agree that it doesn't affect performance. Gah4 (talk) 22:49, 5 April 2022 (UTC)

Most/least significant byte definition

Most significant byte and least significant byte are used at the start of the article without much reference as to what they mean. They are linked to a page which redirects to "Bit numbering#(Least|Most) Significant Byte" as of right now, but those sections no longer exist on that page. [1] appears to be the revision that got rid of those sections, and its edit summary points to the Endianness article to define these terms. Those redirects should be changed, but the information might be worth inserting into this page, which is why I'm leaving this on the talk page. 2001:48F8:7054:18FA:0:0:0:7297 (talk) 01:34, 30 March 2022 (UTC)

Hmm. How else would you describe it? In a numeric system, the 100's digit is more significant than the 1's digit, which is the sense meant. Is there another word that indicates this? Gah4 (talk) 22:41, 5 April 2022 (UTC)
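In code terms, "significance" is just positional weight, independent of storage order; a trivial C illustration:

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint32_t x = 0x0A0B0C0D;
    uint8_t msb = (x >> 24) & 0xFF;  /* 0x0A: weight 2^24, like the 100's digit */
    uint8_t lsb = x & 0xFF;          /* 0x0D: weight 2^0, like the 1's digit */
    printf("MSB=%02X LSB=%02X\n", msb, lsb);
    return 0;
}
```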
2001:48F8:7054:18FA:0:0:0:7297: The problem with the missing section names has now been corrected. All that was needed was to add a couple of {{anchor}} templates to the target article, which I've done. Endianness doesn't need to change. -- R. S. Shaw (talk) 01:21, 6 April 2022 (UTC)

Predominant addressing scheme

The subsequent section "Calculation order" has the sentence:

Addressing multi-digit data at its first (= smallest address) byte is the predominant addressing scheme.

So it is possible that a machine has another addressing scheme. With such a scheme, the statement in the section "Simplicity" becomes false. –Nomen4Omen (talk) 20:42, 25 April 2022 (UTC)

What other such addressing schemes exist? That scheme is used for all byte-addressable binary computers that I know of.
Perhaps for decimal data that takes multiple storage locations there are machines where instruction operands refer to the last storage unit rather than the first storage unit; are there?
If they do exist, then the notion of different ways of addressing multi-storage-unit data as operands, distinct from endianness, should probably be introduced separately, before either § Simplicity or § Calculation order. Guy Harris (talk) 21:14, 25 April 2022 (UTC)
As noted in a {{note}}, the IBM 1401 does addition from high address to low address. As long as one is consistent, it doesn't make all that much difference; computers can usually subtract about as easily as add. In the case of the 6502, it is data in the instruction stream that is conveniently little-endian, as the instruction counter increases. The 6502 is funny in that JSR does not put the return address on the stack, but one less than the return address. It seems to be what is in the PC at the time. (RTS has to fix this.) Gah4 (talk) 21:21, 25 April 2022 (UTC)
Also, early Fortran compilers stored arrays in decreasing addresses in memory. This might have been related to indexing operations on the IBM 704. Gah4 (talk) 21:31, 25 April 2022 (UTC)

OK, so now:

  • § Basics says "On most systems, the address of a multi-byte simple data value is the address of its first byte (the byte with the lowest address)."; that's the sentence to which the note about the 1401 is attached.
  • § Simplified access to part of a field begins with "On most systems, the address of a multi-byte value is the address of its first byte (the byte with the lowest address); little-endian systems of that type have the property that, for sufficiently low values, the same value can be read from memory at different lengths without using different addresses (even when alignment restrictions are imposed)." (I expanded the name - "Simplicity" is a very broad term, and this is simplification of a very specific thing.)
  • § Calculation order says "Addition, subtraction, and multiplication start at the least significant digit position and propagate the carry to the subsequent more significant position. On most systems, the address of a multi-byte value is the address of its first byte (the byte with the lowest address); when this first byte contains the least significant digit – which is equivalent to little-endianness – then the implementation of these operations is marginally simpler."

Hopefully, that makes things clearer. Guy Harris (talk) 02:56, 26 April 2022 (UTC)
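A minimal C sketch of the partial-read property quoted above from § Simplified access to part of a field (it assumes the host running it is itself little-endian; on a big-endian host the 16-bit and 32-bit reads would give different values):

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void) {
    uint8_t mem[4] = {0x2A, 0x00, 0x00, 0x00};  /* 42, stored little-endian */

    uint8_t  v8;
    uint16_t v16;
    uint32_t v32;
    memcpy(&v8,  mem, 1);   /* 1-byte read at the same address */
    memcpy(&v16, mem, 2);   /* 2-byte read at the same address */
    memcpy(&v32, mem, 4);   /* 4-byte read at the same address */

    /* On a little-endian host: 42 42 42 */
    printf("%u %u %u\n", (unsigned)v8, (unsigned)v16, (unsigned)v32);
    return 0;
}
```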

I wouldn't have put in multiplication, which is not simple in any order. In the case of the 1401, operands are variable length. Also in the case of the 1401, you want them in big-endian order for printing. The 6502 is an amazingly simple processor, so even marginally simpler is enough. Gah4 (talk) 04:59, 27 April 2022 (UTC)
I didn't put in multiplication - it was already there; all I did was replace "The address of such a field is mostly the address of its first byte." with "On most systems, the address of a multi-byte simple data value is the address of its first byte (the byte with the lowest address)." Guy Harris (talk) 05:03, 27 April 2022 (UTC)
Oh, yes, it wasn't supposed to say that you did. I had thought about it earlier, but didn't say it until then. Gah4 (talk) 06:57, 27 April 2022 (UTC)
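A C sketch of the calculation-order point from the quoted § Calculation order text: with the least significant byte at the lowest address, a serial byte-at-a-time adder can scan the operands in increasing-address order and propagate the carry as it goes, which is the "marginally simpler" case (a big-endian layout would need the same loop run from the high address down).

```c
#include <stdio.h>
#include <stdint.h>

/* Add two n-byte little-endian integers, least significant byte first. */
void add_le(uint8_t *sum, const uint8_t *a, const uint8_t *b, int n) {
    unsigned carry = 0;
    for (int i = 0; i < n; i++) {        /* byte 0 is least significant */
        unsigned t = a[i] + b[i] + carry;
        sum[i] = (uint8_t)(t & 0xFF);
        carry = t >> 8;
    }
}

int main(void) {
    uint8_t a[4] = {0xFF, 0xFF, 0x00, 0x00};  /* 0x0000FFFF */
    uint8_t b[4] = {0x01, 0x00, 0x00, 0x00};  /* 0x00000001 */
    uint8_t s[4];
    add_le(s, a, b, 4);
    printf("%02X %02X %02X %02X\n", s[0], s[1], s[2], s[3]);  /* 00 00 01 00 */
    return 0;
}
```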