
Code density

The first reference link is dead.

(The VAX and the 320xx microprocessors usually produced more compact code than the 68000 too. Sometimes the 8086 as well.)
---
Hard to believe. The VAX is a 32-bit machine.

(Yes, but it was byte-coded, very compact. GJ)

The TI 32000 series is a RISC machine, isn't it?

(possibly, for some definitions of RISC. But i meant the Natsemi chip 32016 and successors. I have now fixed the link! GJ)

RISCs execute fast, but their code is not compact.

(RISC is not usually compact. But several are not much worse than 68K, and the HP-PA allegedly beats nearly everything for code size. No i don't know how, and i never got my hands on one to test. GJ)

Even space-optimized RISCs like the ARM need larger code than the 68K. They had to really sweat the ARM down with the "thumb" and "thumbscrew" approaches to reduce it to less than the 68K. Just reading about it tells me somebody had a bad 6 months getting there.

Certainly the 8086 is not smaller; you'll cram about 2x as much code into a 68K machine, byte for byte, as into an x86. If you don't believe -me-, see the 6/27/97 entry:

http://vis-www.cs.umass.edu/~heller/cgi-bin/heller.cgi/?Frame=Main

x86 code is just not that compact. Ray Van De Walker

(It was 20% smaller than 68K the only time i actually coded something in both and cared enough to check the sizes. It did depend on what you were coding. 32 bit arithmetic on 8086 was horrible, and running out of registers was nearly as bad. And if you could make use of auto increment and decrement on 68K that was a big win. But the stuff i did mostly avoided all that, and was extremely compact on 8086. This experience was apparently almost normal for hand coded 16 bit stuff. 68K usually won for stuff from compilers. With the 80386, Intel became more "normal" and all the comparisons probably changed. -- Geronimo Jones)
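(A minimal sketch of the auto-increment win GJ mentions; the registers, labels, and placeholders are hypothetical, not code from either of the programs he compared:)

    move.w <count>, d0      ; byte count minus 1 (DBcc loops until d0 reaches -1)
@1: move.b (a0)+, (a1)+     ; copy one byte; both pointers advance in the same instruction
    dbra d0, @1             ; on the 8086 the pointer updates would be separate instructions
                            ; (or would tie you to SI/DI and the string instructions)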

That web page (~heller) seems like an especially bogus comparison. 1) using C++ is a joke, neither CPU architecture was designed to support it. C would be a better language to compile. 2) using compilers of different breeds is silly, you should use compilers from the same stable. For example, the Lattice C compiler targets both architectures, as does Metrowerks (just about), and of course gcc. 3) the program you compile probably makes a big difference. As GJ points out, 32-bit ops on an 8086 are a pain, but if your C program uses mostly 'int' then that's not a problem. On an 80486 it might not make much difference. --drj

---

These are all reasonable objections. However, there's no doubt that many designers thought that it was more compact. So, I rewrote it from an NPOV to say so. I also rewrote the orthogonality discussion from an NPOV. I hope that helps. Ray Van De Walker

---

A common misunderstanding among assembly-language programmers had to do with the DBcc loop instructions. These would unconditionally decrement the count register, and then take the branch unless the count register had been set to -1 (instead of 0, as on most other machines). I have seen many examples of code sequences which subtracted 1 from the loop counter before entering the loop, as a "workaround" for this feature.

In fact, the loop instructions were designed this way to allow for skipping the loop altogether if the count was initially zero, without the need for a separate check before entering the loop.
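By way of contrast, here is a sketch (hypothetical code, in the spirit of the sequences described above) of the "workaround" style; note that it still needs a separate test to skip the loop when the count is zero, which the intended idiom below gets for free:

    move.l <addr>, a0
    move.w <count>, d0
    beq.s @2            ; separate zero check; without it, a zero count loops 65536 times
    subq.w #1, d0       ; the pre-decrement "workaround"
@1: clr.b (a0)+
    dbra d0, @1         ; decrement d0; branch back until it reaches -1
@2: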

The following simple code sequence for clearing <count> bytes beginning at <addr> illustrates the basic technique for using a loop instruction:

    move.l <addr>, a0
    move.w <count>, d0
    bra.s @7        ; enter at the decrement, so a zero count skips the body entirely
@1: clr.b (a0)+     ; loop body: clear one byte and advance the pointer
@7: dbra d0, @1     ; decrement d0; branch back until it reaches -1

Notice how you enter the loop by branching directly to the decrement instruction. This makes it execute the loop body precisely <count> times, simply falling right out if <count> is initially zero.

Also, even though the DBcc instructions only support a 16-bit count, it is possible to use them with a full 32-bit count as follows:

    move.l <addr>, a0
    move.l <count>, d0
    bra.s @7        ; enter at the inner decrement, as before
@1: swap d0         ; put the exhausted low word ($FFFF) back; it now counts 65536 rounds
@2: clr.b (a0)+     ; loop body
@7: dbra d0, @2     ; inner loop: count the low word down to -1
    swap d0         ; bring the high word into the low half
    dbra d0, @1     ; outer loop: count the high word down to -1

This does involve a bit more code, but because the inner loop executes up to 65536 times for each time round the outer loop, the extra time taken is insignificant. Ldo 10:05, 12 Sep 2003 (UTC)

Virtualization

The main page claimed that "the 68000 could not easily run a virtual image of itself without simulating a large number of instructions." This is false; the only 68000 instruction which violates the Popek and Goldberg virtualization requirements is the "MOVE from SR" instruction. The 68010 made "MOVE from SR" privileged for that reason, and added an unprivileged "MOVE from CCR" instruction that could be used in its place.
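(As an illustrative sketch, the instruction in question and its replacement; the destination register is an arbitrary choice:)

    move.w sr, d0       ; 68000: unprivileged, so it cannot be trapped, and a guest
                        ; can read the real supervisor bit; privileged from the 68010 on
    move.w ccr, d0      ; 68010 and later: unprivileged way to read just the condition codes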

It was further claimed that "This lack caused the later versions of the Intel 80386 to win designs in avionic control, where software reliability was achieved by executing software virtual machines.". The i386 is MUCH harder to virtualize than a 68000, as it has very many violations of the Popek and Goldberg requirements, and they are much more difficult to deal with than the 68000's "MOVE from SR" instruction. See X86 virtualization, and in particular Mr. Lawton's paper referenced there.

I'm not sure how common the i386 was in avionics, but the 68000 and later 68K family parts were in fact widely used.

--Brouhaha 22:52, 23 Nov 2004 (UTC)

I know the early Apollo workstations had to include a major KLUDGE in order to implement virtual memory: they had TWO 68000s, one running one clock cycle ahead of the other. If the one ahead got a page fault, the one behind would service the fault, then they'd exchange places. — Preceding unsigned comment added by 66.41.7.155 (talk) 12:53, 10 August 2005 (UTC)

"Its name was derived from the 68,000 transistors on the chip."

If anyone is interested, I found an oral history with the people who developed and ran the 68000 project at Motorola (68000 Oral History). Page 32 has some text by Tom Gunter (team lead on the 68000) that says:

"All I remember is when the 68000 came out it was as much a fluke as anything, and I used to tell people we planned it that way. When the 68000 came out, it came out at 68000 square mills. At the time Motorola as a corporation had 68,000 people working for them. Was that kind of a fluke. I was trying to put that back into microns and so on. We didn't even have microns that'd go—we had motrons. We had our own definition of a micron." Loweredtone (talk) 16:21, 24 January 2019 (UTC)


Please supply a reference for this. Mirror Vax 21:12, 18 Jun 2005 (UTC)

It's really unlikely the 68000 has only 68,000 transistors. More likely the name came as an upgrade of the good old Motorola "6800" series, although there's almost no resemblance between the two architectures.

Actually the MC68000 did have approximately 68,000 transistor "sites"; that count included PLA locations that might or might not have an actual transistor depending on whether that PLA bit was a one or a zero. This information was widely publicized by Motorola FAEs back then, but wasn't in the data sheets, so it's hard to find anything that would be considered definitive today. At one time the Motorola SPS BBS had information on transistor counts of various devices in the 68K family, which ranged from 68,000 for the MC68000 to 273,000 for the MC68030. If someone had time to dig through electronics trade journals (Electronics, Electronic Design, EDN, EE Times) from 1979-1980, they might find mention of the transistor count.
Or one might pester one of the original designers of the MC68000. His email address isn't that hard to find, but I'm not going to put it here since that would probably result in the guy getting tons of email with dumb 68K questions.
Anyhow, it's accurate to say that the MC68000 designation derived from BOTH the transistor count and as a logical successor to the MC6800 family. --Brouhaha 01:03, 17 October 2005 (UTC)
We can only include verifiable information, not speculation (however plausible). Besides, how the name was arrived at is not important. Mirror Vax 01:54, 17 October 2005 (UTC)
The 68000 transistor count was widely known at the time - I'm sure one could find it mentioned in Byte (magazine) etc. There was a great deal of rivalry between the 8086 and the 68000. The transistor count was presumably a way of advertising how advanced the 68000 was, compared to the 8086, and of explaining why the 68000 was delayed. An important piece of information, IMHO. -- Egil 04:56, 17 October 2005 (UTC)
Just doing a quick google on "68000 transistors" I easily found:
Note the 29,000 transistors of the 8086. Not mentioning the rivalry between the 68k and the 8086, and not mentioning the transistor count issue, would be the wrong thing here; it is an important piece of historical information. The actual transistor count ended up slightly over 68,000; I've seen 70,000 mentioned. -- Egil 05:12, 17 October 2005 (UTC)
Mirror VAX wrote "We can only include verifiable information" -- since when? You *REALLY* would not like what the 68000 page would turn into if we removed everything that wasn't 100% verifiable from actual printed, customer-distributed Motorola literature. Since multiple people (myself included) have personal recollection of Motorola FAEs giving the 68000 transistor number and stating that it influenced the part number, I think it's fair game to include, and certainly it's closer to being authoritative than a lot of the other rubbish on the page.
The 68000 FAQ has a list of transistor counts that appears to have been derived from information Motorola put on their now-defunct "Freeware BBS". It confirms the transistor count. --Brouhaha 23:41, 17 October 2005 (UTC)
First of all, the subject is not the transistor count. We are discussing how the chip was named. Sorry if I didn't make that clear. Why isn't it good enough to simply state the transistor count, and leave to the reader speculation about what "influenced" the name? Mirror Vax 02:11, 18 October 2005 (UTC)
Also, it's possible that the influence worked in the other direction - perhaps they decided on the 68000 name, and then creatively rounded the transistor count to match (maybe there are really 67000, or 69000...) Mirror Vax 02:21, 18 October 2005 (UTC)
Your latest suggestion is ridiculous. If you had even bothered reading the references, you would have seen that the final design ended up having around 70,000. Motorola made a great marketing fuss about the transistor count wrt. the chip name, and that is certainly something that should be mentioned. -- Egil 05:35, 18 October 2005 (UTC)
Mirror VAX asks "Why isn't it good enough": because various Motorola employees including FAEs at the time of introduction made a point of the transistor count being related to the part number; it's not just some random coincidence that customers noticed after the fact. Why do you have such a big problem with it? As for your other suggestion, I've been in the industry for over 20 years, currently work for a semiconductor company, and I've never yet heard of anyone basing design characteristics of a chip on the numerical portion of a part number.
It is much more likely the case that they had an approximate transistor budget in mind when they started the design, based on the process technology and die size they wanted. As the design progressed, the transistor budget probably changed. For instance, it could have been decided to increase the transistor budget to allow for more GPRs, or a larger instruction set. It is possible but rather less likely that the design ended up needing fewer transistors than the original budget. Without contacting the designers, we're unlikely to ever know what the original transistor budget at the outset of the project was.
In any case, it is common practice for the final part number to be determined AFTER the design is complete and ready for production. One chip my employer developed was known by the number 4000 during development, then 3035 for first silicon (not shipped to customers), then 3023 for the final product. Except possibly for the original 4000 designation, the part numbers were determined by the marketing department and were essentially unrelated to the engineering details. --Brouhaha 00:23, 19 October 2005 (UTC)
You don't know how the marketers arrived at the name. You weren't in the room. I wasn't in the room. So we have two choices: (1) we can invent a history that seems plausible, and might be wholly true, partly true, or wholly false, or (2) we can stick to what we know to be true. You prefer (1); I prefer (2). Mirror Vax 02:26, 19 October 2005 (UTC)
I know what the Motorola FAEs *said* was the basis for the name. So we can assume that they were telling the truth, or we can assume that they were lying, or we can assume that I am lying. Which seems more plausible to you?
Did you actually have any contact with Motorola FAEs regarding the MC68000 in the 1979-1981 timeframe? I dealt with the local Motorola FAEs in Denver as part of my job. --Brouhaha 05:36, 19 October 2005 (UTC)
OK, found a reasonably definitive reference. Harry "Nick" Tredennick, one of the engineers responsible for the logic design and microcode of the 68000 (and listed as one of the inventors on six of the Motorola patents regarding the 68000), posted to comp.sys.intel on 22-Aug-1996 a response to comments about the 68000 designation being derived from the transistor count, or as a followon to the 6800: "I think there was a little of each in the naming, but definitely some contribution from its being a follow-on to the 6800. We (the lowly engineers) were concerned at the time that the press would confuse the 6800 with the 68000 in reporting. It happened." This confirms what the Motorola FAEs were telling customers at the time. --Brouhaha 09:29, 24 October 2005 (UTC)
The current version of the article says, "The transistor cell count, which was said to be 68,000 (in reality around 70,000)...". I don't know if that's true or not, but if it is, it undermines the notion that the name was derived from the transistor count (as does Tredennick's statement that there was "definitely some contribution from its being a follow-on to the 6800"). Rather, it suggests that the stated transistor count was derived from the name. Why not name it the MC70000? Why, if you are bragging about large transistor count, would you "round down" 70,000 to 68,000? Mirror Vax 15:06, 24 October 2005 (UTC)
Which part of "I think there was a little of each" did you not understand? He didn't say that the part number was based exclusively on the MC6800. And given your insistence on authoritative information, where is the authoritative source for the 70,000 count? --Brouhaha 19:03, 24 October 2005 (UTC)
Good question. As I said, I don't know if it's true or not. Mirror Vax 19:29, 24 October 2005 (UTC)

Motorola 6800

What about the Motorola 6800 (one zero less)? --Abdull 13:12, 2 October 2005 (UTC)

What about it? --Brouhaha 01:03, 17 October 2005 (UTC)
The Motorola 6800 is an 8-bit CPU.— Preceding unsigned comment added by 24.141.181.146 (talk) 13:03, 4 January 2007 (UTC)

Talking about claims

The article says: "Originally, the MC68000 was designed for use in household products (at least, that is what an internal memo from Motorola claimed)." I very much doubt this. What sort of household product would need the computing power of the 68k? The 68k was totally state of the art wrt complexity, pin count and chip area at the time, with a price to match. (I would have believed the above statement if we are talking about the MC6800, but that is another issue). -- Egil 05:59, 17 October 2005 (UTC)

So, how many bits?

To help clarify this, is the 68000 code word size 16 or 32 bits wide? --Arny 09:03, 30 January 2006 (UTC)

Do you mean the size of the instructions? They could vary from 16 bits (e.g. 0x4e75 for RTS, 0x4e71 for NOP, and 0x60xx for short relative branches) to 80 bits (0x23f9aaaaaaaa55555555 for MOVE.L $AAAAAAAA, $55555555). The data bus for reading/writing to memory was 16 bits wide, and the registers A0-A7 and D0-D7 were 32 bits wide. Cmdrjameson 14:17, 30 January 2006 (UTC)
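(Laying those same examples out with their encodings and sizes, purely as an illustration; the branch target label is arbitrary:)

    rts                            ; 0x4E75                      16 bits
    nop                            ; 0x4E71                      16 bits
    bra.s next                     ; 0x60xx                      16 bits (xx = 8-bit displacement)
    move.l $AAAAAAAA, $55555555    ; 0x23F9 AAAAAAAA 55555555    80 bits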
I think the conventional view is that the 68000 is a 16-bit implementation of a 32-bit architecture; the later 68020, '030 and '040 are 32-bit implementations of the same architecture. This is what it basically says in my copy of "68000 primer" by Kelly-Bootle and Fowler. Graham 22:55, 30 January 2006 (UTC)
Yes, this is what I've heard too. I'm next to certain this is explicitly documented in Motorola's reference manuals about the 680x0. By way of contrast, the 68008 was also a 16-bit implementation of the same architecture, but this time in a smaller physical package, and as a result it had an 8-bit data bus and only a 20-bit external address bus. Cmdrjameson 01:20, 31 January 2006 (UTC)
It was still a 16/32-bit chip though. The narrow bus was only to keep the physical pin count down. It used as many fetches as needed to bring in the data byte by byte on the 8 lines. Of course this made it slow but more than adequate for the applications it was intended for. I think this approach was really clever on the part of Moto - they allowed people to learn the instruction set once and apply it over a very wide range of different chips and applications. The same code would run unchanged on all varieties of the processor and the hardware just did what it needed to do to make it work. I guess it could be said that this was one of the first micros to be designed mainly from the software perspective rather than the hardware one. Graham 01:27, 31 January 2006 (UTC)
"It was still a 16/32-bit chip though. The narrow bus was only to keep the physical pin count down". It wasn't. Your claim "It was a 16/32 bit chip, the reduction in data pins was just to keep the pin-count down" has technical parity with "It was an 4/8 cylinder engine. The reduction in Cylinders was just to keep the cylinder count down. You will notice, if you look for the documentation, the Intel 8088 appears in their own self-sourced manuals described as "The 8088 8-bit microprocessor". There are two facts which are interesting about that, and pivotal: 1. The 8088 was internally almost identical to the 16bit 8086. It differed from it only in the data-bus pin reduction. 2. It was otherwise internally *entirely* 16bit. It had not only 16bit registers but also a 16bit ALU, and yet still Intel described their own 8088 chip as "8-bit". Why? Because the *primary* metric in these matters is /the size of the data bus/ in exactly the same way the *primary* measure of wether a car is a V4 or a V8 is the cylinder count. As with engines, you don't get to ignore the main thing entirely and call it a "4/8 cylinder" car on the strength of extraneous and superficial facts such as "It's 8 cylinder because it has eight valves". The valves don't make any difference. The reason Intel called their 8088 "8-bit", despite the fact it was internally 16bit in its entirety, while Motorola called the 68000 "16/32 bit", despite the fact there is *nothing* truly 32bit about the 68000, is because Motorola lied about how CPU bit-ratings are measured for marketing reasons and it has persisted ever since, because enthusiasts are happy to blindly accept any good news they hear and any reasoning that comes with it. Consider the Intel AVX-512 extensions, for example. If the 68000 is, in any way, "32-bit", then Intel have a far more convincing case that, since 2013, they have been making "512-bit" CPUs. The claim the 68000 is 32-bit is simply preposterous and it is made by 68000/Amiga evangelists for publicity and fandom reasons. Obviously, the claim will persist within the culture for the foreseeable future, but it is false. The Motorola 68000 is 16bit. You don't get to legitimately uprate it merely by cherry-picking the juicy bits from the CPU die you like the sound of, it isn't how this works. You can't upgrade the bit rating of a CPU by talking about details inside the CPU die any more than you can upgrade the cylinder count of you car by talking about details in the cabin. It is merely a distraction. If you only have 8 cylinders pulling the drive-train it doesn't matter what's in the cabin. Likewise, the 68000 has a 16bit data bus, it and comments to 16bit RAM on 16bit motherboards. There is no valid route from those facts to it being 32bit. Even if the 68000 had had 128 parts on the die, it would still talk to the outside world exclusively through a 16bit interface. Vapourmile (talk) 01:44, 13 March 2021 (UTC)
Oh absolutely, it was still definitely a 16/32-bit chip. Mind you, this notion of having a common instruction set/architecture and a large range of implementations with different price/performance characteristics wasn't unique to Motorola. DEC differentiated the VAX product line with a horribly slow but cheap implementation in the 11/730 vs the faster 11/780, and later with systems like the MicroVAX vs the 8650. And of course the granddaddy of them all is IBM's System/360, which did all this back in the 60s... --Cmdrjameson 11:00, 31 January 2006 (UTC)
The 68000 was a mainframe on a chip ;-) Graham 11:10, 31 January 2006 (UTC)
Hardly. It may have been the first microprocessor to have an architecture similar to that of a mainframe CPU, but it didn't particularly have any of the other attributes of mainframes, nor was the raw computing performance comparable to a contemporary mainframe. That's not a dig at the 68000; it wasn't trying to be a mainframe, and it was definitely a best-in-class microprocessor for several years after its introduction.
Intel called their Intel iAPX 432 a "Micromainframe", and it had a few attributes that were more mainframe-like than the 68000, but its uniprocessor performance was significantly worse than the 68000. --Brouhaha 23:09, 31 January 2006 (UTC)
Earnestness alert! I was joking. Graham 23:47, 31 January 2006 (UTC)



The article claims that the 68000 has 3 ALUs. This is completely wrong. It has a single 16-bit ALU. And this is probably the most important aspect that makes the 68000 a 16/32 chip, even internally.

32-bit ALU operations are performed internally in two 16-bit steps, using a carry when needed. 32-bit address calculations are performed using a separate, simpler AU. This unit is not an ALU; the only operation it can perform is a simple 32-bit addition. Ijor 19:40, 14 December 2006 (UTC)
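To make the two-step idea concrete, here is a sketch at the programmer-visible instruction level; it is not the actual microcode, it just expresses the same split using only 16-bit ALU operations:

    ; 32-bit add, d0 := d0 + d1, as two 16-bit steps
    add.w d1, d0        ; add the low words; a carry out sets the X (extend) flag
    swap d0             ; bring the high words into the low halves
    swap d1             ; (SWAP does not disturb X)
    addx.w d1, d0       ; add the high words plus the carry from X
    swap d0             ; restore the original word order
    swap d1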

If you look at a die photo there are three equal-sized ALUs. Multiplication in particular is handled by two ALUs chained together. Potatoswatter 10:36, 15 December 2006 (UTC)
Can you point to some document that states that those 3 sections are 3 ALUs, as you think? Can you point to any source describing that multiplication is performed by two ALUs? Can you explain, if multiplication uses two 16-bit ALUs, why it takes the number of cycles it does? Can you explain, if it has more than a single ALU, why logical 32-bit operations (that don't require carry) take longer than 16-bit ones? If it has 3 ALUs, then can you explain the timing of the DIVx instructions? Can you explain the need to implement an ALU exclusively for address calculation, when all that is needed is a simple small addition? Ijor 17:01, 15 December 2006 (UTC)
I'm pretty busy so I can't do research for you, and I won't be around for the next month either. See The M68000 Family, vol. 1 by Hilf and Nausch. Microphotograph on p40. The low address word has a much smaller ALU. They describe in detail the layout of the microcode and nanocode ROMs and how the ALUs are ported to each other. There are two LSW ALUs and one MSW ALU. 70 cycles for multiplication = 16 instruction cycles * 4 cycles/instruction cycle + 6 cycles overhead. The ALUs form three 16 bit registers of internal state. I'd guess two are being used to calculate a running total, with the first operand latched into the low word ALU's input, and a left shift performed every insn cycle. The third ALU is used to right-shift through the second operand.
Don't underestimate the importance of instruction cycles as opposed to clock cycles. The ALUs just couldn't be programmed to do an operation every cycle. The above algo fits with the address ALU doing two additions per insn cycle and the data ALU being able to do right shifts one per insn cycle. The microcode needed one cycle to branch, assuming the operation was programmed as a loop. (Otherwise that cycle would be a conditional branch, so same difference.) No real activity could happen when the microcode state machine was dedicated to controlling itself. Making a microcoded ~68000 transistor machine do multiplication that fast is harder than it might sound.
Not fair to demand an explanation of division. Generally what you seem to be confused about is the fact that the address ALU had to compute a 32 bit addition for every insn just to increment the PC. Potatoswatter 04:47, 16 December 2006 (UTC)
I don't need you to do any research for me; I already did. I researched and investigated the 68000 far beyond what was ever done, at least in disclosed form. The questions I asked were rhetorical, just to prove my point. I already know the answers to all of them.
I don't have that book, but if it states that the 68000 has 3 ALUs, then the book is wrong. The book seems to be confusing an ALU, with a simple AU. The 68000 32-bit AU, which indeed can be separated in two halves, is not an ALU. Among other things it can't perform logical operations, and it can't perform shifts or rotations. It can only perform a simple addition. That's why it is called AU, and not ALU. Btw, the term AU is used in Motorola documentation, it is not my own one.
The above MUL algo is wrong, for starters because 70 cycles is only the maximum; the count depends on the operand bit pattern and is not fixed.
It is true that the ALU can't perform operations on every CPU cycle. Actually the 68000 needs a minimum of two cycles for any internal operation, not just ALU ones. But it is wrong that it needs an extra cycle to perform microcode branches. Microcode branches, conditional or not, don't take any overhead at all. The reason is that every microcode instruction performs a branch. There is no concept of a microcode PC. Every micro instruction must specify the next one.
Instructions must of course perform a 32-bit addition to compute the next PC. This is not an impediment, nor an explanation for why 32-bit operations take longer. The only explanation is that there is a single 16-bit ALU. See my article about the 68000 prefetch and cycle-by-cycle execution to understand why.
It is perfectly fair to ask about division. I solved all the details about the 68000 division algo already and published the results about a year ago. Can you or the authors of that book explain the exact timing of both DIV instructions for multiple ALUs?
As you can see by reading my articles, I know exactly the difference between a CPU clock cycle, a bus cycle, and a micro-cycle. Ijor 16:04, 16 December 2006 (UTC)
Some quotes from Motorola papers and patents: "A limited arithmetic unit is located in the HIGH and LOW sections, and a general capability arithmetic and logical unit is located in the DATA section. This allows address and data calculations to occur simultaneously. For example, it is possible to do a register-to-register word addition concurrently with a program counter increment".
I think this clearly shows that there is a single ALU, and that the other one is an AU. Again, it wouldn't make any sense to implement an ALU that would never perform any LOGICAL operations.
Btw, by re-reading your post it seems you think that one microcycle (what you call an instruction cycle) takes four CPU clock cycles. This is also wrong; it takes two clock cycles, not four. Ijor 04:18, 2 January 2007 (UTC)

Here's a bit of real history. (I was at Motorola in the 80's).


The original design was for a machine with 32 bit address registers and only 16 bit data registers, eight of each! The microcode writer convinced the powers that be to make it a 32-bit-address/32-bit-data machine.
This history is reflected in the fact that the high 16 bits of the 8 data registers are located beside the high 16 bits of the address registers and are physically located on the far side of the chip from the data ALU rather than beside it like the lower 16 bits are.
Yes, there were 3 math units. One 16 bit ALU and two 16 bit Address Units. The ALU was complex enough for the math instruction set while the AUs were only able to do address related math (add, subtract). And of course the 2 16 bit AUs worked together to make the 32 bit address calculations.
Therefore, to do a simple 16 bit math instruction, the ALU can do the operation while the AUs can perform address calculations during the same micro-cycle.
To perform a general 32 bit data operation, it is necessary to move the high 16 bit register data past/through the AUs to the ALU. This is why 32 bit ops take many more cycles than 16 bit ops.
One could therefore say it was a 16 bit processor pretending to be a 32 bit one. A real 32 bit ALU came out in the 68020.
BTW: If you look at a die, the top half is the microcode, the middle the random logic, and the bottom the registers and ALU/AUs. The bottom third is ordered left to right: high A & D registers, AUhigh, (X), AUlow, low A registers, (X), low D registers, ALU, I/O aligner, where the Xs are gates (usually closed) that allow data to travel between the three areas, and the I/O aligner is the interface to the data pins with byte/word align logic.


Some more interesting History.


After making the 68000, IBM and Motorola got together and made the 360 on a chip. It was thought that because the 68000 core was '32 bits' and regular enough, this would be an easy task.
What they failed to realize was that the random logic in the middle of the chip would have to change considerably to make the 360 on a chip. This took more resources and time than Motorola expected, delaying the 68010 and 68020, giving Intel a better chance to jump ahead with the i286/i386. 71.231.43.16 (talk) 13:05, 30 November 2007 (UTC)HenriS

Very interesting historical tidbit about the 360. But regarding your comments about why 32-bit ALU operations take more cycles, I'm not sure that is correct. The main reason they take more is because, obviously, the ALU is 16 bits, so at minimum an extra micro-cycle is required. I don't think the higher 16 bits need to go through the AU to reach the ALU. It is true they are physically located nearer to the AU than to the ALU, but the internal buses connect all the sections. If that wasn't true then 32-bit ALU operations would take much longer than they actually do. Ijor (talk) 05:07, 6 December 2007 (UTC)

It is a 16 bit ALU. The multiplication was performed by a barrel shifter and was heralded as an innovation in microprocessor design. I don't know where all of this 3-ALUs BS is coming from, but there was one 16 bit ALU in this chip. The MC68000 was a 16 bit machine. The MC68010 added virtual memory capability. The MC68020 was the first 32 bit family member. The MC68020 core is used as the basis for CPU32 designs. I know this machine very well. I've designed with it. I've written assemblers and debuggers for it and used one in my Atari ST. —Preceding unsigned comment added by 72.78.53.31 (talk) 21:18, 24 July 2010 (UTC)

Ijor, have you ever considered contributing something besides to this talk page? Potatoswatter (talk) 08:23, 6 December 2007 (UTC)
The important thing is, he's right and, typically, he's being ignored, because fandom is against him. Vapourmile (talk) 07:15, 13 March 2021 (UTC)
Hi Potatoswatter. I did, but only "considered", sorry. A possible useful contribution could be a link to my undocumented 68000 web page. If you like it, go ahead and put a link on the main page: http://pasti.fxatari.com/68kdocs/ Ijor (talk) 15:40, 6 December 2007 (UTC)

It somehow seems to have been established merely by popular convention that it's "16/32bit" or "32bit". By what reckoning is it so significantly 32bit as to warrant that appearing in its title? It really has more to do with marketing and branding than reality, doesn't it? Neither the address nor data busses are 32bit. The ALU isn't 32bit. With that said, what about it can possibly be 32bit enough to change its title? Vapourmile (talk) 23:12, 15 August 2020 (UTC)

It's probably all those 32-bit wide registers that have people thinking it's a 32-bit processor. --Wtshymanski (talk) 01:14, 17 August 2020 (UTC)
I agree, it probably is. But 32bit registers cannot make a processor 32bit by themselves. The Intel AVX-512 extensions give the Intel chips 512bit registers. Do you believe they are all 512bit microprocessors? The bit rating of a CPU refers to its single overriding operating characteristic. If the 1979 68000 is 32bit merely for having 32bit registers then multiple CPUs from the 1960s onward have been classified incorrectly, and should be given the same upgrade Motorola awarded themselves. Vapourmile (talk) 02:06, 18 August 2020 (UTC)
Why is this useful to discuss? If "32-bitness" or "512-bitness" is a useful property to classify a processor, then sure, call them 32-bit or 512-bit processors. What are we trying to convey when we describe a processor as "32-bit"? And does the 68000 meet some practical definitions of 32-bitness? --Wtshymanski (talk) 23:00, 18 August 2020 (UTC)
Is it useful to discuss anything? This is a public-access encyclopaedia, so of course it's useful. It comprises knowledge. I don't see Amiga/68000 fanatics arguing about the usefulness of the many misleading and inaccurate claims they make throughout the internet and throughout Wikipedia for competitive rather than educational reasons. Is it useful to know the top speed of a Ferrari? It is information. When published in a public encyclopaedia, the main thing to attain should be accuracy. I make the point because it is not only obvious that most of the defenders and promoters of the Amiga and 68000 don't care about the accuracy of public knowledge, it is also true that they don't even know how bit-ratings are calculated. They tend to just manufacture a figure from randomly selected features of the various chips. As I said before, the answer is found in the overriding operational unit-size of the microprocessor, which in the case of the 68000 is 16, not 32bit. Just as the average speed of a car on a race track is less than the maximum speed of that same car, and the chain is only as strong as its weakest link, it is also so that the bit-rating of a microprocessor is NOT determined by the widest feature you can find on that chip-die at a stretch. If it was, then a Cray would be a 1024bit computer. The bit-rating is determined by the overriding operating characteristic, which in the case of the 68000, with its 16bit ALU and 16bit memory interface, is 16 bits. Vapourmile (talk) 19:53, 22 August 2020 (UTC)
Before the illustration goes boom: Motorola, in their October 1985 document, titled it "MC 68000 16-32-Bit Microprocessor" and gave a very precise description of its bitted-ness in the introduction section. --Wtshymanski (talk) 22:19, 23 August 2020 (UTC)
No. That isn't what happened. What happened was Motorola set out to design a 16bit chip with some degree of forwards binary compatibility with a then prospective 32bit chip. As the 68000 was launched, what Motorola's then head of marketing decided would be a good idea was to leverage some of the side effects of that 16bit chip's forwards compatibility to produce a smokescreen of marketing blurb arguing along the lines of "Hey, how many bits do CPUs have? You know, it's all so complicated. I know other players in the microprocessor industry strictly measure it by the width of the data bus (because, notwithstanding a look at the ALU, that's exactly what that piece of terminology means), but hey, why do that? Why don't we..... totally make up our own special unique definition to suit the design of the 68000 which magically makes it 32bit?". So the 16/32bit label was borne out of Motorola marketing blurb. "Hey, I know it's hard to know how many legs people have. Look at all those confusing appendages a human body has! I'll tell you what: legs are limbs and arms are limbs. Arms are basically legs if you walk on all fours. So how about we just call you a quadruped, sounds cool right? Hey! You're a quadruped! QED". No, it would be great if that's how it worked, but it isn't. The 68000 is not 32bit, despite what Motorola's marketing said about it. Vapourmile (talk) 04:51, 25 August 2020 (UTC)
Back in the days when I was doing assembler code on 8-bit processors, I would have loved to have had so many 32-bit registers available that would do useful things in just one instruction. But if you have religious objections to calling it a "32-bit" processor, I'm not going to dissuade you. Seems excessively purist to me, but that's how WP rolls. --Wtshymanski (talk) 20:21, 25 August 2020 (UTC)
I have no such thing as a "religious" motivation, and it's a sad mainstay of the internet that people on it, for want of something more effective to say, try to make a discussion personal when they are unable to say anything pertinent about the facts of an issue. It's quite the contrary: those who want the 68000 to be 32bit have a religious devotion to supporting and evangelising it, just as Motorola's own marketing did. The number of bits in the registers is not meaningful in this discussion. If it was, then the Intel chips released since the Xeon Phi, with 32 512bit registers, would be 512bit CPUs. Do you recall Intel, or AMD, or anybody, announcing the launch of 512bit CPUs? No, because the register size is irrelevant, and the only people to mention it are the sort of people Motorola's marketing campaign was designed to take advantage of: people who don't actually know how the bit rating of microprocessors is properly determined. Vapourmile (talk) 06:05, 26 August 2020 (UTC)
Now I'm more confused. So at what point does the 68XXX family become a 32 bit processor? Was the IBM PC XT an 8 bit computer? Professor Tabak in "Advanced Microprocessors" says the 68000 was a 16 bit system but also says the 68008 is a 16 bit system, though it has only an 8 bit data bus. On the other hand, Professor Clements in "68000 Family Assembly Language" just gets right to programming and never tells us what bit-ness he believes the processor is. The 68000 has only 23 address lines; does that make it a 23-bit processor? Who is the ISO custodian of bit-ness? ---Wtshymanski (talk) 02:31, 27 August 2020 (UTC)
It doesn't seem very confusing to me. What are you confused about? Whether the 68000 is 8, 16 or 24bit? None of that new information creates any confusion over whether or not it's 32bit. You say yourself your chosen avatars, Professors Tabak and Clements, don't say the 68000 is 32bit. According to you, Professor Tabak claims both the 68000 and 68008 are 16bit; the other doesn't comment at all. So great, you've brought in other witnesses, who you presumably class as experts, who either refrain from comment or classify the 68000 as 16bit. The single question cast by the presence of an 8bit bus on the 68008 isn't whether it's 16 or 32bit, it's whether it's 16 or 8bit. The single reason that doubt has been cast is the one architectural element that matters: the size of the ALU. That same commentator doesn't claim the 68000 is 32bit because the 68000 cannot claim to have a 32bit ALU. It has a 16bit ALU, so you have presented no doubt in Prof. Tabak's mind as to whether the 68000 is 16 or 32bit: it's 16bit, not 32bit.
So what about this tussle between the ALU and the data bus? Well, it's a question of chains and links. If you have a 16bit ALU but only an 8bit data bus then it somewhat negates the ALU from consideration, though not so much in Professor Tabak's mind as to downgrade the CPU rating. As with counting how many cylinders your car has, though, you don't need an appeal to authority to decide the issue. The fact is, a 16bit ALU is blatantly downgraded by an 8bit bus, isn't it? Whatever he thinks. No matter what his authority. We don't need him to verify basic use of two-digit numbers: a 16bit ALU communicating via an 8bit bus clearly is not the same as a 16bit ALU communicating over a 16bit bus. The difference is bottlenecking. The other key difference is, again, the prime reason chip ratings are dispensed: the motherboard it fits into. Whereas a 68000 fits a 16bit motherboard, a 68008 fits an 8bit board. Let's compare it to the Intel analogy: the 8086 and the 8088. The analogy holds well: the 68000 and 8086 chips are both 16bit, and the 68008 and 8088 differ from their parent CPU in only one way: the 8bit data bus. So how does Intel do this? In their own CPU manual for the 8088 Intel clearly describe the 8088 as the "*8bit* HMOS microprocessor". The only feasible dispute over this, if you accept the names you've cited, comes from the size of the ALU, which for the 8088 is 16bit.
Is this confusing? For you, maybe. I don't think it is confusing in the least. The overriding characteristic in microprocessors was, even in the time of the 68000, considered to be the data bus, because that's solely how the results are delivered to the outside world and it's solely what determines what type of motherboard the chip slots in to. Like the 8088, the 68008 was devised to save manufacturing costs by fitting the chip to 8bit boards. It would be a fool who claimed this fact is beneath consideration. When you have an 8bit bus it doesn't matter, as far as delivering outcomes is concerned, if you have a 1024bit ALU, because you can't see those results except in 128 lots of 8bits. It affects the outcome. To draw an analogy: it's like business delivery times. Imagine you have a local Chinese restaurant which delivers, and which promises to have your order ready for dispatch within an hour. If you don't get a knock at your door for five hours you're going to complain, because the delivery time directly affects your experience. If they can promise to have your order ready in one hour, but their delivery system means you can never expect it at your door within five hours, it's impossible to pretend the delivery time of results somehow doesn't matter. Of course it matters. It matters more than anything else. The hour doesn't matter anymore. If you have a takeaway next door which delivers in two hours you'll take that over five, and you won't care how quickly they have orders packed. So it is with an 8bit bus: you can't argue it has no effect on user experience. So if you have an 8bit bus it conversely doesn't even matter if you have a 256bit ALU which finds the results of additions in a single clock cycle, because you still do nothing but wait for 33 cycles during which time you can't do anything. On the 68000 you can't have any 32bit result in less than 12 cycles, and the reason is it's delivering them over a 16bit bus, to a 16bit motherboard, with wait states. Having said so, the 68000 doesn't even have a 32bit ALU. It has a 16bit ALU with a 16bit bus. So whereas you might have a discussion about whether the 8088 and 68008 are 8 or 16bit, that dispute exists only between the size of the ALU and the width of the data bus. In the case of the 68000 there is no such dispute. It doesn't matter whether you pick the ALU or the data bus; both the ALU and data bus of a 68000 are 16bit. Neither of them are 32bit. That dispute doesn't apply. There is no fog. *All* of the 68000's overriding features are 16bit. The 68000 is a 16bit CPU. Vapourmile (talk) 00:14, 4 September 2020 (UTC)
Atari couldn't seem to make their mind up on the ST because ST stands for "Sixteen/Thirty-two", which apparently 'refers to the Motorola 68000's 16-bit external bus and 32-bit internals.' Most likely just marketing speak though. Loweredtone (talk) 13:29, 4 September 2020 (UTC)

32bit claims for the 68000 are far more deeply rooted in evangelising rhetoric than in fact.

The declaration that the 68000 is in any way realistically 32bit has far more in common with evangelising, which began in Motorola's own marketing material, than it does with architecture. The claim is, for all practical purposes, a myth.

In actual fact the single overriding factor determining the bit rating, the metric which decides from an architectural perspective what the bit rating of a microprocessor is, is its bus; in particular, its data bus, which for a 68000 is 16bit. The technical explanation is as simple as the fact that this is the breadth of its communications with the world outside the chip die. Its evangelists' argument for the superfluous addition of 32 to its bit rating is clutching at straws to give the chip a purely notional win against its competition. In real terms, the reason the data bus predominantly determines the bit rating of the CPU is because no matter what claims are made about its internal architecture, even if somewhere there were 64bit operations on die, it would operationally make negligible difference, because the results of those calculations, irrespective of how they're calculated on die, would still leave the chip die and reach the outside world 16bits at a time. Indeed this is how the CPU operates. If for example you write an instruction with a 32bit operand, that operand is in fact for practical purposes two 16bit values in memory. An instruction and its operand of any size are read into the CPU 16bits at a time.

The way to think of it is in terms of the Turing Machine concept on which CPUs are based. The RAM takes the place of the tape on a Turing machine, with the head of the machine reading and writing the values on the tape. Each cell on the tape of this Turing Machine in the case of the 68000 contains a single 16bit number. It is therefore a 16bit CPU.

Even that description does not define it in the strictest terms of its day. In the stricter sense, the bit rating of a CPU is defined by how many bits it can move between main storage and the CPU die in a single machine cycle. That is the measure which provides by far the most telling guide to how many "bits" a CPU really is, and in terms of clock timing the 68000 is more like an 8bit CPU than a 16bit CPU.

If Motorola fans really want the industry to redefine how the bit rating of a CPU is measured, just to suit them, then they should consider that the same alteration of definition similarly offers a magic-wand upgrade to the chip class of Motorola's competitors. Intel's 8088, marketed by Intel themselves as 8 bit, is 8bit only as far as the multiplexed data bus is concerned, but internally, apart from the multiplexer, the 8088 is entirely 16bit, including the ALU. So Intel should be allowed to reclassify the 8088 simply by adopting the Motorola chip-fan's anomalous method of classifying their favoured CPU. The 8088 is far more "internally 16bit" than the 68000 is "internally 32bit", and yet Intel used the conventional method of using the data bus to determine its classification. The Pentium had a 64bit data bus and its MMX extensions included 64bit operations, which means the Pentium has a more convincing claim to being a 64bit CPU than the 68000 does to being 32bit. The argument continues throughout the Intel line: the SSE extensions added 128bit registers to the instruction set, so if the 68000 has warrant to be called 32bit then those Pentiums must be 128 bit. Furthermore Intel's AVX-512 extensions add 32 512bit registers. So again, if we accept the way 68000 fans want us to do this, then Intel's Knights Landing Xeon is a 512bit CPU.

The arguments made favouring the 32bit moniker have, since their inception in none other than Motorola's own marketing material, constituted nothing but fog: an ad-hoc redefinition of how to classify CPUs purely to suit the specific architectural quirks of the 68000. The argument is, for all practical purposes, clutching at straws. People should not feel free to redefine the scoring system of the game so it becomes about some arbitrary specifics of their home team, just so their home team wins. There is no technical validity in pointing to an arbitrary 32bit unit on the chip die and pretending it can change the CPU class. 32 bit registers do appear in the chip specification, but with a 16bit data bus, the pins of which can be physically counted, and with a 16bit ALU, the argument that "32bit" should appear anywhere in the 68000 chip classification is fan fiction. Vapourmile (talk) 01:45, 4 August 2020 (UTC)

There's more than one form of "architecture" in computing. The instruction set architecture of the 68000 is 32-bit, as the registers are 32 bits wide, addresses are 32 bits wide (although the upper 8 bits are ignored, but that's also true of System/360, and some 64-bit instruction set architectures don't allow a full 2^64-byte address space), load and store instructions can load and store 32-bit quantities, and most arithmetic instructions can do 32-bit arithmetic (multiply and divide are the exceptions). The same applied to the 68010 (and the 68012, except that only the uppermost bit of the address was ignored).
The microarchitecture was mostly 16-bit, so that most 32-bit operations took two cycles. (That's also true of many System/360 implementations, although those implementations generally had only 8-bit ALUs.)
The bus architecture was also 16-bit for data (and 24-bit for addresses).
Systems had either an LP32 architecture, in which integers were 16-bit, and "long" integers and pointers were 32-bit, or an ILP32 architecture, in which integers, "long" integers, and pointers were all 32-bit. LP32 allows memory and arithmetic operation on integers to take one cycle; ILP32 allows code written for other ILP32 platforms to work with less effort. The Mac, and some UN*Xes, went with LP32 for performance reasons; SunOS, however, went with ILP32, even on the 68010-based Sun-2, because a lot of the code for it came from a VAX ILP32 environment.
Both LP32 and ILP32 would be difficult, at best, on contemporary 16-bit processors such as the 8086 and Z8000. The 80286's and Z8000's segmentation made it less painful to have an address space larger than 2^16 bytes, but it's still more work than on the 68000.
And the only "Turing machine" on which CPUs are based would be the Turing-designed Automatic Computing Engine; the abstract mathematical machine he designed is not the basis of CPUs - they're based on the stored-program computer designs done by Turing, von Neumann, and others. If you simulate a Turing machine on an N-bit computer, you are not restricted to 2^N symbols - you can have fewer symbols if you pack multiple symbols into a unit of memory, or you can have more symbols if you use more than a single unit of memory for a symbol, so a Turing machine simulation on a 68000 could have 1-byte, 2-byte, or 4-byte symbols with relatively little effort, 3-byte symbols with a little more effort, and 5-or-more-byte symbols with a bit more effort. Guy Harris (talk) 05:04, 4 August 2020 (UTC)
I am fully aware of how the 68000 operates, thank you. The claims the ISA is "32 bit" are false, for the reasons I have already given. The 68000 is a 16bit CPU. There isn't really any debate except fog, such as the fog you've supplied. First, as I have already pointed out, the chief overriding architectural decision placing an upper limit on the classification of the CPU is the size of the bus. As you have said, the bus is 16bits wide. That is for all practical purposes the end of the discussion. All you've added is fog. Long word instructions on the 68000 instruct the CPU to load two 16bit locations from main RAM in separate operations. You do not alter the classification of a CPU by summing the number of bits moved in successive operations. For it to be a 32bit CPU it would be necessary for the chip, by definition of what the terminology "32bit" means, to move the two 16bit words following the instruction in main RAM in a single operation. It doesn't do this. It moves the instruction and the "32bits" from main RAM to the CPU die in three successive stages. It does this because it is a 16bit microprocessor. As already stated, if we accept that you just sum the total number of bits handled by the CPU in successive operations, then the terminology has very little meaning: the Intel Knights Landing CPU becomes a 512bit microprocessor. As I also said, the strict definition of the bit classification is how many bits it moves in a single cycle; on a 68000 the answer to that is 8.
The 68000 moves addresses to and from main RAM 16bits at a time. It does this because it's a 16bit chip. Everything else is just fog. — Preceding unsigned comment added by Vapourmile (talkcontribs) 02:22, 6 August 2020 (UTC)
"The claims the ISA is "32 bit" are false, for the reasons I have already given." You gave no reasons whatsoever. You talked about the data bus, but that's not exposed in the ISA, except to the extent that the 68000 requires 16-bit, not 32-bit, alignment of 32-bit operands. (The 68020 doesn't even expose that, as it doesn't require 16-bit alignment, it handles 16-bit and 32-bit quantities on arbitrary boundaries.) Guy Harris (talk) 02:39, 6 August 2020 (UTC)
I gave many reasons, most of which you have just ignored. You appear to want to do what 68000 fans do: to have the argument swing on a NOTIONAL 32bits, such as the "32bits" you appear to see when you edit a 68000 assembly program. That isn't a valid argument. You can have a .longbyte assembler directive on a 6510 assembler, but that didn't just magically make the 6510 32bit. Those assembly language instructions have no analogue on the 68000 hardware. It retrieves the information from RAM 16bits at a time. It is not some inexplicable quirk that the instructions and data in RAM need to be 16bit aligned; it is this way because it is a 16bit CPU. What isn't "exposed"? You think if you don't talk about something it disappears? Of course it's "exposed": it's exposed in the fact that a time penalty exists for using long word instructions which doesn't exist for word length instructions, because it is a 16bit CPU.
The argument reminds me of religious doubts people have when trying to reconcile the puzzle of why their real-life experience seems to contradict religious claims: "If there is a God, why do good things happen to bad people?", "If there is a God why aren't my prayers answered?", "If there is a God why is there no mention in the bible of modern discoveries, such as the pathogenic transmission of diseases, or anything else that people wouldn't know about then, but a God would?". All these paradoxes cease when you conclude that the God about which those questions are asked is not real. It's the same with the 68000: "If the 68000 is 32bit then why does it only have a 16bit ALU?", "If the 68000 is 32bit then why is there a 4 clock cycle penalty for using 32bit long-word load operations?", "If the 68000 is 32bit then why does it have only a 16bit interface to main RAM?". Answer: because the idea it's 32bit is a fiction. The Motorola 68000 is a 16bit chip. That is the explanation for all those anomalies, which would not exist if it were a 32bit chip. Everything else is just fog. The long word instructions only indicate that the instruction is followed by two successive 16bit words.
Here are the instruction timings for a 32bit microprocessor:
    Instruction       Clocks   Description
    ADD AL, imm8      2        Add immediate byte to AL
    ADD AX, imm16     2        Add immediate word to AX
    ADD EAX, imm32    2        Add immediate dword to EAX
Notice what happened? It takes 2 clock cycles. Notice what else happened? The time penalty when doing a 32bit operation is zero. That is because this behaviour is what defines what it means to be 32bit. The 32bit chip takes exactly the same amount of time to do the 32bit operation as it does to do the 8bit operation. There is no time penalty because that is what a 32bit chip looks like operationally. Functionally it is doing a 32bit operation in a single step. That is what 32bit means. There are no operations on a 68000 which take fewer than 4 cycles. Vapourmile (talk) 03:33, 6 August 2020 (UTC)

"I gave many reasons, most of which you have just ignored." You gave no reasons that have anything to do with the instruction set architecture. Timings are not part of an instruction set architecture, as there can be multiple implementations of the same instruction set architecture that have different timings. That's one of the main purposes of an instruction set architecture - to allow multiple implementations, with different costs and performance characteristics, that can run the same binary code.

Stop reverting the talk section. Your endless waffle about "the instruction set architecture" is irrelevant. Go away and come back when you know what you're talking about. Vapourmile (talk) 05:06, 26 March 2021 (UTC)

The lengthy and unhelpful elucidation on the IBM 360 between this heading and the next section is neither illuminating nor relevant in this context. I recommend it for deletion.

The IBM System/360 instruction set architecture is defined by the IBM System/360 Principles of Operation manual. As that manual says on page 5:

Models of System/360 differ in storage speed, storage width (the amount of data obtained in each storage access), register width, and capabilities for processing data concurrently with the operation of multiple input/output devices. Several cpu's permit a wide choice in internal performance. The range is such that the ratio of internal performances between the largest and the smallest model is approximately 50:1 for scientific computation and 15:1 for commercial processing. Yet none of these differences affect the logical appearance of System/360 to the programmer.

The System/360 instruction set architecture has 16 32-bit general-purpose registers, the last 15 of which can also be used as base or index registers (effective addresses are calculated by adding a 12-bit displacement, the contents of the base register, and the contents of the index register; if register R0 is specified as a base or index register, the value 0 is used instead - but R0 is not a "zero register", as in load, store, and arithmetic instructions, its contents are loaded, stored, or used as an operand and the result stored into it). Addresses are 24 bits - the upper 8 bits of an effective address are ignored. (The IBM System/360 Model 67 is an exception; unlike the other models, it 1) supported demand paging with a page-table-based MMU and 2) supported 32-bit addresses if the MMU is enabled.) It supports 32-bit integer arithmetic.

The IBM System/360 Model 30 is an implementation of the System/360 instruction set architecture. The IBM System/360 Model 30 Functional Characteristics manual indicates that the CPU data paths are 8 bits wide (so a 32-bit addition requires 4 8-bit addition cycles in the microcode) and the data path to memory is 8 bits wide. It also gives instruction timings; for example, a 16-bit addition instruction (Add Halfword, always register-memory, with the register operand being 32-bit and the memory operand being 16-bit) takes 27 microseconds if no index register is specified and 31.5 microseconds if an index register is specified, and a 32-bit register-memory addition instruction (Add) takes 29 microseconds if no index register is specified and 33.5 microseconds if an index register is specified. (There is no 8-bit addition instruction.)

The IBM System/360 Model 40 is an implementation of the System/360 instruction set architecture. The IBM System/360 Model 40 Functional Characteristics manual indicates that the CPU data paths are 8 bits wide (so a 32-bit addition requires 4 8-bit addition cycles in the microcode) and the data path to memory is 16 bits wide. It also gives instruction timings; for example, a 16-bit addition instruction (Add Halfword, always register-memory, with the register operand being 32-bit and the memory operand being 16-bit) takes 10.63 microseconds if the leading 16 bits of the register operand aren't modified by the operation and 11.88 microseconds if they are (plus an extra 1.25 microseconds if the operation overflows and overflow interrupts are disabled - presumably that's fixing up the result), under the assumption that A5 and A6 apply to Add Halfword as well as Subtract Halfword, and a 32-bit addition instruction (Add) takes 11.88 microseconds (plus an extra 1.25 microseconds if the operation overflows and overflow interrupts are disabled), so it's the same speed as an Add Halfword if there's a carry out of the lower 16 bits.

The IBM System/360 Model 65 is another implementation of the IBM System/360 instruction set architecture. The IBM System/360 Model 65 Functional Characteristics manual indicates that the CPU has a 60-bit adder for floating-point operations and an 8-bit adder for other operations (so a 32-bit integer or address addition requires 4 8-bit addition cycles in the microcode) and that the data path to memory is 64 bits (8 bytes) wide. The 16-bit register-memory addition instruction timing depends on the submodel, but is greater than the 32-bit register-memory addition instruction timing.

The IBM System/360 Model 75 is yet another implementation of the IBM System/360 instruction set architecture. The IBM System/360 Model 75 Functional Characteristics manual probably indicates how wide the CPU data paths are, but the available scanned copy lacks pages 9 and 10, which contain that information. It does, however, indicate that the data path to memory is 64 bits (8 bytes) wide. The 16-bit register-memory addition instruction timing depends on the submodel, but is greater than the 32-bit register-memory addition instruction timing. Volume 1 of the 2075 Field Engineering Manual of Instruction says, on page 87, that the main adder-shifter is a 64-bit adder.

So is the IBM System/360 instruction set architecture 8-bit, 16-bit, 32-bit, or 64-bit? Is the IBM System/360 Model 30 an 8-bit or 32-bit machine? Is the IBM System/360 Model 40 an 8-bit, 16-bit, or 32-bit machine? Is the IBM System/360 Model 65 an 8-bit, 32-bit, or 64-bit machine? Is the IBM System/360 Model 75 a 32-bit or a 64-bit machine?

If you discuss an instruction set architecture's bit width, that's independent of the bit widths of the implementations. If you discuss an implementation's bit width, there may be more than one bit width (for example, internal data paths and external data paths), and neither is necessarily the same as the bit width of the instruction set architecture being implemented.

So:

  • The IBM System/360 instruction set architecture is a 32-bit instruction set architecture with 24-bit linear addressing.
  • The IBM System/360 Model 30 is an implementation of that instruction set architecture, with 8-bit arithmetic/logical data paths and an 8-bit memory bus.
  • The IBM System/360 Model 40 is an implementation of that instruction set architecture, with 8-bit arithmetic/logical data paths and a 16-bit memory bus.
  • The IBM System/360 Model 65 is a partially 8-bit and partially 60-bit (the latter for floating point) implementation of that instruction set architecture, with a 64-bit memory bus.
  • The IBM System/360 Model 75 is an implementation of that instruction set architecture, with a 64-bit main adder that probably handles both integer and floating-point addition, a 24-bit three-input adder for computing addresses, an 8-bit AND/OR/XOR unit, an 8-bit decimal addition unit, a 7-bit adder for floating-point exponents, and a 64-bit memory bus.

Not a simple "this machine is N-bit" description, even at the instruction set architecture level, but so it goes; people who insist on such a description are doomed to disappointment.

Similarly:

  • The Motorola 68000 instruction set architecture is a mostly 32-bit instruction set architecture, lacking multiply and divide operations with 32-bit operands, and with, in effect, 24-bit linear addressing except on the 68012, which had, in effect, 31-bit linear addressing.
  • The Motorola 68000 is an implementation of that instruction set architecture with 16-bit arithmetic/logical data paths and a 16-bit data/24-bit address memory bus.
  • The Motorola 68008 is an implementation of that instruction set architecture with 16-bit arithmetic/logical data paths and an 8-or-16-bit data/20-or-22-bit address memory bus (this manual says the 68008's data bus width is "statically selectable" and that one package supports 20-bit addresses and another supports 22-bit addresses).
  • The Motorola 68010 is an implementation of that instruction set architecture with 16-bit arithmetic/logical data paths and a 16-bit data/24-bit address memory bus.
  • The Motorola 68012 is an implementation of that instruction set architecture with 16-bit arithmetic/logical data paths and a 16-bit data/31-bit address memory bus.

The Motorola 68020 extended the instruction set, making it fully 32-bit with 32x32->64 multiply and 64/32->32 divide operations, and extended the addressing to 32 bits.
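For instance, only the .l multiply and divide forms below require a 68020 or later; the .w form is the full extent of the multiply hardware on the earlier parts (mnemonics as given in Motorola's manuals):

    mulu.w  d1,d0        ; 68000/008/010/012: 16x16->32 unsigned multiply
    mulu.l  d1,d0        ; 68020+: 32x32->32 unsigned multiply
    mulu.l  d1,d2:d0     ; 68020+: 32x32->64, high half of the product in d2
    divu.l  d1,d2:d0     ; 68020+: 64/32 divide, quotient in d0, remainder in d2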

And:

  • The Intel 8086/8088 and 80186/80188 instruction set architectures (the 8018x added some extensions) are 16-bit instruction set architectures, with 16-bit linear and 20-bit segmented addressing.
  • The Intel 8086 is an implementation of the 8086/8088 instruction set architecture, with 16-bit arithmetic/logical data paths and a 16-bit data/20-bit address memory bus.
  • The Intel 8088 is an implementation of the 8086/8088 instruction set architecture, with 16-bit arithmetic/logical data paths and an 8-bit data/20-bit address memory bus.
  • The Intel 80186 is an implementation of the 80186/80188 instruction set architecture, with 16-bit arithmetic/logical data paths and a 16-bit data/20-bit address memory bus.
  • The Intel 80188 is an implementation of the 80186/80188 instruction set architecture, with 16-bit arithmetic/logical data paths and an 8-bit data/20-bit address memory bus.
  • The Intel 80286 instruction set architecture is a 16-bit instruction set architecture, with 16-bit linear and 24-bit segmented addressing in which the segmentation doesn't just involve adding a segment start address, shifted left 4 bits, to a segment offset.
  • The Intel 80286 is an implementation of the 80286 instruction set architecture, with 16-bit arithmetic/logical data paths and a 16-bit data/24-bit address memory bus.
  • The 32-bit x86 instruction set architecture is a 32-bit instruction set architecture, with 32-bit linear and 48-bit segmented addressing.
  • The Intel 80386 and 80486 are implementations of the 32-bit x86 instruction set architecture with 32-bit arithmetic/logical data paths and a 32-bit data/32-bit address memory bus; the Pentium is an implementation with 32-bit arithmetic/logical data paths and a 64-bit data/32-bit address memory bus.
  • The Intel 80386SX is an implementation of the 32-bit x86 instruction set architecture with 32-bit arithmetic/logical data paths and a 16-bit data/24-bit address memory bus.
  • Later implementations of the 32-bit x86 instruction set architecture had 32-bit arithmetic/logical data paths and a 64-bit data memory bus, with either 32 address bits or, if the extra address pins were added for PAE to do something more than just make page table entries bigger (including, in some later processors, the NX bit), 36 address bits.

And:

  • The Z8000 instruction set is a mixed 16/32-bit instruction set at the arithmetic/logical level, with 16-bit general registers and with 32-bit arithmetic instructions (add, subtract, multiply, divide, compare, test, shift) operating on register pairs, but requiring two 16-bit instructions to do a 32-bit logical operation; it has 16-bit linear and 23-bit segmented addressing, with optional support for segmented addresses in register pairs.
  • The Z8001 is an implementation of the Z8000 instruction set with support for segmentation and with a 16-bit data/23-bit address memory bus.
  • The Z8002 is an implementation of the Z8000 instruction set without support for segmentation and with a 16-bit data/16-bit address memory bus.
  • The Z80000 instruction set is a fully 32-bit instruction set.

Not quite the simple description fanboys might like when arguing about the merits of processors, but the point isn't to enable fanboy/foeboy arguments, it's to accurately describe the capabilities of the instruction set and its implementations for use by system designers, hardware engineers, and programmers.

So:

  • With System/360, 32-bit code, you can do 32-bit arithmetic and logical operations in single instructions, with either 32-bit pointers with the upper 8 bits zero or 24-bit pointers with extra stuff in the upper 8 bits, and your code could run on all models with no change, just with different performance, with smaller machines having slower clock rates and memory speeds as well as narrower data paths;
  • With the pre-68020 68k family, 32-bit code, you can do most 32-bit arithmetic in single instructions but with 32x32 multiplication and 32/32 division in subroutines or multiple instructions inline, and with either 32-bit pointers with the upper 8 bits zero or 24-bit pointers with extra stuff in the upper 8 bits unless you also wanted to, or needed to, support the 68012 (which gave you 31-bit pointers), and your code could run on all models with no change, but with some performance hits due to the 16-bit data paths and buses - code doing 16-bit arithmetic would run faster;
  • With the 68020 and later, that code would also work as long as it didn't use any of the upper 8 bits of pointers for any other purpose (or an external MMU, or the pins going to it, threw away the upper 8 bits), and if you only had to support the 68020 or later processors, and used fully 32-bit code, you could use the 32x32 multiplication and 64/32 division instructions rather than needing subroutines or multiple inline instructions to do multiplication and division, and could have a full 32-bit address space.
  • With the x86 processors prior to the 80386, you can't do 32-bit arithmetic and logical operations with single instructions (see the sketch after this list), and have to go through some amount of pain to use larger than 16-bit addresses.
  • With the 80386 and later, you can do 32-bit arithmetic and logical operations with single instructions and 32-bit addressing can be done without that pain.
  • With the Z8001/Z8002, you can do 32-bit arithmetic, but not logical, operations with single instructions and, with the Z8001 in segmented mode, can use larger than 16-bit addresses with less pain than on x86 processors prior to the 80386 but not as conveniently as you can on the 68000/68008/68010/68012 (you're limited to individual segments being no larger than 65536 bytes on the Z8001, whereas you can have a contiguous data area larger than 65536 bytes on the 68000/68008/68010/68012).
  • With the Z80000, you can do 32-bit arithmetic and logical operations with single instructions and 32-bit addressing can be done without that pain. Guy Harris (talk) 07:55, 6 August 2020 (UTC)
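To make that contrast concrete, a minimal sketch (68000 mnemonics, with the pre-80386 x86 equivalent shown in the comments):

    ; 32-bit add on a 68000/68010: one instruction, even though the
    ; implementation internally makes two passes through a 16-bit ALU
    add.l   d1,d0
    ; on an 8086/80186/80286 the same operation needs an add /
    ; add-with-carry pair across two 16-bit registers, e.g.:
    ;     add  ax,bx
    ;     adc  dx,cx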
I don't know why you think any of that is relevant. You continue to want to talk about "the instruction set architecture". It has no relevance. The designation is determined by the microprocessor or architecture. As I said previously, everything else is just fog, and that is all you have produced. I have adequately explained why the 68000 is 16bit. There is nothing in what you've written which has any relevance to how these things are determined. The fact you're trying to confuse the issue and don't know how it's determined doesn't change anything. I've just told you how it's determined. Vapourmile (talk) 11:58, 6 August 2020 (UTC)
"You continue to want to talk about "The instruction set architecture". It has no relevance." It's relevant to the programmer's view of a processor. A 68010 is demonstrably capable of running 32-bit code, in the sense of code with 32-bit integers and 32-bit pointers, doing 32-bit arithmetic in one instruction, and doing memory references within a 32-bit flat address space, even though it has 16-bit data paths for integer arithmetic and a 16-bit external data bus. Neither the 8086/8088/80186/80188/80286 nor the Z8000 could do that, although the Z8000 came closer. Guy Harris (talk) 18:34, 6 August 2020 (UTC)
I discussed this right at the beginning: The representation presented to a programmer in an assembler is irrelevant. 8bit CPUs might have an assembler syntax which allows them to use 16bit or 32bit words. This tells you nothing about the underlying architecture. You aren't changing the CPU architecture by changing what you're showing the programmer via computer software. Only what happens operationally in the hardware itself is relevant. In an assembler for an 8bit micro where you have 16bit .word directives, the programmer still addresses those locations 8bits at a time. Similarly, how many words you are able to instruct a CPU to operate on in a single command does not physically alter the word length. As I said previously, which you ignored, the AVX-512 instructions allow CPUs to collect 512bits from RAM. Those instructions do not make them 512bit CPUs. Intel chips have incorporated SIMD instructions since MMX. The MMX architecture gave the CPU 64bit registers. SSE implements 128bit registers. The register size does not alter the bit-rating of the CPU. Since you like talking about the IBM 360 so much, the 360 manual also tells you it has 64bit registers. The 360 manual also details instructions which command the CPU to load up to four 32bit registers, and again these factors do not alter the bit rating of the CPU. To explain it in simplest terms: If you have a robot which can carry up to four metric-tonne blocks at one time, it might respond to single commands instructing it to manipulate 1, 2, 4 or 8 blocks, or 16, or more. The existence of such commands does not alter the robot's handling capacity. Therefore, as I said right at the beginning, the Motorola 68000 is 16bit. That is its carrying capacity. What it can be made to *look like* to some operators, say, to those who don't bother to look at how long the work takes, makes no difference. Reality is not altered by situationally convenient misrepresentations of it. It is only through misrepresentation that any argument exists: The popular idea the Motorola 68000 is in any way 32bit was cooked up by a Motorola marketing manager in an early 68000 marketing campaign which intentionally fogged the issue of how microprocessor bit ratings are determined for their own ends, and it has endured, especially in the minds of its fans, who ever since have said "Hey! Ignore the microprocessor architecture! Look at the instruction set! Look at what the assembler software does with it!". Yes, what it does is hide the fact it's a 16bit CPU. This sleight of hand doesn't change anything physical. If you tell a man who owns a truck with a 1 cubic metre capacity to move 4 cubic metres of bricks, and he does it, and tells you he moved 4 cubic metres of bricks, it doesn't imply he owns a 4 cubic-metre capacity truck, and it doesn't mean anything so absurd as his truck "has a 1/4 cubic metre capacity". Vapourmile (talk) 01:30, 14 August 2020 (UTC)
One large problem here is that the notion of "bit width" isn't as simple as some think it is. Even at the instruction set level, independent of the characteristics of a particular implementation, there may be multiple register types, with different bit widths; there may be integer data registers, address registers, general-purpose registers used for both integers and addresses, floating-point registers, vector registers, SIMD registers, etc. Typically, for instruction sets with general-purpose registers, the bit width of the general-purpose registers is used as the bit width of the instruction set; floating-point registers, vector registers, SIMD registers, etc. may be wider, but they're not treated as the bit width.
Implementations of an instruction set with N-bit general purpose registers, or N-bit integer and address registers, may have one or more internal data paths, and the data paths for integer or address operations need not be N bits wide. Furthermore, the external data paths might not be N bits wide, and might not be the width of the data paths for integer or address operations, either (the System/360 Model 40 is an implementation of a 32-bit instruction set with 8-bit internal data paths and 16-bit external data paths, for example).
(Note: "instruction set" here refers to machine language, not assembler language.)
The Motorola 68000 has 32-bit address and data registers, and instructions that operate on them as 32-bit quantities; the only missing 32-bit integer instructions are 32x32 multiply and divide with a 32-bit (or wider) dividend and 32-bit divisor. However, its internal data paths for integer operations were 16 bits wide, so 32-bit arithmetic operations required two cycles; it had two 16-bit arithmetic units for addresses, operating in parallel, so it could do 32-bit address arithmetic in one cycle. Its external data path had 24 bits of address and 16 bits of data, so fetching or storing 32-bit integer or address values required two bus cycles.
The main programming models for the 68000, to use the terms used for C programming models, are LP32, with 16-bit integers, 32-bit long integers, and 32-bit pointers, and ILP32, with 32-bit integers, 32-bit long integers, and 32-bit pointers. The first programming model could do integer arithmetic faster, as it required fewer external bus cycles to load and store integer values and fewer internal cycles to do arithmetic on integer values. The second programming model allowed programs from 32-bit machines like the VAX to work with fewer changes - no need to, for example, replace "int" with "long" in C code. Apple chose the first model, presumably for performance reasons; Sun chose the second model, as they wanted to produce VAX-class workstations and servers, and were willing to take a performance hit, in their 68010-based machines; they were probably expecting future 68k processors to have 32-bit internal data paths, and may have known about the plans for the 68020 - the first 68000-based Sun-1 machines (without support for demand paging, as they didn't pull the Apollo trick of having two 68000's) came out in 1982, the first 68010-based Sun-2 (running a 4.2BSD-based UNIX, with demand paging) came out in 1983, and the 68020 came out in 1984, so they may have concluded that the best plan for the future was to go full 32-bit with the 68010-based machines.
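As a rough illustration of why the first model did less work per operation on a 68000 (cycle counts from Motorola's instruction timing tables):

    ; LP32: a C "int" is 16 bits
    move.w  (a0),d0      ;  8 clocks, one 16-bit bus read
    add.w   d1,d0        ;  4 clocks
    ; ILP32: a C "int" is 32 bits
    move.l  (a0),d0      ; 12 clocks, two 16-bit bus reads
    add.l   d1,d0        ;  8 clocks, two passes through the 16-bit ALU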
So the choice of 16-bit or 32-bit integers was a current performance vs. future capabilities choice; Sun knew quite well that 32-bit integer arithmetic was slower than 16-bit integer arithmetic (especially for 32x32->32 multiplies and 32/32->32 divides, as those involved subroutine calls in code compiled to run on Sun-2s), but they expected that this was a temporary limitation, and concluded that it was better to make it behave as a VAX-like (but big-endian...) 32-bit machine.
(I don't know whether they used LP32 for the Unisoft port for the 68000-based Sun-1s; if they did, it would not have been due to data path width differences between the 68000 and 68010, as the data paths didn't have different widths - it might have been due to Unisoft's port being designed around 16-bit ints. Sun offered a 68010 upgrade for the Sun-1 models, making them Sun-1Us; those ran the same ILP32 Sun UNIX that the Sun-2s did.)
System/360 had similar tradeoffs. IBM went with a 32-bit instruction set, with 32-bit general-purpose registers, even though most implementations had smaller internal and external data paths, and most of the machines leased to customers were the smaller implementations. Eventually, all implementations had 32-bit internal data paths for integer and address operations, and had wider external data paths to move data between the CPU cache and main memory, but the same code could run (at least as long as the same operating system was used) on all machines from the original machines with 8-bit data paths to the last of the 32-bit machines. They only made bit-width changes twice; the first was to add 31-bit addressing (with a mode bit so that old applications that stuffed data in the upper 8 bits of an address could continue to run, along with applications using 31-bit addresses, at the same time), and the second was to introduce a 64-bit version of the instruction set (again, still supporting the 32-bit-with-31-bit-addressing and 32-bit-with-24-bit-addressing applications with mode bits); otherwise, as long as both machines were running the same OS, a program originally developed for, and running on, the Model 30, with its 8-bit internal and external data paths, could run on a 64-bit z/Architecture machine, without recompiling or modification of assembler-language code.
So, again, the 68000 is a 16-bit internal integer data path/32-bit internal address data path/16-bit-data-and-24-bit-address external bus implementation of a mostly 32-bit instruction set (one that became fully 32-bit with extensions added in the 68020). The same is true of the 68010. Yes, there is a performance penalty for 32-bit integer arithmetic and 32-bit loads and stores with the 68000 and 68010, but, if the performance penalty is acceptable given other constraints (such as "must look like other 32-bit 4BSD machines to a programmer"), that might be an appropriate choice.
To put this in terms of the man with the truck, the right model here is that:
  • the company making the trucks initially made only a 1 m^3 truck, because, at the time, material costs, material weights, and manufacturing technologies meant that a 4 m^3 truck would be too expensive;
  • the man with the truck nevertheless would let you hire him with a contract to transfer 4 m^3 of material, rather than requiring you to hire him 4 times, each time with a contract to transfer 1 m^3 of material, because he was planning ahead, as he knew that the company was working on a 4 m^3 truck (perhaps twice as tall and twice as long) for a competitive price (based on cheaper and lighter material, or whatever; there's not much of an equivalent to Moore's law for motor vehicles, so the analogy needs a little handwaving);
  • eventually, he traded in the 1 m^3 truck for the 4 m^3 truck, and the same 4 m^3 transfer took place faster (at least partially because only one trip was necessary).
In addition, note that, even for systems using the LP32 model, the "P32" part means that, for example, it's easier for the programmer to manipulate arrays larger than 65536 bytes than on a 16-bit implementation of a 16-bit instruction set with a segmented address space, so the (mostly) 32-bitness of the instruction set is different in ways that aren't just marketing fluff. (Even if you have a compiler that hides the 16-bit segment offsets by loading segment registers, or manipulating a register pair, that adds more instructions to the code path, so a 16-bit implementation of a (mostly) 32-bit instruction set might be faster than a 16-bit implementation of a 16-bit instruction set with segmented addresses.) Note also that the Z8000 had 16-bit registers, so, even though there were 32-bit arithmetic instructions operating on register pairs, and segmented addresses could reside in a register pair, you'd have to have more than 16 general-purpose registers to have as many bits worth of registers as the 68000 did.
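For example, indexing a byte array larger than 65536 bytes is a straight pointer calculation on a 68000 (a minimal sketch; `base` is a hypothetical 32-bit pointer variable and d0 holds the index):

    movea.l base,a0      ; load the 32-bit pointer
    adda.l  d0,a0        ; add a full 32-bit byte index in one instruction
    move.b  (a0),d1      ; no segment registers, no 64 KB segment limit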
(By the way, if by "The 360 manual also details instructions which command the CPU to load up to four 32bit registers" you're referring to the Load Multiple instruction, you misread the manual - it can load up to 16 of the 32-bit general-purpose registers. The "Multiple" part indicates that it's not to be thought of as a single up-to-512-bit load, it's to be thought of as a sequence of 1 to 16 32-bit loads.
And the reason why I mention the S/360 so much is that it was one of the first systems to make an explicit distinction between instruction set and implementation, which was a major advance in computer engineering; that's a very important distinction to understand if one's going to discuss instruction sets and implementations thereof.) Guy Harris (talk) 09:18, 14 August 2020 (UTC)
"One large problem here is that the notion of "bit width" isn't as simple as some think it is". No. There are some cases of exotic systems or VLSI designs where it becomes hard to be specific. The 68000 is NOT one of those cases. "where it might Even at the instruction set level, independent of the characteristics of a particular implementation, there may be multiple register types, with different bit widths; there may be integer data registers, address registers, general-purpose registers used for both integers and addresses, floating-point registers, vector registers, SIMD registers, etc..". There you go again. Beginning by trying to fog the issue with a load of things which are just not relevant. It's like looking at a V8 and having somebody talk about how complicated it is to determine how many cylinders there are in an engine because a motor vehicle engine is made up of all sorts of different things. We aren't talking about motor vehicle engines generally nor esoteric designs which may exist elsewhere. The subject is how many "cylinders" a specific "engine" has. It really isn't that hard. There may be cases where it's hard, this isn't one. The 68000 is a comparatively simple 1979 68-70,000 transistor CPU with much more in common with its 8bit forebears than with its 32bit successors. The most work, in terms of bit width, you can get out of a 68000, with a single instruction, is a register-to-register move.l and it takes 4 cycles. That is Not the behaviour of a 32bit CPU and it isn't even how this is determined and there isn't anything else about a 68000 which is more 32bit than the basic on-chip register-to-register move operations. All sorts of things might be all sorts of complicated but none of them change these well-documented facts of the 68000: It has a 16bit ALU (The ALU is literally the main thing. Once that's 16bit, you don't really a place to go, but let's continue. Maybe I've missed something more important than the ALU somehow?). Maybe we can stretch the 68000 case out a bit by honing in on other 32bit things about the 68000, right? If we find any let's not worry about whether it's actually relevant to calculating the bit rating of the CPU itself. Let's try that out and look deeply into the 68000 for anything 32bit: The 68000 attaches to a 16bit motherboard. It attaches to it through a 16bit data bus. The 68000 has a 24bit address bus to which it attaches to 16bit RAM over 16 data lines on 16bit motherboards. OK, so rather than only considering the only things that might actually inform the discussion, let's also reach for the money shot let's just set out solely to eke out anything 32bit about the 68000, irrespective of relevance: Long-word instructions; they only instruct the processor to work on two 16bit values in series, one after the other. The 32bittest thing, operationally, you can do on a 68000, as I've already pointed out trying to help you out, is on-chip register-to-register long-moves, and they take 4 cycles each. So that's it. If there is anything else let me know below but please this time restrict your words to what you think are 32bit 68000 features, leave everything else at the door. Even the IBM 360 you want to talk about is classified as 32bit, despite the fact the IBM 360 has 64bit registers, and it has commands which instruct it to fill four 32bit machine registers before moving to the next instruction. Does that make it 64bit or 128bit somehow? No it doesn't. 
The irony of using the IBM 360 as your main comparison is that it is yet another perfect platform for explaining why it's classified as 32bit and nothing else, despite those other features like 64bit registers and 4x32bit move operations.
When talking about a basic fact like how many cylinders an engine has, diverting the conversation to talk about how complicated the engine management unit is, or how complicated exhaust manifold construction is, doesn't make it any harder to determine how many cylinders it has, because those details are not relevant to the discussion. It's just fog. Guess what? The IBM 360 has 64bit registers. You know what else is true? It has instructions to move 4x32bit words (which, by Motorola-fan standards, should make it a 128bit CPU, but don't). The 360 is a 32bit CPU. When it's all so complicated and there are so many things to consider, however did they arrive at their conclusion with all those surrounding details? Answer: Word length. At its very best, the 68000 has simple circuitry outside the ALU which allows it to perform 2x16bit register-to-register moves in 4 clock cycles. I've tried to help you out here by reaching in and extracting the 32bittest feature there is to find on a 68000, and it's half what you'd expect from a 32bit chip. If you bother to reply to this then I have one request only: Restrict what you write to only the facts of the 68000 which you think mean it's in any way 32bit. Only verifiable 32bit 68000 facts. Nothing else is relevant. However much you love to talk about the IBM 360, you cannot possibly have a shred of a valid argument supporting the idea the 68000 is in any way 32bit until you can start producing 68000 operating-features of more than 16bits. I'll even help you by extracting everything there is you can reach for: The 32bittest thing on a 68000 is a register-to-register move, which takes 4 cycles. The 68000 has various ".l" instructions, which ask it to treat pairs of 16bit values in series. Those instructions are made of a 16bit instruction followed in memory by two 16bit memory locations, treated singularly by the assembler representation: They are treated as 3 separate 16bit values by the CPU. That's all. If you're trying to stretch out the 68000 to 32bits, that's the whole story in six lines of text. If you really think the overriding factors of the ALU and data bus widths, which are both 16bit, are somehow not important (despite the fact they are the overriding features), and you want to set aside those overriding details as not important, then I'll step aside to allow it, but you still have work to do. If those 16bit big-block CPU features aren't important, then what, about the microprocessor, is? What else, other than those things, is 32bit in a way that changes the operational format of the 68000? Your challenge is to do this without talking about anything else except what you think are 32bit 68000 features. A bullet point list of 32bit 68000 facts would be perfect. Vapourmile (talk) 14:47, 14 August 2020 (UTC)
OK, one thing that anybody who wants to discuss the bit width of the 68000 must understand is that:
  • there is the 68000 family instruction set, the initial version of which was mostly 32-bit, with 32-bit address and data registers and instructions using them as 32-bit operands (it only lacked 32x32 multiply and divide with a 32-bit-or-wider dividend and 32-bit divisor), and that has been extended over time (including adding the missing 32-bit multiply and divide capabilities);
  • there are multiple implementations of versions of that instruction set, the first few of which (68000, 68010, 68012, 68008) had a 16-bit ALU for data and a pair of 16-bit ALUs for addresses, and external buses with 24 (68000, 68010), 20 or 22 (68008), or 31 (68012) address bits and 16 (68000, 68010, 68012) or 8 (68008) data bits, and later models of which had 32-bit ALUs and external buses with 32 address bits and 32 data bits.
The System/360 is similar, in that:
  • there is the System/360 instruction set, which is 32-bit, with 32-bit general-purpose registers and instructions using them as 32-bit operands, and that has been extended over time (with some extensions being given name changes to the System/370 instruction set, the System/370-XA instruction set, the System/390-ESA instruction set, and the z/Architecture instruction set, and with other extensions just being additions that don't change the name);
  • there are multiple implementations of the System/360 instruction set and its successors, some of the first of which had 8-bit ALUs for integer operations and others of which had 32-bit ALUs for integer operations and 24-bit ALUs for address operations.
That's why I mention it - to emphasize that there's the bit width of the instruction set, which is relevant, and there's the bit width (or widths, if you have multiple ALUs, or if the ALU and internal bus width is different from the external bus width) of the implementations.
The instruction set is relevant (repeated assertions to the contrary, without facts, do not disprove this). Code doing 32-bit arithmetic on a 32-bit instruction set, if limited to the original version of the instruction set, is bit-for-bit identical on all implementations, and, even though it may require 4 passes through the ALU on an implementation with an 8-bit ALU or 2 passes through the ALU on an implementation with a 16-bit ALU, will require only one pass through the ALU on an implementation with a 32-bit ALU, so it's not as if it needs recompilation. In the case of the 68000 family, the only difference recompilation makes in that regard is that, if you recompile for the 68020 and later processors, you don't have to call subroutines to do 32x32->32 multiplication or 32/32->32 division or remainder; the same code works just fine for addition, subtraction, and bitwise operations. (You also get the newer addressing modes, but that's not a bit width issue.) Furthermore, with 32-bit registers used to hold addresses, you have a linear address space larger than 65536 bytes, even if, as architected in System/360, and as implemented in the 68000, 68010, 68008, and 68012, not all of the 32 bits were available as address bits.
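For concreteness, here is roughly what such a 32x32->32 multiply subroutine looks like on a 68000 (a sketch, not any particular compiler's actual runtime routine; the label and register conventions are invented):

    ; mul32: d0 = d0 * d1 (low 32 bits of the product); trashes d2/d3
    mul32:
        move.l  d0,d2
        swap    d2           ; d2.w = high word of the first operand
        mulu.w  d1,d2        ; d2 = a.hi * b.lo
        move.l  d1,d3
        swap    d3           ; d3.w = high word of the second operand
        mulu.w  d0,d3        ; d3 = b.hi * a.lo
        add.w   d3,d2        ; sum the low words of the two cross products
        swap    d2
        clr.w   d2           ; d2 = (a.hi*b.lo + b.hi*a.lo) << 16
        mulu.w  d1,d0        ; d0 = a.lo * b.lo (full 32-bit result)
        add.l   d2,d0        ; combine into the final 32-bit product
        rts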
The implementation bit width is relevant from a performance point of view. It doesn't affect the type of code that's possible, and doesn't force code that can run on a narrower bit-width implementation to work in a fashion that's slower than necessary on implementations with a larger bit width (with the exception of the multiply and divide issue in the 68000 family - I'm curious why they didn't bother microcoding 32x32->64 and 64/32->32,32 instructions).
If you're designing a system with an embedded controller, and expect to rewrite all the software for the next such system you build, the bit width of the implementation matters more than if you're designing a general-purpose computer system that you expect to be part of a binary-compatible family of systems continuing into the future, with future implementations of the instruction set having an internal ALU bit width equal to the bit width of the instruction set. In the former case, if the implementation you're considering has an internal ALU bit width less than the bit width of the instruction set, there may be no reason to use the full bit width of the instruction set if it imposes a performance hit, as you don't have to plan to allow developers to write software using the full bit width of the instruction set - there won't be any such developers. In the latter case, you might want to use the full bit width of the instruction set, even though there's a performance hit, to make it easier to, for example, move code for VAX BSD to your workstations, and to have that code make full use of later implementations.
"Even the IBM 360 you want to talk about is classified as 32bit, despite the fact the IBM 360 has 64bit registers, and it has commands which instruct it to fill four 32bit machine registers before moving to the next instruction." And despite the fact that there are implementations of its instruction set where a register-to-register move take multiple cycles, and that have a single 8-bit ALU. The System/360 instruction set is classified as 32-bit because its general-purpose registers are 32-bit (the floating-point registers are specialized), it has instructions that treat those registers as 32-bit quantities and do 32-bit arithmetic on them. With the exception of the lack of 32x32 multiple and a division instruction that handles 32-bit dividend and divisor, the same applies to the instruction set of the 68000, 68008, 68010, and 68012.
So, just as the IBM System/360 Model 30 is an 8-bit implementation of a 32-bit instruction set, the 68000 is a 16-bit-data/32-bit address (three 16-bit ALUs, one used for data, two used in parallel for addresses) implementation of a mostly 32-bit instruction set.
So what's 32-bit about the 68000 instruction set? The address and data registers are 32-bit; they are not two 16-bit registers treated as register pairs.
A ".l instruction" is not "a 16bit instruction followed in memory by two 16bit memory locations"; it's a 16-bit word containing the opcode, an operand size indication, and other fields, followed by zero or more 16-bit units.
This is not a special characteristic of ".l instructions"; it is true regardless of whether the operands are 8-bit, 16-bit, or 32-bit. All the ".l" does is set the operand size indication to indicate 32-bit operands, and the size, in 16-bit instruction units, of immediate operands, just as the ".b" causes the operand size indication to be set to specify an 8-bit operand size (immediate operands still take 2 bytes, because instructions have to be aligned on a 16-bit boundary), and the ".w" causes the operand size indication to be set to specify a 16-bit operand. In all those cases, the instruction format is the same, except for the operand size indication and the width of immediate operands in the instruction stream.
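A concrete illustration (opcode words per the 68000 Programmer's Reference Manual; only the size/opmode bits change between the three forms):

    add.b   d1,d0        ; encodes as $D001
    add.w   d1,d0        ; encodes as $D041
    add.l   d1,d0        ; encodes as $D081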
This is also true in the 68020 and later, so that has nothing to do with the bit-width of the processor that implements the instruction set; it has to do with the way Motorola defined the instruction set.
(And the System/360's instructions are also built from 16-bit pieces, so it has nothing to do with the bit width of the instruction set or the processor that implements the instruction set there, either.)
Your challenge is to demonstrate to me that, as I assume is the case, you understand that there's a 68000 family instruction set, which was originally mostly 32-bit and picked up the few missing bits of 32-bitness (the multiply and divide instructions) later, and there are implementations of the 68000 family instruction set, some of which are not fully 32-bit in the internal or external data paths. If you do, you will understand why calling the 68000 a 16-bit implementation of a mostly 32-bit instruction set, or a "16/32-bit microprocessor", is perfectly reasonable; otherwise, further discussion will be impossible, as there'd be a fundamental concept that you just don't get. Guy Harris (talk) 21:39, 14 August 2020 (UTC)
You start OK in that at least I can see what you're trying to get at, but you don't stay focused on the topic. You diffuse into a fog after a few sentences.
All I want is ONLY a bullet point list of what you think is 32bit about the 68000. Nothing else. I'm not interested in anything but what you think is 32bit about the 68000, simple fact by simple fact.
I might come back to this later, but for now you've started going around in circles on things we've already done. First, it's necessary to separate the instruction set, especially its assembly language form, from the underlying architecture, for the reasons I've already given: It's representational, not actual. It is notional, not corporeal. It doesn't determine the underlying arrangement of the microprocessor. You keep reasserting it matters, because you want it to, even though it is irrelevant. I've just told you again why it's irrelevant. You're the one who needs to bring more important facts than your own reassertion. I've already explained: You can have an assembler with a .word directive on an 8bit micro. Example: .word $FFFF, $1000, $00A0, $C000. Those 16bit directives are NOT making the underlying architecture any more 16bit. The idea they're 16bit is exactly that: It's notional. A representation. It is not part of the underlying architecture. It might look to the programmer like they're using 16bit values, but an 8bit CPU will deal with them 8bits at a time. The representation doesn't change the classification of the CPU. You're falling on the same argument the fans do: You're saying it matters because you want it to matter, because without it mattering there is no debate, but the idea it matters is your own idea; it is not how these things are determined. For the reason given: It isn't part of the CPU's operation, which is how this is determined.
The 68000 tries to look 32bit to the programmer, via the instruction set layout in combination with an assembly language which further hides its 16bit underlying architecture, in order to imbue it with some representational consistency, and potential forwards binary compatibility, with forthcoming 32bit CPUs in the same line. Those 32bit CPUs materialised. The 68000 is not one of them. To have instructions and an assembly language which make it look like a forthcoming 32bit CPU does not make it a 32bit CPU.
I've already explained the reasons for the fact, and why the representation to the programmer at the assembly level doesn't make any difference. So it isn't true to say I have brought nothing to it but reassertion. I've already explained it, several times, in a number of different ways. It's the same as before: "Hey, how about we ignore the salient details of the microarchitecture and look at how Motorola tried to make it look in userland and, using much the same argument as their marketing did, make a smokescreen of pretending it's all terribly complicated, so let's just invent a unique definition of our own making and take that as pivotal". No. Let's not keep looking at a myriad of things which don't inform the subject. Let's not engage with or allow Motorola's marketing sleight of hand. Reading what you've posted is like reading the writing of somebody who loves Googling pages on the details of microprocessor architecture but, despite having a miasma of collected ideas, does not actually understand how the bit ratings of CPUs are determined, which is by the word length. All sorts of stages of a design may have all sorts of numbers of pathways, but most of them don't matter. So I'm sorry but, like it or not, you ARE writing pages of material which is not relevant. When determining the power output of a car, looking at the details of the cabin isn't relevant. It's what the engine does that matters. Vapourmile (talk) 23:08, 14 August 2020 (UTC)
I will not engage with you any more at all, as you appear not to understand a concept as straightforward as a 16-bit implementation of a 32-bit instruction set; you just have a long list of prejudices and beliefs, not based on fact, that you refuse to reconsider. (By the way, the "word length" isn't "the length of what the documentation calls a word"; by that definition, both VAXes and Alphas are 16-bit processors, because DEC decided to use the same terminology for VAXes that they did for PDP-11s, and to use the same terminology for Alphas that they did for VAXes, so a VAX register is a "longword" long and an Alpha register is a "quadword" long. And, yes, I've written user-mode and system software that runs on all three of those, as well as code that runs on Sun's 68010-based, 68020-based, SPARC-based, and x86-based machines, and for other machines, and debugged it, requiring knowledge of machine-level code at times, so I'm not "somebody who loves Googling pages on the details of microprocessor architecture".) Guy Harris (talk) 02:25, 15 August 2020 (UTC)
I am quite happy you've decided not to "engage". In fact, I hesitate to say this, but I think most of the discussion I've had with you should simply be deleted because nothing productive has come out of it and you don't seem to have the right knowledge to answer the question, or even the right idea how to. Every exchange we've had, you continue to return to wanting to talk about "the instruction set architecture", or "the instruction set", as I prefer to call it. Most of what you've written takes far more interest in high-level language design than microprocessor design. The type of primitives a language allows you to define has no relevance. I'm not interested in hearing about "the instruction set architecture", because it isn't meaningful. We're talking about whether or not the 68000 is 32bit or 16bit. I can define a language for a 6510 microprocessor which allows the programmer to define 32bit ints. The underlying architecture didn't get an upgrade out of that. Abstractions which protect you from the technical vagaries of the underlying architecture are the whole point of having a high level language, but by doing that they aren't changing the definition of the architecture they appear on.
The 6510 is 8bit, irrespective of the language you use for programming it, and irrespective of what the language you use for programming it allows you to define. The conversation about "the instruction set" you want to have isn't fit for this Wikipedia entry because it isn't relevant. From your very first response you started out talking about the IBM 360. Your talk about "ILP32" belongs to archaic mainframe terminology. It has no relevance. I started responding to your desire to talk about nothing but the instruction set more than a week ago. Since then we've been going around in circles, with you wanting to talk about nothing else except the "instruction set architecture", and the first response I made to that, explaining why it's irrelevant, just flew straight past. There's a basic simple nuance here which you're simply refusing to see: The primitive types you get to define in a high level language don't mean anything. The idea you're somehow writing "32bit" programs is a nonsense. I can define a language which allows me to declare 64bit ints on a Timex/Sinclair Spectrum if I like. It doesn't matter. It isn't part of the conversation. Even if I did do that, the Z80-based Timex/Sinclair Spectrum is still 8bit. Nothing changed, no matter what the compiler/assembler allows me to define. On an 8bit CPU, say a 6510, as I've said many times already, and you've ignored, I may have an assembler which allows me to define 16bit integers through .word directives. Such as: .word $FFFF. It doesn't change anything. The existence of such instructions means nothing. They tell you nothing about the underlying architecture. So the entire debate about "the instruction set architecture" is pointless, and should be deleted, because we have learned nothing from it. You can define a compiler language for an 8bit microprocessor, such as a 6510, in which you can define 32bit primitives or 64bit ints or 64bit floats. So what? It doesn't get you any closer to answering the question about whether the 6510 is 8bit. All it really tells you for sure is that it makes no difference at all what the "instruction set architecture" of a language allows you to define. Whether you can define 64bit primitives in a language running on a 6510 makes absolutely no difference; it's still 8bit. The processor doesn't become "8/64bit" by virtue of being an 8bit processor for which you can write languages which allow you to define 64bit ints: It's a software layer. You could write an assembler language for the 6510 which *only* allows you to define 4bit primitives. It doesn't matter. None of this does anything to alter the fact it's still an 8bit microprocessor, no matter what language you run on it. You may want your talk about "instruction set architecture" to be relevant somewhere, but it isn't relevant to the discussion here. Vapourmile (talk) 17:46, 15 August 2020 (UTC)
It's actually quite simple. The 68k family is a 16/32-bit family. It uses a 32-bit instruction set. The 68000/10/08 are 16-bit designs as the ALU and all internal processing are in 16 bit. Note the difference to e.g. the i386SX which has a 16-bit data bus but is a 32-bit design internally (somewhat similar to the 68008 which uses 8 bit externally but 16 bit internally). Inversely, the Intel Pentium isn't 64-bit just because it has a 64-bit data bus. --Zac67 (talk) 05:26, 6 August 2020 (UTC)
I'd include the 68020 and successors in the 68k family, so "the 68k family has 16/32-bit and 32-bit processors, and a 32-bit instruction set, with the 68000/08/10/12 internally and externally 16-bit and the 68020 and later being internally and externally 32-bit". Guy Harris (talk) 07:58, 6 August 2020 (UTC)
@Guy Harris: "The 68000/08/10/12 [is] internally and externally 16-bit and the 68020 and later being internally and externally 32-bit". Bingo. There you go... you just said it perfectly.
@Zac67, No it isn't. You don't get to state it's 16/32 with no relevant supporting information. That claim has nothing behind it but a campaign of marketing fog launched by Motorola. The bit classification of a CPU is defined operationally. As previously explained, some 68000 instructions can be followed by 2x 16bit values in RAM, but they are not treated as a unit; they are treated in 2 successive 16bit operations. You do not substitute the representation as presented to the programmer in an assembler for what actually happens at the chip level: The assembly language has no relevance at all. I understand 68000 fans want it to matter, because there is no other way to try to classify the operationally 16bit 68000 as 32bit, unless you're prepared to grasp at straws and try to redefine how CPUs are classified according to irrelevant information. In an assembly language you can have constructs of any number of bits. It's what the chip does operationally which matters. The argument which favours that the 68000 is somehow 32bit has the same character and validity as the comedy sketch in This is Spinal Tap, where the guitarist argues their amplifiers are louder because they all go up to 11. The comedy of that sketch is that it's obvious to the audience that what's written on the amplifier - how it is labelled - makes absolutely no difference. The power of the amplifier, like the bit-classification of the CPU, is determined by its operation. The guitarist cannot intellectually appreciate the operational disconnect between how the dials are labelled and the amplifier's operational behaviour. The labels on the amplifier are just labels; they aren't operationally linked to anything. Meanwhile, other CPUs may behave in all manner of different or exotic ways, and there may even be some whose categorisation is ambiguous, but they don't matter because they are not the 68000. Whatever other CPUs do, the 68000 CPU is operationally 16bit; what it does operationally is what makes it 16bit. Vapourmile (talk) 12:18, 6 August 2020 (UTC)
@Vapourmile That's what I said (in a lot fewer words), didn't I? 16-bit ALU makes a 16-bit design (68000/8/10), 32-bit ALU makes a 32-bit design (68020/30/40/60). It's the processing size that matters. --Zac67 (talk) 08:13, 15 August 2020 (UTC)
Somewhat, the key difference is it isn't "16/32". The "32" moniker was added for largely marketing reasons. The underlying CPU operation is 16bit. Hence this discussion: The 68000 is marketed as being "16/32" without warrant, except for their own marketing spin. It's a 16bit CPU. Vapourmile (talk) 16:48, 15 August 2020 (UTC)
"@Guy Harris: "The 68000/08/10/12 [is] internally and externally 16-bit and the 68020 and later being internally and externally 32-bit". Bingo. There you go... you just said it perfectly." To be precise, they are implementations, with 16-bit internal data paths and a 16-bit external data bus, of a mostly 32-bit instruction set architecture, just as the System/360 Model 30 was an implementation, with 8-bit internal data paths and an 8-bit external data bus of a 32-bit instruction set architecture. Guy Harris (talk) 20:05, 6 August 2020 (UTC)
No it isn't and I'm bored to death of explaining why over and over again. I'm also bored of your totally irrelevant wish to talk incessantly about an obsolete mainframe. I know about the mainframe you love to talk about and also how it was marketed, it isn't the topic. The topic is the architecture of the 68000 which, for the many reasons given, is 16bit not 32bit. Vapourmile (talk) 12:44, 27 October 2020 (UTC)