# Talk:16-bit

WikiProject Computing (Rated Start-class, High-importance)
This article is within the scope of WikiProject Computing, a collaborative effort to improve the coverage of computers, computing, and information technology on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks.
Start  This article has been rated as Start-Class on the project's quality scale.
High  This article has been rated as High-importance on the project's importance scale.

## Untitled

Hello Friends.

Can anybody explain the basic difference between 16-bit and 32-bit words? I just want to know the actual advantage of a 32-bit word over a 16-bit one.

regards

Raheel

Sure.
It's basically like this: if the word size of a machine is 16 bits, then you can store at most 2^16 = 65536 different combinations in one word. These combinations are often taken to be the numbers 0 to 65535, or -32768 to 32767, or memory locations.
If you need larger numbers, or if you need to keep track of more data that needs more memory, you'll need to use two words.
Using two words to store numbers increases the time it takes to do calculations with them, because instead of a single operation to, for instance, add two numbers, several operations are needed.
If you use a 32-bit machine, however, you can use 2^32 = 4294967296 different combinations. That means you can use a single word to hold larger numbers or references to a wider range of memory locations.
Therefore, a 32-bit machine will perform better at the same clock speed, with the added bonus of making programmers' lives easier. (The last point also has a lot to do with the way some processor manufacturers (Intel springs to mind) and OS developers (Microsoft, for instance) chose to overcome the restrictions of 16-bit words.)
I hope this helps.
Yours sincerely,
Shinobu 07:12, 31 August 2005 (UTC)

## 16-bit DOS and Windows 1.0/2.0 applications

Those applications were 16-bit, not 8-bit. The 8088 was a processor with a 16-bit instruction set and an 8-bit bus; it's the instruction set, not the bus width, that matters to the "width" of applications. Guy Harris (talk) 18:56, 20 January 2008 (UTC)

So how many kilobytes can a 16-bit processor address?--169.232.119.28 (talk) 02:11, 1 May 2008 (UTC)

From looking at this and other pages, it appears it can address 2^16 = 65536 bits, which is 8192 bytes (octets) or 8 kibibytes. If by "kilobyte" you mean "1,000's of bytes," then it's 8.192 KB, but if you meant "1,024's of bytes," which is what Windows and I think Macintosh now use, then it's 8 KiB. Eebster the Great (talk) 05:04, 14 September 2008 (UTC)
It's actually 65,536 bytes, which is 64 kilobytes. Using certain techniques, a CPU can address more memory than the width of its ALU would suggest. An example of this is the Intel 8086, a 16-bit microprocessor with a 20-bit physical address, capable of addressing 1 megabyte. Rilak (talk) 07:58, 14 September 2008 (UTC)
Eebster made the assumption that every single bit has its own address, but addressing is done per byte. Therefore Eebster was off by a factor of eight. Shinobu (talk) 03:03, 4 October 2008 (UTC)
Good call; sorry. I guess I'll try to restrict myself to answering questions to which I actually know the answer. Eebster the Great (talk) 05:12, 4 October 2008 (UTC)
But if we stuck with what we already know, how would we ever learn anything? I agree that, with a flat memory model, 16-bit addresses can be used to specify at most one of 2^16 = 65536 addressable locations. With byte-addressable memory, that gives 64 KiB. But some 16-bit digital signal processors use 16-bit word-addressable memory, and so they can directly address the equivalent of 128 KiB. However, some systems don't have a flat memory model -- as Rilak implied, some systems have hardware that uses bank switching or memory segmentation to allow the CPU to indirectly address significantly more memory. —Preceding unsigned comment added by 68.0.124.33 (talk) 20:20, 24 October 2008 (UTC)

Good point! I think we should clarify the article, and perhaps all n-bit articles. Rilak (talk) 08:48, 25 October 2008 (UTC)

## Shouldn't the Z80 and the 6809 go here since you still consider 68000 as being 32-bit?

You always consider the 68000 to be 32-bit for no other reason than its registers being 32-bit, regardless of it having a 16-bit ALU and data bus, but you never call the Z80 and the 6809 16-bit, despite the same reasoning applying to them? —Preceding unsigned comment added by 75.57.174.145 (talk) 16:06, 16 May 2009 (UTC)

## Add list of 16-bit CPUs/MPUs?

I believe that it would be beneficial to have a consolidated list of 16-bit CPUs and MPUs on this page, much like the 8-bit page's list. This would make it very easy for readers to browse through all the 16-bit architectures and processors without having to fish around. Therefore, I am adding an incomplete list to this article with the processors and architectures I am aware of. If there is a more preferable way to do this, or if you know of something to add to the list, please feel free to help. Daivox (talk) 03:10, 3 May 2011 (UTC)

## 16-bit application

Is [1] an improvement? I do not see an improvement. BTW, I hastily used rollback and destroyed some other changes. I object only to "On the x86 architecture, a 16-bit application normally means any software written for…" and am indifferent to changes in other parts of the article. Incnis Mrsi (talk) 21:35, 26 July 2013 (UTC)

I'm still looking for documentation, but there might have been 16-bit applications written for UNIX System V/286 that ran, along with 32-bit applications, on UNIX System V/386, and the same might have applied to pre-386 and 386 Xenix, so there might have been 16-bit applications not written for "MS-DOS, OS/2 1.x or early versions of Microsoft Windows". The term largely applies to DOS, OS/2, and Windows applications, but it might not exclusively apply to them, even if there were relatively few UNIX applications for x86 at that time. Guy Harris (talk) 22:28, 26 July 2013 (UTC)
In addition, in theory, there might have been versions of some of those OSes that ran on machines other than PC compatibles and to which the 16-bit vs. 32-bit distinction applied.
(BTW, I put back all of HLachman's changes other than the one to which you object.) Guy Harris (talk) 22:36, 26 July 2013 (UTC)
Thanks to both of you for your attention to this issue. I'm OK with either wording (they're both mine!). Was my 2nd wording an improvement? I borrowed it from the 32-bit application article because it sounded more concise than my 1st wording, while being synonymous (although I'm not certain of that). Either way, my main intent is to make sure that if the paragraph is PC-centric then it declare itself as such, because prior to my 1st wording (04:46, 11 June 2012‎) it merely said, "A 16 bit application is any software written for... (various PC platforms)...." That seemed to imply that PDP-11 programs, for example, cannot be called "16-bit applications" (even while they might run on a VAX in compatibility mode). So I'm fine with leaving the wording as-is. As for the remaining issues in the paragraph (Xenix, etc.), I'm not too clear what to do about those. Thanks also to Guy for restoring my other edits. HLachman (talk) 23:59, 27 July 2013 (UTC)

I did this at 64-bit computing years ago, but nobody followed up. So now I've done it here. Nounized the title and ditched the lead template. I invite comments, and invite others to do something as good or better at 32-bit and elsewhere. Dicklyon (talk) 06:36, 14 December 2014 (UTC)

Codename Lisa reverted the move and the edit, and in the edit summary says Amazing how people who describe things as "stupid" often fail to divine the purpose of what is described as "stupid". I still think it's stupid, and I think I understand that the point is to make all the articles follow the same pattern, but the pattern itself is stupid and hard to fix. Since he hasn't come to talk about it, I'll revert to get his attention to the discussion. Dicklyon (talk) 02:11, 15 December 2014 (UTC)

Hi. It is with major difficulty that I am writing this. While I was composing this message the other day, Dick Lyon not only violated WP:BRD but also called me "Asshole", effectively going against pillar #2. So, I aborted midway. Discussion is for people who are here to edit the encyclopedia, and Dick Lyon is definitely a case of WP:NOTHERE, as shown by his edit warring. Still, I have never before shirked from a discussion if my dispute party has started one. I don't want this case to be an exception. In the interest of having an actual discussion, I am asking users Jeh and FleetCommand to join this discussion; the former is a contributor to many CPU-related articles and the latter is the primary author of MOS:COMPUTING.
The issue with Dick Lyon's edit is twofold: First, "16-bit computing" has WP:COMMONNAME problem. The reason for the rename, on the other hand, is ... well, none is provided except saying status quo is "stupid". And he seems to hold some unexplained value in the title being a noun group. (Well, "16 bit" is a noun group and doesn't have WP:COMMONNAME problem.) Second, the re-written lead is outright wrong. It contends:

In computer architecture, 16-bit computing is the use of primarily 16-bit data, including integers and memory addresses. That is, data elements are at most 16 bits (2 octets) wide. CPU and ALU architectures based on registers, address buses, and data buses of 16-bit width have been popular for decades.

Not true because:
Updated 15:23, 17 December 2014 (UTC)
1. "Primarily" is a weasel word and wrong; a 16-bit app must use a combination of 8-bit and can strictly use 16-bit registers; there are absolutely no other options. CPU registers in 16-bit mode are strictly 16-bit. On the other hand...
2. The data elements have no obligation to be at most 16-bit. They can be of any length, e.g. 8, 16, 32, 64 or 128 bits. Strings have arbitrarily unlimited lengths. (Their practical length has been initially 64 KB.) Length of the data elements is an issue of programming, not CPU architecture.
3. 16-bit CPUs and ALUs have not been popular for decades; they weren't even around for one decade (1978–1985), and even then, they were not popular. Back then, the computer was not a consumer product. Both "decades" and "popular" are weasel words that need sources and clarification. How many decades exactly? One, two, four, eight, or a hundred? And does "popular" mean "general popularity" or "popular in some specialized field"? If the latter, please define the field.
Best regards,
Codename Lisa (talk) 11:25, 15 December 2014 (UTC)
Greetings, Codename Lisa. Actually, I'd like to hear what Dicklyon and Jeh have to say first before taking sides. Jeh knows a great deal more than me. But I can tell that Dick has long crossed the line into disrupting Wikipedia to make a point. Instead of reverting for a second time, he should have dropped you a line and asked why you disagree. Or... at least this is the standard to which I am always held. Fleet Command (talk) 22:07, 15 December 2014 (UTC)
First, a couple of nits re CNL's criticisms:
1. "Primarily" is wrong; a 16-bit app must use a combination of 8-bit and 16-bit registers; there is absolutely no other options.
Not so. There are many 16-bit architectures (HP 2100, Data General Nova, PDP-11) that have no 8-bit registers. Even if you do have 8-bit registers, this does not preclude the correctness of the "primarily" claim.
2. The data elements have no obligation to be at most 16-bit. They can be of any length, e.g. 8, 16, 32, 64 or 128 bits. Strings have arbitrarily unlimited lengths. (Their practical length has been initially 64 KB.)
I believe Dicklyon was referring to data items that fit within a machine "word" and the processor's arithmetic and logical instructions that operate on such data. Yes, you can deal with 32-bit integers in a 16-bit CPU, but all of your basic operations are going to need more than one instruction. Of course any architecture can be used to work on data structures larger than its word size, but that's not a matter of architecture, it's a matter of programming.
3. 16-bit CPUs and ALUs have not been popular for decades; they weren't even around for one decade (1978–1985), and even then, they were not popular. Back then, computer was not a consumer's product.
They were certainly popular in terms of the overall market for computers. As for longevity, the HP 2100 series was introduced in 1967 and lasted (renamed to HP 1000) about thirty years; similarly, DEC shipped the first PDP-11 in 1970, and it was a healthy business for them for at least twenty years. It's true that in the personal computing world 16-bit machines (8086 through 80286) were but stepping stones on the way to 32 bit-hood, but the personal computing world is not all there is to computing. Jeh (talk) 23:00, 15 December 2014 (UTC)
This is a very shortsighted view of computer history—equating computer with "microcomputer". The IBM 1130 and the 360/20 were very popular 16-bit machines dating from the mid 1960s. I'm not sure of the dates of other systems, but somewhere here there's a list. Peter Flass (talk) 17:00, 16 December 2014 (UTC)
Hi. As I told Jeh below, "popular" and "employed in key locations" must not be confused. When something is popular, every person on the street knows about it. Did the 1130 have that status for several decades? How many people on the street even know what IBM is? Best regards, Codename Lisa (talk) 17:39, 16 December 2014 (UTC)
Popularity is relative. Both machines I mention were widely used. Obviously this was before the age of a computer on every desk, but both were very popular by the standards of the times, with the 1130 selling over 10,000 systems. The PDP-11 was undoubtedly much more popular yet. Peter Flass (talk) 20:20, 16 December 2014 (UTC)
Hello, Jeh. Thanks for joining in. First, I've fixed a year in my comments and your quotation and made the quotation part of your post to quote my list numbers as well, for ease of referring to it and commenting. I hope it is not construed as undue intrusion into your message. Now, moving forwards:
1. Actually, you might be surprised that I agree with you. Totally. In fact, I know that x86 didn't have 8-bit address registers either; CS, DS, SS and ES are 16-bit. It does have general-purpose 8-bit registers, but they are either available under 64-bit mode (R8 through R15) or are taken as parts of their 16-bit parents, AX, BX, CX and DX. So, if I am going to fix my sentence according to your review, it would become: "Primarily" is wrong; a 16-bit app must use strictly 16-bit integer and address registers; there are absolutely no other options. Still, "primarily" is wrong. (Please bear in mind that integers can't be 8-bit; on the other hand, the 8-bit aspect of byte order is always present.) In fact, "primarily", even when correct, has the potential to be WP:WEASEL.
2. Again, surprisingly, I agree with you completely and fully. However, what you didn't address is that Dicklyon's wording leaves absolutely no room to believe he is referring to architecture only. It is not even ambiguous; he is unwittingly talking about programming.
3. Well, here, there be dragons; you see, Dick's contribution doesn't address longevity alone. (Otherwise, I'd apologize and agree with you.) It says they were "popular for decades". First, let's address the intrinsic WP:WEASEL issue: how many decades? Two? Ten? One hundred? Second, let's say these computers were adopted by 2000 universities and research institutes; in a world with 2 billion computers, that's not popular; that's "employed in key locations and roles". Mind you, this is more important than being popular. That was Microsoft's business plan: instead of selling to the masses, they sold to those few who buy en masse. Now, this fact itself shows the prospect for a correction and a compromise, don't you think?
Best regards,
Codename Lisa (talk) 14:02, 16 December 2014 (UTC)
Why can't integers be 8 bits? Peter Flass (talk) 17:00, 16 December 2014 (UTC)
For the same reason that my Wikipedia username can't be "Peter Flass"! You see, "integer" is a name given to a certain thing; 8 bits are already called a "byte". Best regards, Codename Lisa (talk) 17:24, 16 December 2014 (UTC)
Yes, "integer", in computing, is a name given to data types whose possible values are subranges of the integers. -128 to 127 is a subrange of the integers, so an integer data type could be 8 bits long. And, on many platforms, 16 bits are called a "halfword" or "word", so the fact that a byte is 8 bits does not prevent an integral value being stored in a byte. Guy Harris (talk) 22:23, 16 December 2014 (UTC)
CNL, you are flatly wrong about this. In computing, "integer" is not the name of a specific data type of a particular size; it is more generic than that. What you're claiming is equivalent to saying that you can't be called a WP editor because you already are called Codename Lisa. An 8-bit datum, even though stored in what is commonly called a "byte" (or "octet" in some places), can most certainly contain a value that is interpreted as, and manipulated by machine instructions as, an integer. Note that C (just for example) uses the name "char", and allows both unsigned and signed chars, and allows ordinary integer arithmetic to be performed on them. If those aren't integers, then what are they? Certainly not floating point! Jeh (talk) 23:52, 16 December 2014 (UTC)
@Jeh: Acknowledged. Struck through. Anything else here? Best regards, Codename Lisa (talk) 14:38, 17 December 2014 (UTC)
I'm not pushing for any particular wording, just for something more flexible, readable, meaningful, relevant, and interesting than what the current stupid template generates. As for popularity, I'd bet that 16-bit systems are still more popular than any but 8-bit systems. You're just not looking at a very wide scope if you're not seeing that. Every car and house probably has more than a few of each. Dicklyon (talk) 17:09, 16 December 2014 (UTC)
1960's/1970's 16-bit minicomputers weren't adopted in a world with 2 billion computers. Are you saying that, prior to the introduction and widespread use of personal computers, no computer could be described as "popular"? Guy Harris (talk) 22:27, 16 December 2014 (UTC)
Agree with GH here too. The Tektronix 465 can legitimately be called a very "popular" 'scope of its day, even though the average person on the street never heard of it and probably is not even aware of the product category. Jeh (talk) 23:52, 16 December 2014 (UTC)
@Jeh and Guy Harris: I am on your side about both the integer types and the popularity. First, I have the Intel 64 and IA-32 Architectures Software Developer's Manual in front of me, opened at page 4-3 Vol. 1, which defines integers as numeric data types without fractional components. She seems to have confused the point about the two-byte alignment of integers with the definition of integers. Second, I believe Codename Lisa's idea of popularity is closer to Bill Gates's 2002 one, when he contended that computers had never before been popular because they were expenses instead of strategic assets. Because both ideas are valid in their own domains and use different sets of parameters, Codename Lisa's argument of invalidity is null and void.
However, I still believe that CL's objections #1 and #3 hold because of their other reasons. We seriously have WP:WEASEL and WP:BURDEN problems here. All in all, it appears that Dick Lyon has converted an article without problems into a highly problematic one, while maintaining the reproachable attitude worthy of his namesake. So, I support keeping the original version until a better one is written and all objections are addressed. Fleet Command (talk) 01:18, 17 December 2014 (UTC)
BTW, the only change to 48-bit resulting from {{subst:}}ing the template and editing it was changing "In computer architecture, 48-bit integers, memory addresses, or other data units are those that are at most 48 bits (6 octets) wide." to ""In computer architecture, 48-bit integers, memory addresses, or other data units are those that are 48 bits (6 octets) wide.", i.e. removing "at most" from before "48 bits (6 octets) wide". Guy Harris (talk) 03:30, 17 December 2014 (UTC)
Objection #1:
A 16-bit app on a register-oriented machine must, obviously, not use registers wider than the maximum register width. :-) However, that doesn't prevent it from having data items wider than the maximum register width. Late-1970's versions of C on the 16-bit PDP-11 had a 32-bit integral data type long int.
The {{n-bit}} template just says "In computer architecture, {{{1}}}-bit integers, memory addresses, or other data units are those that are at most {{{1}}} bits {{{2}}} wide." That specifically refers to computer architecture, so it allows for wider integer arithmetic, which would have to be done with multiple arithmetic instructions.
64-bit computing, which doesn't use that template, says "In computer architecture, 64-bit computing is the use of processors that have datapath widths, integer size, and memory addresses widths of 64 bits (eight octets)." This is similar; it speaks of "processors", but it's still talking at the instruction set level.
I'm not convinced that "In computer architecture, 16-bit computing is the use of primarily 16-bit data, including integers and memory addresses. That is, data elements are at most 16 bits (2 octets) wide.", from Lyon's version, is an improvement. It doesn't change it from something that doesn't allow, for example, a 32-bit long int to something that does. It might better handle the PDP-11, in which one of the floating-point instruction sets (the one introduced with the 11/45's floating-point processor) provided limited support for 32-bit integers (converting to and from floating point, but no arithmetic), or the 80286, with its 24-bit segmented addresses, but it does so by adding the somewhat weasel-wordish "primarily".
Objection #3:
"Popular" varies with time and with market segment. 16-bit minicomputers were quite popular in the computing market throughout the 1970's, and 16-bit personal computers were quite popular during at least the first half of the 1980's. However, you won't find many 16-bit minicomputers or personal computers these days. You may find lots of 16-bit microprocessors in embedded applications now. I'd say that any talk of "popularity" belongs in text giving more detail than just "CPU and ALU architectures based on registers, address buses, and data buses of 16-bit width have been popular for decades.", giving some historical background. That can, BTW, be done whilst still using the {{n-bit}} template - put it in another section, another paragraph, or another sentence in that paragraph, after the template.
So, for now, I'd say "leave things as they are, pending further discussion". Lyon's rationale for the rename of 64-bit to 64-bit computing, and some discussion of the issues with the {{n-bit}} template can be found at Talk:64-bit computing#Title. More of his arguments against the template can be found at Template talk:N-bit#Longlasting problems from ancient template. I would suggest that further discussion of the merits of the template be done there, and that replacing the template in this or other "n-bit" articles not be done until something approximating a consensus is reached there. I'm not sure where the merits of renaming "n-bit" to "n-bit computing" should be discussed, but I'd like to see that discussed a bit more and, again, something approximating a consensus reached before renaming any other articles. Guy Harris (talk) 03:18, 17 December 2014 (UTC)
I'm not crazy about "16-bit computing", but WP:TITLE does say that titles should be nouns or noun equivalents. This certainly suggests that "16-bit" by itself will not do. Jeh (talk) 08:54, 17 December 2014 (UTC)
Very well. If you support "16-bit computing", I am conceding my opposition to it as well. It wasn't part of my original three objections anyway. Best regards, Codename Lisa (talk) 15:02, 17 December 2014 (UTC)
I completely agree with everything you said in this thread. (Yes, agreeing with what is written under "Objection #3" means I am conceding my validity point as FC requested. As for what is written under "Objection #1", the first paragraph is actually my objection #2. As Jeh said, programming issues must not be confused with architecture issues. But we can have this article cover both separately, e.g. 16-bit arch and 16-bit data types.) Best regards, Codename Lisa (talk) 15:02, 17 December 2014 (UTC)
OK, I was a bit miffed that he went off editing elsewhere rather than responding to the conversation I started here before his first revert of my edits, and I was a bit more miffed when he decided to revert my edits a second time before taking up my suggestion to join the conversation. Thinking that was an "asshole" move, I said so. Of course, there's no excuse for this particular loss of self-control, even though I was beset elsewhere; so I apologize. Now back to the point. The template is "stupid", as I said in my edit summary. It makes it very hard for an editor to improve the lead, which is awkward and pointless in the extreme. And as to the title, if "computing" is not the ideal noun, find another one. The current adjective title is just silly. The "computing" move was accepted at 64-bit computing; what's a good alternative? Dicklyon (talk) 04:49, 16 December 2014 (UTC)
Yet you seem intent on continuing your reproachable behavior: [2]. Your apology would have looked more genuine if you hadn't. Frankly, I don't find her reason for the revert any less plausible than your reason for making the edit. (You don't see me going around changing g from 9.8 m/s2 to 28 because I think 9.8 is stupid, insulting Newton in the process!) And you had no right to counter-revert 15 minutes after her revert. Like I said, the right thing to do was to drop her a polite note. Fleet Command (talk) 01:18, 17 December 2014 (UTC)
So now reverting a refactoring of talk page comments is "reprehensible"? OK, I'll go away and let you guys sort it out, or leave it for a few more years. Dicklyon (talk) 01:27, 17 December 2014 (UTC)
Your pointy edit summary clearly shows that such was not your intention; harassment was your intention, and yes, harassment is reprehensible. Our dear Mr. Flass shouldn't have tampered with other people's messages, and if I were CL, I'd have hit the revert button. (But CL treated it fairly, as far as I can tell.) Of course, if you love such interjections, I can gladly follow you around Wikipedia and interject in the middle of your messages. But the more civil way is that you get the point without my having to be a dick to make a point. Fleet Command (talk) 01:54, 17 December 2014 (UTC)

## Popular, common, or whatever

The section above about fixing the title and getting away from the stupid lead template seems to have been largely derailed by my suggestion that 16-bit computers long were and still are very popular. Of course what I really meant was widely used, widely chosen and designed in, etc., not widely known to consumers. At least one source, cited at List_of_common_microcontrollers#Texas_Instruments, says a 16-bit micro is the "most popular", but I doubt that. Almost certainly the 8-bit 8051 and PIC are more popular. Dicklyon (talk) 01:12, 17 December 2014 (UTC)

## Request to revert to my version of the page.

The section can actually be challenged due to misleading information that I found in it. The section stated that 16-bit applications can run only on MS-DOS operating systems. That is totally wrong. A 32-bit Windows XP or Windows 7, neither of which is MS-DOS, can run a 16-bit application. For this reason I am stating that references must be cited in that section. Also, the section is too small given its main purpose of discussing 16-bit applications; there are many other things that could be put into it, ranging from why they are not made anymore to why they cannot run on a 64-bit operating system. That is why I placed the expand-section tag. This is why I am asking to revert to my version of the page. Thank you in advance. Doorknob747 (talk) 01:20, 22 March 2015 (UTC)

Where in
In the context of IBM PC compatible and Wintel platforms, a 16-bit application is any software written for MS-DOS, OS/2 1.x or early versions of Microsoft Windows which originally ran on the 16-bit Intel 8088 and Intel 80286 microprocessors. Such applications used a 20-bit or 24-bit segment or selector-offset address representation to extend the range of addressable memory locations beyond what was possible using only 16-bit addresses. Programs containing more than 2^16 bytes (64 kilobytes) of instructions and data therefore required special instructions to switch between their 64-kilobyte segments, increasing the complexity of programming 16-bit applications.
(the current version of that section) does there appear anything about 16-bit applications only being able to run on certain operating systems? It speaks of them as being written for certain operating systems, but, heck, some applications "written for" MS-DOS can, apparently, run on Android using DOSBox - i.e., can run on an operating system that has a Linux kernel and that's probably running on an ARM processor rather than an x86 processor! The point is that people don't write 16-bit applications for 32-bit or 64-bit Windows - there's no point (other than to learn something about 16-bit DOS, OS/2, or Windows, or "just because") to write a 16-bit application if you don't care whether it'll run on anything other than Win32 or Win64. Guy Harris (talk) 01:40, 22 March 2015 (UTC)

## I'm not sure if it is amusing or sad that the lede...

I am not sure if it is amusing or sad that the lede doesn't define the value of 16 binary bits. I'm not even sure it is defined anywhere in the article. I SHOULD be amused, but I find it just sad. I am going to add the sentence: "Mathematically, 16 binary bits can represent up to 2^16, or 65536, different values." While not commonly used, I also note that tri-state and other multi-state possibilities exist; that is, there is a reason it is formally called a BINARY bit.
I'd also like to comment that "16-bit integers" aren't part of "computer [CPU] architecture", at least in the classical sense. (CPU architectures which include (16-bit) arithmetic units would of course operate on/with them.) I also note that the bus width (or the several buses) should be as prominent in the lede as memory is. It is really misleading to imply (as I think the lede does) that 16-bit integers are the same as 16-bit memory addresses. Also, the really unhelpful phrase "or other data units" should be removed.
16 bits means 16 binary bits, with each bit representing either 1 or 0, or + and - (or occasionally "on" and "off", "high" and "low", or -1 and +1), with a total of 65536 possible combinations. (The integers 0, 1, ..., 65535 are the most that can be represented using 16 bits at any one time.) In modern digital electronics, transistors (or other binary logic devices) are the dominant means of storing and processing information. The transistors storing data are usually combined into larger groupings of bits for more efficient and quicker data processing. Similarly, the connections between various parts of a digital electronic chip (the bus) usually transfer data in groups of bits. Bits are by definition abstract, since the physical embodiment can be voltage, current, resistance, marks on a page, magnetic polarization, spin polarization, and so on.
So this article should refer to information theory, digital electronics, TTL, digital logic (which has unfortunately been named Boolean algebra here), and logic gate. 216.96.78.101 (talk) 17:58, 11 June 2015 (UTC)

People who call it a "binary bit" must be members of the Department of Redundancy Department, as the "b" in "bit" is for "binary", as in "Binary digIT". Presumably you meant "binary digit".
The first two sentences of the first paragraph come from the template N-bit, and should be discussed on that template's talk page. Guy Harris (talk) 18:17, 11 June 2015 (UTC)
It is indeed sad and amusing that these articles with odd adjective titles can't even be edited to make more sense, due to the odd template that constrains them. Dicklyon (talk) 20:23, 31 December 2015 (UTC)