Wikipedia:Reference desk/Archives/Computing/2017 January 14

Computing desk
Welcome to the Wikipedia Computing Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


January 14


Intel 80186 manufacturing


Our article on the Intel 80186 says that production of this chip began in 1982 and continued until 2007. Aside from replacement of older hardware (especially the embedded systems in which the 186 was largely used), what market, if any, would there have been for the 186 by that time? I can't imagine a reason to do anything with such an old chip design except replace old parts, and even then, why use a quarter-century-old design when you can upgrade to something much newer and better supported? (Museum/archival needs would be a tiny fringe of the market.) Nyttend (talk) 01:15, 14 January 2017 (UTC)

Lots of military hardware used the MQ80186-6/B and had very long production runs. If an embedded system does the job you designed it to do, why redesign around a "newer and better" chip? Also, few of the more modern chips come in ceramic-and-glass packages. --Guy Macon (talk) 03:47, 14 January 2017 (UTC)
Which also explains why spacefaring devices are powered by chips found in products three generations old. Clubjustin Talkosphere 05:49, 14 January 2017 (UTC)
Displaying my ignorance here: if you put in the newer and better chip, even one with a ceramic-and-glass package and one that fits the connections (the screws, or whatever attaches it to the rest of the board, will keep it from falling off), would that require retooling a bunch of the other hardware? I was imagining that the newer and better chip (at least a later generation of an Intel chip in this line), as long as it could be attached to the board securely, would be compatible. Nyttend (talk) 12:23, 14 January 2017 (UTC)
[ec] Nyttend, there are newer, faster drop-in replacements for some chips. One example is the Dallas/Maxim DS89C420, which drops into a standard 8051 socket and runs the existing 8051 software, but twelve times faster with the original clock crystal and fifty times faster with a crystal change.[1][2]
The 80C186EA is a newer, faster drop-in replacement for the 80C186.[3] --Guy Macon (talk) 16:51, 14 January 2017 (UTC)
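For anyone wondering where those speedup figures come from, they follow from simple clocks-per-machine-cycle arithmetic. This is a rough reconstruction rather than anything from the data sheets cited above; the crystal ratio is inferred from the quoted figures:

$$\text{speedup} \approx \frac{\text{clocks per machine cycle (old core)}}{\text{clocks per machine cycle (new core)}} \times \frac{f_{\text{new crystal}}}{f_{\text{old crystal}}}$$

The classic 8051 core takes 12 clocks per machine cycle while the DS89C420 core takes 1, so with the original crystal the ratio is 12/1 = 12x. The quoted 50x then corresponds to also fitting a crystal roughly 50/12, i.e. about 4 times, faster.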
No, that's very much not the case. You could, of course, make a "better" 80186 now; you could probably even get a Raspberry Pi running software to emulate an 80186 in real time (see the sketch below). But real newer chips use newer designs, and they are usually not fully compatible. There is usually some backwards compatibility engineered into newer chips, but that does not hold up over 30 years. You need, e.g., different firmware for low-level interaction even if user programs and most of the OS can remain unchanged. You also need different voltages: the 80186 ran on 5 V system power. I have a harder time reading the i7 data sheet, but it looks like the maximum supported voltage there is 1.6 V.[4] --Stephan Schulz (talk) 12:55, 14 January 2017 (UTC)
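To make "running software to emulate an 80186" concrete, here is a minimal fetch-decode-execute sketch in C. It is a toy, not a real emulator: only two real x86 opcodes are handled, there is a single register, and timing, flags, and the 186's on-chip peripherals are all omitted. The struct and function names are invented for illustration.

    #include <stdint.h>
    #include <string.h>

    /* Toy fetch-decode-execute loop in the style of an 8086/80186
       emulator. A real emulator decodes the full instruction set and
       models segment registers, flags, cycle timing, and the 186's
       integrated timers, DMA, and interrupt controller. */

    #define MEM_SIZE 0x100000U  /* 1 MiB: the 80186's 20-bit address space */

    struct cpu {
        uint16_t ax;      /* one general-purpose register, for brevity */
        uint16_t ip;      /* instruction pointer */
        uint16_t cs;      /* code segment */
        uint8_t  mem[MEM_SIZE];
        int      halted;
    };

    static uint32_t phys(uint16_t seg, uint16_t off)
    {
        return ((uint32_t)seg << 4) + off;  /* real-mode segmentation */
    }

    static void step(struct cpu *c)
    {
        uint8_t op = c->mem[phys(c->cs, c->ip++)];  /* fetch */
        switch (op) {                               /* decode + execute */
        case 0x40: c->ax++;       break;            /* INC AX (real opcode) */
        case 0xF4: c->halted = 1; break;            /* HLT    (real opcode) */
        default:   c->halted = 1; break;            /* unimplemented */
        }
    }

    int main(void)
    {
        struct cpu c;
        memset(&c, 0, sizeof c);
        c.mem[0] = 0x40;  /* INC AX */
        c.mem[1] = 0x40;  /* INC AX */
        c.mem[2] = 0xF4;  /* HLT */
        while (!c.halted)
            step(&c);
        return c.ax;      /* exits with status 2 */
    }

The point of the sketch is that emulation trades hardware compatibility for software effort: the loop above can run on any modern host, but making it cycle-accurate and peripheral-complete is the hard part.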
This thread is a very good demonstration of why I use software and hardware without attempting to modify either one of them... Thanks! Nyttend (talk) 13:00, 14 January 2017 (UTC)
Yes, an enormous amount of "legacy" hardware and software is in everyday active use. The world's financial systems run largely on decades-old COBOL software running on z/OS, which maintains backwards compatibility back to the 1960s. (The software is generally maintained and updated as necessary, but it isn't rewritten from scratch.) Some U.S. nuclear power plants are still run by PDP-11s. CNC and SCADA systems running MS-DOS or other old software are everywhere. --47.138.163.230 (talk) 13:52, 14 January 2017 (UTC)
Chips aren't screwed onto a board; they are soldered to it by their connection pins. A newer chip is not compatible unless it has exactly the same functionality and pin layout. Using a newer and "better" chip means redesigning the entire system: specifications, peripheral electronics, system board, software, etc., and then testing all of it. The cost would be huge. If the old chip is still available and does the job, there is no reason to go through all of this. That's precisely why some popular chips are kept in production for such a long time (and often at low cost). Hope this explains a bit. :-) Jahoe (talk) 13:17, 14 January 2017 (UTC)
Parts obsolescence can be a huge problem. The manufacturer always claims that the "new and improved" version is also a "drop-in replacement". The problem is that it can be arbitrarily difficult to prove that the drop-in replacement actually meets all your requirements, including the requirements you forgot to document, or didn't even realize you were depending on.
The other big problem is testing. The longer a system has been running, the more likely it is that over the years you fixed a bug or added a feature but forgot to write it down, and forgot to add a test for it to your list of test cases. So for a big, complicated, old system, testing it thoroughly (to make sure it does everything it's expected to, even after a significant change like swapping out the CPU for a "new and improved" one) isn't just time-consuming and expensive; it can be downright impossible. —Steve Summit (talk) 14:18, 14 January 2017 (UTC)
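One partial mitigation for the problem described above is to write each fixed bug or added feature down as an executable test, so that the expected behavior is at least re-checkable after a part swap. Here is a minimal table-driven sketch in C; checksum32 and its expected values are hypothetical stand-ins for whatever the real system actually computes.

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical function under test: a stand-in for any computation
       whose behavior must be identical before and after a part change. */
    static uint32_t checksum32(const uint8_t *buf, size_t len)
    {
        uint32_t sum = 0;
        for (size_t i = 0; i < len; i++)
            sum = sum * 31u + buf[i];
        return sum;
    }

    /* Table-driven regression cases: every bug fix or feature gets a
       row, so the expected behavior is written down and re-checked. */
    struct testcase {
        const char    *name;
        const uint8_t *input;
        size_t         len;
        uint32_t       expected;
    };

    int main(void)
    {
        static const uint8_t empty[] = {0};
        static const uint8_t abc[]   = {'a', 'b', 'c'};
        const struct testcase cases[] = {
            { "empty buffer", empty, 0, 0u },
            { "abc",          abc,   3, 96354u },  /* ('a'*31 + 'b')*31 + 'c' */
        };
        int failures = 0;
        for (size_t i = 0; i < sizeof cases / sizeof cases[0]; i++) {
            uint32_t got = checksum32(cases[i].input, cases[i].len);
            if (got != cases[i].expected) {
                fprintf(stderr, "FAIL %s: got %u, want %u\n",
                        cases[i].name, (unsigned)got,
                        (unsigned)cases[i].expected);
                failures++;
            }
        }
        return failures ? 1 : 0;
    }

Run after every change, including a CPU swap: any row that fails points at behavior the replacement part does not reproduce. The catch, as noted above, is that the suite only covers what someone remembered to write down.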
I see mention of space and military hardware above. I assume this has something to do with the relative vulnerability of high-resolution circuits to radiation, though I don't pretend to know the details. Wnt (talk) 13:57, 15 January 2017 (UTC)
We have an article on that: Radiation hardening. --Guy Macon (talk) 19:55, 15 January 2017 (UTC)
Thanks -- looks like my guess was totally off base, because that article talks entirely about specially made radiation-hardened chips, using e.g. different substrates, redundancy, counters, etc. Wnt (talk) 22:12, 16 January 2017 (UTC)