Wikipedia:Reference desk/Archives/Computing/2011 April 10

Computing desk
Welcome to the Wikipedia Computing Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


April 10

PCI-Express Standard - Missing Widths

Hi.

The PCIe standard defines bus widths between x1 and x32. I have commonly seen x1, x4, x8 and x16, but to date I have not found a motherboard with x2 or x32 slots, nor have I come across any daughtercards with x2 or x32 connectors. Why do these two widths not seem to be in use? I can appreciate that there may not currently be any application for which an x32 width is needed (although I can think of at least one - a single Host Interface Card that connects four or more Tesla GPUs), but it seems to me that there would be plenty of applications for the x2 width.

  Thanks as always. Rocketshiporion 04:25, 10 April 2011 (UTC)[reply]

Inconsistency between different calculators

When I enter the equation   into my trusty old TI-83, it responds with  . When I enter the same equation into Wolfram Alpha, it responds with  . bc just gives a runtime error.

Wolfram Alpha seems to be the correct one of the three. When the equation is phrased instead as   the TI-83 reaches the same answer as Wolfram Alpha, while bc still gives the same runtime error: "Runtime warning (func=(main), adr=29): non-zero scale in exponent". What is the cause of this inconsistency? Horselover Frost (talk · edits) 04:50, 10 April 2011 (UTC)[reply]

Apparently bc only allows integer exponents. So that's one inconsistency down. I still don't know where the TI-83 is getting its answer wrong. Horselover Frost (talk · edits) 05:37, 10 April 2011 (UTC)[reply]
The difference is simply that the TI-83 is computing   while Wolfram Alpha is computing  . -- BenRG (talk) 17:25, 10 April 2011 (UTC)[reply]
That explains the significant difference between the systems. This calculation seemed too simple for such a huge variation to be due only to floating-point error, though error propagation can be very significant in some algorithms. For future reference, the TI-83 does not use a standard floating-point format (the format it uses isn't documented in the TI-83 Guidebook). The TI-89, for example, uses 80-bit floating-point math - better precision than a 64-bit supercomputer cluster! So we expect a little variation in the last few decimal places, due to the implementation of the arithmetic. But when there's such a significant difference as we saw in this case, it's either due to different interpretations of the same syntax (as BenRG pointed out above), or due to numerical-stability problems in a complicated algorithm. Nimur (talk) 20:26, 10 April 2011 (UTC)[reply]
See Common Errors in College Math.
Wavelength (talk) 15:17, 11 April 2011 (UTC)[reply]
Our order of operations article specifically mentions that different conventions on the precedence of unary minus can be problematic. --Sean 17:22, 11 April 2011 (UTC)[reply]
As noted in that article, in order for unary minus to work as expected on the leading term in polynomial expressions like  , the Wolfram Alpha precedence order must be the "correct" one. « Aaron Rotenberg « Talk « 06:44, 12 April 2011 (UTC)[reply]
I agree; I was just pointing out that Opie isn't the first to be bitten by this issue. --Sean 16:59, 12 April 2011 (UTC)[reply]
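The precedence issue discussed above is easy to demonstrate in any system whose exponentiation operator binds tighter than unary minus. A minimal Python sketch (the values here are stand-ins for illustration, since the original equation images did not survive archiving):

```python
# In Python, ** binds tighter than unary minus, matching the
# convention Wolfram Alpha uses: -x^2 is read as -(x^2), not (-x)^2.
a = -2 ** 2       # parsed as -(2 ** 2)
b = (-2) ** 2     # parentheses force the other reading
print(a, b)       # -4 4
```

A system that gave unary minus the higher precedence would return 4 for the first expression - exactly the kind of cross-calculator discrepancy described in this thread.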

Nehalem multipliers

I'm looking for information about the multipliers in the Nehalem processor. Specifically, I want to know whether the multiplier in the integer unit and the significand multiplier in the FPU use the same basic design, and what radix they use. Wikipedia's article on Nehalem doesn't discuss the arithmetic circuits, and neither do any of the tech websites I know of. I can't find any relevant results with Google, which only turns up discussion of Nehalem's clock multiplier. I suspect that if the information I'm looking for is out there, it's in an IEEE-published paper, but I'm not keen on parting with money to get it. Thanks in advance. Rilak (talk) 08:59, 10 April 2011 (UTC)[reply]

To clarify - you're looking for the implementation that is actually used, at the bit level, to perform multiplication throughout the chip?
I would suspect (though I am not certain) that such details are pretty proprietary to Intel. If Intel operates like other companies in my experience, they probably have a giant internal shared library of VHDL and Verilog implementations of standard building blocks - adders, shifters, multipliers - which are used many times throughout a chip. A computer architect then either selects the appropriate version for a particular logic block or chip element, or re-implements a new version with some specific optimization. High-level system-integration architects work with block-level logic units, while low-level engineers push around individual bits. The Nehalem internal design team probably documents its various implementation choices, but it's very unlikely that they would publicize such details.
You might take a look at http://opencores.org for free and open-source HDL implementations of IEEE-754 (and other, "wackier", specifications) for floating-point math. I suspect you'll find a modern FPU implementation to be pretty incomprehensible (I do, at least). Bits get pushed through weird flows that do not directly correspond to our straightforward high-level descriptions of FPU math. We can attribute this to 25 or 40 years of theoretical mathematical development, engineering optimization, and process tuning since the invention of floating-point hardware. (Not to mention: pipelining!)
For example, here is an unpipelined IEEE-754 implementation with HDL source-code. It even has a block-level spec, and here's the meat of the code, including the bit-arithmetic for the radix and significand. Nimur (talk) 20:05, 10 April 2011 (UTC)[reply]
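For readers without an HDL toolchain, the sign/exponent/significand split that such an FPU operates on can also be inspected from software. A Python sketch of the standard IEEE-754 binary64 layout (nothing here is Nehalem-specific):

```python
import struct

def decompose(x):
    """Split an IEEE-754 double into sign, biased exponent, significand bits."""
    bits = struct.unpack('>Q', struct.pack('>d', x))[0]
    sign = bits >> 63
    exponent = (bits >> 52) & 0x7FF        # 11-bit biased exponent
    significand = bits & ((1 << 52) - 1)   # 52 explicit fraction bits
    return sign, exponent, significand

# -1.5 is -1.1 (binary) * 2^0: sign 1, biased exponent 1023,
# and only the top fraction bit (2^51) set.
print(decompose(-1.5))   # (1, 1023, 2251799813685248)
```

A hardware significand multiplier works on that 52-bit field (plus the implicit leading 1), which is where design choices like the Booth-recoding radix come in.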
Am I looking for low-level details like transistor- or gate-level circuit descriptions? No, I'm interested in the high-level architecture details like the reduction network and what the radix the Booth recoding is. It's not difficult to find such information for processors designed during the 1990s, and discussions regarding the circumstances and trade-offs made that led to the resulting design. The goals of processor design and the technology has obviously changed a lot since the 1990s, so I am curious about the current design of arithmetic circuits from leading companies. Rilak (talk) 01:11, 11 April 2011 (UTC)[reply]
Chances are, if Intel publishes the information you're looking for, it's going to be here: Architecture & Silicon Technology. There are a few white papers and some very technical articles, but mostly I think that page is a bit "fluffier" than what you're seeking. It seems that they're more willing to publicize how their logic transistors are built than how those transistors are connected into logic elements. I suspect this is because you can't replicate 32-nm hafnium high-k devices without a fab - so they'll gladly tell you everything about how they do it, knowing that their intellectual property is essentially un-infringeable; but if they explained their logic microarchitecture implementations, you could easily design them on your own process. As for the high-level stuff, Intel publishes voluminous tomes - the x86 and IA-64 manuals - encompassing tens of thousands of pages of mundane details. The x86/IA-64 Basic Architecture manual has an entire chapter on the FPU environment, but this is all x86/IA-64/x87-level material, so microarchitecture details are intentionally not revealed. Nimur (talk) 04:53, 11 April 2011 (UTC)[reply]
Another helpful link - Core Microarchitecture. Still no details on the FPU innards; the white paper mostly hypes up the various Core features. Nimur (talk) 15:57, 11 April 2011 (UTC)[reply]
Don't forget patents! If Intel has disclosed any details of its multiplier circuits, it has certainly patented them, and that means the description is in one or more of Intel's thousands of U.S. patents, all of which are freely available. The search might not be fun, but the information is there if it is anywhere. Try the USPTO full-text search page. -- BenRG (talk) 19:45, 11 April 2011 (UTC)[reply]

Look on Agner Fog's x86 optimization site (find with web search). He has reverse engineered a lot of that stuff, though Nehalem is pretty new and might not be there. I don't know if sandpile.org is updated any more. 75.57.242.120 (talk) 10:49, 12 April 2011 (UTC)[reply]

I had a look at Intel's white papers and searched their site. Didn't find anything. Unfortunately, it's the same with patents. Agner Fog's site doesn't have any information about the design of the multiplier, only its latency. I found a presentation at Hot Chips 2008 about the Nehalem microarchitecture that might have what I'm looking for, but the PDF copy seems to have disappeared somewhere (probably why I missed it in the first place, since Google has no cached copy of it). Thank you everyone for the help. Rilak (talk) 08:03, 13 April 2011 (UTC)[reply]

SSD as C:\

Hi,

I'm thinking of upgrading my PC to have an SSD as my C:\ (i.e. run Windows 7 off it), and am just wondering if anyone here has had any experience with this, specifically with the lifetime of the drive? I'm a little concerned that it may wear out rapidly, given that it is likely to see many read/write cycles every day running the OS and programs. I'd like to think it has a >95% chance of lasting for 3 years...

Thanks, --58.175.32.140 (talk) 10:42, 10 April 2011 (UTC)[reply]

In case it helps, I'm thinking of getting something of similar specs to this: http://www.crucial.com/store/partspecs.aspx?IMODULE=CTFDDAC128MAG-1G1 (I know that page says '3 years', but it's a limited warranty, and things may be different in the demanding situation of being the main OS drive...) --58.175.32.140 (talk) 10:49, 10 April 2011 (UTC)[reply]
I have Microsoft Windows 7 Professional Edition running on a RAID 1 active partition on a couple of Kingston SSDNow V+ 512GB 2.5” SATA II SSDs. It's been running smoothly for over a year, but I haven't switched off my computer during the last one year (except for 20-second reboots), so maybe I'm not the typical computer user. Rocketshiporion 11:28, 10 April 2011 (UTC)[reply]
I expect my Kingston SSDs to last at least five years. But I have no experience with Crucial SSDs, nor any idea of their lifetime. Rocketshiporion 11:31, 10 April 2011 (UTC)[reply]
See Solid_state_drive#Comparison_of_SSD_with_hard_disk_drives. StuRat (talk) 17:32, 10 April 2011 (UTC)[reply]
Although SSDs do have an upper limit on writes before they fail, I think the limit is so large that you're very unlikely to reach it. Most of the fear of "write failure" seems to come from the misconception that writing to a particular part of the disk (such as a particular file) will wear that part out. Actually, the limit is on the total amount of data written to the drive. If you have a 64 GB SSD and it's quoted as supporting 10,000 write/erase cycles, that means you can write 640,000 GB to the drive over its lifetime. Although a Windows system will sometimes write small amounts of data to the drive in the background even when idle, I'm pretty sure it would take millennia of idle time to make a significant dent in the write limit. If you write constantly to the drive at its maximum supported data rate, you might manage to hit the write limit of a low-end drive before it fails for some other reason. -- BenRG (talk) 18:07, 10 April 2011 (UTC)[reply]
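The endurance arithmetic above can be checked directly. The 64 GB capacity and 10,000 program/erase cycles are the figures quoted in the post; the 50 MB/s sustained write rate is an assumed round number purely for illustration:

```python
# Total data writable over the drive's life, per the quoted figures:
capacity_gb = 64
pe_cycles = 10_000
endurance_gb = capacity_gb * pe_cycles        # 640,000 GB

# Time to exhaust that, writing flat-out at an assumed 50 MB/s:
write_rate_mb_s = 50
seconds = endurance_gb * 1024 / write_rate_mb_s
days = seconds / 86_400
print(endurance_gb, round(days))              # 640000 ~152 days
```

Even writing continuously at that rate, exhausting the rated endurance takes roughly five months - which supports the point that a desktop workload writing in the background will take vastly longer.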
If you consider a worst-case scenario - an incredibly stupid/naive firmware, an application flushing after writing every byte (i.e. a very naive log-file writer), and a not-uncommon erase size of 128 KB - every single byte written would be amplified 128×1024 times. 640,000 GB / 128 KB = 5,242,880,000 writes, or just about 5 GB of 1-byte writes to reach the drive's end of life. I'm not saying any current drive is that bad, but it does show that applications and the quality of the firmware can matter quite a bit for the life expectancy of an SSD. Unilynx (talk) 21:11, 11 April 2011 (UTC)[reply]
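That worst-case figure checks out, reusing the 640,000 GB endurance and 128 KB erase block from the preceding posts:

```python
endurance_bytes = 640_000 * 1024**3     # 640,000 GB of rated write endurance
erase_block = 128 * 1024                # 128 KB erase block

# Worst case: every 1-byte write forces a full block erase/rewrite,
# a write amplification factor of 131,072.
writes_before_death = endurance_bytes // erase_block
print(writes_before_death)                   # 5242880000
print(writes_before_death / 1024**3)         # ~4.88 GB of 1-byte writes
```

Real firmware coalesces writes and remaps blocks, so actual amplification is far lower; the sketch only bounds how bad pathological behaviour could get.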
Also, such statistics are really based on an MTTF estimate: for a fair comparison, you should compare against the MTTF of a mechanical magnetic disk drive. The number of reads/writes is an estimate, and I don't have any meaningful intuitive interpretation of "number of reads/writes". But I can easily compare an SSD with a mean time to failure of 2 million operating hours against 600,000 operating hours for a mechanical drive. Even that is a statistical extrapolation based on a HALT test - try to find an actual example of a 2-million-operating-hour history for any computer! Nimur (talk) 20:15, 10 April 2011 (UTC)[reply]
I don't think write wear will be a significant issue for a modern SSD with good wear levelling. The main specification to look for is random write speed with 4 KB sectors; sequential and large-buffer write speed doesn't matter nearly as much. I'm not familiar with the Crucial drive you mention. The Intel X25 series were the first well-known SSDs to figure out the issues and fix the problems that plagued earlier SSDs, but now there are some other good ones. anandtech.com has lots of good reviews and benchmarks. I don't think it's worth paying for a 128 GB SSD, since most large data isn't randomly accessed. I use a 64 GB SSD for the system partition and personal stuff in immediate and frequent use, and a standard hard drive (1.5 TB) for larger stuff like media downloads. The combo works great. 75.57.242.120 (talk) 21:25, 10 April 2011 (UTC)[reply]