Wikipedia:Reference desk/Archives/Computing/2008 March 27

Welcome to the Wikipedia Computing Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


March 27

Source for Operating System Free New Computers

Is there a reliable source where an individual can buy a new computer with no operating system whatsoever? I want to load it with an operating system of my own choice. I do not want to pay for an operating system I will not use.

Thanks 12.183.100.8 (talk) 01:15, 27 March 2008 (UTC)

Perhaps a barebones kit at Tigerdirect.com or Newegg.com. Or build it yourself from scratch. Useight (talk) 01:17, 27 March 2008 (UTC)
I believe Wal-Mart sells a Linux machine on walmart.com. Since the OS is free, you can expect little markup for it. :D\=< (talk) 01:38, 27 March 2008 (UTC)
http://www.walmart.com/search/search-ng.do?search_constraint=3944&search_query=everex&Find.x=0&Find.y=0&Find=Find&ic=24_0 These use gOS, a derivative of Ubuntu with Google Apps integration. :D\=< (talk) 01:42, 27 March 2008 (UTC)

Windows Media Player 11

Is it possible to rip music straight from the hard drive, or do I first have to copy the music to a CD before ripping it to MP3? —Preceding unsigned comment added by Jack Casement (talkcontribs) 10:45, 27 March 2008 (UTC)

OK, here goes. If you have DRM'd WMA tracks, no, it's not possible directly. You should have thought of that before you bought them; no sympathy here!! Fortunately WMP lets you burn them to CD, which gives you a wide-open digital hole, but that means re-encoding and nasty quality degradation. Buy from Amazon next time; they sell DRM-less MP3s, so you actually get what you buy. :D\=< (talk) 10:58, 27 March 2008 (UTC)
Oh, don't use iTunes Plus either: even though the tracks are DRM-less they still use the crapfest AAC encoding. Our article worships it, but don't believe a word; it's a total mess. :D\=< (talk) 11:22, 27 March 2008 (UTC)
However, it was this "evil empire" Apple whose market dominance finally led the record companies to allow Amazon.com to sell DRM-free music. Kushal 23:16, 27 March 2008 (UTC)
Bah. AAC is a good codec, don't listen to him! ;) -- Kesh (talk) 21:17, 27 March 2008 (UTC)
AAC is a good codec. However, you always lose something when re-encoding into a different lossy format like MP3, OGG, or WMA. Not everything supports AAC yet. Kushal 23:20, 27 March 2008 (UTC)
No it's not possible? Yeah, sure. Use recording software that records the audio stream from your player to the speakers. The Apple Store has iTunes Plus, by the way, and you can also use LimeWire or BitTorrent. Mac Davis (talk) 23:05, 27 March 2008 (UTC)
Oh come on, if you're going to be that sloppy you might as well burn it to discs, since you're going to lose sound data anyway. :D\=< (talk) 04:21, 28 March 2008 (UTC)

L3 cache x86

New x86 processors, e.g. the AMD Phenom, have an L3 cache.

A. What purpose does this cache serve in the x86 architecture?

B. Do Windows/OS X/other operating systems use this cache?

(Hint: the L3 cache link lacks information on L3 cache.) 87.102.16.238 (talk) 19:36, 27 March 2008 (UTC)

The CPU cache always serves the same purpose: to mitigate the speed difference between the CPU and main memory. It is traditionally organized hierarchically. On these new processors the general idea is to have one L1 data cache and one L1 instruction cache per core, one L2 cache per core, and one L3 cache per die. The number of levels is an implementation detail of the processor and ultimately not very important.
OS kernels do not use the CPU cache directly. It is an internal processor mechanism used to speed up memory access; software doesn't have to do anything to take advantage of it. For that very reason, this new cache doesn't change the machine architecture in any way. Morana (talk) 00:14, 28 March 2008 (UTC)
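A minimal sketch (not from the thread above) of the point that software benefits from the cache without doing anything special: the same pointer-chasing loop is timed over a working set that fits comfortably in cache and over one that does not. The sizes and iteration counts are arbitrary assumptions; adjust them for your machine.
 /* Illustrative sketch only; the program contains no cache-specific code,
  * so any speed difference comes entirely from the hardware cache.
  * Compile with, e.g., gcc -O2 chase.c -o chase */
 #include <stdio.h>
 #include <stdlib.h>
 #include <time.h>
 
 static volatile size_t sink;
 
 /* Build a random cyclic chain of indices so the prefetcher cannot guess the
  * next access; every load then depends on the previous one. */
 static size_t *make_chain(size_t n) {
     size_t *next = malloc(n * sizeof *next);
     size_t *perm = malloc(n * sizeof *perm);
     if (!next || !perm) exit(1);
     for (size_t i = 0; i < n; i++) perm[i] = i;
     for (size_t i = n - 1; i > 0; i--) {               /* Fisher-Yates shuffle */
         size_t j = (size_t)rand() % (i + 1);
         size_t t = perm[i]; perm[i] = perm[j]; perm[j] = t;
     }
     for (size_t i = 0; i < n; i++) next[perm[i]] = perm[(i + 1) % n];
     free(perm);
     return next;
 }
 
 /* Follow the chain and report nanoseconds per dependent load. */
 static double chase(const size_t *next, size_t steps) {
     struct timespec t0, t1;
     size_t p = 0;
     clock_gettime(CLOCK_MONOTONIC, &t0);
     for (size_t i = 0; i < steps; i++) p = next[p];
     clock_gettime(CLOCK_MONOTONIC, &t1);
     sink = p;                        /* keep the loop from being optimized away */
     return ((t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec)) / (double)steps;
 }
 
 int main(void) {
     size_t small_n = 16 * 1024 / sizeof(size_t);         /* ~16 KiB: cache-resident */
     size_t large_n = 64 * 1024 * 1024 / sizeof(size_t);  /* ~64 MiB: mostly RAM     */
     size_t *small = make_chain(small_n);
     size_t *large = make_chain(large_n);
     size_t steps = 20 * 1000 * 1000;
     printf("small working set: %.1f ns per access\n", chase(small, steps));
     printf("large working set: %.1f ns per access\n", chase(large, steps));
     free(small); free(large);
     return 0;
 }
On typical hardware the small case comes out an order of magnitude faster per access, even though the program never mentions the cache at all.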
What is the difference between L3 and L2 cache? 87.102.16.238 (talk) 12:31, 28 March 2008 (UTC)
L3 is even bigger and slower than L2. It just provides an extra layer of protection against (shudder) having to wait for something out of main memory. :D\=< (talk) 16:00, 28 March 2008 (UTC)
There is no difference. The reason a new level was introduced is that the various cores compete for bandwidth on the same memory bus. Since the multitasking scheduler doesn't lock threads to a specific core, the same thread may execute on two different cores at different times. If there were no L3 cache, both cores would have to fetch the same code and data, wasting bandwidth. Of course some threads won't migrate between cores, so core-specific L2 caches are still needed. Morana (talk) 16:34, 28 March 2008 (UTC)
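A hedged sketch of how to see this sharing for yourself, assuming a Linux system that exposes the cache topology under /sys/devices/system/cpu/ (this is an illustration, not something referenced in the thread): it lists each cache visible to cpu0, its level, its size, and which logical CPUs share it. On a multi-core part the L1/L2 entries typically list a single core, while the L3 entry lists every core on the die.
 /* Sketch: print the cache hierarchy as the Linux kernel reports it for cpu0.
  * Assumes the sysfs cache directories exist; prints "n/a" for missing files. */
 #include <stdio.h>
 
 static void print_file(const char *path) {
     char buf[128];
     FILE *f = fopen(path, "r");
     if (!f) { printf("n/a"); return; }
     if (fgets(buf, sizeof buf, f)) {
         for (char *p = buf; *p; p++)           /* strip the trailing newline */
             if (*p == '\n') *p = '\0';
         printf("%s", buf);
     }
     fclose(f);
 }
 
 int main(void) {
     const char *base = "/sys/devices/system/cpu/cpu0/cache";
     char path[256];
     for (int idx = 0; ; idx++) {               /* each indexN describes one cache */
         snprintf(path, sizeof path, "%s/index%d/level", base, idx);
         FILE *probe = fopen(path, "r");
         if (!probe) break;                     /* no more cache entries */
         fclose(probe);
 
         printf("index%d: L", idx); print_file(path);
         snprintf(path, sizeof path, "%s/index%d/type", base, idx);
         printf(" ("); print_file(path); printf(")");
         snprintf(path, sizeof path, "%s/index%d/size", base, idx);
         printf(", size "); print_file(path);
         snprintf(path, sizeof path, "%s/index%d/shared_cpu_list", base, idx);
         printf(", shared by CPUs "); print_file(path);
         printf("\n");
     }
     return 0;
 }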
I don't understand the last part - how does the 'computer' know whether an individual data item is going to be shared or not? 87.102.16.238 (talk) 20:32, 28 March 2008 (UTC) Or indeed the reverse - what algorithm is used to decide which data is thread-specific? (I can't think how.)
It raises the question: why not just have a large shared L2 cache then? 87.102.16.238 (talk) 20:38, 28 March 2008 (UTC)
Does anyone know more precise details of how these caches operate - e.g.
Are data items in L3 flagged as having been used by each of the cores?
Can/do L2 caches check each other for data replication?
Can data migrate upwards from L3 to L2? If so, is there a latency of some cycles before the data is 'bumped up', to prevent shared data being wrongly promoted before it has been accessed by more than one core?
Are any assumptions made about identical program code running on different cores being executed synchronously? (i.e. are there any advantages to making that assumption?) 87.102.16.238 (talk) 21:20, 28 March 2008 (UTC)
ABOVE: I might have got your reasoning the wrong way round - i.e. thinking L3 is used for shared data whereas L3 should be used for unshared data. The questions above remain, but you can re-factor them to take account of the change, in the cases where what I asked is still relevant. 87.102.16.238 (talk) 11:30, 30 March 2008 (UTC)
What do you mean there's no difference? Yeah, it's the same memory technology, but higher-level caches get larger, and lookups get slower as the cache size increases. L3 cache is slower than L2 because it has more space to look through. :D\=< (talk) 21:07, 29 March 2008 (UTC)
I thought the lookups were done in parallel, i.e. simultaneously - hence the speed and also the energy requirements (did you know that the most recent Intel processors use 90%+ of the energy consumed just for the cache - that means less than 10% of the electricity is used actually doing stuff!). 87.102.16.238 (talk) 11:27, 30 March 2008 (UTC)
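If you want to see the size/latency trade-off rather than take it on faith, the pointer-chasing sketch above can be extended into a sweep (again an illustration, not something from the thread; the cache sizes named in the comments are assumptions about a typical part). The per-access latency climbs in steps roughly where the working set outgrows L1, then L2, then L3, after which main-memory latency dominates.
 /* Sketch: measure nanoseconds per dependent load as the working set grows.
  * Expect rough plateaus for each cache level, then a jump to DRAM latency. */
 #include <stdio.h>
 #include <stdlib.h>
 #include <time.h>
 
 static volatile size_t sink;
 
 static double ns_per_access(size_t bytes) {
     size_t n = bytes / sizeof(size_t);
     size_t *next = malloc(n * sizeof *next);
     size_t *perm = malloc(n * sizeof *perm);
     if (!next || !perm) { free(next); free(perm); return -1.0; }
     for (size_t i = 0; i < n; i++) perm[i] = i;
     for (size_t i = n - 1; i > 0; i--) {            /* shuffle to defeat prefetching */
         size_t j = (size_t)rand() % (i + 1), t = perm[i];
         perm[i] = perm[j]; perm[j] = t;
     }
     for (size_t i = 0; i < n; i++) next[perm[i]] = perm[(i + 1) % n];
 
     size_t steps = 20 * 1000 * 1000, p = 0;
     struct timespec t0, t1;
     clock_gettime(CLOCK_MONOTONIC, &t0);
     for (size_t i = 0; i < steps; i++) p = next[p];
     clock_gettime(CLOCK_MONOTONIC, &t1);
     sink = p;
     free(next); free(perm);
     return ((t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec)) / (double)steps;
 }
 
 int main(void) {
     /* From well inside a typical L1 (8 KiB) out past a typical L3 (64 MiB). */
     for (size_t kib = 8; kib <= 64 * 1024; kib *= 2)
         printf("%8zu KiB working set: %6.1f ns per access\n",
                kib, ns_per_access(kib * 1024));
     return 0;
 }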
On a related note, it's fairly important that the software isn't aware of this stuff. Imagine how problematic computers would be if you needed to update your OS every time there was a new processor; computing as we know it couldn't really exist. One way to look at this issue is in terms of separation of concerns. (That term is aimed specifically at software, but you see the basic concept everywhere.) The existence of these different layers, which don't need to know the internal details of the other layers, is probably the single most important concept in all of IT. Friday (talk) 16:05, 28 March 2008 (UTC)
Cell (microprocessor) should not be popular then - oh, maybe you're right! 87.102.16.238 (talk) 20:38, 28 March 2008 (UTC)
Far be it from me to disagree with Dijkstra (god bless you) Reade and live, but surely computer programmers responsible for 'core' code could be expected to take advantage of (and even write programs to manage) a relatively small yet much faster area/page of memory. 87.102.16.238 (talk) 21:23, 28 March 2008 (UTC)

Old definition

I've read elsewhere a definition of L3 cache as being a part external to the microprocessor - can this definition be considered obsolete now, or even wrong? 87.102.16.238 (talk) 20:38, 28 March 2008 (UTC)

Er, wrong. The only memory external to the processor is main memory, which isn't CPU cache. Cache exists only to prevent having to fetch things from main memory, so it doesn't make sense to call memory a kind of cache. :D\=< (talk) 21:07, 29 March 2008 (UTC)
http://www.webopedia.com/TERM/L/L3_cache.html found it here. 87.102.16.238 (talk) 11:00, 30 March 2008 (UTC)
Hm. Well this is from our cache article:
With the 486 processor an 8 KiB cache was integrated directly into the CPU die. This cache was termed Level 1 or L1 cache to differentiate it from the slower on-motherboard, or Level 2 (L2) cache. These on-motherboard caches were much larger, with the most common size being 256 KiB and frequently utilizing a SIMM form factor. The popularity of on-motherboard cache continued on through the Pentium MMX era but was made obsolete by the introduction of SDRAM and the growing disparity between bus speed and CPU clock speed, which caused on-motherboard cache to be only slightly faster than main memory.
Only crappy processors move their L2/L3 caches off-chip. Here's another quote from the article:
As the latency difference between main memory and the fastest cache has become larger, some processors have begun to utilize as many as three levels of on-chip cache. For example, in 2003, Itanium 2 began shipping with a 6 MiB unified level 3 (L3) cache on-chip. The IBM Power 4 series has a 256 MiB L3 cache off chip, shared among several processors. The new AMD Phenom series of chips carries a 2MB on die L3 cache.
:D\=< (talk) 18:09, 30 March 2008 (UTC)

L2 with L3

Is it right to say that having relatively small L2 cache(s) augmented by a larger (shared) L3 cache is primarily an energy-saving feature? (That was a guess, by the way.) 87.102.16.238 (talk) 21:11, 28 March 2008 (UTC)

No, having a small L2 cache but a big L3 cache is just a cost-saving technique that also looks good in marketing. L3 is slower (and cheaper) than L2, and L2 is slower (and cheaper) than L1, but I don't think there is much difference in power consumption between them. --antilivedT | C | G 12:32, 29 March 2008 (UTC)
How is it cheaper? L3 takes up space on the die just like L2. Do you think manufacturers would produce a substandard chip just so that their advertising people can talk big? (You don't have to answer that.) 87.102.16.238 (talk) 11:05, 30 March 2008 (UTC)
Seriously though - your answer is a bit annoying without any references to read. Does anyone else share your opinion? 87.102.16.238 (talk) 11:22, 30 March 2008 (UTC)
I guess he meant that primarily, having no PC at all would be an energy-saving feature. --212.149.216.233 (talk) 16:33, 30 March 2008 (UTC)
Must be, because I'm pretty sure every level uses SRAM. :D\=< (talk) 18:07, 30 March 2008 (UTC)
There are yield issues. All CPUs of the same stepping are made from the same die design, and then they are tested for the highest speed they are capable of. Ones that can run reliably at 3+ GHz are much rarer than ones that can run at 1.8 GHz, which contributes to the high cost of such CPUs. Since L3 runs at a lower frequency than L2, it has a higher chance of running reliably than L2 does at its higher frequency, improving the yield of the CPU. --antilivedT | C | G 07:34, 1 April 2008 (UTC)
My spies report that 'modern' L3 is effectively the same as L2 (same speed etc.), though it might be a different example you are describing. Thanks for explaining how it would be cheaper - if true, I agree. Thanks. 87.102.16.238 (talk) 20:49, 1 April 2008 (UTC)

Wiki-markup text/code editors?

Does anyone know of any text editors or code editors that recognize wiki-markup? It'd make it easier to do offline writing for articles. (I'm using a Mac, but I'd be glad to hear of any *nix or Windows ones, or just plug-ins for existing editors). -- Kesh (talk) 21:16, 27 March 2008 (UTC)

See Wikipedia:Text editor support. --Sean 15:34, 28 March 2008 (UTC)[reply]