Wikipedia:Reference desk/Archives/Computing/2008 August 2

Computing desk
Welcome to the Wikipedia Computing Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


August 2

Ubuntu LiveCD crashes (turns off automatically) during use

When I use the LiveCD, it sometimes turns off automatically. I don't know how or why it happens. I use two Ubuntu 7.10 LiveCDs to access the Internet with a DSL connection. I have to use the LiveCD because my laptop's hard disk drive is damaged, and I don't have money to buy a new one. The one I made is the primary one and the ShipIt CD is the secondary one. It does crash sometimes. I clean the CDs with alcohol and towels to reduce crashes. It crashes when it's loading too much (like when I'm using YouTube). What should I do to reduce the number of LiveCD crashes? Jet (talk) 03:23, 2 August 2008 (UTC)[reply]

Remove the hard drive itself from the machine. How do you know it's the LiveCD and not the hardware that is at fault? --mboverload@ 03:33, 2 August 2008 (UTC)[reply]
It could also be the increased load of using too many programs at any given time. The CD drive may not be able to keep up with the demand and just stops. I suggest cutting down on how much you run at once, like using only one program at a time. If that does not work, then do the above, or, if you really want to, do both. RgoodermoteNot an admin  04:23, 2 August 2008 (UTC)[reply]
I don't think that explains it. LiveCDs are built to be loaded off CDs and wait for the data. What you describe would immensely slow down the OS, not crash it. At least I've never encountered it. --mboverload@ 05:14, 2 August 2008 (UTC)[reply]
I used to run a LiveCD too on my laptop, because of an HD problem. It would crash every time I ran too much at once. I found out that it was because the CD drive on the laptop didn't cope very well with being very active and just stopped working. I was thinking he was having the same problem as I was. RgoodermoteNot an admin  06:08, 2 August 2008 (UTC)[reply]
Try Ubuntu 8.04.1 LiveCD? Or run it off a USB flash drive using Unetbootin and see if it still crashes. --antilivedT | C | G 06:40, 2 August 2008 (UTC)[reply]
What do you mean by "turns off automatically"? Does the system power off completely? Does it crash to the black-and-white PC console? If the computer itself is turning off, I'd suspect either hardware problems, or that your computer has reached a heat or fan threshold and ACPI is powering off the system. -- JSBillings 14:45, 2 August 2008 (UTC)[reply]
Adding to what Jsbillings said, if the computer is randomly shutting down you may have some sort of power-saving hardware feature which automatically shuts the machine down when you have not used it for a certain amount of time. Or, if the computer is crashing when you run too many programs, or a few resource-hungry programs, you can try adding more RAM. But it could be that the processor is just not fast enough to cope with what you are doing, especially if it is an old computer. 20I.170.20 (talk) 15:07, 2 August 2008 (UTC)[reply]
You could be running out of memory. This can happen because there is no swap disk, so the only memory is the physical memory (RAM). You get less of this than normal because some memory is used to overlay the read-only CD-ROM with a writable RAM disk. Linux normally kills high-memory processes when out of memory (the "OOM killer"), but this could cause the computer to shut down. Use gnome-system-monitor, free, or top to check memory usage. --h2g2bob (talk) 17:59, 2 August 2008 (UTC)[reply]
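If you'd rather script the check h2g2bob describes than watch top, here is a minimal sketch that parses /proc/meminfo; it assumes the standard Linux /proc layout (which the Ubuntu LiveCD has), and field names can vary slightly between kernel versions:

    # Report memory headroom by parsing /proc/meminfo (values are in kB).
    def meminfo():
        info = {}
        with open("/proc/meminfo") as f:
            for line in f:
                key, _, rest = line.partition(":")
                info[key.strip()] = int(rest.split()[0])
        return info

    m = meminfo()
    print("Total RAM: %d kB" % m["MemTotal"])
    print("Free RAM:  %d kB" % m["MemFree"])
    print("Swap free: %d kB" % m.get("SwapFree", 0))

If MemFree stays near zero while the machine is busy, the out-of-memory explanation above becomes much more likely.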

Multi-touch: what's behind the hype?

Why is multi-touch so hyped? I love the touch screen on my Sony Ericsson W950, but that's only because it enables handwriting recognition and can do things akin to the mouse. So far the only intuitive application I've seen is browsing photos and being able to zoom, rotate etc. with your fingers. I can see that being quite useful in areas like desktop publishing and graphic design, but in other uses (web browsing, checking email, the basic uses) I don't see how it is any better than a normal touch screen or the traditional keyboard/mouse approach. Most of the non-zooming/rotating things in the Perceptive Pixel demonstration video can already be done with normal touch screens, and to me the panning/zooming UI is way cooler than the multi-touch capability (although I also fail to see how that's useful in normal use, but at least it's cool). If anything, people seem to be amazed more by the fluid and organic animations, like something powered by Clutter (computing) or Core Animation, as evidenced by the whole iPhone hype. None of the demonstrations that I've seen deal with problems like gorilla arm if the screen is vertical, or neck fatigue if it's angled/horizontal. Can anyone enlighten me as to how it is the UI of the future? --antilivedT | C | G 11:02, 2 August 2008 (UTC)[reply]

Multi-touch is a breakthrough because it lets you touch more than one place on the screen. That's all. Right now, UI (including common mouse-driven UI) is single-touch. So, there isn't much existing UI for multi-touch. If it works well, it may be adopted over time, or it may be one of those things that gets lost. Who knows - we don't foretell the future here. -- kainaw 12:41, 2 August 2008 (UTC)[reply]
Translation for non-computer people: UI = "user interface". StuRat (talk) 13:44, 2 August 2008 (UTC)[reply]
I tend to agree that it doesn't seem all that useful, especially in a hand-held device where, presumably, one hand is busy holding the device and thus only one hand can be used, anyway. The added complexity might make it more likely to fail, too. I'm thinking it will be like the Segway Scooter, kind of fun but, in practical terms, not all that much better than what it aimed to replace (the bicycle), just more expensive. StuRat (talk) 13:44, 2 August 2008 (UTC)[reply]
Actually, I find the multi-touch pretty useful on handheld devices, *because* I only have one hand. Scroll around with one finger, use two fingers to zoom in and out, it doesn't require switching tools or holding down a modifier key. For the laptop, it's nice to have a multi-touch trackpad because you can use one hand to point and scroll without needing to use the keyboard. -- JSBillings 14:36, 2 August 2008 (UTC)[reply]
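For the curious, the arithmetic that two touch points buy you (and one point can't) is small but genuinely new: track the line segment between the two fingers and compare it to the segment at the start of the gesture. This is only an illustrative sketch; the function names are made up, not any real toolkit's API.

    import math

    # Derive zoom and rotation from a two-finger pinch gesture by
    # comparing the start and current finger positions.
    def pinch_transform(p1_start, p2_start, p1_now, p2_now):
        def delta(a, b):
            return (b[0] - a[0], b[1] - a[1])
        d0 = delta(p1_start, p2_start)
        d1 = delta(p1_now, p2_now)
        scale = math.hypot(d1[0], d1[1]) / math.hypot(d0[0], d0[1])
        rotation = math.atan2(d1[1], d1[0]) - math.atan2(d0[1], d0[0])
        return scale, rotation   # zoom factor, rotation in radians

    # Fingers move apart and twist slightly:
    print(pinch_transform((0, 0), (100, 0), (0, 0), (190, 20)))
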
Isn't it great because it's like Hollywood science (i.e. the same tech as they have on Star Trek), rather than having any (major) user performance benefits..? 87.102.86.73 (talk) 13:52, 2 August 2008 (UTC)[reply]
Star Trek? Don't you mean Minority Report? --70.167.58.6 (talk) 22:09, 3 August 2008 (UTC)[reply]

Just like the mouse, interfaces need to be designed to take advantage of multitouch. At the moment I don't know of any good uses. But I leave those ideas to other people. --mboverload@ 19:17, 2 August 2008 (UTC)[reply]

We forget really how much current GUIs suck because we've become inured to the horror of them (and we're glad they're marginally less horrid than their predecessors). Multitouch is the next step on a gradient of progressively better touch interfaces. Imagine scrolling by just grabbing the page like it was a sheet of silk and moving it to where you want (rather than the absurdity of the scrollbar); imagine cut and paste by circling the text you want and then tearing it out of the page; imagine playing a desktop game with your kids where you throw objects and catch them. Anyone who plays a musical instrument (particularly something "touchy" like a violin) knows that there's a great depth of skill and dexterity to using it, and people have been happy (ish) to put up with the undoubted complexity of beginning to play the violin because of the great interaction and control you get when you're a practiced user. The current keyboard/mouse/touchscreen interface has, by comparison, a very shallow learning curve, but one that reaches only a low level of capability. Touchscreen (for some applications) is better than keyboard and mouse; multitouch is better than plain old touchscreen. What's needed is more subtle control yet - can you pinch something or press it or palm it around? Beyond that you really need some haptic feedback (a technology that, for consumers at least, is mostly useless and gimmicky right now). Why can't I riffle through a stack of mp3s the way I can with a stack of vinyl records? Why aren't the files I last used warmer than ones I never touch? Shouldn't the Vulcan nerve pinch pinch me back? Mboverload is dead on the money about interfaces needing to adapt to new input and output technologies. This is rarely as planned as might be hoped, and often things get used in ways very different from how anyone had intended. I can't imagine how Doug Engelbart could have imagined people interacting with 3D worlds (Half-Life 2 etc.) using his mouse and WASD (and not those fancy 3D trackball devices and datagloves that all the VR guys were convinced we'd all need). In a couple of decades we'll look back on our current modes of interaction and think our former selves to have been downright disabled. Right now touch and multitouch work best in vertical segments like kiosks and in a bunch of gimmicky ways, but it's another step to our shedding the surly bonds of WIMP. -- Finlay McWalter | Talk 20:12, 2 August 2008 (UTC)[reply]
Except most of the things you said are already doable with keyboard/mouse. Google Maps uses your "grabbing the page" idea to scroll; you can select a section of text and drag it somewhere else to copy it (at least on Linux, anyway), akin to your tearing-out-of-the-page idea. It still comes back to my original gripe: all these new innovations in UI design do NOT require multi-touch. Those Perceptive Pixel guys already modified the UI of a few applications in addition to writing their own applications to take advantage of multi-touch, and yet it's still limited to the whole rotate/enlarge thing. Is that all there is to multi-touch: zooming and rotating with 2 fingers? --antilivedT | C | G 00:25, 3 August 2008 (UTC)[reply]
There was a much older peripheral device which seemed even better, but was alas discontinued. It was a board with 8 dials used on graphics workstations (affectionately called the "nipple board" since it looked like a pig's nipples). One dial would zoom, one would pan right and left, one would pan up and down, one would rotate in the plane of the screen, two more would rotate out of the plane vertically and horizontally. One was used for the front Z-clipping plane and one for the rear Z-clipping plane. With this combo you could manipulate a 3D object far more easily than with current methods or with a multi-touch keypad. A dial has the advantage of being able to turn it quickly or slowly (with precision), depending on the need. By contrast, volume controls on TVs that only have "up volume" and "down volume" buttons fail in both respects: you can't change the volume as quickly as with a dial, or as precisely. StuRat (talk) 03:13, 3 August 2008 (UTC)[reply]
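To make the mapping concrete, here is a hypothetical sketch of how such a dial box might drive view parameters; the dial assignments follow StuRat's description, but the class and names are invented for illustration.

    # Hypothetical dial-box handler: eight dials, one view parameter each.
    class DialBoxView:
        def __init__(self):
            self.zoom, self.pan_x, self.pan_y = 1.0, 0.0, 0.0
            self.rot_z, self.rot_x, self.rot_y = 0.0, 0.0, 0.0
            self.clip_near, self.clip_far = 0.1, 100.0

        def on_dial(self, dial, delta):
            # delta is how far the dial turned: a fast spin gives a big
            # delta, a slow nudge a tiny one - the precision advantage
            # over plain up/down buttons.
            if dial == 0:   self.zoom *= 1.0 + 0.01 * delta
            elif dial == 1: self.pan_x += delta
            elif dial == 2: self.pan_y += delta
            elif dial == 3: self.rot_z += delta      # in-plane rotation
            elif dial == 4: self.rot_x += delta      # out-of-plane, vertical
            elif dial == 5: self.rot_y += delta      # out-of-plane, horizontal
            elif dial == 6: self.clip_near += delta  # front Z-clipping plane
            elif dial == 7: self.clip_far += delta   # rear Z-clipping plane
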
Yah, I always wondered why there are not more dials on controls. I have a volume wheel on my Logitech G15 and I love it. When music is blaring at you because you left it on 100%, you can't wait for some up or down control to change it fast enough. --mboverload@ 04:04, 3 August 2008 (UTC)[reply]

thumb drive adding to RAM in Vista

I've converted an XP PC with 0.5 GB RAM to Vista. Having read that Vista was capable of utilizing the storage space on a thumb drive to supplement the RAM, I plugged in a 2 GB thumb drive, but the system reports only 480 MB of RAM. First, is what I read true? Second, if so, how can I make Vista recognize and utilize the thumb drive as RAM? --Halcatalyst (talk) 12:51, 2 August 2008 (UTC)[reply]

I'm a bit skeptical that you can use it as RAM, or would want to, since it's likely to be slower, but perhaps it could be used for paging space. StuRat (talk) 13:31, 2 August 2008 (UTC)[reply]
What you've read about is ReadyBoost, which augments the disk cache, not RAM, with thumb drives. -- Finlay McWalter | Talk 13:43, 2 August 2008 (UTC)[reply]
Vista (and many other OSs, including earlier versions of Windows) can use disk space as Virtual memory. However, you really don't want to use a flash drive for VM, because of the slow speed of the device (compared to your hard drive) and the limited number of writes flash media can take. Using a thumb drive in this manner is just a bad idea; I don't know where you'd get that idea. -- JSBillings 14:40, 2 August 2008 (UTC)[reply]
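If you want to see the speed difference for yourself, a crude sketch like the one below will do; the paths are just examples (point one at a file on the hard drive and one at the thumb drive), and fsync is used so each write actually reaches the device rather than the cache.

    import os, time

    # Crude write-throughput test: time many small synced writes.
    def time_writes(path, count=200, size=4096):
        data = os.urandom(size)
        start = time.time()
        f = open(path, "wb")
        for _ in range(count):
            f.write(data)
            f.flush()
            os.fsync(f.fileno())   # push it to the device, not the cache
        f.close()
        os.remove(path)
        return count * size / 1024.0 / (time.time() - start)  # kB/s

    print("Hard drive:  %.0f kB/s" % time_writes("C:/bench.tmp"))
    print("Thumb drive: %.0f kB/s" % time_writes("E:/bench.tmp"))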

TFT Vs LCD

Do you use a TFT or an ordinary LCD? Does it matter? Is an ordinary LCD very difficult to use and read compared to TFT? —Preceding unsigned comment added by 59.92.109.243 (talk) 17:06, 2 August 2008 (UTC)[reply]

[Image: an older type of LCD display]
Did you mean as a monitor? I think all LCD monitors use some form of TFT technology. 87.102.86.73 (talk) 18:08, 2 August 2008 (UTC)[reply]
See Thin film transistor liquid crystal display. I would say that these screens are easier to read than the original type of LCD (see image) - but this may be because of the higher resolution and the backlight.
Perhaps you meant something like the display on a Z88 (see article for image) - I think that may not be TFT - but it is quite easy to read and effective. 87.102.86.73 (talk) 18:16, 2 August 2008 (UTC)[reply]
Is anyone else aware of non-TFT LCD displays nowadays? 87.102.86.73 (talk) 18:12, 2 August 2008 (UTC)[reply]

Programs for puppy linux

I want to install some programs in Puppy Linux, but I've found only Fedora, Debian, Ubuntu, Gentoo and SuSE versions. Which 'flavor' is most appropriate for me? —Preceding unsigned comment added by Mr.K. (talkcontribs) 18:41, 2 August 2008 (UTC)[reply]

I could be wrong, but I think that each flavor of Linux can use another flavor's programs... I am not too sure. But I believe it doesn't matter. RgoodermoteNot an admin  18:53, 2 August 2008 (UTC)[reply]
Why then are there various versions of Gimp? —Preceding unsigned comment added by Mr.K. (talkcontribs) 18:59, 2 August 2008 (UTC)[reply]

Puppy uses .pet or .pup packages and its package manager is PETget which will download & install software from the repos. Link1 Link2 Link3 -Abhishek (talk) 20:12, 2 August 2008 (UTC)[reply]

IIRC Puppy Linux is Debian based. However, to install Debian packages you have to do something to the package to make a .pet or .pup which you can then install. Astronaut (talk) 22:19, 2 August 2008 (UTC)[reply]
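For the curious: Puppy's documentation describes a .pet as essentially a gzipped tarball with an MD5 checksum appended to the end. Assuming that description is accurate (Puppy's own tools are the supported route; this is only a sketch of the idea), building one from a directory tree would look roughly like this:

    import hashlib, subprocess

    # Sketch only: tar up the package tree, then append the tarball's
    # md5 digest (32 ASCII characters) to form the .pet file.
    pkg = "mypackage-1.0"   # directory laid out like the target filesystem
    subprocess.check_call(["tar", "czf", pkg + ".tar.gz", pkg])

    data = open(pkg + ".tar.gz", "rb").read()
    md5 = hashlib.md5(data).hexdigest()

    out = open(pkg + ".pet", "wb")
    out.write(data + md5.encode())
    out.close()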

Intel - risc

It's an often-repeated fact (or factoid) on internet discussion forums etc. that modern x86 processors convert instructions into RISC-like micro-ops prior to execution.

I'm wondering if this is true, especially with reference to the Intel Atom? 87.102.86.73 (talk) 18:49, 2 August 2008 (UTC)[reply]

That's true (I don't know specifically about Atom, but most other Intel x86 architecture processors have done this for at least several generations). Our micro-operation article is rather sad, but this Intel document says (of Core) "In modern mainstream processors, x86 program instructions (macro-ops) are broken down into small pieces, called micro-ops, before being [sent] down the processor pipeline to be processed." -- Finlay McWalter | Talk 19:36, 2 August 2008 (UTC)[reply]
Something more concrete: Jon Stokes' Inside the Machine (ISBN 978-1-59327-104-6) describes the Intel P6 architecture (that's more than a decade old, the Pentium Pro). He says P6 pulled 16-byte-wide chunks of IA32 instructions from the L1 instruction cache into three parallel decoders, which push the resulting micro-ops (up to 6 micro-ops per cycle) onto a micro-op queue. Those then go to the re-order buffer and then off to be executed. The hard bit is the complex decoder, which handles the (rather rare) complex IA32 instructions - these are processed by microcode which emits a sequence of simple micro-ops. So the answer to your question is that simple instructions (loads/stores/arithmetic/logic) get translated into micro-ops (where probably a given instruction turns into 1 or 2 ops), while complex instructions are expanded into a longer sequence of micro-ops. Stokes quotes a source that estimates "40% of P6's transistor budget is spent on x86 legacy support" (up from 30% for Pentium, where everything was either done in gates or microcode). -- Finlay McWalter | Talk 19:50, 2 August 2008 (UTC)[reply]
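A toy illustration of the split Stokes describes (the real micro-op encodings are proprietary and undocumented, so this is purely conceptual): a CISC-style read-modify-write instruction cracks into simple load/execute/store steps, while a register-to-register instruction passes through nearly 1:1.

    # Toy decoder: break a CISC-style instruction into micro-ops.
    def decode(insn):
        op, dst, src = insn
        if op == "ADD" and dst.startswith("["):    # add register to memory
            addr = dst.strip("[]")
            return [("LOAD",  "tmp0", addr),       # micro-op 1: read memory
                    ("ADD",   "tmp0", src),        # micro-op 2: do the sum
                    ("STORE", addr,   "tmp0")]     # micro-op 3: write back
        return [insn]                              # simple case: ~1 micro-op

    print(decode(("ADD", "[0x1000]", "eax")))
    print(decode(("ADD", "ebx", "eax")))
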
It should be noted that micro-ops are not quite equivalent to a RISC instruction set because ultimately an x86 processor has to support all the complications of the x86 architecture. In particular, x86 micro-ops have to be able to update one or more flag bits in the flag register, which means that many micro-ops take the old state of the flag register as an additional input and produce a new state of the flag register as an output. RISC instruction sets with a flag register (such as PPC) simplify the implementation by always updating all the flags at the same time. Another complication is that the memory semantics of x86 micro-ops must incorporate the current values of the segment registers. 84.239.160.166 (talk) 16:48, 3 August 2008 (UTC)[reply]
On the same architecture, Hennessy & Patterson's Computer Architecture: A Quantitative Approach (ISBN 1558605967) says "if an IA32 instruction requires more than 4 uops [micro-ops] it is implemented by a microcoded sequence that generates the necessary uops in multiple clock cycles". -- Finlay McWalter | Talk 20:25, 2 August 2008 (UTC)[reply]
Things haven't changed much since then. Core 2 is just an evolutionary improvement on the Pentium Pro design. AMD's processors work in essentially the same way, as do modern RISC processors. Itanium was a radically different design which failed spectacularly (relative to expectations, anyway). -- BenRG (talk) 01:39, 3 August 2008 (UTC)[reply]
With Atom, however, it doesn't seem to work quite the same way. This article says they don't bother with micro-ops for many instructions (they just execute them directly). There's more on that (some more concrete estimates) in the Intel Atom#Architecture section, but they don't have supporting sources. -- Finlay McWalter | Talk 20:33, 2 August 2008 (UTC)[reply]
The reason is that whereas P6 etc. are built for speed, Atom is built to consume less power. P6 performs out-of-order execution on those micro-ops, to try to get as much going simultaneously as possible. Atom is strictly in-order, so it doesn't have most of that back-end pipeline architecture. All that gubbins needs space on the die and power to run, and it seems they've binned all that for Atom in an effort to keep its power usage down. In a lot of ways Atom seems architecturally rather like a Pentium, albeit built with modern processes. -- Finlay McWalter | Talk 20:41, 2 August 2008 (UTC)[reply]
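A deliberately crude model shows what that out-of-order machinery buys when an instruction stalls (nothing here resembles the real pipeline; it is just the scheduling idea):

    # Four instructions: (name, inputs, latency). "A" is a slow load.
    prog = [("A", [], 3), ("B", ["A"], 1), ("C", [], 1), ("D", ["C"], 1)]

    def run(window):
        # Issue at most one instruction per cycle, picking from the
        # first `window` waiting instructions whose inputs are ready.
        done, cycle, pending = {}, 0, list(prog)
        while pending:
            cycle += 1
            for insn in pending[:window]:
                name, deps, lat = insn
                if all(done.get(d, 10**9) < cycle for d in deps):
                    done[name] = cycle + lat - 1  # cycle its result appears
                    pending.remove(insn)
                    break
        return max(done.values())

    print("in-order (window=1):    ", run(1), "cycles")  # 6
    print("out-of-order (window=4):", run(4), "cycles")  # 4

In-order issue sits stalled behind the slow load; the out-of-order version slips the independent instructions in underneath it.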

That's what I was wondering. So it could be quite an old design built on a new process, perhaps. 87.102.86.73 (talk) 21:16, 2 August 2008 (UTC)[reply]

Philosophically its architecture is something of a throwback, but it's clearly an entirely new design, and it does a bunch of clever things to do with power saving that Intel IA mobile/desktop/server microprocessors haven't done (but that other chips aimed at the embedded space, like StrongARM and DragonBall, have). Atom seems to be a sign that smart people in Intel's embedded-space product marketing have prevailed in the perpetual "can't we just downscale a desktop chip into the embedded space" war. -- Finlay McWalter | Talk 21:22, 2 August 2008 (UTC)[reply]

Thanks. I think I've found out as much as I can without getting involved in serious industrial espionage... 87.102.86.73 (talk) 22:14, 2 August 2008 (UTC)[reply]

Mouse clicking?

Is it possible to ruin a laptop just by clicking the mouse too much? (Superdupermaryaaam (talk) 20:42, 2 August 2008 (UTC))[reply]

If by "mouse" buttons you mean those buttons beside the touchpad on most laptops, that work the same as a real mouse button, then it seems maybe yes. They don't ever seem to be all that well constructed, and its not uncommon to see older laptops with those buttons broken or snapped off. To some extent maybe that's because that bit gets damaged when something nasty happens to the laptop. Of course you can always just plug in a USB mouse: breaking the button isn't the end of the world. -- Finlay McWalter | Talk 20:47, 2 August 2008 (UTC)[reply]

That is very, very unlikely to happen. The mouse is built to withstand a lot of pressure and impact. The only way to truly break your computer by clicking is by using a hammer. I think somebody just told you this because they didn't want you to use their laptop! (Josh389 (talk) 20:50, 2 August 2008 (UTC))[reply]

Or, someone was tired of hearing "click... click click... click... click... click click... click... click... click..." -- kainaw 21:28, 2 August 2008 (UTC)[reply]

I work with laptops all day. Short of hitting the laptop with a mallet, there is nothing you can do to the track pad or keyboard to damage it. The only ways to break a computer are:

  • Liquid inside
  • Blunt force that cracks internal components
  • Force that breaks the LCD screen
  • High amounts of pressure that bend metal together and short out the computer

Short of any of these things you do not need to worry. At all. Just enjoy your computer. It will take care of itself. --mboverload@ 22:11, 2 August 2008 (UTC)[reply]

I had a laptop once where the mouse couldn't "click" anymore (it did still work, but there was no tactile feedback and it would occasionally click by accident) - annoying, but nothing that couldn't be bypassed by disabling it and using an external mouse. To someone who's not resourceful enough to think of this, something like that could easily make the computer seem "broken". --Random832 (contribs) 22:12, 2 August 2008 (UTC)[reply]

Good observation, Random. Even then, with fine motor skills, good lighting, and a small tool, most keyboard/mouse problems can be fixed. I once used a very tiny piece of rubber cut with an X-Acto knife when the real thing fell out. It's a different feel, but it didn't cost $100 to replace! --mboverload@ 22:43, 2 August 2008 (UTC)[reply]

Installing graphics drivers w/o uninstalling

What will happen if you install graphics cards drivers without uninstalling the older version? Both ATI and Nvidia recommend uninstalling old ones first, but I'm just wondering what will happen if you don't. Lowered framerate? Graphics glitches? 67.169.56.232 (talk) 20:52, 2 August 2008 (UTC)[reply]

A (windows) graphic device driver is, these days, a big complicated beast with lots of files (some that do stuff, some for configuration). Between releases of the driver (particularly if you're upgrading from quite an old one to brand new one) they move things around and rename stuff. If you somehow ended up with files pertaining to the old driver left around when you're trying to run the new one, it might inadvertently pick up an old DLL or config file or registry entry or something, producing unexpected (and uniformly unpleasant) results (crashes, loss of features, or total inability to work at all). -- Finlay McWalter | Talk 21:02, 2 August 2008 (UTC)[reply]

How does a computer work?

I know the general ideas and stuff, and I'm not a computer n00b, but I never found out how a computer really works. What are the most basic instructions given to a processor, and how does the processor perform them?--96.227.106.168 (talk) 21:48, 2 August 2008 (UTC)[reply]

Have you read computer processor? -- kainaw 22:01, 2 August 2008 (UTC)[reply]

The most basic instructions are called machine code. The processor fetches values stored in memory (pointed to by the program counter), which typically encode simple commands such as:

  • add two numbers
  • store a number at a point in memory
  • compare two numbers and set a special 'flag' if one is greater than the other
  • move the program counter to a new position (a jump)
  • load the value stored at a position in memory into a Processor_register

Typically a computer uses 'registers' to store values locally or temporarily (most processors have 8 to 64 registers).
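A toy interpreter makes the fetch-decode-execute cycle above concrete; the instruction names are invented for illustration, not any real processor's opcodes.

    # Toy machine: a program counter, two registers, a flag, and "memory"
    # holding both instructions and data.
    memory = {0: ("LOAD", "r0", 100),    # r0 = mem[100]
              1: ("LOAD", "r1", 101),    # r1 = mem[101]
              2: ("ADD", "r0", "r1"),    # r0 = r0 + r1
              3: ("STORE", 102, "r0"),   # mem[102] = r0
              4: ("HALT",),
              100: 2, 101: 3}

    regs, flag, pc = {"r0": 0, "r1": 0}, False, 0
    while True:
        insn = memory[pc]                # fetch the instruction at pc...
        pc += 1                          # ...and advance the counter
        op = insn[0]
        if op == "LOAD":    regs[insn[1]] = memory[insn[2]]
        elif op == "STORE": memory[insn[1]] = regs[insn[2]]
        elif op == "ADD":   regs[insn[1]] += regs[insn[2]]
        elif op == "CMP":   flag = regs[insn[1]] > regs[insn[2]]
        elif op == "JUMP":  pc = insn[1] # move the program counter
        elif op == "HALT":  break

    print(memory[102])                   # prints 5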

The individual instructions are implemented using logic gates; numbers are represented in binary, and maths is done on the binary representations of those numbers. (If you want more information on this please ask.)

A good introduction might be to read about hardware adders; also see Adder (electronics). —Preceding unsigned comment added by 87.102.86.73 (talk) 22:13, 2 August 2008 (UTC)[reply]
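The adder idea in miniature: addition built out of nothing but gate operations, a full adder made from XOR/AND/OR and chained into a ripple-carry adder (a sketch of the textbook circuit, simulated in software):

    # Full adder from logic gates, chained into a ripple-carry adder.
    def full_adder(a, b, carry_in):
        s = a ^ b ^ carry_in                         # XOR gates
        carry_out = (a & b) | (carry_in & (a ^ b))   # AND and OR gates
        return s, carry_out

    def add(x, y, bits=8):
        carry, result = 0, 0
        for i in range(bits):                        # one full adder per bit
            s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
            result |= s << i
        return result

    print(add(23, 42))   # 65, computed purely with gate logic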

The page Zilog Z80 gives some simple examples of this. Unfortunately Wikipedia doesn't seem to have a page that represents a good introduction to this topic for a starter (a lot of the articles assume prior knowledge).

Maybe someone else knows of a good introduction? 87.102.86.73 (talk) 22:10, 2 August 2008 (UTC)[reply]

Danny Hillis' The Pattern on the Stone: The Simple Ideas That Make Computers Work -- Finlay McWalter | Talk 22:12, 2 August 2008 (UTC)[reply]
A list of instructions in most 32-bit machines is on the x86 instruction set page. The How computers work section of the Computer page is also worth looking at. --h2g2bob (talk) 03:16, 3 August 2008 (UTC)[reply]

I've found The Unix and Internet Fundamentals HOWTO to be a clear introduction, even though I have experience mostly with Windows --Iamunknown 04:14, 3 August 2008 (UTC)[reply]

When studying this topic, it is good to get into the mindset of changing level of abstraction. In the lowest level, volts and amps float around through physical pathways that are carved into a piece of silicon by photochemical manufacturing processes. These electrical properties represent logical bits, cumulatively forming a very complex finite state machine with a variety of inputs and outputs linked together through a fairly standardized connection architecture. Each interconnected device translates those digital representations to its own internal "meaning" (for example, a VGA controller will convert a bunch of inputted data into a bunch of analog timing signals for controlling a monitor; a keyboard converts physical button-presses into a sequence of voltage pulses which can be understood by another digital circuit). Since we started making computers, we have built lots of conventions for these types of information translations, culminating in the user interface which you are familiar with today. One of the earliest transformations is the abstraction of software, or instructions to the machine, as a modifiable program code; subsequently, other software was designed to translate human-readable text into machine-instructions to make programming a machine easier. As a desktop user, you may write your own software or use other people's designs, but internally, you are using this extremely complex set of electronic circuits to perform a series of conceptual and physical translations from high levels of abstraction into low-level electrical impulses with very definable behaviors.
This overview was necessarily broad. If you substitute your original question with (for example) "How does my Dell Inspiron 5150 work?", you can start looking at specific electronic circuits instead of vague topics. You can obtain datasheets for these chips, learn what wires need to be connected to each chip, how fast the signals need to be clocked, and so forth. Some of these data sheets are available to you right now, and some require that you work at a large corporation, because the information is confidential and expensive to obtain; but there are a variety of open-source digital circuit designs which together can be used to build an entire machine. Nimur (talk) 19:09, 3 August 2008 (UTC)[reply]

Icons and Extensions

I have enabled file extensions on my computer (Windows XP Home Edition V 2002 SP 2) and can see them on the desktop. I have some .txt's and some .cpp's, among other things, on my desktop. I see that links to programs don't show a file extension, and it does not matter what extension you give one: it's still a link. Well, I haven't renamed these files or anything, but today when I got on, I noticed there were the little black arrows in white squares (like those used to indicate a link) at the bottoms of some of the .cpp's and .txt's. I checked the properties and they were not links, and it gave no option to change the icon. Can anyone tell me what's going on here? (Also, on a possibly unrelated note, I have disabled my AIM (using msconfig) from opening on start-up and prompting me to sign in. Sometimes, though, when I log in, it is turned back on. Can someone please tell me what's going on with this as well and how to fix it?)
Thanks, Ζρς ι'β' ¡hábleme! 23:47, 2 August 2008 (UTC)[reply]

Quick and dirty idea (in a rush at the moment). Download TweakUI from Microsoft and go to repair -> icons. --mboverload@ 02:30, 3 August 2008 (UTC)[reply]
I don't have a solution to your problems (except to second mboverload's suggestion) but I wanted to point out that Windows shortcuts do have an extension, .lnk, but it's never shown by Explorer even when you tell it to show all file extensions. -- BenRG (talk) 16:17, 3 August 2008 (UTC)[reply]
Oh yeah, I knew about the .lnk, but I didn't know Explorer wouldn't show it. Why is this? Ζρς ι'β' ¡hábleme! 17:41, 3 August 2008 (UTC)[reply]
Just how Windows works. --mboverload@ 19:41, 3 August 2008 (UTC)[reply]
Windows stores the file extension information (icon, program, etc) in the HKCR (HKEY_CLASSES_ROOT) part of the registry. You can see it with regedit.exe. The .lnk extension points to lnkfile, which has the NeverShowExt value set. --grawity 16:07, 4 August 2008 (UTC)[reply]
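You can see the same thing programmatically; here is a minimal sketch using Python's standard _winreg module (Windows only) that follows the chain grawity describes:

    import _winreg as winreg   # the module is named "winreg" in Python 3

    # The default value of HKCR\.lnk names the file class...
    cls = winreg.QueryValue(winreg.HKEY_CLASSES_ROOT, ".lnk")
    print(cls)   # -> "lnkfile"

    # ...and that class carries the NeverShowExt marker value.
    key = winreg.OpenKey(winreg.HKEY_CLASSES_ROOT, cls)
    try:
        winreg.QueryValueEx(key, "NeverShowExt")
        print("Explorer always hides this extension")
    except WindowsError:
        print("extension shown as normal")
    winreg.CloseKey(key)
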
Thank you Grawity!!!!!!!!!!!!!!!!!!!--mboverload@ 19:09, 5 August 2008 (UTC)[reply]