Wikipedia:Reference desk/Archives/Computing/2009 September 7

Computing desk
Welcome to the Wikipedia Computing Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


September 7

Fonts

I have multiple computers in my home, and I have recently used one (not the one from which I am typing now) to view a user's signature. I thought it nice, and decided to copy some of the code for my own. Unfortunately, it seems the computer I am on now does not have the font I want installed, so all I see when I look at that user's signature on this computer is regular text. Could someone tell me how I can install, download, or otherwise acquire the font "Monotype Corsiva"? Intelligentsium 00:03, 7 September 2009 (UTC)[reply]


P.S. - I have a feeling my new signature looks a lot better to those of you whose computers can show Monotype Corsiva - to me it only looks like italicized text. Intelligentsium 00:03, 7 September 2009 (UTC)[reply]

You mean that it looks like this? I have the font of which you speak, and it looks quite similar to that font. Sorry, but I can't help you get it; the only time that I tried to download a font, it failed without my understanding why. Nyttend (talk) 00:53, 7 September 2009 (UTC)[reply]
Going off topic but it's a really bad idea to rely on fonts for your signature. If you want a proper signature, include an image; if you want to have your name printed clearly - use a normal font. Script-font-signatures are generic, unreliable, and very 90's :p --antilivedT | C | G 01:23, 7 September 2009 (UTC)[reply]
To install a font, download the font file(s) (typically *.ttf files), and then copy them to the C:\Windows\Fonts folder. You have to copy them to the Fonts folder from another, uncompressed folder on the computer. If I remember correctly, it is, for some reason, not possible to copy them to the Fonts folder directly from a compressed folder (*.zip) in (at least) Windows Vista. You can download loads of highly interesting fonts for free (see "free fonts"), but I believe that Monotype Corsiva is not free, but included in either Microsoft Windows (Vista) or Microsoft Office (2007). --Andreas Rejbrand (talk) 09:21, 7 September 2009 (UTC)[reply]
I've got Monotype Corsiva and the signature doesn't display in it... when I paste the signature into Word it says it's in "Decorative". If you delete the <font face="Decorative"> and the matching </font> you end up with Intelligentsium (the code is <font style="font-family:Monotype Corsiva; font-size:15px;"><i>[[User:Intelligentsium|<span style="color:DarkGreen">Intelligent</span>]]<b>[[User_talk:Intelligentsium|<span style="color:Black">sium</span>]]</b></i></font> ) , which I'm seeing in something that looks like Monotype Corsiva. Re downloading the font, http://www.newfonts.net/index.php?pa=show_font&id=130 claims to be a free-to-download Monotype Corsiva and the filename looks plausible - I can't get at my fonts folder to check at the moment. AJHW (talk) 11:23, 9 September 2009 (UTC)[reply]

Mac on PC

How do I do it? I heard it's called Hackintosh, what exactly do I do —Preceding unsigned comment added by 82.43.88.99 (talk) 11:28, 7 September 2009 (UTC)[reply]

That article links you to resources, such as they are. Apple claims that doing so violates their EULA and that it's illegal, and they make strenuous legal and technical efforts to keep breaking Hackintoshes. -- Finlay McWalterTalk 11:57, 7 September 2009 (UTC)[reply]
And it depends on your hardware; hacked distros such as iDeneb are a bit easier to install, but still your mileage may vary, as it may not work as well as it should. And obtaining an ISO image of it may be illegal in most jurisdictions, too. Blake Gripling (talk) 12:02, 7 September 2009 (UTC)[reply]

int in C++

What are the benefits of having a type "int" in C++ that can be a short, a long, or even a long long? Should a programmer always use definite types, such as short and long instead? -- kainaw 11:39, 7 September 2009 (UTC)[reply]

I guess it made sense to Ken Thompson and Dennis Ritchie and those guys back in the day, to have a type that was the size of the machine word, although in practice it's pretty hard to think of a programming case where having that adaptive int makes a given task easier to program than using a fixed-size type. My only guess is that, while modern compilers can treat values from 8 to 64 bits with facility, things were harder and less efficient with DMR's first C compiler (and they couldn't afford the space or time to have the compiler invisibly insert a bunch of instructions to implement arithmetic ops on a size the architecture didn't natively support). Without exception, every single serious systems programming job I've ever worked on has used uint32_t (et al) or something very similar. -- Finlay McWalterTalk 11:55, 7 September 2009 (UTC)[reply]
What I mean by that is that, particularly with a rather basic compiler and a very slow and instruction poor CPU,
       int x;
       for (x=0; x<10000; x++) {
         *p=x + x/2 + x%5;
       }
would be much faster than if the type of x was short or long, because all those ops on a non-native-wordsize x would require lots of masking and shifting to get them to work. More modern CPUs have a rich enough instruction set that this isn't a major issue right now (except for long long). -- Finlay McWalterTalk 12:07, 7 September 2009 (UTC)[reply]
In addition, C99's stdint.h has some nice types that encode the idea of minimum-width and fastest-minimum-width integers, so you'd probably code the above example with the type of x being uint_fast16_t, so you get the advantage of using the CPU's native int if that's appropriate, but without prescribing a possibly suboptimal type. -- Finlay McWalterTalk 12:36, 7 September 2009 (UTC)[reply]
I understood the question, and so did Finlay, who answered it well, but make no mistake: an int can never "be" a short or a long. Even if int has the same representation as another integer type, the types remain distinct. It is also impossible for "int" to provide as much storage as "long long", unless "long" and "long long" are the same size, because an int can't be larger than a long. My advice is not to worry too much about short and long, and just stick with int, unless there are specific reasons to worry about those things. decltype (talk) 13:05, 7 September 2009 (UTC)[reply]
Correct. I was referring to bit-size, not type. I do not see why a compiler couldn't use 8 bytes (long long) for an int and still be ANSI C/C++ compliant. I know that "long long" hasn't been completely adopted, but 64-bit machines are common enough that an optimized system could opt for 8-byte ints. -- kainaw 14:13, 7 September 2009 (UTC)[reply]
When working with hardware acceleration, using a generic name for the "standard" floating point or integer representation allows for easy portability. In reference to your original question, this means that I can compile the same code for two different machines, (e.g. a 32-bit and a 64-bit system), and get all the benefits/hassles of immediate conversion to the new bit size. For complex applications, this would be undesirable; but for numerical algorithm kernel code, this is extremely helpful in improving portability. Nimur (talk) 15:06, 7 September 2009 (UTC)[reply]
True. My point was that on such a system, sizeof(long) would also have to be exactly 8. decltype (talk) 06:42, 8 September 2009 (UTC)[reply]
The original specification for C said that 'int' was the most efficient word-length for the hardware and only guaranteed that it was no smaller than a 'short' and no longer than a 'long'. Small microcontrollers and ancient C compilers for things like the 6502, 6800, Z80 and 8080 families often used a 16-bit 'int', but so many programmers of larger-scale computers assumed that an 'int' was 32 bits that there were severe portability problems with these systems. Nowadays, some microcontrollers still use 16-bit ints because it's rare to be able to port anything from (say) a PC with a couple of gigabytes of main memory to a microcontroller with 512 bytes! However, even the compilers for those systems often have a compile option to always compile 'int' as 32 bit. Hence we now have a 'de-facto' standard that char is 8 bits, short is 16, int is 32, long is either 32 or 64 depending on whether the underlying hardware is 32 bit or 64, and long long is 64 (if it's implemented). I would be rather surprised if any compilers ever broke that 'standard' in the future because it's so widely assumed to be true - even when the language specification doesn't guarantee it.
The reason not to always use the maximum word length is performance. Even though a 64 bit machine may be able to add two 64 bit numbers just as fast as a 32 bit one - it's really quite rare to actually need 64 bits in most of the situations where integers occur - and the additional memory storage requirements and RAM bandwidth demands of using 64 bits tend to completely outweigh their usefulness. When I worked at L3 Simulation, we had over a million lines of code in the application I was in charge of - and there were only (I think) two places where we used 'long' - and that was for date-fetching functions as mandated by the Linux kernel API. We were careful to do that precisely because of 32/64 bit portability issues.
The major benefit of transitioning to 64 bit processors is nothing to do with integer calculations - but rather improved double precision floating point performance and the ability to address more than 2 GB of RAM without kernel slowdowns and other icky problems.
SteveBaker (talk) 15:43, 7 September 2009 (UTC)[reply]
Improved floating point performance? Are you talking about the extra XMM registers? But there are extra GPRs too and that improves integer performance... -- BenRG (talk) 19:45, 7 September 2009 (UTC)[reply]
The ability to perform fast bitwise operations on 64-bit integers has a tremendous impact on the performance of chess engines and other board game implementations that make heavy use of bitboards, such as Othello. decltype (talk) 06:42, 8 September 2009 (UTC)[reply]
Use the stdint.h ones instead, or for big projects set up your own include file which defines your own names for the types you want. That'll make it easier to port. By the way, you do sometimes get funny sizes like 18, 24 or 36 bits in embedded work. The other thing I'd warn against is assuming that int is the same size as a pointer; use intptr_t instead, same with things like file sizes. Dmcq (talk) 16:36, 7 September 2009 (UTC)[reply]
If you're not using C99, here's some portable C89 code to generate fixed size types. "Portable" meaning that it does not make any assumptions about the size of char, short, int or long. Similar definitions can be made for signed types. (Although personally I believe that if you want portable code, you should just use int, or long if the value might exceed 16 bits.) Mitch Ames (talk) 11:50, 8 September 2009 (UTC)[reply]
/*  Portable definitions of fixed size integer types */
#include <limits.h>

/* 8 bit */

#if UCHAR_MAX == 0xff
    typedef unsigned char u8;
#elif USHRT_MAX == 0xff
    typedef unsigned short u8;
#else
    #error No 8 bit type available.
#endif

/* 16 bit */

#if UINT_MAX == 0xffff
    typedef unsigned int u16;
#elif USHRT_MAX == 0xffff
    typedef unsigned short u16;
#elif UCHAR_MAX == 0xffff
    typedef unsigned char u16;
#elif ULONG_MAX == 0xffff
    typedef unsigned long u16;
#else
    #error No 16 bit type available.
#endif

/* 32 bit */

#if UINT_MAX == 0xffffffff
    typedef unsigned int u32;
#elif ULONG_MAX == 0xffffffff
    typedef unsigned long u32;
#elif USHRT_MAX == 0xffffffff
    typedef unsigned short u32;
#else
    #error No 32 bit type available.
#endif

DV handycam

Is it possible to capture video in .MOV, Windows .AVI, or .MPG files using a DV Handycam? --AquaticMonkey (talk) 13:38, 7 September 2009 (UTC)[reply]

When I import video to my Windows machine from my MiniDV camcorder, the video is in an AVI file. Not sure what codec. Tempshill (talk) 15:30, 7 September 2009 (UTC)[reply]
No, you can't capture in those formats, but you can convert DV files to them pretty easily with something like ffmpeg. --98.217.14.211 (talk) 20:04, 7 September 2009 (UTC)[reply]

Can Wikipedia please add a 'share' facility for Facebook and myspace users etc?

Hello,

I am not sure where to send my request for the idea of Facebook and myspace share buttons to be added to Wiki? Sorry if I have wasted your time, please can you forward on to the right place, or instruct me where I need to send this idea to?

Thank you

Why can't you simply use the hyperlink? It's the most general way to "share" a web document. Wikipedia's architecture has been conveniently designed so that its URLs are human-readable. I think the best place to ask about this sort of feature would be the WP:Village pump, where technical issues and policy suggestions are discussed. Nimur (talk) 15:09, 7 September 2009 (UTC)[reply]
I should warn you that I strongly suspect this suggestion will get a resounding no Nil Einne (talk) 19:39, 10 September 2009 (UTC)[reply]
I'm sure this was discussed on VPT recently when someone from Wikinews showed off n:Template:Social bookmarks which is now enabled there, but can't seem to find it in the archives. Nanonic (talk) 20:13, 10 September 2009 (UTC)[reply]
And we already have User:TheDJ/Sharebox for those with accounts. Nanonic (talk) 20:16, 10 September 2009 (UTC)[reply]

Binding application window to a particular monitor

I recently added a second monitor to my Windows Vista computer and am looking for a Windows equivalent to devilspie (UNIX). My primary concern is iTunes - the Library window is maximized on the 2nd monitor, but the Get Info insists on opening on the 1st monitor. I am thinking of something like devilspie so I can pattern match by windows name / parent application name / etc to convince iTunes to keep on its own monitor. Any suggestions would be appreciated. Freedomlinux (talk) 16:02, 7 September 2009 (UTC)[reply]

GIMP/PSD text layer problem

Greetings. Am gonna be terse today, because I'm getting sick and am tired, so here's the problem.

User is using GIMP 2.4.3 running on Zenwalk Linux. User does the following:

  1. Open an old PDF as single-layer graphics and edit out certain portions (text). What remains is one layer of graphics.
  2. Save the resulting document as .psd (user's customer wants .psd).
  3. Open the resulting .psd the following day; introduce text as layers into the document. Save (still as .psd).

Now, when user tries to edit the text in the .psd, he can't. Is this:

a) impossible b) easy, but user doesn't know how to do it c) complicated, but user also doesn't know how to do it.

I thought .psd was a useful and editable format. Is it the GIMP that doesn't fully support it? What gives?

I can share the .psd file in question to show what I mean. Thanks in advance. Gonna get some tea now. Cheers, Ouro (blah blah) 16:35, 7 September 2009 (UTC)[reply]

When you import into Gimp, you are converting the PDF into an image of the PDF. When you save as PSD, you are saving an image, not text. Gimp (even through the latest version, as far as I know) does not save a PSD text layer. It saves it as an image. -- kainaw 16:43, 7 September 2009 (UTC)[reply]
One down. Thanks, Kainaw. Have to look for another tool, then... Ouro (blah blah) 16:53, 7 September 2009 (UTC)[reply]
Try importing the PDF in Inkscape, remove the things you want, and then export. Inkscape won't write to PSD, but will write to SVG, PS, EPS, and AI (adobe illustrator) formats. -- Finlay McWalterTalk 17:17, 7 September 2009 (UTC)[reply]
If you're ever going to open .svg files created with Inkscape in AI, be sure to save it in plain .svg or AI will have problems. 142.20.146.226 (talk) 20:09, 8 September 2009 (UTC)[reply]
I'm probably forgetting something obvious here... but can't the user save the interim versions as .xcf (which should keep the layers intact) then save the document as .psd when it's finished? AJHW (talk) 11:04, 9 September 2009 (UTC)[reply]
The user can. But the user's customer can't handle xcf (or for whichever reason prefers psd). --Ouro (blah blah) 07:34, 12 September 2009 (UTC)[reply]

powerpoint

Which type of the following screen elements is displayed below the slide pane and allows you to type additional slide information. —Preceding unsigned comment added by 98.166.18.43 (talk) 16:57, 7 September 2009 (UTC)[reply]

There's a 'slide notes' area where you can add lots of notes - these don't appear on-screen, but you can print them off so that you have the notes that relate to the appropriate page (you can also send the doc to people so they can see it with the slide notes for further reference). Is that what you mean? ny156uk (talk) 17:21, 7 September 2009 (UTC)[reply]

System freezes with nvidia geforce 9800

Hello there, I am having a system freezing issue. I have bought a new card, an nvidia geforce 9800. Whenever I try to play Crysis or another game, it freezes within 10 minutes. I recorded temperatures.

  • In idle 49 Celsius and 3d load it is 83 Celsius (Graphics card)
  • Processor 27 Celsius
  • Mainboard 36 Celsius

What is the problem and how can I rectify it? I am planning to buy a processor cooler. Will that solve the issue? I have a full tower chassis with four fans inside.

My specs:

  • Core 2 Quad 9400
  • nvidia geforce 9800
  • mobo: 750 SLI nvidia geforce
  • 4 gb ram (800 MHz)
  • Full tower chassis

Any advice would be appreciated.--119.30.36.53 (talk) 17:25, 7 September 2009 (UTC)[reply]

The processor temperature sounds good to me. The nvidia temperature running the game is higher than I'd like, though still below the point where the card would throttle itself. The fan should speed up a great deal at that temperature - do you hear it doing so? Freezing, though, sounds to me more like a possible power supply problem. Dmcq (talk) 17:52, 7 September 2009 (UTC)[reply]
I agree - if it's not overheating then you're almost certainly overloading the power supply. You need a more powerful one. SteveBaker (talk) 03:41, 8 September 2009 (UTC)[reply]
  • My PSU is a Thermaltake 750W. I have increased the fan speed from 35% to 40% using RivaTuner. Still I don't hear any fan noise. What should the ideal fan speed be at 83 Celsius? Will increasing it reduce the fan's life span? Thank you —Preceding unsigned comment added by 119.30.36.53 (talk) 12:05, 8 September 2009 (UTC)[reply]
Is it a GT or GTX or GTX+? My friend has a GT with a single slot cooler and it gets really hot, whereas the GTX+ with dual slot cooler is considerably cooler. If you have the former, you should improve the ventilation in your case or your card will just keep recycling hot air around. --antilivedT | C | G 12:14, 8 September 2009 (UTC)[reply]
It's funny that you're not hearing a change in the nvidia fan; I'd have thought it was probably the loudest part of what you've got. If the system lasts much longer or doesn't crash with the side of the case off, then it's probably overheating. You have to be careful about airflow: it's usually best to keep everything clear, but sometimes a baffle to ensure the air goes the way one wants can help if there is a particular problem. Dmcq (talk) 18:20, 8 September 2009 (UTC).[reply]
Are you sure the GeForce's fan is actually working/spinning at all? I know it sounds a bit obvious, but as you've said that you don't hear it, I think it's worth verifying, and it certainly would explain the overheating. I used to have a 7950GT which suffered similar issues, and that was because my fan had seized up and stopped, and like Dmcq said above, I would also expect it to be quite noisy. Try loading a game and "playing" it so the card is working hard; placing your ear next to it should then be enough to hear if it's on. ZX81 talk 19:17, 8 September 2009 (UTC)[reply]
  • After increasing the fan speed to 50%, I am hearing a little noise from the card. I also tested with the "FurMark" software. When the temperature goes to 85 Celsius, the fan speed increases with a lot of noise. I have tested my system with another card but the same thing is happening. Could it be the motherboard or processor? Bit worried :(--119.30.36.55 (talk) 19:31, 8 September 2009 (UTC)[reply]

Symptoms of processor overheating

Hello there, what are the symptoms of processor overheating? —Preceding unsigned comment added by 119.30.36.53 (talk) 17:40, 7 September 2009 (UTC)[reply]

Other than the computer's CPU temp alarm going off, the most common symptom is apparently random system restarts. That is not always a CPU overheat. Just about any part of the computer can overheat and produce enough heat to cause the computer to reboot. What makes it complicated is that a faulty fan can cause a perfectly cool system to reboot. -- kainaw 18:49, 7 September 2009 (UTC)[reply]
Many CPUs use clock-throttling (aka Dynamic frequency scaling) to try to keep cool - so if your PC seems to run slowly when doing CPU-intensive tasks, it's worth checking the temperature. Some GPUs can do that too. If this happens a lot, it's a sure sign that your machine is inadequately cooled and may crash under more extreme loads. SteveBaker (talk) 03:40, 8 September 2009 (UTC)[reply]
  • Can the system freeze because of processor overheating? --119.30.36.55 (talk) 19:17, 8 September 2009 (UTC)[reply]
Probably, yes, and definitely crashes followed by restarts. I've been having problems with my 2005 AMD Athlon64 PC recently, which turned out to be caused by processor overheating. The symptoms were: during CPU-intensive tasks, during file copying, and during any activity which involved USB traffic, the fan sped up, becoming much more noisy. It was instantaneous and highly reproducible, move the mouse, and the fan speed increased. I tried to hear whether it was the fan of the PSU or the fan attached to the CPU, and first thought it was the PSU. Since I had installed two extra hard disks, I found that reasonable, and replaced the PSU. Same symptoms. Then I read about the Cool'n'Quiet feature of the Athlon 64 CPUs. So I removed the fan that cools the CPU, and saw that the cooling paste had gone dry, and that there were patches where it was simply gone - there was air between the metal block to which the fan was attached, and the CPU. So I bought cleaning liquid (two components) to remove the cooling paste, new cooling paste, and fastened the fan again. That fixed the problem completely. --NorwegianBlue talk 19:46, 8 September 2009 (UTC)[reply]
  • I have a freezing issue with my system. I thought it was my graphics card, but when I tested the system with another graphics card, the same freezing problem happened. Is it the processor or the motherboard, or something else? —Preceding unsigned comment added by 119.30.36.47 (talk) 09:57, 9 September 2009 (UTC)[reply]

Ubuntu

I've been a Windoze guy for a long, long time. However, when my box recently died, I thought I'd give Ubuntu Linux a try. With my old 'puter now pushing up daisies, I got an old junker off a friend and tried to install Ubuntu 9.04 on it. As installation begins, Ubuntu craps out, saying that the BIOS on the computer is too old (1999, and it needs 2000) and proceeds to give error number 16, which I assume is related to the BIOS date issue. So, what's my best option? I've never updated a BIOS before; I don't know if such a thing is practical/possible. Should I instead look for an older version of Ubuntu? What release would I need? Or should I just try a different distro? I'm willing to do some learning, but Linux is brand new to me. Matt Deres (talk) 17:50, 7 September 2009 (UTC)[reply]

Try Damn Small Linux. -- Finlay McWalterTalk 18:43, 7 September 2009 (UTC)[reply]
You could try an older version of Ubuntu, but that would expose you to bugs/security holes which have since been patched. Your best bet is to talk to Ubuntu experts to see if there isn't some simple setting that you can change to get it to work (the unofficial forums (ubuntuforums.org) is probably the best place to do that - they are remarkably beginner friendly). Failing that, you can look for another linux distribution which may work better with older equipment. (Again, the Ubuntu forums may help to point you in the right direction.) -- 128.104.112.179 (talk) 18:51, 7 September 2009 (UTC)[reply]
Ubuntu is more forgiving of an older machine than, say, Vista, but asking a decade old machine to run it is unlikely to be fun. It may very well be possible to get it to run, but the stuff it installs by default is (comparatively) memory hungry. DSL is a better fit, as is Puppy Linux. Knoppix might work okay, depending on the machine's specifics. -- Finlay McWalterTalk 19:04, 7 September 2009 (UTC)[reply]
Errors during installation might be caused by errors in the burning of the CD; make sure to verify it first. Xubuntu is a version of Ubuntu that is intended for lower end computers. --Spoon! (talk) 20:13, 7 September 2009 (UTC)[reply]
Puppy linux will boot off a cd without needing to be installed - try that for a start. There are probably other variants of Linux as well that will boot off the cd. 78.149.167.102 (talk) 23:48, 7 September 2009 (UTC)[reply]
I assume it's a GRUB error 16, which is usually a file-system issue. Updating your BIOS is a trivial task, and I haven't had any problems doing so. That's assuming that the BIOS is actually the problem. I've learned to avoid the bi-yearly Ubuntu releases. They're very buggy. It's ridiculous that Ubuntu releases a new version every six months. That's not enough time to fix the bugs. Windows Vista was in beta for 1 ½ years! And even then, it wasn't ready. I use Ubuntu 8.04 -- the long-term support release. They've patched that one too many times to count, so it's very stable. The latest version is 8.04.3, meaning it's on its third "service pack" (that's what I call it) after only 1 ½ years. There's also Debian, which is what Ubuntu is based on. That is very stable. If you mess around with any of the new Ubuntu releases, you're frankly asking for trouble.--S1kjreng (talk) 03:52, 8 September 2009 (UTC)[reply]

Thank you all for the replies. I burned a CD of Puppy Linux and that also is failing to install. In the case of Puppy, it appears to be starting correctly, and even allows me to access the boot options with <F2>, but will go no further - it just freezes. Both Ubuntu and Puppy fail to either install or run from disc, so I'm beginning to suspect the problem is more... complicated than an out of date BIOS. Discs are cheap, so I'll give one of the other options above a crack at it and then perhaps haul the box out to the curb for the weekly pick up. Computers are not my friends this week it seems. Matt Deres (talk) 18:42, 8 September 2009 (UTC)[reply]

I had a lot of trouble a while back installing Ubuntu on a relatively old secondhand computer: the problem turned out to be the CD drive - out of my four second-hand drives, the Ubuntu CD was only happy to install using one (counter-intuitively, the oldest). 213.122.39.88 (talk) 22:20, 8 September 2009 (UTC)[reply]

External drives that appear to be internal

Dear Wikipedians:

This has been a problem that bugged me for a long time:

At times I find opening up the computer, putting in a new hard drive, etc., to be a major hassle. Unfortunately, so far all the "canonical buses" are internal -- by which I mean the PATA and SATA interfaces. I know that Linux can be installed and booted off of external devices such as a USB stick. However, I googled and found that it takes a lot of hacking to install Windows XP on a USB stick.

I'm wondering if there is a solution that would allow me to install Windows XP (and all operating systems for that matter) transparently (i.e. no hacking) onto an external storage device that lies OUTSIDE the system chassis and is powered independently.

What I mean is that as far as I'm aware, there are three external buses used for storage devices:

  • USB
  • Firewire
  • eSATA

I'm wondering if :

1. Are eSATA drives recognized and treated in the same way as internal SATA drives? (That is, can WinXP be installed onto eSATA devices with no hacking necessary).

2. Are there any other alternative bus architectures that I'm unaware of that would allow me to accomplish the effect of being able to install and boot an OS off of external devices?

3. Is there a way of plugging in an expansion slot device (like an extra USB port hookup) that translates internal IDE cable signals into an external USB/Firewire signal, so that all my USB sticks transparently become IDE devices (because the system has no way to tell what's at the other end of the IDE cable; as far as it's concerned, it's a perfectly legal IDE hard drive)?

4. Or, better yet, is there someway of FOOLING the system/OS into thinking that a USB/Firewire device is a proper internal PATA/SATA device?

Thanks.

174.88.240.61 (talk) 22:17, 7 September 2009 (UTC)[reply]

There are probably some hacks, but you probably need to check whether your motherboard will boot from these devices. Many newer BIOSes will boot from USB. ---— Gadget850 (Ed) talk 23:42, 7 September 2009 (UTC)[reply]


Thanks, I also did some digging around myself and found out that eSATA does exactly what I want: internal SATA and eSATA appear EXACTLY the same (i.e. as internal) to the system, and therefore to WinXP and all other OSes. As a perk, eSATA also beats all other external buses (USB, 1394) in speed by roughly one order of magnitude ;) I'm going eSATA! 70.52.150.227 (talk) 01:56, 9 September 2009 (UTC)[reply]
  Resolved