Wikipedia:Reference desk/Archives/Computing/2008 September 15

Computing desk
Welcome to the Wikipedia Computing Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


September 15

Opening a file via the context menu in Windows Vista

On previous versions of Windows, there was an option to edit which programs could be used to open a file with a certain extension. This option exists under Windows Vista in a program called Default Programs. However, it only allows you to change which program opens the file on double-click; doing so completely wipes out any other options available under the right-click context menu.

For example, I have SVG files on my system, as well as Inkscape. I had the default set to open in WordPad to allow manual editing, and another option to open with Inkscape had been added by that program. But when I told the system to make Firefox the default program (to allow quick viewing), it wiped out all the other options.

Help. Magog the Ogre (talk) 03:27, 15 September 2008 (UTC)[reply]

You can edit which programs are used to open which filename extensions from Control Panel → Folder Options. Alternatively, you can add a shortcut to a program to the "Send To..." menu. There are some disadvantages: in Vista the "Send To..." folder is well buried in c:\users\<username>\App Data\Roaming\?????? (I'll find the rest of the path later), and that shortcut will appear in every "Send To..." menu no matter what type of file it is. Astronaut (talk) 15:26, 17 September 2008 (UTC)[reply]
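For reference, those extra right-click entries are stored in the registry as "verbs" under the file type's ProgID, so a lost entry can be put back by hand. A rough sketch in Python (the "svgfile" ProgID and the Inkscape path are assumptions — your system may use a different ProgID for .svg — and writing under HKEY_CURRENT_USER\Software\Classes avoids needing administrator rights):
# Sketch: re-add an "Open with Inkscape" verb for .svg files by writing the
# standard shell\<verb>\command registry keys in the per-user classes hive.
import winreg
prog_id = r"Software\Classes\svgfile"  # assumption: .svg is associated with the ProgID "svgfile"
verb_key = prog_id + r"\shell\Open with Inkscape\command"
command = r'"C:\Program Files\Inkscape\inkscape.exe" "%1"'  # hypothetical install path
with winreg.CreateKey(winreg.HKEY_CURRENT_USER, verb_key) as key:
    winreg.SetValueEx(key, "", 0, winreg.REG_SZ, command)  # default value = command line to run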
If it means anything, that first option appears to no longer be available on Windows Vista, at least on SP1. Magog the Ogre (talk) 16:58, 19 September 2008 (UTC)[reply]

Webdesign for dummies

I recently had an idea for a website, but I have limited coding experience and almost no webdesign experience. I won't go into the details of my website because, well, quite frankly I'm sorta paranoid about my idea getting jacked, but in any event after poking around I found Drupal and I think I want to use it to implement my project. Do I need to learn SQL or PHP to play around with Drupal? Should I try to maintain my own local server and database, or can I design the site locally and run it off a hosted server? My old computer is burnt out so I've been thinking of reformatting for use as an Apache server/MySQL database for use in this project, but I don't know if I need to go that far. --Shaggorama (talk) 08:21, 15 September 2008 (UTC)[reply]

My (limited) experience is that it'll be handy to learn some PHP with Drupal; that's true for WordPress, where I have somewhat more experience. That said, since you say you have little experience, WordPress has many, many themes (for overall appearance) and plugins (for various kinds of customization), such that you might not need to get into the PHP code, or might be able to get by with examples from WordPress forums. I imagine it's similar with Drupal.
For overall appearance of your site, you can do a great deal with CSS (cascading style sheets). As a simple example, a WordPress theme might set the H2 tag as 18-point blue italic centered text with 15 pixels of padding above and below. You could edit the CSS to change any or all of those to suit your taste. If you're serious about learning this, I recommend Head First HTML -- there's even a sample chapter at that link so you can see their approach.
I myself would not want to run my own server; hosting is a commodity business, so you have many options that should not be expensive. Again, WordPress.org will give you the specifications that a hosting service would have to provide; some services advertise on the WP site. And for starters, you could begin your site at WordPress.com (though the number of themes is smaller, and you'd have to pay a bit for the ability to edit the CSS); later, you could find hosting companies that will move your MySQL data to a hosted site for you. --- OtherDave (talk) 12:07, 15 September 2008 (UTC)[reply]
I've looked into WP and I don't think it's right for my needs. I want my users to write articles, yes, but I want the articles to be associated with both the users and more global categories, sort of like the architecture of a product review site. Could this work within wordpress? I got the impression wordpress is really for straightforward blogging, so I thought Joomla or Drupal would probably be better options. --Shaggorama (talk) 19:50, 15 September 2008 (UTC)[reply]
Well, as I said, I don't know much about Drupal, and I'm not trying to sell you WP (since it's free). You could give regular writers the ability to write posts (which you could treat as articles) without publishing them (leaving you as the final authority). You can assign both tags and categories to posts. E.g., you could have a category for REVIEW (or one for SOFTWARE REVIEW and one for HARDWARE REVIEW); you could have tags for vendor names or product types (a Microsoft tag, an open-source tag). That means with very little technical knowledge, you could have a site where I can easily find:
  • All the articles written by Fred Frack
  • All the reviews of hardware (in general)
  • All reviews of 4-gigabyte berm divots (assuming you had a category "4-gigabyte berm divot")
  • All content tagged "MicroPro," or tagged "office applications," or tagged "predictions."
You can assign more than one tag, and more than one category, to any item. This is not hard stuff; I'm not a computer wizard.
My point is only that there are a number of ways to control how your content appears in WP (and I'm sure in Drupal as well). If I can tell you any more about WP, send me an email. My impression (this is only an impression) is that the administration of Drupal requires more technical knowledge than does WordPress -- but I'm sure some Drupal expert will address that.
One thing you might consider is using either WP or Drupal (are there free Drupal hosts?) to build a small version of the site you're planning. You don't even have to make it public, but messing around with the software will help you see how things work, and will probably give you ideas for how to organize your eventual site. Throw together three dummy pages for each type of content you plan to have, and see if you can link the stuff together. --- OtherDave (talk) 23:56, 15 September 2008 (UTC)[reply]

Line spacing in CSS

Generally, two paragraphs are separated by one additional line. How do I do single line spacing using CSS?

What I need is:

<p>Line 1</p>
<p><blockquote>Line 2</blockquote></p>
<p>Line 3</p>

There shall be no additional lines between two paragraphs. -- Toytoy (talk) 11:37, 15 September 2008 (UTC)[reply]

The p-tag is a block container. It has padding and margin spacing (see "block container" rules on any of the millions of CSS introduction sites). So, you want the rule p { margin:0px; padding:0px; } to ensure there is no margin or padding around your text. -- kainaw 12:06, 15 September 2008 (UTC)[reply]

Howdy there fella

Single-spaced paragraphs here

You just check the source

--98.217.8.46 (talk) 19:53, 16 September 2008 (UTC)[reply]

Roxio Streamer

I'm trying to use Roxio Streamer to stream my videos to my iPod Touch. However, when I try to go to the address it gives me to view my video, the browser says it can't find the page. I tried enabling UPnP, NAT-PMP and TCP but none of those work. I still can't see the webpage for some reason. Please help. --Randoman412 (talk) 11:43, 15 September 2008 (UTC)[reply]

Perl

What program do you use to write a Perl script, and to run it?

Thanks. —Preceding unsigned comment added by 87.84.118.226 (talk) 12:18, 15 September 2008 (UTC)[reply]

That would depend on what operating system you are running. Most Linux distributions come with a Perl interpreter pre-installed, and you could use vi or EMACS or a specialized editor and/or development environment. For Windows, you'd have to install a Perl interpreter and could use notepad. For Macintosh, I dunno. Lots of links in our handy Perl article. --LarryMac | Talk 12:47, 15 September 2008 (UTC)[reply]
Normally you'd use any old text editor. I like to use vim. I usually run Perl scripts from the command line (note that the script's first line needs to be a #!/usr/bin/perl shebang for the second command below to work), like this:
$ chmod u+x ./my_perl_script.pl
$ ./my_perl_script.pl
--Kjoonlee 13:35, 15 September 2008 (UTC)[reply]
Or you could run the script using the perl interpreter.
$ perl ./my_perl_script.pl
--Kjoonlee 13:36, 15 September 2008 (UTC)[reply]

vista directory explorer loses IE icon for .htm files

well, that about says it all.... all of a sudden, the vista explorer gives me a blank page icon for .htm instead of the little blue e icon for IE. what did i change just before that? umm... I reinstalled Real player... but the .htm files are still associated with IE, double clicking opens them up in IE; IE itself still has the blue e icon; I tried rebooting, I even tried going into control panel and associating the .htm files with IE again; no dice. ???? any advice will be gratefully appreciated. TIA. Gzuckier (talk) 15:07, 15 September 2008 (UTC)[reply]

Just get linux... 74.14.48.190 (talk) 23:29, 22 September 2008 (UTC)[reply]

Computer Woes

I can't open any of the drive folders when I double-click their icons in the My Computer window. I use Windows XP Professional SP2. Can anyone help?? 117.194.224.253 (talk) 18:54, 15 September 2008 (UTC)[reply]

If you can still open other folders, you should be able to navigate to your drives from there. Type the address of the drive in the address bar of the folder, or you can try right-clicking a folder and selecting "explore" to get a map of your computer, including drives. --Shaggorama (talk) 19:53, 15 September 2008 (UTC)[reply]
I ran into a similar problem and found the solution on Microsoft's help site. Your problem may be the same. --Bavi H (talk) 02:23, 16 September 2008 (UTC)[reply]
Well, now the problem is that when I double-click the icons, the folder opens in a new window, instead of in the old one itself. Help, please. 117.194.227.36 (talk) 11:49, 16 September 2008 (UTC)[reply]
Try this: Open My Computer. Click on the Tools menu, then click Folder Options. Select the option "Open each folder in the same window". --70.254.87.166 (talk) 01:59, 20 September 2008 (UTC)[reply]

Compression by generating sequences of data?

You could compress the first billion digits of pi in a normal zip fashion, which would likely generate a large-ish file, or you could compress it down to almost nothing, just a few lines of code to generate the whole sequence, if you knew or discovered the pattern in the input data. So the question is this: is it useful, or even possible, to compress data by looking for patterns and trying to create algorithms that re-create the correct sequence of bits? You don't of course need to generate the whole archive from one algorithm, but perhaps try to identify sections that can be successfully attacked this way. How might this be implemented? Zunaid 19:12, 15 September 2008 (UTC)[reply]

As a start I can come up with one "blunt instrument" approach: try many different algorithms (with different parameters) to see what sequences they generate and see if it matches any sequences in the data. This can be sharpened up by starting on sequences in the data that initially look promising by some prior test or knowledge. Zunaid 19:12, 15 September 2008 (UTC)[reply]
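To make the "blunt instrument" idea concrete, here is a toy sketch (the generator family, run length and sample data are all invented for illustration): it slides over the data and checks whether either of two parameterised generators can recreate each run; a real compressor would then store the generator parameters instead of the run itself.
# Toy pattern hunter: can a simple parameterised generator reproduce runs of the data?
def arithmetic(a, d, n):
    return bytes((a + i * d) % 256 for i in range(n))
def repeat(pattern, n):
    return (pattern * (n // len(pattern) + 1))[:n]
def find_generated_runs(data, run_len=16):
    hits = []
    for start in range(len(data) - run_len + 1):
        chunk = data[start:start + run_len]
        a, d = chunk[0], (chunk[1] - chunk[0]) % 256
        if arithmetic(a, d, run_len) == chunk:                 # candidate 1: arithmetic progression mod 256
            hits.append((start, "arith(a=%d, d=%d)" % (a, d)))
            continue
        for plen in (1, 2, 4):                                 # candidate 2: short repeating pattern
            if repeat(chunk[:plen], run_len) == chunk:
                hits.append((start, "repeat(%r)" % chunk[:plen]))
                break
    return hits
data = bytes(range(50, 200)) + b"ABABABABABABABAB" + bytes(100)
print(find_generated_runs(data)[:5])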
The problem here is that without prior knowledge of the data, there is an infinite number of "patterns" you could look for. Generic lossless compression algorithms are relatively crude: they look for one type of "pattern", i.e. repetition of bits or bytes or sequences thereof, which are then stored with sufficient information to recreate the original data through an algorithm. They work well enough for data commonly encountered such as text, machine code, pictures with large blocks of the same color ... much of the data we use is repetitive by nature. An advantage of this method is that the algorithm to recreate the original sequence can be externalised, saving further space, something that would not be possible with generated algorithms (though this is not so much of an advantage on large amounts of data). Prior knowledge of the data to compress makes it possible to come up with better schemes (for instance, it would be possible to externalise an "English language" sequencer based on the most common repetitions of characters and words).
Anyway, I digress. What we have here is basically an optimization problem with a very complex solution space. Given the complexity of the solution space, a brute-force search for a global optimum is not reasonable. We might, however, want to search the solution space for local optima. An appropriate scheme here would be a genetic algorithm, where each individual is an algorithm and the fitness function is a factor of the length of the algorithm and its ability to generate data close to the original data (and also to terminate in a reasonable time). This is no simple task to implement: you would most likely want to set up a virtual machine with an appropriate code set (prior knowledge of the data to compress would be very useful here), and the evolutionary scheme of a genetic algorithm is always a tricky part. Either lossless or lossy compression would be achievable. Now of course, there's no guarantee such a compression process would ever produce better results than common compression schemes, and it might take a very, very long time to run. Equendil Talk 21:19, 15 September 2008 (UTC)[reply]
This is more of a practical approach than a theoretical one: check your sequence against all the ones at the On-Line Encyclopedia of Integer Sequences. This isn't really a compression question so much as a math one: "given a sequence of numbers, can one deduce the function that generated them?". I suspect the answer is "no", but perhaps the folks at WP:RD/Math might know better. --Sean 20:28, 15 September 2008 (UTC)[reply]
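If you want to script that check, the OEIS also has a machine-readable search interface; a rough sketch (the fmt=json parameter is from memory, so verify it against the current oeis.org documentation):
# Sketch: look a sequence up against the On-Line Encyclopedia of Integer Sequences.
import urllib.parse
import urllib.request
sequence = "3,1,4,1,5,9,2,6,5,3"   # example query: the leading digits of pi
url = "https://oeis.org/search?" + urllib.parse.urlencode({"q": sequence, "fmt": "json"})
with urllib.request.urlopen(url) as response:
    print(response.read(500))       # inspect the raw JSON for matching sequences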
It would work, but it wouldn't be too effective: given the infinite number of potential algorithms, you could never be sure whether there was one or not. As a specific example, try finding a pattern in the following base64-encoded data that will let you compress it:
Kln7j02PJpB59GqXW3zoX3fhPNaILd4srZw25cw0hNHtzOjB+lkSd2BQ75AVKjbfc+2ZS0/ODN6j
vgLNRtz9Sq47YvxuE1rJHdznm5SwGJv0ScR14LBPuCx+FoK3i+E+T2LA6dh8YvQz2bsBf8yscExz
EtqmllGnw1ITdn9RwuARTqNcYHTkNJq3cmpGhT6kQdqyrTLtoTjZiz8mif075VpwhJzQCn/YpwBQ
Xuy9Iy2Xp95ooh9Fpp055yjvJyo6lopcMdvbQhROOjZCJ1T35Kofi6QII9F4W2TYLkUt1GOQqnIp
lDmuWaObrjMsyGbdQrBkw211EsmWWzdhgFV2FQ==
(Hint: it's impossible. This is 256 bytes of truly random data.)
--Carnildo (talk) 20:35, 15 September 2008 (UTC)[reply]
It looks to me like you just mashed the keyboard, which, for future reference, would not produce truly random character strings. 92.16.148.143 (talk) 16:51, 20 September 2008 (UTC)[reply]
If only we knew the pattern for pi... It's irrational; I think that means there's no real pattern. So I doubt you could compress it in the manner you're discussing here. --Alinnisawest,Dalek Empress (extermination requests here) 20:46, 15 September 2008 (UTC)[reply]
The way to do this is to generate a sequence of algorithms and test whether they produce the result. This kind of computation is NP-complete, or at least computationally difficult, so you probably cannot use it in practice. For particular applications you could probably improve the compression, e.g. XML, HTML, English text. In practice you don't often want to compress algorithmically generated sequences like pi to a billion decimal places. And yes, there are methods to generate the decimal places of π. There is a classification of a number, in the form of a bit string, that asks whether the string can be represented by a description shorter than writing out the bits. Graeme Bartlett (talk) 21:13, 15 September 2008 (UTC)[reply]
You may be interested in procedural generation. When I was younger, about the era of the 14.4k baud modem (and dinosaurs roamed the Earth), when downloading a few-megabyte demo was an all-night affair (and hope your phone didn't get disconnected!), there was a very "demo scene" game that had the look, polish, and engine (more or less) of - let's say, Quake - and the entire thing was about 27kb in size. Every asset was procedurally generated - in playable form, on the order of a few megabytes. Spore (2008 video game) would be a recent, commercial example of this. Now, arbitrary, post-facto, discrete algorithmic compression? I believe the above responders mostly cover it, although for real data, Benford's law may be a bit of a catch. 98.169.163.20 (talk) 01:51, 16 September 2008 (UTC)[reply]

There is actually an error in the question. The equation for calculating pi does not in fact expand out to the first billion digits of pi - it expands to ALL of the digits of pi. However, you could fix that by saying how many digits you'd like it to decompress to.

Anyway, we can easily PROVE that you can't compress every arbitrary string of data:

  1. Let us suppose you could take every possible sequence of N bits (binary digits) and, using your algorithm, compress it into something representable in M bits (where M is always less than N), such that you could then take your M-bit number and uncompress it again to get the exact same N-bit number that you started with.
  2. There are 2^N possible N-bit numbers. If you took every single one of those and fed each one, individually, into your compression algorithm, you'd get strings that were at most N-1 bits long. You'd have 2^N compressed results - one from each of the possible inputs.
  3. However, if the outputs only have at most N-1 bits, then we must remember that there are only 2^(N-1) possible (N-1)-bit numbers.
  4. Since we got 2^N (N-1)-bit numbers out of our compressor - but there are only 2^(N-1) unique (N-1)-bit numbers possible - it follows that some of those (N-1)-bit numbers we got back are actually completely identical.
  5. If we take two of those identical (N-1)-bit numbers and feed them into an algorithm that decompresses them back into their original strings - then there is a problem. How does the decompression algorithm "know" which of those two different N-bit strings to produce, since it's being fed the exact same (N-1)-bit string to decompress in both cases?
The answer of course is that it can't. Hence lossless compression of all N-bit inputs is flat out impossible - no matter how your algorithm works - no matter how clever it might be.
Knowing that, it can only be that your approach cannot possibly work for every input. However, it clearly does work for some numbers (like pi and e and 0.3333333333333333333...). What that means is that it must fail for many other numbers - which leads you to the rather interesting conclusion that some numbers are so "random" that every possible equation, in any possible mathematical representation, must be at least one bit longer than the number itself!
We need a name for these numbers (Darn! "Complex number" has already been taken!)
That's a very cool conclusion! SteveBaker (talk) 02:11, 16 September 2008 (UTC)[reply]
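The counting step can be checked directly for tiny N; a quick sketch just to make the pigeonhole argument above concrete:
# Every 4-bit string versus every strictly shorter string: there aren't enough
# short strings to give each input its own compressed form.
from itertools import product
N = 4
inputs = list(product([0, 1], repeat=N))                              # 2^N = 16 possible inputs
shorter = [s for k in range(N) for s in product([0, 1], repeat=k)]    # 1 + 2 + 4 + 8 = 15 shorter strings
print(len(inputs), "inputs, but only", len(shorter), "strictly shorter strings")
# So any scheme that maps every 4-bit string to something shorter must send two
# different inputs to the same output -- and no decompressor can tell them apart.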
To put it more simply: if you could represent any sequence of bits as a shorter sequence of bits, then you could represent *that* shorter sequence as an even shorter sequence, and so on, which would eventually result in a sequence of bits of zero or negative length, which is absurd; hence not all sequences of bits can be "compressed". Reductio ad absurdum. That is pretty much a given, however; what is interesting is that *some* sequences of bits can be compressed. Equendil Talk 17:25, 16 September 2008 (UTC)[reply]

Also, finding the shortest algorithm that would generate a piece of data is not possible in general, because that algorithm would have size equal to the Kolmogorov complexity of the data, which is incomputable. --71.147.13.131 (talk) 09:15, 16 September 2008 (UTC)[reply]

Thanks guys. Some beautiful maths and engineering answers given here (which, being an engineer, are like air and water to me). Plenty of reading to do! I like the idea of checking sequences against the OEIS. You could potentially keep a data store of hundreds of sequences and simply spit out algorithms to generate portions of your compressed data. SteveBaker, I think the term you are looking for is non-computable number. Anon 98, I've seen those procedurally generated demos before; in fact that is what I had in mind when I thought of this, I just didn't know the correct name for it so I stuck with a billion digits of pi as an example.

I suppose this is (sort of) resolved, just thought I'd post one or two more examples to think about:

  1. You could generate an hour long video game demo and either save it as compressed lossless video (tens of gigabytes), or you could save the entire game (or if the company makes a "game viewer" then even better) and the player's mouse/keyboard inputs at a cost of maybe a few gigs.
  2. Procedural generation as 98 mentioned.
  3. You could use Microsoft Excel to generate dozens of charts from tabular data and save and compress the xls file (couple of megs maybe?), or you could record a macro of yourself carrying out all the chart-generating actions and simply save the macro and the input data (a few kb's). This one is actually quite useful if you're trying to send lots of excel graph data to someone.

Zunaid 16:31, 16 September 2008 (UTC)[reply]

The thing about procedurally generated stuff in games is NOT that you start off designing some carefully thought out game level - then compress the heck out of it. What they are actually doing is using random numbers (or perhaps pseudo-random numbers) to generate data - and writing the algorithm such that ANY sequence of data fed into it makes a "reasonable" game level. Superficially, this can be made to make a nice-looking game - but it'll never be as good as a game level designed by hand by experienced artists and level designers. Those guys have a lot of very "deep" knowledge that you're not going to get by random generation techniques. (BTW: I'm a professional video game developer.) SteveBaker (talk) 02:03, 17 September 2008 (UTC)[reply]
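For anyone curious what "any sequence of data fed into it makes a 'reasonable' game level" looks like in practice, here is a throwaway sketch (the ASCII "terrain" format is invented for the example): the only stored data is the seed, and the algorithm turns any seed into some plausible-looking level, rather than recovering a hand-authored one.
# The "asset" is whatever the algorithm makes of a seed; a few bytes expand into a full map.
import random
def make_terrain(seed, width=16, height=8):
    rng = random.Random(seed)                       # the whole level is determined by this one number
    ground = [rng.randint(2, height - 1) for _ in range(width)]
    rows = []
    for y in range(height, 0, -1):
        rows.append("".join("#" if g >= y else "." for g in ground))
    return "\n".join(rows)
print(make_terrain(seed=42))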

No - you don't understand what I'm saying. Your speculations for the uses of such an algorithm are truly pointless - it's not just that there would be a few instances of data that wouldn't compress...it's MUCH worse than that.

My argument (above) that proves that compression of arbitrary data is impossible also yields a rough probability of being able to compress any given random bit-string by any given ratio. Consider what happens if you try to compress N bits of data by just one bit, to produce N-1 bits. There are only 2^(N-1) compressed data possibilities that have to "code" for 2^N possible input sequences. It follows that AT BEST (2^(N-1)/2^N) = 0.5 ... only half of all possible N-bit sequences can be compressed down to N-1 bits!

Suppose you wanted to do some serious compression - perhaps halving the size of random 1000-bit data strings. 2:1 is a pretty modest compression ratio. So that means that you have to encode 2^1000 possible input data sequences using only 2^500 possible compressed data strings...so only 2^500 of those 2^1000 strings can be compressed successfully. That's a TINY fraction of them! REALLY tiny!! In fact, if you fed random 1000-bit strings into your algorithm at a rate of one per second from now until the end of the life of the universe - then it's highly unlikely that you'd find even one set of input data that would compress successfully! That's a monumentally useless algorithm!! But even if it worked - a 2:1 compression ratio is pretty much useless for most purposes - and 1000 bits of data is less than the length of this paragraph! Now apply the same math to (say) a megabyte of data...there are effectively NO algorithms that can achieve any compression whatever of any piece of random data beyond a few tens of bits long.
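Plugging rough numbers into that (the age-of-the-universe figure below is only an order-of-magnitude estimate) backs up the "highly unlikely" claim:
# Fraction of 1000-bit strings that could even in principle get a 500-bit code, and the
# expected number of successes after one random attempt per second for ~the age of the universe.
compressible_fraction = 2**500 / 2**1000
seconds = 4.3e17                                  # roughly the age of the universe in seconds
print(compressible_fraction)                      # about 3e-151
print(compressible_fraction * seconds)            # about 1e-133 expected successes -- effectively never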

So it's not just that there would be some things that wouldn't shrink - what you'd find is that this algorithm (and any other that attempts to work with truly random data) is simply doomed to being quite utterly useless.

Real compression algorithms (such as the one that 'ZIP' uses) only manage to do lossless compression because they know something about the nature of your data at the get-go. Mostly they assume there are lots of repeated sequences within the data.

So, for example, if you know the input data is ASCII text containing only grammatically correct English sentences - then there are ways to achieve very effective compression. (For instance, you could make a list of the 1,000 commonest words with more than four letters and replace any of those words you found with a '?' symbol followed by a three-digit number. All English text would get shorter - or at least, no longer - as a result.) However, you can only do that because you know that there are definitely no strings like '?xxx' (where 'x' is a digit) in grammatical English sentences. Compression of English text is therefore possible - but only because we know something about the statistics of English and because there are some symbols that are only used in certain contexts.
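A toy version of that word-substitution scheme (the word list and the sample sentence are made up; a real version would use actual frequency counts):
# Replace common 5+ letter words with "?" plus a three-digit index; since "?123"-style
# strings don't occur in normal English text, the mapping can be reversed exactly.
common_words = ["because", "people", "should", "would", "which", "their", "about"]
codes = {w: "?%03d" % i for i, w in enumerate(common_words)}
words = {v: k for k, v in codes.items()}
def compress(text):
    return " ".join(codes.get(w, w) for w in text.split(" "))
def decompress(text):
    return " ".join(words.get(w, w) for w in text.split(" "))
msg = "people should think about which words appear most often because frequency matters"
packed = compress(msg)
print(len(msg), "->", len(packed))    # the packed text is never longer than the original
print(decompress(packed) == msg)      # True: the substitution is lossless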

However, if you tried to apply my trick to compressing (say) photographs - it would never manage to achieve any compression whatever!

SteveBaker (talk) 02:37, 17 September 2008 (UTC)[reply]

Your amount of knowledge is batshit insane. --mboverload@ 05:22, 20 September 2008 (UTC)[reply]

Wireless network control

What is a good free replacement for the wireless network control feature in Windows? 79.75.190.211 (talk) 19:21, 15 September 2008 (UTC)[reply]

Usually computers or wireless cards come with one. It's rarely better though. What's wrong with the Windows one? 24.76.161.28 (talk) 23:36, 15 September 2008 (UTC)[reply]
I support over 600 laptops in the field. With XP SP3 (even SP2) the Windows wireless management software is perfectly fine for most people. Did you know that the Intel wireless management software takes 60 megabytes of RAM when it's running? I have written in our image standard that Intel wireless is not to be used. --mboverload@ 05:17, 20 September 2008 (UTC)[reply]

The CAPTCHA problem

I am trying to prepare a pywikipedia bot for use on the Catalan Wikipedia. I have followed all the instructions, but when I try to log in I get the message "wrong password or CAPTCHA answer". I am completely sure the password is correct, but I don't know what the CAPTCHA problem is. I've added solve_captcha = True to the user-config.py file, to no avail. What should I do to solve this problem? Thanks. Leptictidium (mt) 19:40, 15 September 2008 (UTC)[reply]

I'm fairly certain that the captcha only appears when you are adding external links, so if that's not fundamental to how your bot works, maybe remove that functionality. Also, the captcha goes away when a user is autoconfirmed, which takes ten edits and four days. Why don't you edit manually using your bot account for ten edits, and wait a few days :) 90.235.12.16 (talk) 11:03, 16 September 2008 (UTC)[reply]

XML comment trick

Hi, when I write a comment in XML, instead of <!-- and --> I just put <!--> on both sides, and that way I can just copy-paste it around. Does anyone else here use this method too? 212.150.162.66 (talk) 21:55, 15 September 2008 (UTC) —Preceding unsigned comment added by Jobnikon (talkcontribs) 21:50, 15 September 2008 (UTC)[reply]

Presumably a good reason for not using this is that it makes code harder to read: you can't tell which <!-->s are starts of comments and which are ends. 84.12.252.210 (talk) 13:20, 16 September 2008 (UTC)[reply]
Yeah, just stick to the regular kind. The more standard your XML the better. Easier for other people to read. 195.58.125.46 (talk) 15:02, 16 September 2008 (UTC)[reply]