Wikipedia:Reference desk/Archives/Computing/2013 January 11

Computing desk
Welcome to the Wikipedia Computing Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


January 11

What was the very first source code file written for Linus Torvalds's kernel?

20.137.2.50 (talk) 16:03, 11 January 2013 (UTC)[reply]

The first public release that Torvalds made was 0.11, which can be downloaded from http://www.kernel.org/pub/linux/kernel/Historic/old-versions/ It contains 51 .c files and 33 .h files. I don't think anyone, including him, has an older version. I don't know if his autobiography goes into any more detail than that. -- Finlay McWalterTalk 16:11, 11 January 2013 (UTC)[reply]
Thanks. 20.137.2.50 (talk) 16:35, 11 January 2013 (UTC)[reply]

Why would "Not supported in HTML5" features still work in an HTML5 page? edit

I read the W3Schools page for the tt tag, and in red all-caps it said it was not supported in HTML5. I created a page with !DOCTYPE html (between angle brackets of course) with some text between tt and /tt tags, and it looked monospaced, as expected. So the "not supported in HTML5" tags worked in an HTML5 document. What exactly does "not supported" mean then? 20.137.2.50 (talk) 17:00, 11 January 2013 (UTC)[reply]

What they mean is that it is not valid HTML5, as HTML5 removed the element (HTML4 had it). So if the browser being used strictly supported only what HTML5 codifies, the tt element would indeed do nothing much. http://w3fools.com/ http://validator.w3.org/ ¦ Reisio (talk) 17:13, 11 January 2013 (UTC)[reply]
Most browsers are very forgiving, and accept many things that are not valid according to the standard. --ColinFine (talk) 17:44, 13 January 2013 (UTC)[reply]
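For anyone who wants to reproduce the test described above, here is a minimal sketch; the Python wrapper and the file name are just for illustration, the markup itself is the point. Current browsers still render the tt element as monospace even under the HTML5 doctype, while validator.w3.org should report it as obsolete/invalid HTML5.

# Writes a small test page like the one described above (hypothetical file name).
# Open the result in a browser: the <tt> text still renders monospaced.
html = """<!DOCTYPE html>
<html>
  <head><title>tt test</title></head>
  <body><p>plain text, <tt>monospaced text</tt></p></body>
</html>
"""
with open("tt_test.html", "w") as f:
    f.write(html)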

Would a program that does something like this be hard to write?

I had a musical idea and was thinking about a program to test it.
What the program would do:
1. The user would select a number of seconds, 1 or higher.
2. The user would select a source MP3 file.
3. The program would calculate the average frequency and amplitude of the first second (let's imagine the user chose 1 second at step 1), and would replace it with a 1-second square-wave tone that has the frequency and amplitude the program found.
4. The program would then take the second second of the song (if the song is longer than one second) and do the same.
5. The program would continue like that.
6. If the last part of the song is shorter than 1 second (remember that in this specific example the user selected 1 second at step 1; there is only a 1/44100 chance that the last part will be exactly 1 second long), the program would do the usual math but replace it with a 1-second tone instead of a tone with the length of that last part.
7. The program would save this new sound/song as an MP3.

Would this be a hard thing to do? 177.40.130.22 (talk) 18:59, 11 January 2013 (UTC)[reply]

 
[Image: Spectrogram of a violin]
All of that is very straightforward indeed, except #3, which is very far from it. The problem is that term "average frequency". First consider a very simple case: if that 1 second chunk of samples contained a mathematically pure (that is, synthesised) sine wave. You'd perform time–frequency analysis (with something based on a Fourier transform) and you'd get a single spike, a single number, which would meet your requirements well. But real generators of sound don't produce sine waves - even simple vibrating instruments like a violin produce harmonics - lots of them (see the spectrogram to the right). So your Fourier analysis yields a dozen or more spikes. They're not all of the same height, but several roughly are. Given all these numbers, which is the "average"? For a simple instrument like this (with only one string being played) you can usually take the largest peak and say that this is the "note" the instrument is playing. It's analysis like this that drives programs like Rocksmith, which turn the complex analog signal from a guitar into a single number representing the note the player is picking at that time. Picking the loudest constituent isn't an "average" at all, but maybe it's what you really wanted. But so far we've just done simple vibrating systems. Play two strings on that violin, or six on that guitar, or strike eight keys on a piano, and you get 2, or 6, or 8 times as many fundamentals, each with their own family of harmonics. And those vibrators aren't independent (they're not 8 independent vibrating objects each isolated in 8 different rooms) - they're connected to a single shared soundboard/box, where their vibrations reverberate and interfere with one another in complicated ways (that's what differentiates a cigar-box from a Stradivarius). All that adds more complexity, and more peaks in your spectrum - you might easily see 100 spikes. An "average" of these is even less meaningful or useful, and you can no longer safely just pick the loudest peaks and say those are the "notes". And we're still not done, for a general sample of any music - because that's just one instrument. Take a Beatles track - you have the piano, a bass guitar, one or two (or more, with overdubs) lead and rhythm guitars, and one or several people singing - each adding dozens of spikes. (I'll leave off percussion instruments like drums, which are even more complicated.) With all these peaks on the spectrogram, an average would be meaningless and the resynthesised instrument you propose wouldn't sound like anything (and wouldn't behave in a way that really resembled the original mp3). Much more sophisticated analysis, which attempts to relate peaks to one another (to determine which are harmonics of others), can (in some circumstances) simplify the system and begin to capture the underlying musicality. But in general it's a darn hard problem, and our ability to listen to a full orchestra playing and hear more than just a din is a testament to how sophisticated the human auditory processing system is. For a lot about how real instruments actually behave, I heartily recommend Music, physics, and engineering by Harry F. Olson. -- Finlay McWalterTalk 21:44, 11 January 2013 (UTC)[reply]
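To make the "take the largest peak" idea concrete, here is a rough sketch of the whole pipeline. It is my own code, not anything the poster or the answerers wrote: it reads a WAV file rather than an MP3 (to avoid a decoder dependency), treats the loudest FFT bin as the "average" frequency, uses the RMS level of each chunk as its amplitude, and pads the final short chunk as in step 6. NumPy and SciPy are assumed, and the function name is invented.

import numpy as np
from scipy.io import wavfile

def squarify(in_path, out_path, chunk_seconds=1.0):
    rate, data = wavfile.read(in_path)
    if data.ndim > 1:                          # mix stereo down to mono
        data = data.mean(axis=1)
    data = data.astype(np.float64)
    n = int(rate * chunk_seconds)              # samples per chunk
    out = []
    for start in range(0, len(data), n):
        chunk = data[start:start + n]
        if len(chunk) < n:                     # step 6: pad the last, short chunk with silence
            chunk = np.pad(chunk, (0, n - len(chunk)))
        spectrum = np.abs(np.fft.rfft(chunk))
        freqs = np.fft.rfftfreq(n, d=1.0 / rate)
        peak = freqs[spectrum[1:].argmax() + 1]        # loudest bin, ignoring DC
        amp = np.sqrt(np.mean(chunk ** 2))             # RMS level of the chunk
        t = np.arange(n) / rate
        out.append(amp * np.sign(np.sin(2 * np.pi * peak * t)))   # 1-second square wave
    out = np.concatenate(out)
    out /= max(1.0, np.abs(out).max())         # normalise to avoid clipping
    wavfile.write(out_path, rate, (out * 32767).astype(np.int16))

# squarify("input.wav", "output.wav")

As Finlay says, the interesting (and contestable) part is the single line that picks the peak; everything else is bookkeeping.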
Like every other idea ever thought up, a professional academic thought of that same idea first; and also published the provably-optimal way to implement it. Behold, A systematic construction method for spatial-varying FIR filter banks with perfect reconstruction. What you're describing is a sort of dynamic, nonlinear adaptive filter. Finlay's points are all valid: defining "average" frequency is actually a non-trivial task, and it might not be what the original poster actually had in mind. Here's an implementation of peak frequency detection, from the same author as the freely-available PASP book I linked earlier this week. Anyway, this can be done; in fact, for any selection-criteria, we can build a filter algorithm to perform the appropriate frequency discrimination, and a synthesizer to reconstruct the signal per your reconstruction-constraint (in this case, with constant frequency-spectrum over each one-second frame). Nimur (talk) 01:47, 12 January 2013 (UTC)[reply]
If I wanted to do something like what the OP was proposing, I'd simply count the number of zero crossings in the 1 second chunks and say that this is the average frequency. Given what is being proposed, it would provide something like an average and might give an interesting sounding result.--Phil Holmes (talk) 11:32, 12 January 2013 (UTC)[reply]
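A minimal sketch of that zero-crossing estimate, assuming the chunk is a NumPy array of samples: a roughly periodic wave at f Hz crosses zero about 2f times per second, so the crossing count gives a crude "average" frequency.

import numpy as np

def zero_crossing_frequency(chunk, rate):
    # Count sign changes; a tone at f Hz crosses zero about 2*f times per second.
    signs = np.sign(chunk)
    signs[signs == 0] = 1                      # treat exact zeros as positive
    crossings = np.count_nonzero(np.diff(signs))
    return crossings * rate / (2.0 * len(chunk))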

Here is the entire story behind this request; it will help to explain why I am asking this question. One day I was thinking about color field painting, a painting genre I like, and I started to wonder what a musical version of this genre would be like. So I tried to convert the characteristics of the color field painting genre into musical characteristics.
PS: Correct me if I am wrong on something here.
The characteristics of the painting genre, from wikipedia "Color Field is characterized primarily by large fields of flat, solid color spread across or stained into the canvas creating areas of unbroken surface and a flat picture plane."
-So, colour is the most basic part of painting. The most basic parts of sound would be timbre, amplitude and frequency.
-"Fields of flat, solid color" would be converted to a mono tone that has the same frequency, waveform and amplitude the entire time, or to pure silence.
-Converting the word "large" is hard. The microsound article says the sound object time scale starts at 0.1 seconds and the sample time scale ends at 0.01 seconds, so the sound object scale is 10 times larger than the sample time scale. I decided to take the sound object time scale (0.1 seconds) and multiply it by this ratio (10), which gives 1 second.
So, from that, here is how the musical version of this painting genre would work: the song would start with either a tone that has the same amplitude, waveform and frequency the entire time, or a silent part; this part would last at least 1 second. Then (if the song has more than one part) another tone or silent part would start in the same way, without any fade, and would also last at least 1 second, and the song would go on like this. Also, the song would be mono.
I was curious about this idea, and since I am not a musician, the idea of a program that would convert normal songs into my genre template came to mind, and I decided to ask here whether it would be hard to do or not. 177.157.209.171 (talk) 15:26, 14 January 2013 (UTC)[reply]

It's an interesting idea, but it's rather like trying to reproduce a painting by dividing it up into square inches and averaging the colour over each square inch, then reproducing it in inch-square blocks. It might sort of work for a few modern pictures such as Hans Hofmann's The Gate (and for a few pieces of modern music such as John Cage's 4' 33"), but most detail will be lost and the result will not be at all like the original for the vast majority of pictures or music. In reality, paintings are reproduced electronically by recording the light frequency and intensity at a much smaller scale (typically every 300th of an inch), and music is reproduced in CD format by recording the sound intensity much more frequently, and this single parameter serves to record volume, frequency and timbre.
How well does the average colour method classify genres? I suggest that musical genre would be better classified by recording volume across a series of frequency ranges for samples of one millisecond every second, but I'm not convinced that it would provide a reliable classification. Dbfirs 20:50, 12 January 2013 (UTC)
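For what it's worth, the square-inch analogy is just block averaging. A sketch, assuming the picture is already loaded as a NumPy array of shape height x width x 3 and with an arbitrary block size:

import numpy as np

def block_average(img, block=50):
    # Crop so each dimension divides evenly, then replace every block x block
    # patch with its mean colour (one colour per block).
    h = (img.shape[0] // block) * block
    w = (img.shape[1] // block) * block
    patches = img[:h, :w].reshape(h // block, block, w // block, block, -1)
    return patches.mean(axis=(1, 3))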
I don't care if the result doesn't end up like the original songs; I was just thinking about this program idea because I am not a musician, and with this program it would be easier to try the idea. Even if the result is not a perfect or near-perfect "insert song name here (colour field remix)", it's OK; this is not about doing remixes anyway, it's just about having something to start with. — Preceding unsigned comment added by 201.78.188.225 (talk) 21:35, 12 January 2013 (UTC)[reply]
Isn't this simply resampling at a much lower frequency, sometimes called downsampling? In the same way, a digital image can be resampled to have much larger pixels, which if done correctly represent some kind of average value for the area covered by that pixel. Astronaut (talk) 14:53, 13 January 2013 (UTC)[reply]
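For comparison, conventional audio downsampling (an anti-alias low-pass filter followed by discarding samples) is essentially a one-liner with SciPy; the factor of 4 and the file names here are arbitrary:

import numpy as np
from scipy import signal
from scipy.io import wavfile

rate, data = wavfile.read("input.wav")                     # hypothetical file name
down = signal.decimate(data.astype(float), 4, axis=0)      # low-pass filter, keep every 4th sample
wavfile.write("downsampled.wav", rate // 4, down.astype(np.int16))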
The OP's proposed method of averaging the frequency was very different from downsampling, and, as mentioned above, would produce a very different output. Downsampling works for pixels because averaging hue and saturation produces a lower resolution copy of the original. Sound is different because a CD-style sample is just the instantaneous intensity of the sound, without looking at frequency or timbre (waveform). This is why sampling for CDs has to be frequent for the result to be anything like the original. My suggestion was to analyse at least two of the three parameters in a short time interval, but I got the interval wrong if the sample is to include parameters other than volume. Perhaps analysing a fiftieth of a second of sound every ten seconds would give a better sample for analysis of genre because this would allow measurement of waveform (as well as volume) across a range of frequencies. Dbfirs 16:23, 13 January 2013 (UTC)
I don't know what you mean by "an MP3-style sample is just the instantaneous intensity of the sound, without looking at frequency or timbre". MP3 and all other modern lossy compression algorithms work in the frequency domain (using an MDCT). The reason you need a high sample rate for PCM audio is that the human ear is sensitive to frequencies up to around 20,000 Hz, and you need at least 40,000 samples per second to represent that as alternating high and low samples (and in practice you need somewhat more). -- BenRG (talk) 16:40, 14 January 2013 (UTC)[reply]
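A toy illustration of the sampling-rate point (the numbers are chosen purely for illustration): sampled at only 20,000 Hz, a 15 kHz tone produces exactly the same samples as an inverted 5 kHz tone, i.e. it aliases, which is why the rate must be at least twice the highest frequency you want to keep.

import numpy as np

fs = 20000                                   # sample rate: too low for a 15 kHz tone
t = np.arange(20) / fs                       # 1 ms worth of sample instants
high = np.sin(2 * np.pi * 15000 * t)         # 15 kHz tone, above the 10 kHz Nyquist limit
alias = -np.sin(2 * np.pi * 5000 * t)        # what it collapses onto: an inverted 5 kHz tone
print(np.allclose(high, alias))              # True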
Sorry, I was getting confused between CD-style sampling (Pulse-code modulation) and MP3 format. I've made alterations above to avoid confusing others. (Thanks, Ben, for the correction; my brain sometimes slips out of gear! ... and I can remember, long ago, when my ear used to be sensitive to frequencies of over 18 kHz, but now I struggle to detect 12 kHz.) MP3 coding does indeed sample frequencies, but in a much more sophisticated way than either I or the OP was proposing. Dbfirs 22:50, 14 January 2013 (UTC)