Wikipedia:Reference desk/Archives/Science/2010 August 4

Science desk
Welcome to the Wikipedia Science Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


August 4

Special relativity and Ender's Game

I just finished Ender's Game and have a question about the use of ultrahigh-speed trips to extend someone's life. Obviously this is a work of science fiction, but assuming the premise of Ender's Game is true, would such a thing work? I tried to read the special relativity article, but I didn't get enough information before it went into formulae and the like. If they needed Mazer Rackham (a general who was probably 40-60 years old) to be able to train new generals, and they knew they wouldn't need him for 50 years but that he would be dead by then, could they send him on a star cruise that takes up 50 years of everyone's time except his own, so that when he returns he's just 2 years older? How does one's speed affect one's physiology (cells, tissues, organs, etc.) so that they don't shut down when they ordinarily would (after 80 years, let's say)? Perhaps I'm just asking a basic question on special relativity -- I don't know -- but this book got me thinking about this. DRosenbach (Talk | Contribs) 02:11, 4 August 2010 (UTC)

The physics of the time dilation is spot on. One could certainly argue that getting a ship up to those speeds would be a spectacularly difficult challenge - but if you could, then this would work as advertised. The deal here is that time for the general seems to pass perfectly normally - it's just like someone pressed the 'fast-forward' button on the rest of the universe. His body chemistry/cells/tissues/organs and every other aspect of his life are perfectly normal. This isn't even a theoretical matter - astronauts have taken sensitive clocks on long space missions and measured this time distortion effect. Of course they aren't moving at anything like the speeds suggested in the book - and the time distortion is on the millisecond scale, not years. But the principle is well understood and quite in line with mainstream physics. SteveBaker (talk) 02:48, 4 August 2010 (UTC)
I'll add a 'me too' to SteveBaker's (as-usual-spot-on) response. Physicists are wont to say that "there are no privileged frames of reference" — in other words, the little bit of space inside a relativistic starship behaves no differently from a little bit of space at rest relative to the Solar System. There isn't any experiment that I could conduct inside the starship (at least, no experiment that didn't involve looking out the windows) that would tell me how fast I was going, nor how fast or slow my clock inside the ship would appear to an observer outside. On-board starship physics (and chemistry, and biology) are exactly the same as physics everywhere else. The cells in my body age normally, as far as the clocks on board ship are concerned. It's only when an observer looks in the window of my speeding starship and observes my apparently slowed clocks (and all the other sequelae of relativistic travel) that the magic happens.
As a matter of kinematics, I'll add as an aside that one year of acceleration at one gee (9.8 meters per second, per second) would take you just a hair (3%) over the speed of light — if we lived in a universe without relativity. Since there are relativistic considerations, our starship never quite manages to exceed the speed of light, but about a year at one gee acceleration gets us just about close enough for government work. After that year of acceleration Rackham's aging would be slowed to a crawl. (Incidentally, was that a year of acceleration measured from the outside, or a year as seen on shipboard? Those two are actually going to be quite different durations....) TenOfAllTrades(talk) 05:46, 4 August 2010 (UTC)
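As a quick sanity check on that one-gee figure, here is a minimal Python sketch; it is only the Newtonian (no relativity) estimate described above, and the Julian-year length is an assumption:

 # Newtonian (non-relativistic) speed after one year at 1 g
 g = 9.8                    # acceleration, m/s^2
 year = 365.25 * 24 * 3600  # seconds in one Julian year
 c = 299792458.0            # speed of light, m/s
 print(g * year / c)        # ~1.03, i.e. about 3% over c, as stated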
... and, agreeing with everything said above, I'll just add that there are serious considerations of energy requirements to maintain one "g" acceleration, especially when one gets within a few percent of the speed of light, but presumably a future technology has almost unlimited energy at its disposal, even on a spaceship. Dbfirs 06:23, 4 August 2010 (UTC)
For the specific scientific discussion of this, see twin paradox. It has been a classic thought experiment of special relativity since the 1910s, and has since been experimentally verified in a number of different ways. --Mr.98 (talk) 12:08, 4 August 2010 (UTC)

I haven't read the novel, but I have some fair knowledge of special relativity. If we neglect gravitational fields, and we need the general to age only 2 years while 50 years elapse on Earth, then using the time-dilation equation of SR we find that his spaceship must travel continuously for 2 years at a speed of 299552504.116 metres per second. That's just 239954 metres per second less than the speed of light. I have given you the idea. Now all you have to do is find a spaceship that can go at such a speed for 2 years!!! harish (talk) 15:59, 4 August 2010 (UTC)
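For anyone who wants to reproduce that figure, here is a minimal Python sketch of the standard time-dilation relation (tau = t/gamma); the last few digits differ slightly from the numbers above depending on the value of c and the rounding used:

 import math

 c = 299792458.0  # speed of light, m/s
 t_earth = 50.0   # years elapsed on Earth
 t_ship = 2.0     # years elapsed on board

 gamma = t_earth / t_ship             # required Lorentz factor: 25
 v = c * math.sqrt(1 - 1 / gamma**2)  # required cruise speed

 print(round(v))      # ~299552528 m/s
 print(round(c - v))  # ~239930 m/s short of light speed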

I am pretty sure he aged 8 years, not 2. Googlemeister (talk) 16:31, 4 August 2010 (UTC)
The acceleration would be pretty killer, but IIRC the book does make vague mention of technology for manipulating gravity and momentum being stolen from the Buggers in the First War, so that could probably be hand-waved well enough for the book's purposes. APL (talk) 16:51, 4 August 2010 (UTC)

From the topic, I was sure this was going to be about the ansible! APL (talk) 16:51, 4 August 2010 (UTC)

I wikilinked your ansible in case someone needs an article that explains what it is. Cuddlyable3 (talk) 21:45, 4 August 2010 (UTC)
I haven't read Ender's Game, but it seems a similar "delaying your aging with relativity" plot device is used in Joe Haldeman's The Forever War. The hero's lover spends some time travelling at relativistic speeds so that she doesn't age and die while waiting for him to return from the distant war. Astronaut (talk) 11:46, 6 August 2010 (UTC)

High levels of elements in the blood related to skin and eye color

Just out of curiosity: I recently learned that high levels of copper in the blood cause a copper-colored ring around the eyes, and, even more dramatically, that silver causes not only your eyes but your skin too to turn grey-blue. Are there any other elements, and maybe to a lesser extent compounds, that cause changes in coloration? 99.114.94.169 (talk) 03:05, 4 August 2010 (UTC)

This is most likely pseudoscience. See Iridology#Criticism. Dolphin (t) 03:11, 4 August 2010 (UTC)
Well, my question does not concern the color of the eye as a judge of health, but actual, well-documented conditions such as Kayser-Fleischer rings and Argyrosis — and not only in the eyes, but in skin color too as the elements build up in the body. 99.114.94.169 (talk) 03:36, 4 August 2010 (UTC)
Carotenemia, Argyria, Chrysiasis, Lycopenodermia. I'm sure there are others. Ariel. (talk) 03:56, 4 August 2010 (UTC)
Thanks, I should probably just google every color and see :P. Well, since we don't have a Wikipedia article on Lycopenodermia, maybe you can elaborate on the condition? 99.114.94.169 (talk) 04:02, 4 August 2010 (UTC)
Lycopenodermia is just like Carotenemia except it's from lycopene, and the color is more reddish. Ariel. (talk) 04:55, 4 August 2010 (UTC)

Mysterious movement of sediment in a wine glass

While on holiday in France recently, I happened to look into a wine glass which had been emptied some fifteen minutes earlier and not touched since. In doing so, I noticed that the sediment in the small amount of liquid that had gathered in the bottom was moving quite rapidly, in a roughly toroidal pattern (this was probably due to the raised dimple in the centre providing an obstruction). The glass was not in the sun, the table was hefty and not easily moved, and there were no obvious sources of vibration anywhere around. My question then is: where was the energy for this movement coming from? At the time I didn't pay more than a few moments' attention, but now I find my curiosity is still piqued. I have tried to research this myself, but where to start?

Does anyone have any ideas, based on the scant information above, as to what was going on in that glass? The wine was an excellent Bordeaux red, by the way!

Thanks Mark David Ward (talk) 08:16, 4 August 2010 (UTC)

Thanks, this sounds like something I can investigate. --Mark David Ward (talk) 12:22, 4 August 2010 (UTC)

How much smoke and smog would harm an engine?

Would regular driving in smoke from forest fires (say, 200 meters visibility - darned heat wave) harm a car's engine? VW 1.9 diesel, mildly turbocharged, if it matters. No one knows when the fires will settle down... East of Borschov 08:32, 4 August 2010 (UTC)

You might need to replace the air filter ahead of schedule, but otherwise I wouldn't think it would make any difference. 207.47.164.117 (talk) 10:47, 4 August 2010 (UTC)
I was close to the bushfires in Victoria, Australia 18 months ago, which took 173 lives and burned for many weeks afterwards. There have naturally been in-depth investigations and many words written about them since, and I drove through the smoke to work for a month. I heard nothing at all about possible engine damage. My car is fine. HiLo48 (talk) 12:31, 4 August 2010 (UTC)
Volcanic Ash -- Effects on Transportation from the USGS suggests frequent oil changes, as well as more frequent attention to other things like seals and gaskets, improved air intake filters, and so forth. Volcanic ash is much denser than your average smog and smoke. Nimur (talk) 18:26, 4 August 2010 (UTC)
Volcanic ash bears no resemblance to smoke. Your (implied) comparison is like looking at the result of sandblasting your car and extrapolating to figure out what water from a garden hose will do. --Carnildo (talk) 00:36, 5 August 2010 (UTC)
Yeah - absolutely.
  • Volcanic ash is made of incombustible silicates and is incredibly abrasive - when it's sucked into the cylinder and heated in a combustion cycle, it melts - when the cylinder pressure drops, it condenses back onto the colder cylinder walls - and you have a nice solid rock coating the inside of your cylinder. It's like you coated your cylinder with sandpaper! That wrecks the piston rings in no time flat! The advice from the USGS to change oil frequently is an effort to flush out this sand-like stuff and give your piston rings the best possible chance. But the only real defense is a really good air filter (something after-market - not the crappy paper ones that came with your car). Because of the propensity for the air filter to clog, you need to either change it or clean it very frequently.
  • Smoke is mostly water vapor and unburned carbon - which your engine will happily burn up if it gets in. It's also soft, so the engine can clear it out easily. Modern gasoline often contains detergents that are there specifically to help the engine get rid of the carbon it might generate as it burns gasoline less than 100% efficiently - and this helps a lot to clear the particulates from the smoke.
  • Smog is smoke and fog - and we all know that our cars run just fine on a foggy day. (Actually, fog can improve your engine's running because the water mist slows down the combustion and makes it more even and complete...but the effect is admittedly minimal.)
I suppose that if the nearby fires consumed a large enough fraction of the oxygen in the air, then your car's performance might be kinda poor - but it still wouldn't wreck the engine. SteveBaker (talk) 02:25, 5 August 2010 (UTC)
However, similar to volcanic ash, the smoke from the Russian wildfires has reached the stratosphere. ~AH1(TCU) 15:33, 7 August 2010 (UTC)

Spark plug disrupter ray

During World War II the Japanese scientist Hidetsugu Yagi apparently was working on some kind of "beam ray" (i.e., a radio transmitter) that could stop automobile engines by disrupting the firing of their spark plugs. He apparently (according to some Congressional testimony from the 1940s) could get it to work on some Fords if the hood of the engine was up, but when the hood was down it didn't work, presumably because of reflections and so forth. He stopped working on it at that point.

My questions:

1. What's the likely explanation for his "success"? How would this really operate? Would the principle only apply to 1940s cars, or would it work on modern cars?

2. More speculatively, are there any technologies or advances since the 1940s that would lead one to expect us to be able to improve upon this sort of thing today? That is, Yagi found only very limited success at the time. Is there reason to think someone could do better now, in terms of whatever problems Yagi was presumably coming up against?

Any thoughts would be appreciated. I am not in the slightest an electrical engineer, so explanations that can be done without reams of equations would be welcome. --Mr.98 (talk) 13:31, 4 August 2010 (UTC)

I doubt this was real. However, a strong electromagnetic pulse may disrupt modern cars, seeing how much electronics they contain. There's quite a lot to read in that article. EverGreg (talk) 14:31, 4 August 2010 (UTC)
Well, it was reported to Congress by Karl T. Compton in October 1945, who was part of the investigation into Japanese scientific work during WWII, so it probably isn't wholly false. It certainly wasn't effective, as is clear from the description above, and was abandoned. It seems unlikely to me that the whole story is fabricated—Compton was no scientific rube, and Yagi was a pretty serious guy himself. (I am getting this information from the transcript itself, not secondhand through some kind of Tesla-nut website.) --Mr.98 (talk) 15:03, 4 August 2010 (UTC)
Electromagnetic pulse guns like this one get reported in the press from time to time. Disrupting the ignition system on a pre-electronic car would take a lot more energy than crashing the electronics in a modern car. --Heron (talk) 18:29, 4 August 2010 (UTC)
Cars from that era are incredibly chunky things - I know a guy with a fully restored c.1945 jeep - and the electrical system is so insanely simple that it's hard to believe you could do much to hurt them. I suppose you could induce a large voltage in the ignition coil - but the problem is that the wiring around it is designed to cope with huge voltages - so at best you might just cause a backfire or some other kind of 'hiccup'. Even a car from the 1960s would be immune to most of those kinds of tricks. It's not until the era of electronic ignition that you could do something nasty to the low-voltage systems and stop the car. SteveBaker (talk) 22:03, 4 August 2010 (UTC)
Here's the exact quote from Compton:
Well, we talked to Professor Yagi, and this is what we found: Some years ago, in talking to Dr. Coolidge at our General Electric Co., Dr. Yagi had suggested that it might be possible to stop the action of an internal-combustion engine by focusing an intense beam of an electro-magnetic wave, which would cause sparking and interrupt the operation of the spark plugs. When he got back to Japan and he tried it, he said he could make it work on a Ford car if the hood was up, but if the hood was down, unfortunately the metal shielding prevented its work. [At what range did he say he could make it work?] Oh, 30 or 40 yards.
He then goes on to say that Yagi went from there to work on directed energy weapons (e.g. death rays), and could make them work to some degree for killing rabbits, but power consumption was enormous and even though it could kill rabbits, it couldn't kill muskrats. (I am not making this up!) Compton says the idea was dumb because you could use a rifle more effectively than whatever ray they came up with. Compton then says that they doubted the Japanese could get the power output they claimed anyway (they were claiming they could produce an oscillator generating 80 cm radio waves with 200 kilowatts of continuous power output, but the oscillator manufacturer said they could get 40 kW at best).
Does any of this sound feasible in the slightest? --Mr.98 (talk) 23:05, 4 August 2010 (UTC)
H. Grindell Matthews' "death ray" device could supposedly stop motorcycle engines. It appears to have been a Tesla coil discharge which was conducted via an ultraviolet spotlight beam. If so, then it could be defeated by metal shielding. —Preceding unsigned comment added by 128.95.172.173 (talk) 00:47, 7 August 2010 (UTC)

This story reminds me of the "engine-stopping ray" that Germany was thought to be developing in the late 1930s. Here's the story as told by Reginald Victor Jones in his book Most Secret War:

There was also, incidentally, the story that whatever was in the tower at the summit [of the Brocken, i.e. the Sender Brocken — Gdr] was able to paralyse internal combustion engines. As usually reported, the phenomenon consisted of a tourist driving his car on one of the roads in the vicinity, and the engine suddenly ceasing to operate. A German Air Force sentry would then appear from the side of the road and tell him that it was no use his trying to get the car going again for the time being. The sentry would, however, return and tell him when he would be able to do so. [...]
It was from my contact with one refugee that I found at last the explanation for the stories about the engine-stopping rays. This particular refugee had been an announcer at the Frankfurt radio station, and I therefore wondered whether he might know anything about the work on the nearby Feldberg television tower that was said to be one of the engine-stopping transmitters. When I told him the story he said that he had not heard it, but he could see how it might have happened. When the site for the transmitter was being surveyed, trials were done by placing a transmitter at a promising spot, and then measuring the field strength that it would provide for radio signals in the areas around it. Since the signals concerned were of very high frequency, the receivers could easily be jammed by the unscreened ignition system of the average motor car. Any car travelling through the area at the time of the trial would cause so much interference as to ruin the test. In Germany, with its authoritarian regime, it was a simple matter to decide that no cars should run in the area at the relevant time, and so sentries were posted on all the roads to stop the cars. After the twenty minutes or so of a test the sentries would then tell the cars that they could proceed. In retailing the incident it only required the driver to transpose the first appearance of the sentry and the stopping of the engine for the story to give rise to the engine-stopping ray.

Gdr 13:24, 5 August 2010 (UTC)[reply]

Cloning Einstein

Let's say some time in the future we have perfected human cloning. Do we have enough of Einstein's DNA to make a clone of him? 148.168.127.10 (talk) 13:35, 4 August 2010 (UTC)

Apparently not. The book Einstein: His Life and Universe says,
... the way Harvey had embalmed [Einstein's] brain made it impossible to extract usable DNA.
Sorry. --Sean 13:47, 4 August 2010 (UTC)
Apparently Einstein's eyes were extracted as well, and put in a safe deposit box.[1] One wonders how they were preserved. Googling around, there are some people who claim to have locks of Einstein's hair, which is not an ideal place to get DNA from, though. --Mr.98 (talk) 15:09, 4 August 2010 (UTC)
Remember, an adult is determined by far more than just their DNA. A clone of Einstein wouldn't necessarily be a genius, and he wouldn't necessarily have any interest in physics. --Tango (talk) 16:27, 4 August 2010 (UTC)
It's likely he'd have that hair, though. Staecker (talk) 16:55, 4 August 2010 (UTC)
Not really. His hair looked like that because he wasn't interested in making it look like anything else. If the clone chose to cut it short or tie it back or straighten it, it would look completely different. --Tango (talk) 17:27, 4 August 2010 (UTC)
Using modern polymerase chain reaction methods, you really don't need more than one intact DNA molecule - so this isn't about whether there is "enough" DNA - it's whether there is "any" DNA left. I think it would be surprising if we couldn't track down even the tiniest amount of the stuff. However, I agree with Tango - your Einstein clone might be completely useless at physics. Einstein certainly had a flair for physics - but in almost every other respect of his life, he was a total jerk/loser. It's not that he had general intelligence - it's that he had it all focussed in one incredibly narrow field. If you read his biographies, it's hard not to be horrified at his dealings with his wife, kids, relatives, etc. It's incredibly unlikely that there is a "physics expertise" gene - but it might well be that there is a "single-minded narrow-skill-set obsession" gene. Clones of Einstein might just become fanatical stamp collectors for all we know. SteveBaker (talk) 21:56, 4 August 2010 (UTC)
You need one intact DNA molecule per chromosome. Having some, but not enough, DNA would mean you either have non-intact molecules or some missing chromosomes. --Tango (talk) 00:43, 5 August 2010 (UTC)
Oh! Yes, of course, silly me! But still - it's not much. I presume it wouldn't have to be intact either - the first thing the gene sequencers do is chop the stuff up into smaller bits. It only matters that somewhere in your sample of fragmented sections you have enough pieces to make up the entire genome - and in long enough sections that you have overlaps that tell you how to stitch them back together again. Even if a few bits were totally missing, the odds are good that those sections wouldn't code for anything interesting about Einstein - so you could probably fill in the bits that were missing with sections of someone else's genes without much risk of significant non-Einstein bits cropping up in the finished clone. SteveBaker (talk) 02:10, 5 August 2010 (UTC)
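As an aside on that "stitch them back together" step: sequence assemblers look for overlapping ends between fragments and merge them. Here is a toy Python sketch of the idea (grossly simplified; real assemblers have to cope with read errors, repeats, and millions of fragments, and the sequences here are made up):

 # Greedy merge of two reads that share an overlapping end.
 def merge(a, b, min_overlap=3):
     # Try the longest possible overlap first, down to min_overlap.
     for k in range(min(len(a), len(b)), min_overlap - 1, -1):
         if a[-k:] == b[:k]:
             return a + b[k:]
     return None  # no sufficient overlap found

 print(merge("GATTACAT", "ACATTAGG"))  # -> GATTACATTAGG (4-base overlap)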

Diamonds are forever (?)

This problem needs someone who has in-depth knowledge about diamonds.

To make my point clear I have to summarize a story I read lately. Two Englishmen find a diamond, a rather big diamond (so big that they even compare it to the Koh-i-Noor). The first one who discovers it knows his stones, i.e. can tell beyond doubt if a diamond is real. He inspects it, first with the naked eye, then with a lens, and confirms it's genuine. The second also falls in line. Then both go to a jeweler, who also tells them there is no doubt about it. It is real, it is big. He is willing to buy it for £10,000. But one of the sellers becomes stubborn and expresses his disbelief about the value of the stone. The jeweler says that he has been in this business for years and there is no room for doubt. But the idiot insists upon a test. So just to satisfy his whim the jeweler switches on a grinding wheel and rubs the stone on it. Immediately the stone shows "faults". The next moment it is lying there, valueless as dust!

Then the author explains: though the diamond was real, it had lain under fire and pressure for so long that it became "splintered". That is, on inspection by an expert it will look real (and strictly speaking is real), but at the slightest touch it will collapse into its true state, which is nothingness.

Isn't diamond the hardest substance? Is it really as the writer thinks? Or is it pseudoscience?  Jon Ascton  (talk) 14:27, 4 August 2010 (UTC)

I know nothing about whether faults could show up only after grinding, but as for the hardness of diamonds, I can tell you that tools used for grinding and cutting diamonds are often made from diamond themselves in order to be hard enough to affect the diamond. --Tango (talk) 16:30, 4 August 2010 (UTC)
Diamond has the highest hardness of any bulk material (Mohs 10). The Wikipedia article Diamond notes that the most time-consuming part of cutting a gemstone is the preliminary analysis of the rough stone. It needs to address a large number of issues (see article) and can last years in the case of unique diamonds. It is possible that a major flaw would not be discovered in the initial rough state of a diamond, though the sudden reduction of the value of a Koh-i-Noor-like diamond from £10,000 to nothing sounds incredible.
I have found no references to splintered diamonds apart from a diamond trader who uses that name.
Diamonds do not last forever, because they can be burned. Cuddlyable3 (talk) 17:20, 4 August 2010 (UTC)
Sounds like nonsense to me (not pseudoscience - nonsense) for a number of reasons:
  • Diamonds are not tested by grinding - that's crazy. They are tested by thermal conduction.
  • Faults in the diamond would show up if you looked at it with a magnifying glass. For the scenario given, the diamond would have to be powder on the inside, with a thin sheet of diamond holding it together - that's just nuts. And it would be very visible anyway. It's not like a Prince Rupert's Drop (very cool, watch the video), where there are tremendous internal stresses covered with glass. Diamonds just aren't like that: glass is amorphous, but diamonds are crystalline. An amorphous "diamond" would be more like graphite, i.e. black.
  • Was the diamond they found rough or cut? Probably cut, since a jeweler was going to buy it. If it was cut, that means it had already been near a grinding wheel, so why would doing it a second time cause such a catastrophe?
Ariel. (talk) 17:57, 4 August 2010 (UTC)

What if there were some perchloric acid (or other oxidant impurities) that had somehow gotten into the diamond, and the temperature conduction test sets off a runaway reaction at a low temperature? John Riemann Soong (talk) 19:22, 4 August 2010 (UTC)

I don't think perchloric acid can oxidize carbon; try Piranha solution. --Chemicalinterest (talk) 19:50, 4 August 2010 (UTC)
Really? Won't it just make a lot of CO2? (As for the metastability of the perchlorate -- maybe it was trapped inside a mineral impurity that gave the diamond a valuable coloured hue.) John Riemann Soong (talk) 20:37, 4 August 2010 (UTC)
But it's CO2 made from diamonds... Wowwwwww... --Chemicalinterest (talk) 21:42, 4 August 2010 (UTC)

(original question) It's possible that the stone was a 'mash-up', e.g. made of smaller diamonds glued together with optical paste - that would fall apart... however, I don't think there is an optical paste with the same refractive index as diamond (?) 77.86.119.98 (talk) 19:51, 4 August 2010 (UTC)

You might consider a 'scratch test' - try to scratch it with something a little less hard than diamond. If you were in a jeweller's shop, you could grab a handy topaz or corundum and just try to scratch the diamond with it... or try to scratch a topaz with the supposed diamond. Topaz and corundum are pretty hard stones in their own right. It's even possible that the grinding wheel in the story was a corundum wheel (they are pretty common - and a lot cheaper than diamond wheels) - but why turn the thing on? You could just drag the diamond across the wheel and see if it was scratched. I guess the theory in the story was that this diamond had many flaws - which would obviously be weaker and maybe cause it to break... but crumbling to dust just doesn't seem likely. A large diamond that was that massively flawed wouldn't be worth much anyway. SteveBaker (talk) 21:46, 4 August 2010 (UTC)
With a disclaimer that I'm far from an expert on diamonds, the story seems ridiculous to me. Any fracture plane inside a diamond will produce internal reflections, so a diamond fractured so extensively would look like crap quartz -- translucent rather than transparent. Looie496 (talk) 01:11, 5 August 2010 (UTC)
Your story sounds a little bit like The Diamond as Big as the Ritz. ~AH1(TCU) 15:31, 7 August 2010 (UTC)

No, it is actually a very old anonymous story.  Jon Ascton  (talk) 15:39, 7 August 2010 (UTC)

Eyeglass prescription

I know how to tell a nearsighted prescription from a non-nearsighted prescription by looking through the lenses. Is it possible to estimate the prescription (of any kind) of a pair of eyeglasses by looking through the lenses? If so, how? —Preceding unsigned comment added by 68.76.147.53 (talk) 15:10, 4 August 2010 (UTC)

Yeah. The more negative the number, the more concave the lens will be; the more positive, the more convex. I think nearsighted prescriptions are always concave (negative) while farsighted ones are always convex (positive). -- Jon Ascton  (talk) 15:34, 4 August 2010 (UTC)
Eyeglass prescription#Lens power describes how the prescription strength (in diopters) relates to the focal length of the lenses. Briefly, the strength in diopters is just one over the lens' focal length in meters. If you are able to estimate the focal length of the lens, then invert to get the prescription's approximate strength. (This assumes spherical lenses with no correction for astigmatism; it gets more complicated if you want to be able to extract a significant cylindrical contribution as well.) TenOfAllTrades(talk) 15:36, 4 August 2010 (UTC)
Hyperopia is the medical name for longsightedness, the opposite of nearsightedness (myopia). The eyeglass lenses prescribed to correct hyperopia are easily recognized as magnifying glasses. Cuddlyable3 (talk) 16:48, 4 August 2010 (UTC)

I don't need to estimate the astigmatism correction, but it'd be nice if I could tell whether there is some correction for it. How could I do that? Thanks 68.76.147.53 (talk) 15:55, 4 August 2010 (UTC)

An astigmatic lens has different focal lengths for vertical and horizontal lines. This diagram[2] demonstrates views through the lens. Cuddlyable3 (talk) 16:55, 4 August 2010 (UTC)
(This only works for farsighted prescriptions.) Using a distant lamp as the source (you can use the sun, but don't burn anything), find the distance from the lens to the wall where the projected image is clear and sharp, in meters. One over this number is the power in diopters. Ariel. (talk) 17:31, 4 August 2010 (UTC)
(original question) Simply put, lenses for nearsighted people (myopia) make things look smaller. For the opposite type, longsightedness (hyperopia), the lens is like a magnifying glass: close up to things it will magnify, or further away it will invert (turn upside down) things you look at through it. A lens for a nearsighted person won't do this. For people with weak nearsightedness it may be difficult to tell by looking at the lens. 77.86.119.98 (talk) 19:48, 4 August 2010 (UTC)
For a quick evaluation of eyeglasses: Hold them between you and a wall or desktop and move them back and forth. If the lenses have no correction, the background will not move. If they correct for nearsightedness, the image will move in the same direction you move the lens. If they correct for farsightedness, the image will move in the direction opposite from your movement of the lens. Now to check for astigmatism correction: Look through one lens and rotate it. If the image changes in its vertical and horizontal size with rotation, there is astigmatism correction. In each case, a stronger correction produces a larger effect. Bifocals/trifocals will show a gradation from top to bottom in the lens strength. In side-to-side movement, if the effect varies from top to bottom in a lens but no abrupt change is seen, then they are "progressive bifocals". For a simple lens correcting for farsightedness, you can form an image on paper of a distant light and measure the distance from the lens to the paper to determine the focal length; then the diopter rating is 1/f, where f is the focal length in meters. With such a known positive lens, you can stack a weaker negative (nearsighted correction) lens next to it and, from the combined focal length, calculate the diopters of the negative lens. Despite what was said above, the curvature of the outside of the lens is largely unrelated to its power, since eyeglasses use meniscus lenses, with the inner and outer surfaces both curved to differing degrees. Edison (talk) 03:05, 5 August 2010 (UTC)
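To make the arithmetic above concrete, here is a minimal Python sketch of the 1/f rule and of the lens-stacking trick Edison describes; the focal lengths are made-up illustrative values, not measurements, and the thin-lens approximation is assumed:

 # Thin-lens sketch: power in diopters is 1 / (focal length in meters),
 # and thin lenses held in contact simply add their powers.
 def power_diopters(focal_length_m):
     return 1.0 / focal_length_m

 p_plus = power_diopters(0.40)  # known positive lens focusing a distant lamp at 0.40 m: +2.5 D
 p_pair = power_diopters(0.80)  # pair (positive + unknown negative) focusing at 0.80 m: +1.25 D
 p_minus = p_pair - p_plus      # so the unknown negative lens is about -1.25 D
 print(p_plus, p_pair, p_minus)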

Positive and Negative Overlap

Positive overlap means overlap between two orbital lobes where the wave function is positive, right? Since on one lobe of the px orbital the wave function is positive and on the other it is negative, there is an equal chance that, in two atoms about to bond with each other, the positive lobe of one atomic orbital may overlap with either the positive or negative lobe of the other atom to form a positive or negative overlap, right? So is there an equal chance of forming a positive or negative overlap for a given pair of atoms? I am now just in Class eleven and I know only basic quantum mechanics, so please explain clearly... Thank you. —Preceding unsigned comment added by 117.193.236.184 (talk) 15:39, 4 August 2010 (UTC)

The actual sign is not a real physical property; it's just a mathematical artifact of the way the orbitals are described. However, the sign does allow one to explain certain bonding types. First, remember that the sign itself isn't a real thing, so the actual "+" or "–" is arbitrary, but the relative sign ("same" vs "opposite") is a viable comparison. You have to be consistent, but you can pick an arbitrary starting point. :) So positive overlap (a bonding molecular orbital) means the "same sign" (both positive lobes, or both negative lobes) on the two atoms, whereas an antibonding molecular orbital (a high-energy thing that tends to make the atoms move apart rather than stay bonded) is "opposite signs". So if you're describing bonding, you pick the same signs for the lobes that overlap, and there's your stable electronic state. And once you have that, the "other way" is the unstable state. Sometimes a graphical representation (using arbitrary colors) helps avoid getting misled by the actual signs of the lobes - just pick two colors. See Molecular orbital diagram for some more information, and ask more if you get stuck. DMacks (talk) 19:30, 4 August 2010 (UTC)


Positive overlap usually means that the lobes have the same sign, so that the combined wavefunction is greater than either, rather than less, as happens when they have different signs. 77.86.119.98 (talk) 19:35, 4 August 2010 (UTC)
In terms of chances - take two H atoms, each containing one electron: the probability of either atom having a given sign of the 1s orbital is nominally 50%... so yes. However, in a magnetic field the orbitals of "+" or "-" sign are split in energy, so the probability is not 50%.
When the "+" and "-" forms of the same orbital have the same energy, one can interconvert to the opposite sign with no barrier - effectively creating the 50:50 chance you describe.
Since the orbitals can change their sign under normal conditions, the 50:50 probability of forming a bonding or antibonding orbital is obscured - the formation of a bonding orbital is favoured energetically - thus when 2 H atoms meet, the probability of forming a bonding orbital is more than 50% (in zero magnetic field the "+" and "-" orbitals are practically equivalent). 77.86.119.98 (talk) 19:43, 4 August 2010 (UTC)


So, finally, it means that the chance of two opposite-signed lobes approaching each other is 50-50, but since they are degenerate, they exchange their signs... so the probability that a positive overlap will be formed when two H atoms come close to each other is near 100%, is it? And is this what we call a bonding molecular orbital and an anti-bonding molecular orbital? harish (talk) 01:17, 5 August 2010 (UTC)

Yes - positive overlap results in a bonding orbital; negative = antibonding. The probability is better than 50%; I can't be more specific than that - the product H2 has isomers (see Spin isomers of hydrogen), which very slightly complicates things. If two hydrogens with arbitrary opposite orbital signs approached each other, they'd bounce off each other and not bond - this is a possibility - someone else may know how to better calculate the chances, but I guarantee it is better than 50% for the reasons given above. 77.86.119.98 (talk) 02:17, 5 August 2010 (UTC)
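For the curious, here is a minimal Python sketch of the textbook LCAO picture being discussed, using the standard analytic overlap integral for two hydrogen 1s orbitals in atomic units; the bond length is the usual ~1.4 bohr equilibrium value, and this only illustrates why the same-sign and opposite-sign combinations differ, not the bonding probabilities discussed above:

 import math

 def overlap_1s(R):
     # Analytic overlap S of two hydrogen 1s orbitals a distance R apart (atomic units)
     return math.exp(-R) * (1 + R + R**2 / 3)

 R = 1.4            # approximate H2 equilibrium bond length, in bohr
 S = overlap_1s(R)  # ~0.75: a large positive overlap

 # Normalization of the same-sign (bonding) and opposite-sign (antibonding)
 # combinations (phi1 +/- phi2) differs because of the overlap term:
 N_bonding = 1 / math.sqrt(2 * (1 + S))
 N_antibonding = 1 / math.sqrt(2 * (1 - S))
 print(S, N_bonding, N_antibonding)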

Expense of oscilloscopes

I was wondering why oscilloscopes, even USB oscilloscopes, are so expensive. Is it a case of charging what the market will bear, or is the consumer base so small that it is necessary to charge this much in order to make the business viable? How complex are they in comparison to a television? I don't study electronics but just wanted to make my own dynamo-powered bike lights and learn something as I go. ----Seans Potato Business 16:03, 4 August 2010 (UTC)

See the Wikipedia article Oscilloscope. It is a complex measuring instrument that needs accurate calibration. It is intended to be rugged, portable, and to have a longer life than a domestic TV. It has many more user controls. Very few of its components are those that are mass-produced for TV production. It contains beam deflection circuits that are capable of operating at a wide range of different scan frequencies, compared to a TV that has only 2 scan frequencies. All these factors contribute to its relatively high price, as well as the small production volume that the OP has noted. Cuddlyable3 (talk) 16:38, 4 August 2010 (UTC)
It's also a much more specialized piece of equipment. Something designed to sell ten million units will be cheaper than something designed to sell ten thousand units, all else being equal.
However, if your needs are simple, there are pretty cheap scopes. Here is one with the form factor of an iPod for $99. And here is a kit for $60. You might be able to find these slightly cheaper elsewhere. However, notice that they are single-channel and lack advanced features like FFT. APL (talk) 16:45, 4 August 2010 (UTC)
You can also use the audio input of your computer for nothing, though the top frequency is limited by the sampling rate (usually about 200 kHz at most), and you have to calibrate it yourself. Inputs shouldn't be over about 0.5-1 V, but resistors (for a voltage divider) are cheap - if you know what to do. There are free programs that convert the computer into a simple oscilloscope, e.g. http://www.zeitnitz.de/Christian/scope_en
On the other hand... all scientific instruments are expensive - a laboratory sonic bath supplied by a laboratory supply firm will typically cost 10x what an equivalent mass-produced sonicator costs. The reasons for this are purely economic. Sf5xeplus (talk) 17:39, 4 August 2010 (UTC)
(ec) A few reasons why scopes (and other test equipment) are expensive. First, compared to consumer devices, test equipment like oscilloscopes is sold at very low volume. Economically speaking, this means that every unit must be sold at a higher margin, in order to amortize the costs of engineering, manufacturing, and distribution (which are the same or greater than for an "equivalently complex" high-volume consumer device like an HD TV). Furthermore, the costs of designing and manufacturing each unit are much higher than for an "equivalently complex" consumer device. The thing to keep in mind is that for most purposes, an oscilloscope must be extremely high quality. For example, on a consumer device like a music player, if the audio amplifier is out of spec by 0.2 dB, nobody cares; you adjust the volume knob slightly and move on with life. If a company sells oscilloscopes that are out of spec by 0.2 dB, because it is test equipment, this is totally unacceptable. Every single voltage, every single frequency, every indicator knob must be exactly to spec out to several decimal places, because the oscilloscope (or the multimeter, or the network analyzer, ...) is the device that is used to calibrate all the other devices. So it has to be the best piece of equipment on the bench. It can't be flaky, it can't have strange voltage spikes or frequency spurs or unwanted parasitics; and particularly for a 'scope, it's impossible to "hide" these sorts of parasitics "out of band", because a scope will operate over a very wide range of operating conditions (e.g., voltage, frequency, and even lab conditions like temperature and humidity). The design is therefore much more complicated. You can buy a cheap scope, or build a scope as a hobbyist project - but it will be nigh impossible to match the quality, across all ranges of operation, of a $1000 or $10,000 Agilent bench scope, $50,000 wide-band scope, or $350,000 mobile-telephony network analyzer with protocol-stack decoding. The way I have heard these outrageous product costs explained is something to the effect of "without this equipment, an entire team of 20 RF engineers are useless, and their cumulative salaries are much more expensive than the gear that they require to do their jobs." Nimur (talk) 17:44, 4 August 2010 (UTC)
That's silly! I need a keyboard to do my job, but I can get one for $10! APL (talk) 19:24, 4 August 2010 (UTC)
A little simplistic perhaps, but the general sentiment surely holds. If there is no advantage to the $350k scope then sure, they should go with the cheaper one. But if the more expensive scope reduces the number of person-hours, or does things that they need that the other one can't, then the price may be worth it.
Your keyboard example is interesting. A typist or someone who uses the keyboard a lot may find the $10 keyboard doesn't suit them well and slows down their typing (there may also be OSH concerns, something which doesn't fit the scope example so well), so getting the $100 (or whatever) keyboard may be worth the extra cost given the increased productivity.
I've seen similar arguments made, e.g. for large monitors in the past when they were still relatively expensive (they increase productivity, so even if one costs $2000 it may make up for it fairly quickly), or for software, and while I'd agree that the analysis is sometimes a little simplistic, I would say there is some validity in the reasoning. Even if you can 'make do' with cheaper equipment, the more expensive equipment could make up for it in the long run.
Note that for the keyboard example there may be some cases, e.g. when you need wireless, or perhaps in a hospital where you need to worry about contamination, or in an extreme environment, where the $10 keyboard is really not up to the job. I suspect there are some similar cases where the $1000 scope, or even the $50k scope, simply won't do.
Of course the price of even fancier keyboards is so small, relatively speaking, that if you're just talking about a small number it's not really going to be significant. And the room to innovate on a keyboard is a lot smaller, so the keyboard comparison somewhat falls through. There is, I presume, a very big difference between the $1000 and the $350k scope, and the cost difference even for a largish business is not so small. But it seems some businesses either are very stupid, or do feel that the extra features and the associated productivity or capability make up for the cost.
Nil Einne (talk) 16:33, 5 August 2010 (UTC)
Slashdot was just talking about this. Here is one for $189 Canadian. Ariel. (talk) 17:46, 4 August 2010 (UTC)
It really depends on what you need. There are programs for the PC ($0) that use the sound card to capture voltages and display them in an oscilloscope-like form. Sadly, they only work for voltages and frequencies that the sound card can handle - and you still need a 'probe' with the right impedance, etc., to do a good job of picking up the signal without disrupting it. Second to that are some hardware gizmos that you can plug into a PC or into an Arduino board with an LCD display that do a somewhat better job. But those things are still going to be limited to maybe 8 to 10 bits of precision and maybe 100kHz of frequency. To see faster signals, there is really no substitute for a proper (and expensive) scope. They are the price they are because they don't sell many of them - and that's the price the market can stand. SteveBaker (talk) 21:27, 4 August 2010 (UTC)
I don't know if I would go for a sound card, but a pretty cheap solution is to get a basic A/D card and set up a software oscilloscope. Also, I expect there is a strong market in oscilloscopes on eBay (not that I've actually looked or anything). They've been around for decades and must pop up whenever an electronics repair shop goes out of business. Looie496 (talk) 01:05, 5 August 2010 (UTC)
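As an illustration of the sound-card approach mentioned above, here is a minimal Python sketch; it assumes the third-party sounddevice, numpy, and matplotlib packages are installed, and the usual caveats apply: uncalibrated amplitude, audio bandwidth only, and never feed a sound card more than line-level voltages without a divider:

 import numpy as np
 import sounddevice as sd
 import matplotlib.pyplot as plt

 fs = 48000       # sample rate, Hz (usable bandwidth is below fs/2)
 duration = 0.05  # seconds of signal to capture

 data = sd.rec(int(fs * duration), samplerate=fs, channels=1)
 sd.wait()  # block until the capture is complete

 t = np.arange(len(data)) / fs
 plt.plot(t * 1000, data[:, 0])  # uncalibrated amplitude vs time
 plt.xlabel("time (ms)")
 plt.ylabel("amplitude (uncalibrated)")
 plt.show()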

"Eco" water bottles have more BPA? edit

At work, some health-conscious types have warned me about water bottles. They say the new thinner, squishy "eco" water bottles contain higher amounts of bisphenol A (BPA) than the older, thicker bottles. They say that eco water bottles are formed by injecting molten plastic (like normal bottles) but are also filled at the same time, so the plastic molding and water filling take place simultaneously. The rapid cooling of the hot plastic supposedly releases more BPA into the water? Or something like that. They also mention that eco water bottles that have been sitting in sunlight have raised BPA levels due to some reaction with UV light. It sure sounds plausible to a non-science type. Is there any truth to any of this? --70.167.58.6 (talk) 18:45, 4 August 2010 (UTC)

The presence of BPA does not have anything to do with the "squishiness" of the plastic; what matters is the type of plastic it is. For instance, polyethylene (HDPE, LDPE, PE, etc.) and polypropylene (PP) are both squishy but should not contain the compound, since it's not manufactured with it or combined with it industrially. BPA, however, is present in polycarbonate ("PC", the clear Nalgene water bottles), since it is a monomer of certain types of this kind of plastic left unreacted in the production process. It is also added to PVC (faux leather, notebook binder plastics, pipes) as a plasticizer (to soften plastic). Check what type of "eco" water bottle you have to see if it's likely you have something containing larger amounts of BPA. -- Sjschen (talk) 20:56, 4 August 2010 (UTC)

The "squishiness" does matter because BPA is a plasticizer - meaning it's there to make the plastic more squishy. Some plastics need it, others don't. The resin code doesn't automatically mean it does/doesn't have it because it's possible to make the plastic either way, it's just a resonable shorthand for the more common uses. So for example polycarbonate most definitely can be made without BPA, even though it's on the "bad list". As for create and fill, that sounds very unlikely to me. The machines that make bottles are very different from the ones that fill them. Bottles are blown - with air (inflated like a balloon). They need to stay hot, there is no way you could do it with water unless you used hot water, which is very wastful (i.e. expensive) to heat all that water. So there is no way they do create and fill, so that part of what you heard is wrong. Heat will release more BPA. UV probably won't. Baseline: You can't tell by looking. You'll have to trust the manufacturer to give you accurate info (or a lab). For bottles that are made in massive quantities you can use the resin code since that's the more common use. However bottles that are spec made for a particular company you can NOT use the resin code. Ariel. (talk) 23:44, 4 August 2010 (UTC)[reply]
No. BPA is not/should not be used as a plasticiser - it's a monomer in the production of Polycarbonate plastics, and should not be present in anything other than microscopic amounts in polycarbonate plastics (or any other plastics) for the obvious reason of it's toxicity.
It is possible that the bottle are more squisy due to a plasticiser, or they may be using lower molecular weight plastics.
Even this 'pro BPA' site [3] states "Bisphenol A (BPA) is not used as a plasticiser in plastics" - it is a very toxic compound - it is not and could not be used in the quantities required for plasticising plastics.77.86.119.98 (talk) 01:24, 5 August 2010 (UTC)[reply]
My bad: BPA could be used in plasticizers (as an antioxidant), but not as a plasticizer. While in the end it does depend on what the manufacturer decides to add, plasticizer-containing plastics are usually not used for water bottles. As well, plasticizers are usually not added to PE plastics, the typically squishy plastics. As such, the "squishiness" of a plastic does not necessarily mean it contains BPA or plasticizers. You should find the resin code on the bottle and decide if you want to trust it. -- Sjschen (talk) 14:40, 5 August 2010 (UTC)
Hope this doesn't change the discussion too much, but when I said 'water bottle', I meant single-use throwaway bottles of water (like Aquafina, Dasani, Dannon PureLife, Evian, etc.), not reusable water containers. I hope my description of simultaneous plastic molding and water filling made this apparent. --69.148.250.94 (talk) 05:07, 6 August 2010 (UTC)
Anywho, the takeaway conclusion is: one can't tell from the recycle code, but the whole thing is bunk anyway. And even if it wasn't, the 'squishiness' of the bottle has no relation to BPA levels? And 'squishy' is not a new manufacturing process, just a new plastic formula. Is that the gist? --69.148.250.94 (talk) 05:12, 6 August 2010 (UTC)
Yes/No. You can tell from the recycle code (OK, if it's 7 there are options) - if it's not 7 there should/will be absolutely no BPA in it. In my experience the (disposable) bottles are usually 1 PET, not 7, but maybe it's different where you are. As to part of the earlier question: filling the bottle as it is blow-molded sounds like hydroforming. This seems very unlikely. As far as I know the plastic has to be hot to be molded - any water would instantly cool it, resulting in cracking - cold polycarbonate is not ductile like, say, aluminium. (If it were done by hydroforming of hot polycarb, the likelihood of leaching would be increased.) I don't see an immediate reason why a thinner wall would result in increased leaching, unless it's incredibly thin. (Usually code 7 plastics have a letter code molded onto them as well - e.g. PC is polycarbonate, ABS is acrylonitrile butadiene styrene - so look for the letter code as well.)
As for UV light increasing levels of BPA in polycarbonate plastics - I can't find anything on this - but it is a possibility that can't be rejected. 87.102.72.153 (talk) 16:40, 6 August 2010 (UTC)
A disposable water bottle is likely polyethylene terephthalate, not polycarbonate (unless this differs from country to country). 87.102.72.153 (talk) 16:43, 6 August 2010 (UTC)
In Canada, BPA is supposed to be banned. See Bisphenol A#Canada. ~AH1(TCU) 01:29, 7 August 2010 (UTC)
You can usually trust the recycle code, but there may be some fear about whether the manufacture of the bottle "follows regulations". PET, the "single-use throwaway bottle" plastic, likely has no BPA, but may have other interesting stuff in it used as a catalyst for the resin's production, or possibly harmful unreacted monomers. Use resins 1, 3, and 6 at your discretion, since different manufacturing processes can make the product less benign (though they seem for the most part quite okay). Resin 3, for example, can contain phthalate plasticizers, which some have argued are unsafe. Resins 2, 4, and 5 are quite safe, but it depends whether you trust the manufacturer or country of manufacture and whether they are selling you what they say they are. 7 is a mixed bag, and you pretty much have to read up on them to make a semi-educated decision. -- Sjschen (talk) 04:05, 7 August 2010 (UTC)

Retrieving sound of the past

Does anyone have some articles/websites related to studying the possibility of recovering sound signals from the past? I mean, to hear some old information, like our voices in the past. Sorry for not logging in. (Email4mobile) --89.189.76.246 (talk) 20:07, 4 August 2010 (UTC)

What, like audio restoration? That's fairly straightforward. Or do you mean some sort of "recover the original sound of Lincoln's Gettysburg Address" thing? That's strictly fiction. — Lomn 20:15, 4 August 2010 (UTC)
There have been claims that horizontal grooves inscribed on ancient pots while they spun on the potter's wheel had inadvertently recorded (with very low fidelity) words or word fragments being spoken at the time, and that the words had been or might be recovered. This site suggests that some claims were hoaxes, and discusses the provenance of others, as well as giving further links of relevance. 87.81.230.195 (talk) 20:37, 4 August 2010 (UTC)
The phonautograph from 1857 could transcribe sound to a visible medium, but there was no means to play it back until 2008, using computers. Cuddlyable3 (talk) 21:20, 4 August 2010 (UTC)
(ec) I'd heard that claim too - but it's pure fiction. Fingers don't vibrate much in response to sound; they are too heavy and muscular. The gizmo that (for example) Edison used to make his phonograph was a delicate little contraption driven by a large diaphragm - so as to capture the maximum possible amount of sound energy and to focus it to displace the minimum amount of wax by the smallest distance. Plus, a pottery wheel rotates about once a second - so at most you'd only grab one second of audio before the fingers came around again and erased it. Nah - it's a great idea - but it's not gonna work.
Having said that, we do have some recordings from 20 years before Edison invented the phonograph. Edison was the first person to invent a machine that could both record AND play back sound - but Édouard-Léon Scott de Martinville invented a machine that could record sound (but not play it back). Those recordings were first replayed just a couple of years ago - and the audio files that they recovered can be replayed right from within our article! That dates back to 9 April 1860. But that's really as far back as you can go. Aside from that, your only chance is listening to early Edison phonographs - and the oldest known one of those dates to 1888. SteveBaker (talk) 21:22, 4 August 2010 (UTC)
I was wondering if there were some claims about the possibility of restoring our own sounds from a medium (air, for example) by analysing the spectrum of vibration frequencies and tracing possible, even though very weak, signals, since a fraction of the energy will always reflect back and forth in the form of echoes. --195.94.11.17 (talk) 22:16, 4 August 2010 (UTC)
Interesting concept - reverberation is real, and conceivably, even if the level of reverb is inaudible, it may still carry information. But that information would be dispersed to very low energies, and the signal-to-noise ratio would make it impractical to reconstruct anything. The straw-man version is to be in a large, empty, echo-y room; obviously, if you shout, you will have an echo that could be recorded and used to estimate the "original" version of the shout. But after even a few seconds, the echo level has died away to such a low volume that the ambient noise would totally override any effort to reconstruct the source. In a non-ideal environment, like outdoors or in a non-echoing room, you'd have essentially no chance. Source estimation is an open research problem in signal processing - trying to figure out exactly what pulse was sent out when all you have is a recording of the result. You would be setting up a very poorly-constrained source-estimation problem; it's safe to say we have no existing method that could work reliably. Nimur (talk) 22:53, 4 August 2010 (UTC)
Also, sound propagates by causing vibrations in the air particles. Moving those particles up and down is an expenditure of energy, and, by the laws of thermodynamics, some of that energy will be lost in the form of heat. That lost energy will be taken from the intensity of the wave. So, the wave will die out in the way from one echoing surface to other.
Also, the echoing surface will absorb a small part of the energy of the sound, unless it's a perfect echoing surface. For example, we have perfect mirrors that are used to create laser rays, but they still don't reflect 100% of the energy. If you leave a laser ray bouncing indefinitely between two perfect mirrors, the ray will eventually get absorbed by the mirrors. A sound echoing between two "perfect" echoing surfaces will also die eventually.
Also, unless the echoing surfaces are perfectly aligned, the sound will echo in a slightly different direction and start reflecting off other surfaces (other walls in the room, for example). You will eventually have thousands of reflections in many directions, canceling and reinforcing each other in places. Since they bounce off surfaces at different distances, they won't return to the origin at the same time. This causes a smearing of the sound, increasing the cancellations and reinforcements. In other words, the sound will garble itself.
Also, even with perfectly aligned surfaces, sound doesn't echo back in a tight straight line like a laser off a mirror; it reflects over a spread of angles, so the echo will appear to come from a different place[4] and the energy disperses over a wider area. This hastens the weakening of the sound: since the wavefront covers a bigger area, it has to move more air particles, and each echo carries less energy in any given direction. Instead of preserving all the energy in one clean echo, the energy gets spread out, any listener receives a weaker echo that dies away sooner, and the chance that the sound garbles itself increases.
Eventually the sound becomes as weak as the ambient noise, then falls below the detection threshold of any instrument, and finally is too feeble to move any air particle at all.
Air particles bounce continuously against each other due to Molecular diffusion, so any trace left by the sound will disappear immediately. Once the sound loses all its original energy, you will be unable to recover it from the air (and the sound will lose all of its energy, because you can't fight the laws of thermodynamics). --Enric Naval (talk) 23:56, 4 August 2010 (UTC)[reply]
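As a rough illustration of how quickly wall absorption alone eats a sound, here is a small sketch; the starting level, threshold and reflectivity are assumed values for a hard-walled room, not measured ones:

 import math
 level_db = 100.0     # assumed starting sound level (dB)
 threshold_db = 0.0   # nominal threshold of hearing (dB)
 reflectivity = 0.95  # assumed fraction of energy kept per bounce
 loss_per_bounce = -10 * math.log10(reflectivity)   # ~0.22 dB per bounce
 bounces = (level_db - threshold_db) / loss_per_bounce
 print(f"~{bounces:.0f} bounces before inaudibility")

That comes out to only a few hundred bounces from wall absorption alone - and in practice the geometric spreading and air losses described above finish the sound off far sooner.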
See Archaeoacoustics, "the discipline that explores acoustic phenomena encoded in ancient artifacts". As the article notes, Mythbusters tried to recover sound recordings from pottery, with poor results. Clarityfiend (talk) 22:54, 4 August 2010 (UTC)[reply]

The TV series "Science Fiction Theatre" from the 1950s told of a ceramic object from Pompeii from which the sounds of panic in the streets could be played back, recorded when the volcano erupted in 79 AD. Edison (talk) 02:57, 5 August 2010 (UTC)[reply]

For another fictional treatment, see The Stone Tape. 87.81.230.195 (talk) 17:05, 5 August 2010 (UTC)[reply]
Thank you very much. I think I've now got a starting point for my search.--Email4mobile (talk) 06:14, 5 August 2010 (UTC)[reply]
Information cannot, by definition, be destroyed. I'm not sure if that applies to information carried within bygone sound, though. ~AH1(TCU) 00:57, 7 August 2010 (UTC)[reply]

Weird Bug Redux

 

A while ago I asked a question about a strange insect I saw and someone incorrectly said I had seen the caterpillar of a tussock moth. I finally managed to take a picture of the mystery insect and have uploaded it to flickr. Can anyone help me identify it?

Americanfreedom (talk) 21:52, 4 August 2010 (UTC)[reply]

Looks like a velvet ant. Looie496 (talk) 00:55, 5 August 2010 (UTC)[reply]
I agree. Here are some more pics, which look very similar. Steer clear, Opie! --Sean 14:49, 5 August 2010 (UTC)[reply]

Recent CMEs and auroras

Hi. I have two questions related to the recent coronal mass ejections:

  • When was the last coronal mass ejection or equivalent geomagnetic storm (individual or multiple) of this intensity?
  • Would the possible auroras be visible through thin high cloud from, say, 44 north latitude?

Thanks. ~AH1(TCU) 23:01, 4 August 2010 (UTC)[reply]

Geographic latitude doesn't matter for seeing auroras. What matters is your geomagnetic latitude, which measures where you are relative to the magnetic poles. That said, the recent CME wasn't very strong (the associated flare was only class C3), so it wouldn't have been visible from anywhere with a geomagnetic latitude of only 44 degrees. --Carnildo (talk) 00:56, 5 August 2010 (UTC)[reply]
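For anyone who wants to check their own location, here is a minimal sketch of the centered-dipole approximation; the default pole coordinates are the approximate 2010 geomagnetic north pole, and a real answer would use a full field model like the IGRF:

 import math
 def geomagnetic_latitude(lat_deg, lon_deg, pole_lat=80.0, pole_lon=-72.2):
     lat, lon = math.radians(lat_deg), math.radians(lon_deg)
     plat, plon = math.radians(pole_lat), math.radians(pole_lon)
     # Spherical law of cosines: sin(geomagnetic latitude) equals the
     # cosine of the angular distance from the geomagnetic pole.
     s = (math.sin(lat) * math.sin(plat)
          + math.cos(lat) * math.cos(plat) * math.cos(lon - plon))
     return math.degrees(math.asin(s))
 print(geomagnetic_latitude(44.0, -79.4))   # near Toronto: about 54
 print(geomagnetic_latitude(43.1, 131.9))   # near Vladivostok: about 34

Two places at essentially the same geographic latitude can differ by twenty degrees of geomagnetic latitude, which is why aurora visibility follows the magnetic pole rather than the geographic one.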
It was visible from plenty of locations at 45 latitude north, but I wasn't able to see the aurorae from my light-polluted location last night. ~AH1(TCU) 16:32, 5 August 2010 (UTC)[reply]

Number/measurement system

With the increasing prominence of computers, why hasn't human society adopted a number and measurement system based on powers of 16 (hexadecimal)? --138.110.206.101 (talk) 23:06, 4 August 2010 (UTC)[reply]

Easy - because we have 10 digits. --Wirbelwindヴィルヴェルヴィント (talk) 23:27, 4 August 2010 (UTC)[reply]
Computers use the Binary numeral system, because it's easy to distinguish between "1" (I'm receiving energy in this cable) and "0" (I'm not receiving energy in this cable). A byte has 8 bits because the most successful computers happened to use 8 (see Byte#Size), and hexadecimal became popular because it can represent two 8-bit bytes in only two human-readable characters. In other words, hexadecimal exists for arbitrary reasons and not because it's better, and computers don't even use it.
Also, when you interact with a computer, there are layers of abstraction that separate you from how computers really represent their numbers internally (they use weird systems like Two's complement and Floating point, which are very difficult for humans to read). The computer doesn't care what system you use, because the layers of abstraction will always translate it to the internal representation before doing anything with it. We should use whichever system is most convenient for humans, since every system will need a layer of abstraction anyway. --Enric Naval (talk) 00:30, 5 August 2010 (UTC)[reply]
Small quibble: Hex cannot represent two 8-bit bytes in only two human-readable characters; it can represent one 8-bit byte in two human-readable characters (or 2 in 4, 3 in 6, 4 in 8 - you get my drift ;-). --Stephan Schulz (talk) 09:20, 5 August 2010 (UTC)[reply]
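A quick sketch of Stephan's point, in Python - each 8-bit byte maps to exactly two hex digits, and the same bits read differently under two's complement:

 # One 8-bit byte always fits in exactly two hex digits.
 for value in (0, 37, 255):
     print(f"{value:3d} -> 0x{value:02X}")   # e.g. 255 -> 0xFF
 # Two's complement: the same 8 bits mean 251 unsigned but -5 signed.
 bits = 0b11111011
 signed = bits - 256 if bits >= 128 else bits
 print(bits, signed)                         # 251 -5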
I think your origins for the 8 bit byte are a bit off. 8 bit bytes come about because to represent English-language text conveniently you need 26 uppercase letters plus 26 lowercase plus 10 digits plus at least half a dozen punctuation characters - preferably more. That's annoyingly just a bit more than 64 symbols. So 6 bit 'symbols' aren't really convenient (although there are 6 and even 5 bit character codes that use special 'shift' characters to indicate lowercase, etc). The next handy power of two is 128 - so basic ASCII uses seven bits. The reason we don't have 7 bit bytes is that back when this stuff was being thought out, characters were stored on media like paper tape and magnetic tape - and transmitted over unreliable serial data lines. You needed a way to check that your symbols had been transmitted successfully - so they added an 8th bit (called "parity") which was set in such a way as to make sure that the number of '1' bits in the byte would always be an even number. If any single-bit error occurred in the byte, there would be an odd number of '1' bits - and you'd know that something screwed up. Hence 8 bit bytes. A few computers used more or fewer than that - but 8 bit bytes were just so very convenient that they became a de facto standard. These days, things are much more reliable - and we have more sophisticated ways to do error checking...and 128 (or even 256) characters aren't enough for international character sets. But the 8 bit byte is here to stay.
SteveBaker (talk) 01:25, 5 August 2010 (UTC)[reply]
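As a small illustration of the parity scheme Steve describes, here is a sketch that packs a 7-bit ASCII code plus an even-parity bit into one byte; the function name and the choice of putting the parity bit at the top are illustrative (real hardware varied):

 def with_even_parity(ch):
     code = ord(ch)
     assert code < 128, "7-bit ASCII only"
     parity = bin(code).count("1") % 2   # 1 if the 7-bit code has odd parity
     return (parity << 7) | code         # total count of 1-bits is now even
 for ch in "Hi":
     byte = with_even_parity(ch)
     print(ch, format(byte, "08b"))      # every printed byte has even parity

Flip any single bit of the result and the count of '1' bits becomes odd, which is exactly the error the receiver checks for.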
You are right, I forgot about ASCII. --Enric Naval (talk) 08:48, 5 August 2010 (UTC)[reply]
Please add content (or link to content somewhere else) about that ASCII-origin to Byte#Size. Man, that article's sections are repetitive/overlapping! :( DMacks (talk) 14:24, 5 August 2010 (UTC)[reply]
I'd rather computers adapt to us instead of us adapting to computers. Also, the base you use doesn't matter much. (There are a couple of things that are easier in one base or another, but not many. Base 60 is good because it has lots of integer divisors, and balanced ternary is pretty cool. But mostly it doesn't much matter.) Ariel. (talk) 23:50, 4 August 2010 (UTC)[reply]
Because we are pedantic on the Reference Desk, I have to correct one thing you wrote: Hex is "base 16", and doesn't use "powers of 16". Comet Tuttle (talk) 00:07, 5 August 2010 (UTC)[reply]
Er... yes it does... The rightmost hex digit is the 1's, the next one is the 16's, the next one is the 16²'s, etc. Those are powers of 16. --Tango (talk) 00:46, 5 August 2010 (UTC)[reply]
Computers can easily adapt to any system that humans find convenient - base 16 is mildly useful for programmers - but the rest of the planet shouldn't have to be afflicted with it. Base 16 is also a pain to deal with, mathematically. If we were thinking of changing the base we work in, then base 12 would be much more convenient than either 10 or 16 because it has so many factors: 1, 2, 3, 4 and 6. Much better than 1, 2, 5 or 1, 2, 4, 8. Being able to divide by three in your head and get an exact answer without all of those recurring digits would be a blessing!
Also, there is a system called BCD (binary-coded decimal) that allows computers to do base-10 math with relative ease - there's a quick sketch of the packed form below.
If we're revising the human number system (an inconceivable prospect!) then I'd like to switch to one in which we write only the digits 0, 1, 2, 3, 4 and 5 - and write 6, 7, 8, 9 as 14̲, 13̲, 12̲ and 11̲. The underscore means 'minus' - but just for that digit. 5 can be written 15̲ - the two representations are interchangeable, just as 0 and -0 mean the same thing in normal arithmetic. Think of it a bit like roman numerals, where IIX means 'ten minus two' - which is what 12̲ means. Doing this has dramatic effects on arithmetic. If you have a long column of numbers to add up, you can cancel matching pairs of 3 and 3̲ (for example) and you have far fewer problems with carrying digits. In roman numerals, you can add IIX and XII by cancelling the 'II's that occur either side of the X, so IIX+XII=XX. So in this system, 12̲+12 = 20. Negative numbers are also more naturally represented - and the complications of adding and subtracting positive and negative numbers just go away, quite naturally. Negative numbers are written like positive numbers, but with the underscores flipped: 0 - 1234 = 1̲2̲3̲4̲.
Feel free to spend the next hour trying to figure out what I just said! SteveBaker (talk) 01:30, 5 August 2010 (UTC)[reply]
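For the curious, here is a minimal sketch of the packed BCD mentioned above - each decimal digit gets its own 4-bit nibble, which is what makes base-10 arithmetic straightforward for the hardware:

 def to_packed_bcd(n):
     result, shift = 0, 0
     while True:
         result |= (n % 10) << shift   # one decimal digit per 4-bit nibble
         n //= 10
         shift += 4
         if n == 0:
             return result
 print(hex(to_packed_bcd(2010)))   # 0x2010 - the hex digits ARE the decimal digits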
You're talking about balanced decimal. --Tango (talk) 01:37, 5 August 2010 (UTC)[reply]
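Here is a small sketch of the conversion to balanced decimal for non-negative integers, using a combining low line for the negative digits; keeping 5 as '5' (rather than 15̲) is an arbitrary choice:

 def balanced(n):
     if n == 0:
         return "0"
     out = []
     while n:
         d = n % 10
         n //= 10
         if d > 5:       # rewrite 6..9 as (carry 1, digit -4..-1)
             d -= 10
             n += 1
         out.append(str(d) if d >= 0 else str(-d) + "\u0332")
     return "".join(reversed(out))
 for k in (6, 8, 9, 2010):
     print(k, "->", balanced(k))   # 6 -> 14̲, 8 -> 12̲, 9 -> 11̲, 2010 -> 2010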
And at this point, I can't help pointing out our nice Duodecimal article, which discusses a base twelve system. It also has a bunch of fun links to other articles related to alternative counting systems. Buddy431 (talk) 02:00, 5 August 2010 (UTC)[reply]
We already do, when it's convenient for us. For example, debuggers show us memory addresses in base 16 because memory is organized in chunks whose sizes are multiples of large powers of 2. But for day-to-day use, there's no advantage to it. Paul (Stansifer) 02:05, 5 August 2010 (UTC)[reply]
For a substantial fraction of the history of computers, we've used octal (base 8) rather than hex. Radix 50 notation also had a brief following. SteveBaker (talk) 03:27, 5 August 2010 (UTC)[reply]

If we start by setting an example now, we can convert human society by next year 7DBh. Tax returns should be written with income in hexadecimal and deductions in binary notations. Cuddlyable3 (talk) 23:19, 5 August 2010 (UTC)[reply]

My personal rule is that computer programmers are officially entitled to state their ages in hex. I feel much more like a 37 year-old than my decimal age. OTOH, tax returns tend to feel like they are being written in base 1. SteveBaker (talk) 13:50, 6 August 2010 (UTC)[reply]
There are 10 kinds of people in this world. Those who understand Binary and those who don't. ~AH1(TCU) 00:54, 7 August 2010 (UTC)[reply]