Wikipedia:Reference desk/Archives/Science/2009 April 13

Science desk
Welcome to the Wikipedia Science Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


April 13


Non-spectral blueish green


Purple is perceived when the blue and red cone cells are stimulated but the green ones aren't. Can we perceive another colour in an equivalent way, with the blue and green cones stimulated but not the red ones? Or would it look the same as a spectral blue-green?

Second question: is the range of gradations of the spectrum truly infinite, or is it divided up into Planck lengths or something? 81.131.21.227 (talk) 01:50, 13 April 2009 (UTC)[reply]

Red+Blue = Magenta (a shock-pink/purple)
Red+Green = Yellow
Green+Blue = Cyan (sky-blue)
Red+Green+Blue = White.
Magenta is unique in that it doesn't appear in a rainbow. Because our three color sensors are arranged in order of frequency (red, green, blue) and their regions of sensitivity overlap, colors like yellow and cyan can be either a mixture of two primaries or a frequency midway between two primaries; our eyes can't tell the difference. But the frequency that's midway between red and blue belongs to green, and because we have a green sensor we see that frequency as green, so red+blue doesn't look anything like the frequency midway between red and blue. However, the frequency between red and green, and the frequency between green and blue, look exactly the same as mixtures of those primaries.
I believe the spectrum is continuous - but I could be wrong about that. Our eyes are not infinitely sensitive though - so there is a limit on the number of colors we can truly distinguish. SteveBaker (talk) 02:13, 13 April 2009 (UTC)[reply]
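To make the mixture-versus-midpoint point concrete, here is a minimal Python sketch. The Gaussian sensitivity curves, peak wavelengths and widths below are invented stand-ins for the real (asymmetric, measured) cone responses, so the numbers are illustrative only:

```python
import numpy as np

# Invented Gaussian cone sensitivities; real curves are asymmetric and the
# peak wavelengths below are only rough textbook values.
PEAKS = {"L": 565.0, "M": 535.0, "S": 445.0}  # the "red", "green", "blue" cones
WIDTH = 40.0                                   # assumed standard deviation, nm

def cone_response(wavelengths, intensities):
    """Total (L, M, S) stimulation for a spectrum of monochromatic lines."""
    wl = np.asarray(wavelengths, dtype=float)
    inten = np.asarray(intensities, dtype=float)
    return {name: round(float(np.sum(inten * np.exp(-((wl - peak) / WIDTH) ** 2))), 3)
            for name, peak in PEAKS.items()}

# A red+green mixture stimulates the cones much like a single in-between
# frequency (yellow), so the two are hard to tell apart...
print(cone_response([650.0, 530.0], [1.0, 1.0]))  # red light + green light
print(cone_response([580.0], [2.0]))              # monochromatic yellow
# ...but red+blue is nothing like the wavelength midway between them, because
# the midway wavelength (~550 nm) mostly excites the M ("green") cone.
print(cone_response([650.0, 445.0], [1.0, 1.0]))  # red + blue = magenta
print(cone_response([550.0], [2.0]))              # the in-between wavelength: green
```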
The colors that we perceive are really only indirectly related to the receptors we have. We have three types of receptors (red-green-blue), but we perceive colors as combinations of four primitive hues, red-green-blue-yellow. These are arranged in opponent pairs, red-vs-green and blue-vs-yellow. The relationship between physiology and color perception is a fascinating topic, with endless surprises. Looie496 (talk) 02:44, 13 April 2009 (UTC)[reply]
In what sense "as combinations of four..." ? Speak for yourself, perhaps I perceive colours (after interpretation in my mind) as combinations of seven hues, or as pantone swatches. I mean it seems a rather arbitrary thing to say, what do you mean? Where do you spring this yellow from? It's complementary to blue, but so? Are you talking about something that actually happens in the retina involving, in some sense, yellow? (Uh, that last question is probably the best one to answer.)
Edit: come to think of it, orange is usually shown as complementary to blue, and yellow to purple.
81.131.19.13 (talk) 03:53, 13 April 2009 (UTC)[reply]
OK, I found the article on Opponent_process, and I see that, yes, there are such things as P cells which turn the three signals (R, G, B) into two signals: red-green, and (red+green)-blue, which is to say yellow versus blue. Magenta, then, would be blue with some amount of yellow (which is to say, lots of red) on the yellow-blue channel, and red with no green on the red-green channel. My proposed colour would be the same, only with the red-green channel reversed. I see that this is either similar or identical to a spectral cyan (depending on whether that would activate the red cones a bit or not) - not really analogous to purple in terms of being a non-spectral colour.
Interestingly, I see potential for yet more unexpected signal combinations - the yellow-blue channel can say that there isn't any yellow, meaning that there isn't any red or green, while the red-green channel can contradict that. I suppose there's no way to make that happen just by looking at colours. 81.131.19.13 (talk) 05:07, 13 April 2009 (UTC)[reply]
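A toy sketch of that recoding, with invented linear weights (the real retinal wiring is far more complicated, so treat this purely as an illustration of the channel structure):

```python
def opponent_channels(r, g, b):
    """Toy linear recoding of cone signals into opponent signals.

    The weights here are invented for illustration; real opponent
    wiring uses unequal, wavelength-dependent weights.
    """
    red_green = r - g              # positive = reddish, negative = greenish
    yellow_blue = (r + g) / 2 - b  # positive = yellowish, negative = bluish
    brightness = (r + g + b) / 3   # crude luminance signal
    return red_green, yellow_blue, brightness

# Magenta-ish input: lots of red and blue, no green.
print(opponent_channels(10, 0, 10))  # (10, -5.0, ...): red channel says red, blue on the other
# The proposed mirror image: green and blue with the red signal silenced.
print(opponent_channels(0, 10, 10))  # (-10, -5.0, ...): green plus blue, i.e. cyan-like
```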
I was supposing that spectral blue-green would activate the red cones a bit. My reasoning is that the green frequency we are sensitive to is relatively close to the red, and the green cones react slightly to light which is nearly blue (allowing us to see it as greenish-blue), therefore the red cones ought to react a similar amount to light a similar distance from red, which would be I suppose more of a blueish green. Removing that small amount of red signal ought to create a non-spectral colour, that's my reasoning. It ought to be a different experience from seeing spectral blueish green. Do the red cones not in fact react to frequencies on the blue side of green?
PS I am the OP, I have an IP range that starts 81.131 apparently 81.131.19.13 (talk) 03:39, 13 April 2009 (UTC)[reply]
I don't understand what Looie is talking about. For what reason are purple and orange excluded from the list? To answer the OP: adding a little bit of red to the blue-green (or cyan) will not give you a new color. It will simply make it look a little bit more washed out (white-ish). Dauto (talk) 05:04, 13 April 2009 (UTC)[reply]
No, I'm talking about removing a little bit of red - which I assume must exist, I mean in terms of signal in the retina - from the blueish green. My assumption may be wrong. 81.131.19.13 (talk) 05:10, 13 April 2009 (UTC)[reply]
I think this is answered here, though: http://en.wikipedia.org/wiki/Imaginary_color#Perception_of_imaginary_colors ... what I'm proposing would be accomplished by tiring out the red cones and then looking at a blueish-green, and it would probably just look like an unusually intense blueish-green. Fair enough. Not the same kind of novel colour as a purple, at all. 81.131.19.13 (talk) 05:28, 13 April 2009 (UTC)[reply]
Removing a little bit of red (if possible at all) should have the opposite effect of adding a little bit of red. It won't give you a new hue. Your brain automatically normalizes the perception to a two-dimensional chromatic space. For instance, let's use the symbols r, g and b to represent how much the red, green and blue cones are being stimulated. Let's say your monochromatic blue-green gave r=2, g=10 and b=10. Your brain sees that as (w)hite=2, g=8, and b=8. If you remove the extra red you get r=0, g=10, and b=10. That is more blue-green and less white. A less washed-out blue-green is still blue-green. Dauto (talk) 05:37, 13 April 2009 (UTC)[reply]
Does this mean that (regarding the opponent process) if you can somehow contrive to have a red or green signal along with a completely blue signal (i.e., not yellow, therefore indicating that there is light, but that it doesn't contain any red or green), that wouldn't create a new colour either, because the brain would just normalize the contradicting signal away? What would you get? 81.131.19.13 (talk) 05:51, 13 April 2009 (UTC)[reply]
...white, I suppose, with a bit of red or green about it, depending. 81.131.19.13 (talk) 06:07, 13 April 2009 (UTC)[reply]
If it contains no red and no green, it is perceived as a deep blue sometimes called violet. See the bottom of the picture I posted. Dauto (talk) 06:08, 13 April 2009 (UTC)[reply]
It wouldn't correspond to any actual wavelength or combination of wavelengths: it would be a set of contradictory nerve signals - in the retina, I suppose, if that's where the P cells are - that say the red and green frequencies both do and don't exist in the colour. 81.131.19.13 (talk) 06:28, 13 April 2009 (UTC)[reply]
Experiments show that across cultures, there are four hues that people perceive as "pure" -- red, green, blue, and yellow. People perceive other colors as mixtures of these, for example orange as red-plus-yellow or purple as red-plus-blue. See opponent process for more information (although it's a pretty sketchy article). Looie496 (talk) 05:23, 13 April 2009 (UTC)[reply]
Looie, I beg to differ. How many 'pure' hues do you see in this picture?

[Image: CIE chromaticity diagram posted by Dauto]

I see six. Dauto (talk) 05:48, 13 April 2009 (UTC)[reply]
In that diagram, why is the gamut horseshoe-shaped? Do theoretical colours with greater saturation lie beyond the edge of the horseshoe? Also: "given three real sources, these sources cannot cover the gamut of human vision," because ... something about triangles. I don't understand. We have R, G, and B cones: surely they can be stimulated with R, G and B light sources to make them do anything they are capable of (with the exception that their sensitivities overlap a bit). How is that naive? 81.131.19.13 (talk) 06:20, 13 April 2009 (UTC)[reply]
I'm not sure I should get involved in another color vision thread after what happened in the last one, but here goes. The sensitivities of the three cones overlap a lot, not just a little. The "red" and "green" cones, in particular, have very similar sensitivity curves that are only shifted slightly with respect to each other. They both peak in the yellow-green region and respond noticeably across the whole visible spectrum. Because of this, it isn't possible to stimulate them independently; you always get a mixture. The horseshoe shape reflects that. If the cones could be independently stimulated then the horseshoe would be a triangle instead. That would leave us with inferior color discrimination, though, because for each cone there would be some range of frequencies over which just that cone was significantly stimulated, and over those ranges we couldn't distinguish different frequencies. -- BenRG (talk) 12:41, 13 April 2009 (UTC)[reply]
The problem arises in trying to replicate fully (or near fully) saturated colours that have a wavelength in-between the wavelengths of the three chosen sources, for instance, a strong yellow (assuming RGB sources are used - others are possible). The hue can be reproduced exactly by a suitable mix of red and green, but not the saturation. The green source will also stimulate the blue receptor quite a lot. It is true that monochromatic yellow would also stimulate some blue, but not as much as the green source does, because green is closer to the blue receptor peak than yellow is. The net result is that this is interpreted as yellow plus some white light (ie all three colours) and the colour appears de-saturated. A similar result occurs with blue-green mixtures, with red being overstimulated this time. Most colour space diagrams (including this one) are arranged so that linear mixtures of any two sources lie on straight lines between those points. Since such a mixture is de-saturated, it follows that a monochromatic source of light (by definition fully saturated) must lie outside the triangle - hence the curve. SpinningSpark 13:48, 13 April 2009 (UTC)[reply]
[Image: normalized spectral response curves of the three cone types]
[Image: the 3D cone-response locus and its projection onto a 2D chromaticity plane]
Here are a couple of illustrations. The first shows the responses of the three cone types at different wavelengths (normalized to have the same peak value). If you treat the height of the three curves at a given wavelength as coordinates in an abstract 3D space, and plot those points across all wavelengths, you get the green curve shown in the second image. One of the dimensions of the 3D space is just brightness (twice as much light with the same spectral distribution doubles all the coordinates). If you're only interested in color and not brightness, you can divide out the brightness, which is equivalent to projecting onto a 2D surface like the one shown by the triangle in the second image. That projected shape is the horseshoe. If you have a computer monitor with three phosphors which can be made to glow in various proportions up to a maximum brightness for each one, then the points in the 3D space that you can "reach" with various combinations of the phosphors form a linearly distorted cube (a parallelepiped). When you project that onto the 2D plane, you get a triangle whose vertices are the colors of the individual phosphors (usually red, green, and blue). You can see by inspection that no triangle with vertices inside the horseshoe can cover the whole horseshoe. The approximate locations of the phosphors used in most monitors are shown here. The colors outside that triangle are incorrect since your monitor can't reproduce them. The three prominent radial lines on the horseshoe image (colored yellow, cyan, and magenta) correspond to three of the edges of the RGB color cube (HTML #FFFFxx, #xxFFFF and #FFxxFF). They only show up because the RGB values in the image were normalized to be as bright as possible within the separate 0–255 limits of red, green and blue, which leads to a sharp peak of brightness at the corners. If they had been normalized to be of constant brightness instead, those colors wouldn't look as special. The other edges of the cube project to the boundary of the triangle. -- BenRG (talk) 13:56, 13 April 2009 (UTC)[reply]
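A minimal sketch of the projection just described, again using invented Gaussian cone curves rather than the measured ones (so the resulting locus is only horseshoe-ish, not the true CIE shape):

```python
import numpy as np

# Assumed Gaussian cone sensitivities; peaks and width are invented for
# illustration. The true horseshoe comes from measured colour-matching data.
peaks = np.array([565.0, 535.0, 445.0])
width = 40.0

wavelengths = np.linspace(400.0, 700.0, 301)
# responses[i] = (L, M, S) coordinates for monochromatic light at wavelengths[i]
responses = np.exp(-((wavelengths[:, None] - peaks[None, :]) / width) ** 2)

# Divide out brightness: project each 3D point onto the plane L + M + S = 1.
chromaticity = responses / responses.sum(axis=1, keepdims=True)

# Each row now sums to 1, so two coordinates suffice; (x, y) traces out the
# spectral locus, i.e. the curved boundary of the gamut.
x, y = chromaticity[:, 0], chromaticity[:, 1]
for wl, xi, yi in list(zip(wavelengths, x, y))[::50]:
    print(f"{wl:5.0f} nm -> ({xi:.3f}, {yi:.3f})")
```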


The proof of the validity of the tri-color stimulus theory is sitting right in front of you as you read this. Your computer monitor only produces fairly pure red, green and blue light. There are no yellow emitters in there (take a magnifying glass and look closely at some white area of the screen and you'll see the little clusters of red/green/blue emitters). Yet, it does a pretty good job of producing every color you'll ever see. You never take a digital photograph and say "Oh no! All of the orangey-yellow is missing!". There are limits to the intensities of color that the computer can display and the 'gamut' of colors it can produce is a little less than (say) printer's ink in some regions - but there are no hues that it cannot reach. The business of "opponent color theory" comes in because, after the cones have captured that red, green and blue light, subsequent processing turns it into 'color difference' signals. That's similar to how broadcast television works (PAL, NTSC, SECAM, etc) - where it is desirable to transmit a monochrome signal that's compatible with old-fashioned black-and-white TVs - and TWO color difference signals - the resulting three signals being called 'YUV'. These are then converted into RGB inside the TV. The cells behind the retina also do various color difference calculations before passing on the results to higher brain functions. They also do things like 'shape-from-shading', 'edge enhancement' and 'motion compensation'...what goes to your higher brain functions is a vastly different description than just a two-dimensional array of pixels. However, all of that complexity doesn't change the fact that all of the color we see can be described in terms of red, green and blue alone - hence the whole digression above is irrelevant to the question and my first answer remains. Fortunately - you don't have to take my word for it - you can use a magnifying glass and a digital camera and find this out for yourself. SteveBaker (talk) 14:00, 13 April 2009 (UTC)[reply]

All of the above assumes trichromatic vision and that everyone's opsins have the same action spectra. Due to X-inactivation, some women can have tetrachromatic vision and much finer discrimination of colour frequencies. See an interesting article on this in Red Herring magazine. David D. (Talk) 17:11, 13 April 2009 (UTC)[reply]

Let me link to David Madore's color faq. It explains things really well. – b_jonas 19:00, 18 April 2009 (UTC)[reply]

Tsunami in July 2009


I came across an email recently claiming that a tsunami is going to happen in July 2009. I wonder whether this is rumour or truth. Can anyone help me find out? —Preceding unsigned comment added by 58.27.115.86 (talk) 03:50, 13 April 2009 (UTC)[reply]

No one can predict reliably when or where Tsunamis will hit. Anyone who says otherwise is bullshitting you. --Jayron32.talk.contribs 04:19, 13 April 2009 (UTC)[reply]
At least no one can predict them that far in advance. Once an earthquake or other geologic event which causes a tsunami occurs, the waves of the tsunami can be (and routinely are) predicted. But that is on the order of hours, not months. No one can predict when an earthquake, landslide, or other geological event will occur, so no one knows when a tsunami (caused by those events) will occur. --TeaDrinker (talk) 05:32, 13 April 2009 (UTC)[reply]
OP, your version of the rumour may have originated here. The writer himself calls it "pseudoscience" and says it's based on the idea that the eclipse of the sun (which is genuinely predicted for that date) could cause an earthquake, which could cause a tsunami. He mentions the "original guy who came up with the theory", but doesn't give his name. You can find some links to such theorists in the comments below the blog post. It's amazing how many people come up with correct predictions that they forget to mention until after the event. ;-)
I can think of two errors with this theory, but there are probably more. First, there are many occasions when the sun and moon are almost in line, not close enough to cause an eclipse but close enough to cause almost the same gravitational effect, so we should get earthquakes whenever this happens. Does anybody predict this? I doubt it, because eclipses are newsworthy but near-eclipses aren't. Second, the sun and moon don't just suck up the ground on the part of the Earth that's facing them. It's more complicated than that. The gravitational field causes a tidal effect that deforms the entire Earth. You would be just as likely to get an earthquake on the opposite side of the planet. Try telling that to the lunar gardening people. --Heron (talk) 10:33, 13 April 2009 (UTC)[reply]
Tsunamis are mostly caused by earthquakes - and a few by landslides or meteor impacts - and we can't predict any of those things reliably, so we can't predict tsunamis. QED. SteveBaker (talk) 13:46, 13 April 2009 (UTC)[reply]

Gossamer Albatross


Gossamer Albatross

Why does no company try to mass-produce it? roscoe_x (talk) 05:59, 13 April 2009 (UTC)[reply]

How much would you be willing to pay for one? Looie496 (talk) 06:58, 13 April 2009 (UTC)[reply]
So it's an economic problem rather than a technical problem. Well, I think someone would pay a lot to be able to fly using his own power. I might consider it if this thing were for rent, maybe $5 for a short period of time. roscoe_x (talk) 11:05, 13 April 2009 (UTC)[reply]
It can only be successfully operated by cyclists in top condition. That would severely limit the market. -Arch dude (talk) 12:38, 13 April 2009 (UTC)[reply]
It's also a flimsy and delicate piece of equipment that needed repairs (sometimes extensive) after every single flight it ever made. Insurance would be impossible - and the upkeep of the thing would be ruinously expensive. SteveBaker (talk) 13:41, 13 April 2009 (UTC)[reply]
Also, due to its flimsy nature, it can only be flown under ideal weather conditions, with little or no wind, and so is quite impractical for anything but setting a record or proving what can be done. -AndrewDressel (talk) 14:08, 13 April 2009 (UTC)[reply]
As far as I can tell, most ultralights have engines of about 20-50 hp. Humans struggle to maintain a quarter of an hp. I mention this to point out how different the Gossamer Albatross is from normal ultralights: it has a huge wingspan and a very light, delicate construction. Even ultralights that appear to be less substantial than the G.A. require much more engine power than a human can provide.
You could easily pay 5-20 thousand dollars for a commercially produced ultralight; I imagine that a G.A. work-alike would probably cost a lot more. APL (talk) 16:31, 13 April 2009 (UTC)[reply]

physics/Crystal growth


Respected sir, I am doing a PhD in physics and I am interested in working in crystal growth. My question is: is there any crystal on which not many studies have yet been done? —Preceding unsigned comment added by Rmgirish1 (talkcontribs) 06:18, 13 April 2009 (UTC)[reply]

Crystals of many different proteins are not yet done. David D. (Talk) 13:47, 13 April 2009 (UTC)[reply]
You could start off going through the references in our article on X-ray crystallography and looking up some of the authors of those references. If you contact them, they may be able to steer you in the right direction. --Jayron32.talk.contribs 22:57, 13 April 2009 (UTC)[reply]

Question about magnet


http://en.wikipedia.org/wiki/File:Meissner_effect_p1390048.jpg Why does the floating magnet not flip over and stick to the material below? roscoe_x (talk) 08:32, 13 April 2009 (UTC)[reply]

You accidentally posted your question twice, so I removed the second one. Also, is there anything that Meissner effect and Magnetic levitation don't answer? They seem to explain this well enough. Nil Einne (talk) 10:22, 13 April 2009 (UTC)[reply]
So it won't work using an ordinary magnet, but will work using diamagnetic materials or superconductors. Is that true? roscoe_x (talk) 12:45, 14 April 2009 (UTC)[reply]

Legal drinking age

People are again talking about raising the legal drinking age in Australia to 21 (from 18). I doubt it'll happen, but it led me to wonder whether there is any good reason for drinking ages to be set where they are. The page linked above was little help - it is but a list. So can anyone here point me to any rational arguments over the age? Is 18 much worse than 21? Should it *really* be 25... or 16, or different between males and females? --Polysylabic Pseudonym (talk) 11:58, 13 April 2009 (UTC)[reply]

There are arguments and counter-arguments. Some of this is good science, but this remains a socio-political issue rather than a primarily scientific one. --Scray (talk) 12:39, 13 April 2009 (UTC)[reply]
I think the strongest argument is that teenagers' brains have not yet finished maturing at age 18. Indeed, there have been studies showing that, as each source of various hormones matures at a different rate, the brain of an 18-year-old is actually less able to cope with self-control and behavior moderation than (say) a typical 16-year-old's. Since we know that alcohol further dis-inhibits the brain, this can result in kids who have little or no ability to avoid getting into stupid and dangerous situations. I can appreciate that as an 18-year-old, you might think this is impossible - but that's precisely because your brain chemistry is all screwed up! I'm tempted to make remarks about my own 18-year-old and how he backs up this theory perfectly...but I don't think he'd thank me for that. So - between ages ~16 and ~20 you need to be doubly careful about any decision you make and always err on the side of caution.
Having said that - I don't advocate a 21-year-old drinking age. We make too much of the role of alcohol. My wife (being French) has an entirely different cultural attitude - which is that if you bring up your kids thinking that alcohol is really nothing all that special - then when they can legally go out and get drunk as a skunk - they'll be less likely to see that as a novel and exciting thing to do. She claims that French high schools used to allow kids to drink 'small wine' (wine diluted with water) at school lunchtimes! So we've always allowed our kid to have a small glass of wine or champagne on special occasions. I always offer to grab a beer for him (even though it's not legal in Texas)...because I think it's important that it be "no big deal"...and 99% of the time, he says "No thanks". I never liked the idea that suddenly, you reach age X (where X is an arbitrary number chosen by the government) and you can go from never having tasted alcohol to buying it in large quantities and getting smashed out of your skull at your Xth birthday party. If X is 18, that's a recipe for disaster - so perhaps 21 is a better choice. However, a more gradual approach seems more sane to me.
SteveBaker (talk) 13:38, 13 April 2009 (UTC)[reply]
In most European countries I know, there is a staggered approach - you can buy beer (and maybe wine) at 16, but spirits only at 18. Southern European states, with wine being much more integrated into the culture, are even more relaxed, certainly in practice. --Stephan Schulz (talk) 23:03, 13 April 2009 (UTC)[reply]
I agree with Steve. In the UK, the legal drinking age in a public place is 18, but at home it's just recently been raised to 5, as opposed to there previously not being a legal drinking age at all. People drink at home when they are younger, say at Christmas or other special occasions, and I am sure this helps to reduce the number of people going out and getting mullered when they get older. However, we still have a minority who go out binge-drinking. Plus, we are now seeing teenagers (and younger!) out on the streets and in the parks with bottles of super-strength cider and vodka. Our government is taking a different step, though: rather than changing the age, they are raising prices to ridiculous levels to make it harder for people to buy alcohol.--KageTora (talk) 08:35, 15 April 2009 (UTC)[reply]
Also, I think the most important thing to note is that the legal age for drinking away from home is, in the UK, a year after the legal age for driving (17).--KageTora (talk) 08:38, 15 April 2009 (UTC)[reply]

Rule


Why is the doomsday rule so-called? 58.165.25.29 (talk) 12:19, 13 April 2009 (UTC)[reply]

Well, some of the references that our article points to state that the day-name (like "Tuesday") of the last day in February is known as "Doomsday" for that year (so for this year, Doomsday is Saturday; for next year, it'll be Sunday) - but that just pushes the question back to "Why is the last day in February called 'Doomsday'?"...for which I have no answer. Some sources kinda suggest that Conway (who discovered the algorithm) just needed a name for the day of the week on which "March 0th" fell - and he chose that name arbitrarily. If that's the case then probably someone should ask him why - because otherwise we may never know! SteveBaker (talk) 13:22, 13 April 2009 (UTC)[reply]
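Incidentally, the rule itself is short enough to write down. A minimal Gregorian-only sketch in Python (the leading constant folds in the century anchors):

```python
DAYS = ["Sunday", "Monday", "Tuesday", "Wednesday",
        "Thursday", "Friday", "Saturday"]

def doomsday(year):
    """Weekday shared by a Gregorian year's 'doomsday' dates.

    Those easy-to-remember dates include the last day of February,
    4/4, 6/6, 8/8, 10/10, 12/12, 5/9, 9/5, 7/11 and 11/7; knowing
    their common weekday lets you find any date's weekday mentally.
    """
    return DAYS[(2 + year + year // 4 - year // 100 + year // 400) % 7]

print(doomsday(2009))  # Saturday, as noted above
print(doomsday(2010))  # Sunday
```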
Conway loves to give silly names to things. I don't think there's any deeper reason than that. Read Winning Ways and you'll see what I mean. Evil and odious numbers, the Uppitiness Exchange Principle, the atomic weight of lollipops... -- BenRG (talk) 14:43, 13 April 2009 (UTC)[reply]

How to get a woman to cum


My girlfriend will not cum during intercourse, oral, or masturbation. Is there anything wrong? How can I help her to achieve orgasm? —Preceding unsigned comment added by 97.65.234.3 (talk) 12:36, 13 April 2009 (UTC)[reply]

Have you tried searching this reference desk? Try doing a search using the box directly above here at the top of the page (not the one on the left, but the one directly above) and terms like "orgasm girlfriend" and "orgasm female". You'll find that this question has been asked and answered many times, in a remarkable variety of forms, over the years. Hope that helps. --Scray (talk) 12:46, 13 April 2009 (UTC)[reply]
Try the articles on orgasm, g-spot, clitoris and perhaps List of sex positions--203.214.32.241 (talk) 12:53, 13 April 2009 (UTC)[reply]

Fourth Dimension


In layman's terms, how would you describe the fourth dimension?--Reticuli88 (talk) 13:36, 13 April 2009 (UTC)[reply]

Time? David D. (Talk) 13:43, 13 April 2009 (UTC)[reply]
Guessing that Reticuli88 is referring to a fourth spatial dimension, one definition would be that you have your normal three dimensions, up/down, left/right, forward/backward, each at right angles to the others. The fourth dimension is like those and at right angles to all the others. You can't imagine it. --Polysylabic Pseudonym (talk) 13:49, 13 April 2009 (UTC)[reply]
We have an article on this subject. SpinningSpark 14:16, 13 April 2009 (UTC)[reply]
[Animation: a 3D projection of a rotating tesseract (hypercube)]
I disagree. Not only can we imagine it - with computer graphics we can actually DO it...for real...without any approximations or assumptions. People like to make this more mysterious than it really is. The truth is we can easily simulate EXACTLY what it would be like to be in a 4D world...the image at right of a spinning hypercube is exactly right in that regard.
There have been many efforts to try to help people visualise a fourth spatial dimension - one is to map it onto time. Thus you could imagine a hyper-sphere as a regular 3D sphere that grows and shrinks. However, I have a better answer. Our eyes do not see things in three dimensions. We have two eyes - each of which sees in only two dimensions. We infer the third dimension (actually, just the distance at right angles to our face) by seeing how the images from our two 2D retinas differ due to parallax - and we note how much we have to distort the lens in order to get the image into focus - and we can get shape information from the way light falls onto flat and curved surfaces. This is why a 2D photograph or a movie or TV screen looks fairly realistic. In a 4D world, things would not be all that different. We'd still only have two eyes and still only 2D retinas. So just as the three dimensions are 'projected' down into two when we do things like 3D computer graphics - we can project four dimensions in the exact same way. We still only have two eyes and 'depth perception' doesn't work any differently. Consequently - all we'd see would be 2D projections of 4D objects - which would look exactly like the picture to the right here...assuming the hypercube were made of a clear substance with opaque edges. The four-dimensionality would perhaps be more noticeable when our heads could be rotated around that fourth axis of rotation - so that distances at right angles to our faces would measure something different. All of this can be precisely simulated inside a computer and presented to us as 2D images - and in truth, it's not really all that remarkable.
The problem here is that what you REALLY mean to ask (because it's a tough question) is "What would a 4D world look like if I had 3D retinas?" - and that's something we can't answer. We can't answer it because we have no idea what it would be like to truly see in 3D in a 3D world...let alone 4D! Aside from anything else, we'd probably need a third eye - stuck out into the fourth dimension - in order to get better range information. But once you start to speculate about true 3D vision - you have a non-human brain attached to it and truly all bets are off.
SteveBaker (talk) 14:39, 13 April 2009 (UTC)[reply]
I remember there was a previous thread in which you claimed you could put a 3D eye (with a 2D retina) in a 4D world and everything would just work. But that isn't true—you must be misremembering the program you wrote because the idea of a direct 4D-to-2D projection isn't mathematically or optically sensible. The animated tesseract is projected in two stages, first from 4D to 3D and then from 3D to 2D, with different viewer locations in each stage. (If you disagree then tell me what the direct projection formula is. A 3D-to-2D projection takes (x,y,z) to (x/z, y/z) for appropriately chosen coordinates; what's the 4D-to-2D equivalent?)
I don't think you would need three 4D eyes to get depth perception in four dimensions. A 3D retina gives you three constraints on the object's position and two-eye parallax gives you one; taken together that's enough to locate it in 4D space. -- BenRG (talk) 15:20, 13 April 2009 (UTC)[reply]
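For concreteness, the two-stage projection BenRG describes is easy to write out. A minimal sketch; the viewer distances below are arbitrary choices:

```python
import itertools

def project(point, viewer_dist):
    """Perspective-project a point down by one dimension.

    Drops the last coordinate by dividing the others by
    (viewer_dist - last), i.e. (x, ..., z) -> (x/(d-z), ...).
    """
    *rest, last = point
    return tuple(c / (viewer_dist - last) for c in rest)

# The 16 vertices of a tesseract, all coordinates +/-1.
vertices4d = list(itertools.product((-1.0, 1.0), repeat=4))

# Stage 1: 4D -> 3D from a viewpoint on the w-axis; stage 2: 3D -> 2D.
for v in vertices4d:
    v3 = project(v, 3.0)   # viewer at w = 3 (arbitrary)
    v2 = project(v3, 3.0)  # viewer at z = 3 (arbitrary)
    print(v, "->", tuple(round(c, 3) for c in v2))
```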
BenRG, I agree on Steve Baker.The Successor of Physics 15:54, 14 April 2009 (UTC)[reply]
I don't think you know enough about computer graphics. You are thinking of the interactive kind of 3D graphics based on projecting triangles onto the screen such as one would use for computer games. However, that's not the only graphics algorithm. You can also do ray-traced graphics - which work just fine in 4D...or 5D or a million dimensions. The standard "reverse-ray-tracing" algorithm traces the history of a light ray that ends at your eye - through one pixel on the screen. It repeats this process for each pixel on the screen in turn. For each of those 'reversed' rays, it traces back until the ray hits something solid out there in the virtual world. Typically - you'd then go on to figure out how that ray of light was reflected from that surface towards the eye and deduce how incoming rays must have travelled in order to produce that reflection (or refraction or scattering). For each incoming ray, you repeat the process, following the incoming rays of light back to their sources - until you either reach a light source or the "infinite depths of space" (or the sky or whatever you decide is the edge of your virtual universe). When you know how all of the light rays that contributed to that particular point on the screen originated and were reflected/refracted/scattered - you know what color to assign to that pixel. This process is extremely routine - just about every piece of computer graphics you see on TV and in the movies is done like that. You can do the analogous thing for tracing rays from points on your retina - through the pupil of your eye and out into the world...but the result is exactly the same - and thinking about a 2D "video-screen" with a zero-sized eye is easier. That 'tracing' process works just fine in 4D - you can easily compute the 4D intersection between a ray and a hyper-cube or whatever - and this results in an entirely reasonable 4D to 2D projection - which (as far as we could reasonably say) would represent how the photons in a 4D world would appear in a 2D retina. I see no problems with that...and indeed I've done 4D rendering before (although not precisely like that) and it works perfectly. I don't know how the animated hypercube was rendered - but when you do 4D raytracing - that's exactly how it looks - so whatever they did clearly works OK. SteveBaker (talk) 17:23, 13 April 2009 (UTC)[reply]
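The intersection step really is dimension-independent. Here is a minimal sketch of ray-hypersphere intersection that runs unchanged in 3D or 4D (a full renderer would add shading, reflection and refraction on top; rays starting inside the sphere are ignored for brevity):

```python
import math

def ray_hits_sphere(origin, direction, center, radius):
    """Distance along a ray to the nearest forward hit, or None.

    Pure vector algebra: solve |o + t*d - c|^2 = r^2 for t. Nothing
    here depends on the number of coordinates, so the same code
    traces rays against spheres in 3D, 4D or any dimension.
    """
    oc = [o - c for o, c in zip(origin, center)]
    a = sum(d * d for d in direction)
    b = 2.0 * sum(d * x for d, x in zip(direction, oc))
    c = sum(x * x for x in oc) - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None  # the ray misses the (hyper)sphere entirely
    t = (-b - math.sqrt(disc)) / (2.0 * a)
    return t if t > 0.0 else None

# The same function handles a 3D ray and a 4D ray:
print(ray_hits_sphere((0, 0, -5), (0, 0, 1), (0, 0, 0), 1.0))           # 4.0
print(ray_hits_sphere((0, 0, 0, -5), (0, 0, 0, 1), (0, 0, 0, 0), 1.0))  # 4.0
```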
I've written a ray tracer and four is my favorite number of dimensions. Ray tracing won't give you an image like the one above, it will give you a slice through the tesseract. Subtract one dimension from everything to make visualization easier. Now we have an opaque circle with a 1D retina and an aperture (pupil), and it's looking at a cube. The rays from the retina through the pupil all lie in the plane of the eye. Only the part of the cube that intersects that plane will be hit by the rays. Other parts of the cube might be seen by reflection, but they won't be seen directly. Also, this kind of ray tracing makes little sense from a physical perspective. The reason you cast rays this way ordinarily is because the only way to the retina is through the pupil. When you have another dimension and the eye is "open from the sides" then that's no longer true. Light will reach every point on the retina from every direction, so you ought to cast rays in every direction from every pixel, which just gives you a uniform blur. That's what the eye would actually see (to the extent that this thought experiment makes any sense in the first place)—a uniform blur, like a camera with a severe light leak problem, which is exactly what it is. -- BenRG (talk) 12:19, 14 April 2009 (UTC)[reply]
BenRG has a point - rays from a 2D retina span a 3D space, they can't span the whole of a 4D space. --Tango (talk) 16:54, 14 April 2009 (UTC)[reply]
No! As I carefully explained - the hypercube imaged above is translucent with opaque edges...and that's how a 4D raytrace comes out. An unshaded opaque hypercube would look like a smoothly colored polygon - much as an opaque spinning 3D cube looks like a hexagon in utterly uniform lighting. The argument about light leaking around the eyeball in the extra dimension is certainly a practical issue...but again, we have to come back to the total impracticality of the experiment in every other way...your blood would simply squirt out of your veins through the "open" 4th dimension. But if you stipulate that you could somehow survive all of those practical problems - then the animation above is an excellent representation of a shiny translucent hypercube imaged on a 2D retina. Maybe you have to wear some opaque 4D headgear to keep the light from escaping around the sides of your eyes and some kind of very skinny 4D spacesuit to keep your insides...inside. (I'm also super-skeptical of BenRG's claim to have tried a 4D raytracer. If he had - he wouldn't have thrown up the entirely bogus claim of mathematical impossibility of doing a 4D to 2D projection in his post a couple further up this thread - only to completely ignore that objection on the next post after I explain how it is in fact easily possible.) SteveBaker (talk) 23:09, 14 April 2009 (UTC)[reply]
No, you are wrong. Raytracing from a 2D screen spans a 3D space - 2 dimensions of the screen and 1 dimension of a ray makes 3 dimensions. The animation above is not done by simple raytracing. If you are going to have something that works like a human eye in 4D you need a 3D retina. --Tango (talk) 11:43, 15 April 2009 (UTC)[reply]
I agree that the image above may well not have been raytraced in 4D...but it could have been. A conventional, off-the-shelf raytracer doesn't work in 4D - but if you write your own, you can certainly make it do that (I've actually done this - so I know). You position the eyepoint in 4D - you position the corners of the screen in 4D and you do 4D math to compute vector/hyper-solid intersections, reflections and refraction - and it all works just fine. And the results of 4D raytracing a spinning shiny, translucent hypercube look pretty much just like that image (mine didn't have the cylindrical rods highlighting the edges of the geometry and I filled the interior of the hypercube with a low-opacity "gas" to attenuate the light to give you a better idea of depth). My actual application was totally off-the-wall though...not aimed at 4D visualisation at all - it was to do with data compression of volumetric descriptions of 3D smoke plumes as they evolve over time - 4D raytracing was just one small step in the process. Rendering hypercubes with it was just a part of the testing; the actual results of the program were data files, not pretty pictures. But if you follow the 4D light rays through the 4D world - reconstructing their history - then what happens, happens. That's what it would look like - I don't see how you can argue with that. SteveBaker (talk) 18:11, 15 April 2009 (UTC)[reply]
Oh - and as for the rays from the 2D screen only spanning a 3D volume - again, you don't understand what's going on here. The first bunch of rays do indeed stay confined to a 3D cross-section of the 4D world - but once they hit something that's sloping in the 4th dimension, the reflections go off in all directions. Think of looking through a thin, 2D slit into our 3D world. Sure - you only see a 2D slice of the 3D universe...unless there is a mirror that's tilted at some angle to that 2D 'slice' - now your reflected rays can be anywhere in the 3D world. Add a dimension and the same thing is true. But no matter the limitations...those are the true limitations of a 2D retina in a 4D world...yes - you don't see everything at once unless you turn your head or something. But that doesn't take away the fact that this is a representative view of what (under extremely contrived circumstances) you "would see". SteveBaker (talk) 18:18, 15 April 2009 (UTC)[reply]
A mirror wouldn't increase the number of dimensions, it just changes which 3D subspace of the 4D space you can see. Raytracing works by projecting a ray (1D) to a point (0D), that's a reduction of one dimension. Since you end up with a 2D image that means you must have started with a 3D space. I don't know what programs you've written, but I do know that 2+1=3. Either your program wasn't actually a raytracer and was doing something more complicated, or it only showed you a 3D subspace of the 4D space (projected to 2D). --Tango (talk) 18:56, 15 April 2009 (UTC)[reply]

Still confused but thanks for the effort. Are there any organisms on our planet that "see" in 3D? --Reticuli88 (talk) 15:01, 13 April 2009 (UTC)[reply]

No - because the retina has to be opaque in order to absorb the light and therefore react to it. If you had a "retina" that was (say) a solid 1" sphere of light-sensitive cells - with some (!?unknown?!) means to project the 3D world onto it - only the cells on the surface would see anything. You'd need to come up with some completely different way to perceive things...and I don't think there is anything like that out there. SteveBaker (talk) 17:27, 13 April 2009 (UTC)[reply]
One fairly comprehensible -- or at least illustrative -- way of doing it would be to grab a piece of paper and a pen. First just draw a dot: that represents a zero-dimensional space. If you were in there, you wouldn't really have anywhere to go.
Next you extend the dot into a line. That's a one-dimensional space. If you were in there, you could go in either of the two directions, but only in those directions -- it's like a narrow passage that's just wide enough for you to move in. If you encountered an obstacle, you couldn't pass it. (You should understand that we're not talking about an actual corridor, of course! You couldn't climb over something, because that direction -- "up" -- doesn't exist, nor could you give someone room to pass, or even slide a sheet of paper past your body. Your body would be just as one-dimensional as the corridor is.)
Next you take up the entire piece of paper and declare that to be the two-dimensional space. Now you can move not only along the line, but you can also move around the line. It's as if the corridor just became a wide room; now people can pass each other. The ceiling is still awfully low, though, and nothing can be on top of anything else. Everything is like a game of air hockey; things can slide around, but everything is the same height -- or more to the point, nothing has any height, because that dimension doesn't exist. You couldn't even put anything in your pocket, because the very concept of pockets is three-dimensional, and you only have two. The best you could do is push and pull things around.
Third dimension: you can now abandon the piece of paper and look around. You can move in any direction -- up, down, left, right, backward, forward. Before, in two-dimensional space, you couldn't even look up, because there was no up to look at, and such a direction would've been inconceivable to you. Now, though, you can play leapfrog with your friends, climb trees, look up to the stars and avoid open manhole covers. Welcome to real life!
You notice, of course, that every time you add a new dimension, your perceptions change radically, and your freedom of movement increases by such a magnitude that anyone or anything stuck in a space one dimension less would be ridiculously hindered compared to you. That would also be true for the fourth dimension: a two-dimensional being in a two-dimensional world can't see behind a wall, but a third-dimensional being viewing a two-dimensional world can easily do so, because he's looking at the whole thing from above. In the same vein, a four-dimensional being could look at a three-dimensional world and see not only us, but also what's behind and even inside of us.
That's the thing with the fourth dimension: it's pretty freaky. You cannot perceive it, and you cannot even properly comprehend it. If I were to stand next to you, and you suddenly moved in a direction that only exists in the fourth dimension, I couldn't even understand what I'm seeing. The classic novella Flatland explores these concepts in a fun and informative way, and the trailer for the movie version probably helps illustrate these things. Alan Moore also wrote a Hypernaut story in his awesome retro comic book limited series, 1963, in which Hypernaut has to fight a four-dimensional being and, being severely outclassed, has to defeat the enemy with non-physical means. You can find that story online. It's a fun conceptual romp. -- Captain Disdain (talk) 15:04, 13 April 2009 (UTC)[reply]
Honestly, I don't get how that hypercube is 4D. I'll have to read up on the article about it... — The Hand That Feeds You:Bite 15:12, 13 April 2009 (UTC)[reply]
For a lighter treatment of the fourth dimension, don't forget Robert A. Heinlein's short story "—And He Built a Crooked House—". TenOfAllTrades(talk) 15:17, 13 April 2009 (UTC)[reply]
Lighter than the Alan Moore story? Thanks, Disdain, that's marvelous! —Tamfang (talk) 04:39, 14 April 2009 (UTC)[reply]
SteveBaker, In a 4D world we would have eyes with a 3D retina. That's quite different than what you described. Dauto (talk) 15:24, 13 April 2009 (UTC)[reply]
I don't know...why would we? We have 2D retinas in our 3D world. 1D retinas "work" in our 3D world. If you are asking what the 4D world would look like to a normal person - then you have to ask what 2D retinas would do...because that's what we have. SteveBaker (talk) 17:23, 13 April 2009 (UTC)[reply]

Here is a description from Carl Sagan. David D. (Talk) 15:29, 13 April 2009 (UTC)[reply]

He doesn't really say more than what has already been said above. A particular point to infer from what he mentions is that the above animation is what an image of a hypercube, viewed from certain different angles, looks like in three dimensions, further projected onto two dimensions. - DSachan (talk) 15:53, 13 April 2009 (UTC)[reply]
True, but his demonstration with the apple (3D as seen in a 2D world) as well as the shadow of the cube in a 2D world helped me understand the 4D problem. David D. (Talk) 16:09, 13 April 2009 (UTC)[reply]
Besides, it's Carl Sagan. -- Captain Disdain (talk) 17:02, 13 April 2009 (UTC)[reply]
@The Hand That Feeds You. In a cube, the face that is furthest from you becomes larger as the cube is rotated around. Observe in the hypercube how the cube that is furthest away from you (in the fourth dimension) becomes larger as the hypercube is rotated. The baffling "turning inside-out" behaviour is explained as the cube that was "behind" in the fourth dimension now being "in front". This is entirely analogous to the 3D version, where the rear face somehow gets in front by going through a third dimension, which would be equally baffling to a 2D being. SpinningSpark 17:09, 13 April 2009 (UTC)[reply]

I definitely understand Carl Sagan's explanation. He Da Man! Thanks everyone!--Reticuli88 (talk) 19:46, 13 April 2009 (UTC)[reply]

You really need a 3D retina to do the job properly; the image of a tesseract above is a projection of a wire frame rather than a view of a solid tesseract. With a solid tesseract, using the analogue of our normal vision, you'd only see 4 cubes, and the centre point where the 4 cubes join is the point nearest to you. If you shade the cubes you wouldn't be able to see the closest point with the projection way of doing things; basically the image above hasn't had the equivalent of hidden line elimination done. The way I like to think of it is to form a 3D image and then have different properties apply to the 3D image, where the rapidly changing areas are solid and the slowly changing areas are transparent. This allows the image to be seen with the nearest point surrounded by light-coloured transparent solids. Dmcq (talk) 22:01, 13 April 2009 (UTC)[reply]

Who made the rotating image above? Was it you, SteveBaker? 94.27.151.13 (talk) 22:40, 13 April 2009 (UTC)[reply]

Click on the picture and you'll get to a page where you find it was made by User:JasonHise Dmcq (talk) 22:50, 13 April 2009 (UTC)[reply]
Yeah - I just grabbed the first picture to come to hand from the "Hypercube" article. But I have used 4D raytracing to do similar things...although the application wasn't exactly that kind of thing. SteveBaker (talk) 02:38, 14 April 2009 (UTC)[reply]
By the way the Fourth dimension article has a sequence of pictures which show pretty well how I imagine the fourth dimension. Those pictures are by User:Tetracube Dmcq (talk) 22:56, 13 April 2009 (UTC)[reply]


So, the definition of a ghost or a spirit is a 4th-dimensional entity? --Reticuli88 (talk) 14:47, 14 April 2009 (UTC)[reply]

That doesn't seem to be mentioned in spirit, but then again I don't know of any reliable evidence saying spirits aren't four-dimensional. Dmcq (talk) 18:33, 14 April 2009 (UTC)[reply]
No!! The definition of a ghost or spirit is a figment of some people's imagination. The universe doesn't have four spatial dimensions...it has three (duh!) - so nothing "real" can be a 4th dimensional entity. Please - don't fall into the ridiculous trap of saying "Wow - the 4th dimension! That's weird! Ghosts are weird too! Maybe they're the same thing?"...that's just fuzzy thinking. The sane, rational approach is: "We have no evidence whatever that the universe has a 4th spatial dimension. We have no evidence whatever that there are ghosts. Both are 'unfalsifiable' hypotheses - so we must treat them both as false - and there is absolutely no reason to 'connect' these two ideas." SteveBaker (talk) 22:59, 14 April 2009 (UTC)[reply]
That is not the definition of 'ghost' or 'spirit'. Don't confuse intension with extension. Algebraist 18:18, 15 April 2009 (UTC)[reply]

double slit diffraction


I just finished a lab on light diffraction and interference. For my calculations, I'm supposed to use my data from the double-slit diffraction to determine the slit width and slit spacing. I already know how to calculate the slit spacing (using dsinθ=mλ), but how can I calculate the slit width?--Edge3 (talk) 15:39, 13 April 2009 (UTC)[reply]

I'd suggest looking at Fraunhofer diffraction. See if you can write down an integral representation of the intensity pattern in terms of a transmission function. If you then have a look at how the intensity changes with different transmission functions (I'd recommend looking at perfectly thin double slits and then at a single finite width slit) you might see a way of calculating the double finite-width slits problem (convolution theorem might help there). Once you've got an expression for what you want, you'll be able to see the parts you'd need to measure to experimentally determine the slit width. 163.1.176.253 (talk) 22:18, 13 April 2009 (UTC)[reply]
I never realized this could get so complicated! Could you possibly give me a simpler equation? I'm in a physics class that doesn't use calculus at all (but I still know how to do calculus if it's absolutely necessary). In the double-slit part of the lab, I measured the spacing between bright fringes and the distance of dark fringes from a pre-selected point. For the second part, I picked a bright point on the fringe pattern, and I calculated the distance from that point to three dark fringes on each side of the point. Does that make sense?--Edge3 (talk) 23:54, 13 April 2009 (UTC)[reply]
(This is 163.1... at a different IP) Well, that's the beauty of physics: even the simplest of things can be as complex as you want to make it. I wouldn't worry about what I said if you haven't studied much calculus, but essentially the same method is used at the end of diffraction formalism to derive the N-slit intensity pattern for slits of finite width. Now if you set N=2 in this expression, we get an equation relevant to your experiment (it simplifies if you note that sin(2x)=2cos(x)sin(x) in the last factor). I would strongly advise looking at the maths and asking if you have any questions, because it's not too complicated and I'd say it is the important part of the experiment; in fact, it also justifies the d*sin(θ) = mλ you used earlier.
Now, I didn't really follow what you've measured: lots of maxima and minima and distances between them. If you look at the equation at the end of diffraction formalism, you'll be able to work out when the function is at a maximum and when it is at a minimum (you could try getting a computer to plot it to make that bit easier). If you work out how the distances between these points will change as you vary the slit width, I'm sure you'll find a distance to measure which will give you the slit width. 129.67.116.90 (talk) 11:53, 14 April 2009 (UTC)[reply]
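For reference, the standard Fraunhofer result for two slits of width a and spacing d is I ∝ [sin β/β]² cos² γ, with β = πa sin θ/λ and γ = πd sin θ/λ. A minimal sketch of how the envelope minima give the slit width; the wavelength and slit dimensions below are made-up example values:

```python
import numpy as np

wavelength = 650e-9  # assumed laser wavelength, m
a = 40e-6            # slit width (the quantity to recover), m
d = 250e-6           # slit separation, m

def intensity(theta):
    """Fraunhofer pattern for two slits of width a, spacing d (I0 = 1).

    I = [sin(beta)/beta]^2 * cos(gamma)^2,
    beta = pi*a*sin(theta)/lambda, gamma = pi*d*sin(theta)/lambda.
    """
    beta = np.pi * a * np.sin(theta) / wavelength
    gamma = np.pi * d * np.sin(theta) / wavelength
    # np.sinc(x) is sin(pi*x)/(pi*x), so sinc(beta/pi) = sin(beta)/beta
    return np.sinc(beta / np.pi) ** 2 * np.cos(gamma) ** 2

# Fine fringes (the cos^2 factor) have minima where d*sin(theta) = (m+1/2)*lambda;
# the envelope (the sinc^2 factor) vanishes where a*sin(theta) = m*lambda, so the
# angle of the first "missing" fringe gives the slit width directly.
theta_env1 = np.arcsin(wavelength / a)
print(f"first envelope minimum at {np.degrees(theta_env1):.3f} deg")
print(f"slit width recovered: {wavelength / np.sin(theta_env1) * 1e6:.1f} um")
```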
Great! Thanks for the help! So I'm now looking at the equation at the bottom, and I'm wondering whether I can eliminate I0 for the purposes of my calculations. Also, the variable I'm looking for is a, right?--Edge3 (talk) 01:33, 15 April 2009 (UTC)[reply]
Well, I0 is only going to be important if you measured the intensity at each point; from what I gathered, you only measured the position of the maxima and minima. Now, you can see the general shape of the intensity function: it's a sinusoid, with a sinc(x) envelope function. Somehow you're going to have to identify the envelope function's period, as this is what is related to the slit width. Were some maxima dimmer than others? 163.1.176.253 (talk) 11:23, 15 April 2009 (UTC)[reply]
Sorry forgot to say, "a" is the slit width, "d" the slit spacing. 163.1.176.253 (talk) 11:24, 15 April 2009 (UTC)[reply]
Thank you so much!!! That was very helpful! I think I won't worry about the intensity too much, since all of the minima have intensities of zero. With a few organized manipulations of the equation, I think I can get the function to work out. Thanks again!--Edge3 (talk) 00:19, 16 April 2009 (UTC)[reply]

cars


I've always wondered about the new cars which they say are fuel-saving. As I understood it, they work on fuel (petroleum) and, while moving, charge some kind of battery which they are going to use later.

So the question is: how could this method save fuel? Some such cars can do 40 km consuming 1 liter of fuel, that's (800 km / 20 liters), where Korean cars do (300 km / 20 liters). Maybe they managed to cut losses like wasted heat and such, but that could never save that much.

And if we use the fuel motor to generate electricity, which means turning the kinetic energy from the tires into charge for the battery, then this electricity will deliver back at most the same amount of energy that came from the tires... so what's the point? —Preceding unsigned comment added by Mjaafreh2008 (talkcontribs) 16:23, 13 April 2009 (UTC)[reply]

What I think you are asking is whether you could use the engine to charge a battery - and then use the battery to drive the wheels using electric motors. Well, that's exactly what a hybrid car like the Toyota Prius does. The reason it saves energy is a three-way thing:
  • Gasoline powered engines work most efficiently at one particular speed - perhaps 3,000 rpm. If you push them harder than that - or less hard - then they need more gasoline per unit of energy they deliver. With a normal car, the number of rpm's you need depends on what gear you are in and on what speed you are going - but it's only rarely turning the engine at the best speed. In the Prius, the engine only ever runs at this perfect speed - and when the battery is fully charged, the engine shuts off and stays shut off until the battery drains down and needs a recharge.
  • When you push on the brake in a normal car, you are wasting the kinetic energy in the motion of the car to wear down the brake pads and heat up the disks. With a hybrid, the electric motors that normally power the wheels can be used 'backwards' as generators - so you slow the car down by (effectively) using the battery charger to extract energy from the car's motion. Of course you need conventional brakes too - and this process doesn't work when the battery is already fully charged...but still, it makes some significant savings. This is called 'regenerative braking'.
  • Because you don't need that peak power from the gasoline engine when you do a (rare) hard acceleration - you can have a smaller engine. The engine only has to be large enough to provide the AVERAGE amount of power the car needs - not the MAXIMUM amount. Since you (mostly) don't go around accelerating hard all the time - this means that you have a smaller, lighter, more fuel-efficient engine - and let the battery provide the power for short bursts of speed.
Having said that, hybrid cars are not the perfect thing some would tell you. Most of the reason the Prius gets such good gas mileage is because it's super-streamlined, it's actually not a very fast car and it has relatively poor air-conditioning and such. If you did all of those things to a conventional car - and DIDN'T have to carry around all of those heavy batteries - you could do just as well as the Prius. The Prius actually gets rather poor miles per gallon on long freeway trips because in that case, the regenerative braking and the average-versus-peak thing don't work out too well - and pretty much any decent car, when driven in "overdrive" or topmost gear, will have the engine running at its most efficient rpm. Hence the Prius has no special advantages in that case. However, for in-town stop/start driving, it works amazingly well.
SteveBaker (talk) 17:01, 13 April 2009 (UTC)[reply]
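A back-of-envelope sketch of the regenerative-braking saving; every number here is an assumption chosen only for illustration:

```python
# Rough estimate of fuel saved by regenerative braking in city driving.
# All figures below are illustrative assumptions, not measured values.
mass = 1400.0             # vehicle mass, kg
speed = 50 / 3.6          # 50 km/h expressed in m/s
stops_per_km = 2.0        # assumed stop-and-go frequency
recovery = 0.5            # assumed round-trip motor/battery efficiency
engine_eff = 0.25         # assumed tank-to-wheels engine efficiency
gasoline_energy = 34.2e6  # J per litre of gasoline (approximate)

kinetic = 0.5 * mass * speed ** 2  # J normally lost as brake heat per stop
saved_per_km = kinetic * stops_per_km * recovery
fuel_saved_per_100km = saved_per_km * 100 / (engine_eff * gasoline_energy)

print(f"energy per stop: {kinetic / 1e3:.0f} kJ")          # ~135 kJ
print(f"fuel saved: ~{fuel_saved_per_100km:.2f} L/100 km")  # ~1.6 L/100 km
```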
I test drove Honda's new Insight, their new hybrid, last week. Part of the reason why it is so economical on fuel is that if you are stationary (e.g. in traffic, or at traffic lights) the engine cuts out and cuts back in when you take your foot off the brake to start moving. The battery cuts in to power the drive when you are going slowly in queueing traffic, which again makes it economical. --TammyMoet (talk) 18:22, 13 April 2009 (UTC)[reply]
That's true - but LOTS of non-hybrid cars do that. Most BMWs do it in their European versions. However, there are some weird laws in the US that (I believe) make cars that do that illegal. In hybrids, though, the engine isn't driving the wheels - it's only charging the battery - so under US law it's not an "engine" so much as an "on-board generator". If the US lawmakers would only get into the 21st century, we'd find that almost all modern cars could do it. The British version of the MINI Cooper (a non-hybrid) can use the car's starter motor to get the car rolling in stop-start traffic - so the engine doesn't even need to start...but again - not legal in the US. SteveBaker (talk) 22:31, 13 April 2009 (UTC)[reply]
That's not exactly how the Prius works. What you're describing is a series hybrid. The Prius is a series-parallel hybrid; the gasoline engine does drive the wheels most of the time. 173.49.18.189 (talk) 08:46, 19 April 2009 (UTC)[reply]

Pain in sides due to exercise


What causes the stabbing/crampy pain you sometimes get in your side due to exercise? I used to think (and was told) it had to do with the spleen, but I and several friends can get it on both sides (not only the left). Is there a single cause? PvT (talk) 16:46, 13 April 2009 (UTC)[reply]

It is commonly called a side stitch -- and we have an article on it. -- kainaw 16:50, 13 April 2009 (UTC)[reply]

paternity testing


Can a hair sample be used for paternity testing? —Preceding unsigned comment added by Youseefgaga (talkcontribs) 17:30, 13 April 2009 (UTC)[reply]

Yes, it can. A sample is needed from the child as well as from the possible father. SpinningSpark 17:52, 13 April 2009 (UTC)[reply]
Though only an entire hair, including the follicle, meaning it has to have fallen out or been pulled out. Cut hair, which includes only the shaft, is not sufficient. Rockpocket 23:15, 13 April 2009 (UTC)[reply]
I think they generally like to have a sample from the mother as well, and all the possible fathers. It makes it easier to work out what DNA in the child comes from where. --Tango (talk) 18:30, 14 April 2009 (UTC)[reply]

Constructing 3D Sight


How would you biologically create sight that is capable of viewing things in 3D? I know the brain would need to be significantly altered to understand 3D images; however, biologically, what would need to be constructed (even if only imagined)? Do we have AIs that can "view" things in 3D?--Reticuli88 (talk) 17:47, 13 April 2009 (UTC)[reply]

You already have a 3D image of the world. The image in your brain contains depth information about the objects you see, derived from stereoscopic vision and focus information from the eyes. AI can do this just as well as, if not better than, humans. However, the image in your brain is a model of the world quite different from the image projected onto your retina. I think perhaps you are referring to a Steve Baker post in a previous question, which I will go and copy here before answering further. SpinningSpark 18:01, 13 April 2009 (UTC)[reply]
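To make the stereoscopic-vision point concrete, here is a minimal triangulation sketch in Python: two eyes (or cameras) a known distance apart see a nearby point at slightly different image positions, and depth falls out of the disparity. The focal length, eye spacing and disparities below are invented example values, not measurements:

# Depth from binocular disparity (pinhole-camera triangulation):
#   depth = focal_length * baseline / disparity

def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Triangulated depth of a point matched between two cameras."""
    return focal_px * baseline_m / disparity_px

focal_px = 800.0    # assumed focal length, in pixels
baseline_m = 0.065  # roughly the ~6.5 cm spacing of human eyes
for disparity_px in (40.0, 10.0, 2.0):
    z = depth_from_disparity(focal_px, baseline_m, disparity_px)
    print(f"disparity {disparity_px:4.0f} px -> depth {z:5.1f} m")

# Nearer objects give larger disparities; stereo-vision AI systems compute
# essentially this for every matched pixel pair.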
On second thoughts, I won't copy it; the discussion got too long and involved, and Steve can post something here himself if he wants. To directly produce a 3D image in the eye (rather than a model in the brain) requires a 3D "screen" in which to form the image. There are many possible technologies for doing this, but that is not the problem. The big problem is that the eye must be outside the space of the thing it is trying to see, just as our 2D eyes have to be out of the plane of the thing they are trying to see (you cannot see flat things edge-on). In other words, you need an eye displaced in the direction of the 4th dimension. This is simply not possible: a 4D body would be required, which we do not have. SpinningSpark 18:11, 13 April 2009 (UTC)[reply]
Dolphins and bats probably do this already using their sonar. A dolphin can see inside other creatures, not just the surface. Dmcq (talk) 21:48, 13 April 2009 (UTC)[reply]
Full 3D "sight" would require a lot of brain reorganization. Each part of the early visual system is organized as a 2D layer, matching the topography of the retina. All of this neural circuitry would have to be completely reorganized. Basically you'd be starting from scratch. Looie496 (talk) 22:21, 13 April 2009 (UTC)[reply]
We probably already have some mechanism for imagining it OK, though. When I think of something 3D, like a building where I know what the rooms are like, I feel the whole structure inside myself rather than imagining looking at it. Dmcq (talk) 23:07, 13 April 2009 (UTC)[reply]
If dolphins and bats can see inside objects it's simply because those objects are (semi)transparent to the sound waves they use. It would be like the way we see the inside of a jellyfish, not a fundamentally different form of perception. Our bodies are transparent to light too at many frequencies (like X-rays).
We have various technological means of taking volumetric photographs, like magnetic resonance imaging (awesome animated GIF in the lede), but they require the subject to stay still for long periods of time. Although, come to think of it, so did early photography. -- BenRG (talk) 23:14, 13 April 2009 (UTC)[reply]
Seeing through transparent things is, I think, a poor man's 3D version of what dolphins can do. With sonar you get a click back for each of the various distances to objects, and there is very little masking. The difference is great enough that it probably qualifies as a completely different type of experience. Dmcq (talk) 23:45, 13 April 2009 (UTC)[reply]
Some blind people have done an amazingly good job of teaching themselves to echolocate. We even have an article on it: Human echolocation. Having seen a TV documentary about Ben Underwood doing this and being tested by scientists to measure his abilities, I find it simply incredible what he is able to do. It wouldn't surprise me at all to discover that he has a rudimentary "3D acoustic vision" as a result of being able to echolocate through softer objects. SteveBaker (talk) 02:30, 14 April 2009 (UTC)[reply]
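For anyone curious about the arithmetic underneath all of this - bat, dolphin or human clicker, the principle is the same round-trip timing: range = speed of sound * echo delay / 2. A minimal sketch, with made-up example delays and standard textbook sound speeds:

# Echolocation range-finding: an echo returning after time t has travelled
# to the target and back, so range = v * t / 2.

SPEED_IN_AIR_MPS = 343.0     # speed of sound in air at about 20 C
SPEED_IN_WATER_MPS = 1500.0  # approximate speed of sound in seawater

def echo_range_m(speed_mps, round_trip_s):
    """Distance to a target from the round-trip echo delay."""
    return speed_mps * round_trip_s / 2.0

print(f"In air, a 20 ms echo means a target {echo_range_m(SPEED_IN_AIR_MPS, 0.020):.1f} m away")
print(f"In water, the same 20 ms echo means {echo_range_m(SPEED_IN_WATER_MPS, 0.020):.1f} m")

# Sound travels roughly 4.4x faster in water - one reason dolphin sonar can
# probe targets at useful ranges with very short clicks.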
Hmm, could we view small things in better 3D simply by having a third eye extending from our foreheads on a stalk? If we held things in front of ourselves, so that our stalk-eye saw the opposite side of the object to our existing eyes, we could see those objects in 3D. There wouldn't be much point though, since in that situation we can already 'see' such objects in 3D by feeling them, adding visual detail by turning them to look at different sides. I agree that echolocation seems the best way to 'see' the wider world in 3D. 217.43.141.59 (talk) 22:04, 14 April 2009 (UTC)[reply]

Cont: 4th Dimension and Black Holes


I know that I am beating this to death but:

http://www.scienceblog.com/cms/scientists-predict-how-to-detect-a-fourth-dimension-of-space-10672.html

According to this article, are they saying that there is a possibility of universes within universes? If so, does that mean that there can be another universe existing at the exact space and time where I am right now? --Reticuli88 (talk) 18:25, 13 April 2009 (UTC)[reply]

Not exactly; what they are postulating is that the 3D universe we can see is just a slice of an actual 4D universe, just as a slice through a 3D object results in a flat 2D object. This braneworld cosmology, as it is known, says there is a lot more to the universe than we can see, rather than being any sort of many-worlds interpretation. It remains to be explained why we have been utterly unable to detect the existence of all this extra universe so far. SpinningSpark 18:40, 13 April 2009 (UTC)[reply]
You may also have been confused by them saying that our observable universe is part of the whole universe, but there's nothing mysterious about that. The observable universe is just the region around an observer from which they could, in principle, receive light. Anything outside that region is moving away from the observer (due to the expansion of the universe) faster than the speed of light, so its light can never arrive at the observer. It's unfortunate that there's not a better term for it, since it confuses so many people. --Sean 19:10, 13 April 2009 (UTC)[reply]
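To put a rough number on that: recession speed grows with distance (Hubble's law, v = H0 * d), so setting v equal to the speed of light gives the distance scale involved. This is only a back-of-envelope illustration - a proper calculation of the observable universe's size needs the full expansion history - and the Hubble constant value below is just a typical round figure:

# Hubble's law: v = H0 * d. Setting v = c gives the rough distance beyond
# which recession is faster than light (ignores the expansion history).

H0_KM_S_PER_MPC = 70.0  # a typical round value of the Hubble constant
C_KM_S = 299792.458     # speed of light in km/s
LY_PER_MPC = 3.262e6    # light-years per megaparsec

hubble_distance_mpc = C_KM_S / H0_KM_S_PER_MPC
print(f"Hubble distance: {hubble_distance_mpc:.0f} Mpc "
      f"(about {hubble_distance_mpc * LY_PER_MPC / 1e9:.0f} billion light-years)")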
I have re-read my post quite carefully and do not see the phrase "observable universe" anywhere in it. This indeed is something quite different and has nothing to do with the question being asked. SpinningSpark 19:38, 13 April 2009 (UTC)[reply]
I think Sean meant this sentence which is in the second paragraph in the link:
"The theory holds that the visible universe is a membrane (hence "braneworld") embedded within a larger universe"
--Reticuli88 (talk) 19:44, 13 April 2009 (UTC)[reply]
Right. SpinningSpark, your original question was about the possibility of a "universe within a universe". The observable universe is exactly that, and is the only such thing mentioned in the article. --Sean 21:16, 13 April 2009 (UTC)[reply]
Well it wasn't me who asked the question, but my apologies, I now see you were replying to the OP, not to me. I ought to know what the indent means by now! SpinningSpark 21:59, 13 April 2009 (UTC)[reply]
No problem; I haven't even mastered the signature, apparently! --Sean 22:59, 13 April 2009 (UTC)[reply]

musical plants


What type of music will make plants grow fastest? And will talking to seeds only help them grow if you do it for hours? —Preceding unsigned comment added by 66.237.50.35 (talk) 22:07, 13 April 2009 (UTC)[reply]

I'm skeptical that music could really make plants grow faster. I know a lot of people insist otherwise, but people insist on all sorts of crazy stuff. That said, an episode of Mythbusters seemed to indicate that plants exposed to music and speech grew faster than their control plants, which were kept in a quiet spot. I find the results kind of dubious, and suspect that a more controlled experiment would bring about different results, but them's the breaks. In the episode, the plants that got a constant blast of death metal did best. (As for seeds, talking to them strikes me as even more nonsensical.) -- Captain Disdain (talk) 23:53, 13 April 2009 (UTC)[reply]
It is a known effect that many plants grow stronger in response to varying wind, while plants grown in a bell jar grow spindly and weak. Is it possible that loud death metal music is stimulating this response through mechanical vibration? SpinningSpark 06:33, 14 April 2009 (UTC)[reply]
That would make sense, sure. I really don't see how that would have any effect on the seeds, though -- one of the most interesting characteristics of seeds is that they are kind of dormant, and can remain that way for a long time, until they end up in an environment where they can start growing. -- Captain Disdain (talk) 11:28, 14 April 2009 (UTC)[reply]
Apparently it's called Thigmomorphogenesis, though I doubt anyone in a death metal band would know that's what they were doing. No answer to the seed question, but it most certainly works on seedlings. SpinningSpark 21:17, 14 April 2009 (UTC)[reply]

RAW MILK and Growing Up with Enzyme Lactase Loss


This question is being discussed as a possible request for medical treatment on Wikipedia talk:Reference Desk. -- kainaw 22:53, 13 April 2009 (UTC)[reply]