Wikipedia:Reference desk/Archives/Science/2009 June 5

Science desk
< June 4 << May | June | Jul >> June 6 >
Welcome to the Wikipedia Science Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


June 5

Distribution of stellar classes

According to the table in Harvard spectral classification, stars of spectral class M comprise ~76% of all main sequence stars while Sun-like stars in spectral class G comprise ~7.6% of all main sequence stars. This suggests to me that class M stars outnumber class G stars by a ratio of 10:1. My own OR suggests that is roughly true within 20 light years of the Sun, but is that ratio maintained throughout the Milky Way galaxy? Presumably similar ratios can be calculated for other spectral classes as well - I am particularly interested in whether some parts of the Galaxy are lacking in stars of a particular spectral class. Astronaut (talk) 00:10, 5 June 2009 (UTC)

I think the ratio is maintained on a large scale (presumably universally, although I'm not sure how much evidence there is to support that). I'm not sure about the distribution of different spectral classes throughout the galaxy, but there are certainly differences in the metallicity of stars in different areas (that article explains it fully). --Tango (talk) 01:32, 5 June 2009 (UTC)
I saw the metallicity article, but I'm not convinced there is a strong link between metallicity and spectral class, and anyway I'm really only interested in stars in the Milky Way disc. The kind of thing I'm trying to find out is if, for example, we observed that the population density of O and B class stars in the spiral arms was typically double that in the regions between the spiral arms, could we assume the same was true of the population density of G, K and M class stars, even though we cannot observe them directly? In other words, can I use the distribution of bright stars that can be observed at great distances, to reliably infer the existence of a much larger population of dim stars at these great distances? Astronaut (talk) 04:39, 5 June 2009 (UTC)
I don't think you can do that because the O class stars are short lived (less than 10 million years) and won't have enough time to move very far from the star nursery where they are born, while smaller stars can live much longer and will have a more uniform spatial distribution. Dauto (talk) 06:43, 5 June 2009 (UTC)
One factor in variation of star population is related to the size of the galactic body they are in. Globular clusters are known to lose stars through a process analogous to (and actually called by some) "evaporation". The lighter stars in the cluster are the most "volatile" since they will gain the highest velocities in energy exchange interactions with other stars. Globular clusters thus tend to have a relative poverty of K and M-type stars since these will be the first to achieve escape velocities. SpinningSpark 13:08, 5 June 2009 (UTC)
Two important keywords for this type of question are the initial mass function (IMF) and the star formation history, and it's the latter that chiefly determines the current distribution of stellar types in a given stellar population. Starting from the IMF, an assumption about the star formation history and using stellar evolution models, one can try to model the stellar populations in various environments of the Milky Way, but also in other galaxies. Obviously, while in the Milky Way one can make a tally of individual stars, in other galaxies one only has a limited amount of information, like the colours of galaxies or integrated spectra. What a population of stars in a given area looks like at a given point in time thus depends on both the IMF and the star formation history. From modelling this sort of information one can then try to reconstruct these two factors. I'm far from being an expert in this, but my impression is that the IMF is fairly universal. There are indeed different versions that have been tried and which vary in particular at the low mass end, i.e. at late spectral types, but that controversy seems to be due more to our ignorance rather than variation in different environments. The star formation history is much more variable. Early-type stars, O and B, say, are very short-lived and can only exist in environments where stars are currently being formed. If you switch off star formation (e.g. by removing cold gas) these stars will disappear very quickly and you'll be left with a population that is dominated by late-type stars, main-sequence stars dominating by number, and red giants dominating the light output. The neighbourhood of the sun is in the disk of the Milky Way which is still forming stars, hence we might expect some early-type stars (typically forming OB associations). The bulge forms far fewer stars and is therefore dominated by later-type stars. The extreme cases would be "red and dead" elliptical galaxies. --Wrongfilter (talk) 16:24, 5 June 2009 (UTC)

Q about lovebirds

Is it true that it's impossible to keep one lovebird as a pet? Is it also true that if you have two lovebirds and one of them dies, then the other one will become depressed, stop eating and die soon after? Thanks. --84.67.12.110 (talk) 00:13, 5 June 2009 (UTC)

I've looked around and read a bunch of likely pages - and I can't find anyone that says this is true. However, it's clear that they are extremely active, intelligent, social birds that might get depressed because they don't have anything to keep them amused when their mate dies. I suspect that the myth of one bird dying and its mate dying "of a broken heart" soon after has its roots in the fact that if you buy two birds of the same age on the same day - keep them in the same cage and feed them the same food - then the odds are pretty good that whatever kills one of them will soon kill the other...especially if it's a bit depressed because it's bored. This would seem a lot like death from a broken heart - but in all likelihood it's just that whatever killed the first one also kills the second. As for keeping just one of them - according to most of the sites I've read, it'll do OK PROVIDED you give it tons and tons of attention and keep it busy and interested in life. Having two of them relieves you of some of this huge commitment in that (to some extent) they'll keep each other amused when you aren't around. SteveBaker (talk) 03:50, 5 June 2009 (UTC)
This song recounts a field observation of catabythismus incidental to amorous dysfunction by Petroica macrocephala, the Miromiro or New Zealand Tit, a bird of the Petroicidae (Australasian robin) family colloquially known as a tom-tit. Cuddlyable3 (talk) 08:39, 5 June 2009 (UTC)

largest possible Black Hole

Is there a radius at which the density of a Black Hole is so low that it is no longer a Black Hole? --71.100.6.71 (talk) 02:50, 5 June 2009 (UTC)

No. You are correct that the density of a black hole (i.e. the average density within the event horizon as viewed from a distance) decreases as the black hole grows, but there is no minimum density required to remain a black hole. --Tango (talk) 03:15, 5 June 2009 (UTC)
The density of supermassive black holes can be surprisingly low. You would not even notice you had entered one. You would just be unable to get out. --131.188.3.20 (talk) 12:51, 7 June 2009 (UTC)
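For reference, the "average density" referred to above can be made precise: for a Schwarzschild black hole of mass M, the mass divided by the volume enclosed by the Schwarzschild radius r_s = 2GM/c^2 is

    ρ = 3c^6 / (32 π G^3 M^2)

which falls as 1/M^2, so a sufficiently massive black hole has an average density below that of water, or even of air.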

zinc chloride

Why is zinc chloride not weighed while it is still hot? —Preceding unsigned comment added by 115.134.149.71 (talk) 03:51, 5 June 2009 (UTC)

Nothing should be weighed while its temperature is different from the ambient temperature. If the substance is hotter than the air it will generate convection currents which can easily disrupt a sensitive balance and give an inaccurate reading. If you are going to weigh something, let it cool to room temperature so that such convection currents do not screw with the balance. Of course, you already knew all of this, because you paid attention when your chemistry teacher explained all of this to you. Right? --Jayron32.talk.contribs 04:02, 5 June 2009 (UTC)

Lame flight data recorders

The flight data recorders presently in use on jumbo jets send out a beacon signal detectable at a reported range of one mile, but the planes may crash in ocean depths of up to 4 miles, such as the recent Air France jet crash. Would the acoustic beacon power have to increase as the square or the cube to be detectable at 4 times the present distance, so there would be a great likelihood of recovery? This could mean 16 times the radiated acoustic signal or 64 times, respectively. News articles also say the batteries will fail after 30 days of beeping. Why couldn't the flight data recorder have a transponder which sends out a very powerful signal only when a powerful search ship probe signal triggers it, meaning it could still be found long after the crash? Edison (talk) 05:26, 5 June 2009 (UTC)

I think the deal with FDRs is that they are built to an old standard, and the standard has never been updated, because having one standard is preferable in most cases to having 100 different types of FDRs. If they are all identical, then one always knows what to look for. Actually, a better standard would seem to be to broadcast the information via a digital data stream over a satellite system. The problem is that the FDR is attached to the plane. If the flight data information were stored outside the plane, it would greatly reduce the need to recover the box. The black box could still exist as a backup. Of course, the technology exists to do exactly that; the problem would be retrofitting the world's airplane fleet and relevant ground locations with the system - it would be a daunting task. But the technology certainly exists to devise a better system. --Jayron32.talk.contribs 05:33, 5 June 2009 (UTC)
I don't know how much data they record, but surely it's enough to make it impractical for every plane in the air to constantly stream it over expensive satellite bandwidth. --Sean 14:11, 5 June 2009 (UTC)
Also, couldn't bad weather (like the recent Air France crash occurred in) interrupt the signal? TastyCakes (talk) 14:25, 5 June 2009 (UTC)
In a simple isotropic medium it would generally be distance squared, but there is a correction for scattering so it is really more like e^(-r/λ)/r^2, where λ is about 1 km in the ocean. Which means that at 4 km you get about 1/300th the power as at 1 km. A further effect is the diffraction around the sofar channel in the ocean, which makes it nearly impossible for deep acoustic signals to ever reach the surface anyway, so even with a more powerful transmitter you'd still have to dangle a deep hydrophone to hear it. Dragons flight (talk) 16:44, 5 June 2009 (UTC)
Transmitting 4 miles would require more like 2000 times the power of transmitting 1 mile, given the formula above. Dragons flight (talk) 17:04, 5 June 2009 (UTC)
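For anyone who wants to reproduce these figures, here is a minimal numerical sketch (assuming, per the posts above, received power proportional to e^(-r/λ)/r^2 with λ = 1 km; the constant of proportionality cancels out of the ratios):

    import math

    def received_power(r_km, lam_km=1.0):
        # inverse-square spreading with an exponential scattering correction
        return math.exp(-r_km / lam_km) / r_km ** 2

    # 4 km vs 1 km: about 1/300th of the power reaches the receiver
    print(received_power(1.0) / received_power(4.0))              # ~321

    # 4 miles vs 1 mile: roughly 2000x the transmit power needed
    mile_km = 1.609
    print(received_power(mile_km) / received_power(4 * mile_km))  # ~2000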
The Mary Sears was unable to locate the black boxes from Adam Air Flight 574 using hull-mounted sonar. After returning to port and equipping a Towed Pinger Locator, she detected both at a depth of 1700m. According to Jane's a TPL can detect the signal from a max depth of 20,000 ft.—eric 19:31, 5 June 2009 (UTC)
According to the Adam Air accident report they were found at 2000 and 1900 meters, and the Mary Sears was required to pass within 500m of a beacon to locate them.—eric 20:08, 5 June 2009 (UTC)

Radiocarbon and DNA evidence on Voynich Manuscript

Has the Voynich Manuscript been subjected to radiometric dating to determine the age of its materials, or to DNA testing to determine the region of their origin? NeonMerlin 05:41, 5 June 2009 (UTC)

According to this Yale journal (scroll down to "Shelf Life"), dated February 2009, "two outside specialists are analyzing the pigments in its ink and carbon dating a tiny sample of its vellum." I haven't come across any reports of their findings, though. Deor (talk) 15:02, 5 June 2009 (UTC)
Is it a coincidence that this question came today? Jørgen (talk) 17:02, 5 June 2009 (UTC)
My guess is that Merlin's inquiry was sparked by reading Mr. Munroe's comic this morning, and not by chance. 17:07, 5 June 2009 (UTC)

Electrical power usage of computer

Hi, in this green age I guess it's relevant to ask this question. I've always been told to leave my computer on all the time, as repeated on/off switching could not only damage the switch but also cause electrical spikes which could damage the motherboard(?) over time. However, now my latest concern is that leaving it on consumes at least 600W (that being the rating of the power supply unit with the fan). Is this true? When the computer goes into power saving mode (turn off monitor and hard disk) - how much less does it use? Would it use even less if I tell the computer to hibernate during times of no usage? Sandman30s (talk) 09:24, 5 June 2009 (UTC)

While there is certainly some increased wear and tear from turning it on and off, I don't think it is much to worry about. If you're just stepping away for 10 minutes it probably isn't worth turning it off, but if it is going to be several hours it would be good to. It's difficult to say how much power it uses at different times, it varies from computer to computer, but the 600W figure would be a maximum (plus whatever your monitor uses and any other equipment that has its own mains plug), it won't use that all the time. You can get a power meter that goes in between the plug and the mains socket that will tell you exactly what it is using, I think most high street electrical stores will have them. Hibernation is actually the same as turning it off completely, the only difference is that it saves its current state to the hard drive so it can load back to the same point when you turn it back on. It will probably still use some power when switched off unless you unplug it or switch it off at the mains (see Wall wart). Leaving it on standby will use a significant amount of power. --Tango (talk) 12:37, 5 June 2009 (UTC)
Thanks, I think I'll do some tests with a power meter. Sandman30s (talk) 13:27, 5 June 2009 (UTC)
Also, note that in the winter, if you heat your house with electrical heating, you can just leave your computer on as all "wasted" energy contributes to heating your house anyway. (and conversely, if you use AC in the summer, you waste "double" the energy by leaving the computer on (I'd think?)) Jørgen (talk) 13:54, 5 June 2009 (UTC)
Both of your estimates are accurate - in data centers, it is estimated that total electric bills are equal to 2 times the power consumption of the electronics - this is a very common rule of thumb for estimating costs. This means that for each watt of useful electronics, an extra watt is necessary for air conditioners. It's hard to grasp this heating/air-conditioning concept intuitively - you "know" your computer is warm, but it's not that big... If you have the chance to visit a server-room, get permission to walk BEHIND one of the rack cabinets - behind a rack of 40 high-performance systems - you'll be blasted by the hot-air fan output which might reach a hundred kilowatts - it's like a furnace! Nimur (talk) 14:53, 5 June 2009 (UTC)
Thanks for the info! This should mean that environmentally friendly data centers should be placed in the basements of large offices / residential buildings in cold areas, so that the excess heat can be used to heat the building. I think I've read some articles on large data processing centers being located in places where hydro power is prevalent, or where there is a river available for cooling, both of which would be fine, but I think that placing them in cold areas - pumping cold air in (in some controlled process, of course) to cool the components and using the excess heat for heating - would be an even better approach? Now, I know the logistics of this might be hard, but I'm sort of surprised that no fringe-targeted "environmentally friendly cluster" has marketed this yet. (Oh, and sorry for hijacking the question) Jørgen (talk) 15:08, 5 June 2009 (UTC)
The use of cold, outside air as a source of cooling for data centers is already an accepted solution in some areas. Care must be taken that outside air does not introduce fine particulates that might damage equipment, and the temperature and humidity of incoming air must be tuned; these challenges and concerns have delayed the adoption of so-called air-side economizers or outside air economizers. Nevertheless, some more recent studies (by Intel and some electrical utilities; linked from our articles) have allayed fears over these risks. Here's an article about a Canadian firm that gets about 65% of their data center cooling from outside air: [1]. TenOfAllTrades(talk) 16:51, 5 June 2009 (UTC)
Thanks! Good to see that good ideas are put to use. Jørgen (talk) 17:00, 5 June 2009 (UTC)
In this interview with an IT manager working at the South Pole, he says that at their old data center somebody had just cut a hole in the wall to cool everything inside. Tempshill (talk) 17:33, 5 June 2009 (UTC)
Actually that's only likely the case if you have central heating and heat your house 24/7 to a constant temperature. In fact even then I'm not convinced it's going to be equal unless every space and passive part in your house is at a constant temperature, which I would expect to be unlikely. Also if you are using a heat pump then even if it is really the case, you're still usually going to spend more energy than you would have otherwise because of the better coefficient of performance for the heat generated (this may be a bit confusing in the context because not all of the energy input is going to make heat, but even that which does can basically be considered akin to an electrical heater). At least computers are usually on the ground, though, not near the ceiling like the lights people try to make the same argument about. Also it is unlikely all energy input ends up as heat, even when idle. Computers do do processing etc and some of the wasted energy would likely be in sound, perhaps EM etc which is not going to end up as heat (at least not in your house). If you are using a modern computer with Cool'n'Quiet, SpeedStep or a similar power saving technology and an efficient PSU, I would say you would still often use at least 100W when idle, and even presuming 80% ends up as heat (which I expect is a bit high), and that all that heat is useful for you (which as I've explained I think is unlikely), that's still 20W wasted. Over 90 days that's 43.2kWh. That's easily a few dollars/pounds/euros/whatever (depending on energy costs in your area) and equivalent to leaving a 1000W heater on in the middle of nowhere for 43.2 hours for no purpose. Of course in practice your computer is not going to be off 24/7 either, but it's entirely reasonable to turn off your computer if you're not going to need it for an hour or more. If you need a fast start, you could always enable some standby mode. Ultimately, it's always best to let specialised equipment do what it was designed to do. Don't use a computer or a light as your heater (well, heat lamps are designed to generate heat and light, but that's a different issue altogether). Use a heater. Yes, it does mean the wasted-energy calculation is not as simple as it seems, but it doesn't mean you shouldn't consider wasted energy just because it's all heat and you want heat. The heat has to be in the right place at the right time, otherwise it's still wasted. Nil Einne (talk) 07:54, 7 June 2009 (UTC)
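To make the arithmetic above easy to rerun with your own numbers, here is a small sketch (the 20 W and 90 days are the example figures from the post above; the electricity price is an assumed placeholder):

    def energy_kwh(watts, days):
        # energy used by a constant load left running 24 hours a day
        return watts * 24 * days / 1000.0

    kwh = energy_kwh(20, 90)
    print(kwh)                # 43.2 kWh, matching the figure above
    heater_kw = 1.0           # the 1000 W heater from the comparison above
    print(kwh / heater_kw)    # 43.2 hours of pointless heater runtime
    print(kwh * 0.15)         # ~6.5 at an assumed 0.15 per kWh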
As I mentioned above, it makes sense to turn off your computer, but it's highly unlikely your computer uses 600W even at full load. Firstly, power supplies for computers are always rated for more than the computer ever needs. If it's a professionally designed computer, e.g. from Dell etc, then the power supply's maximum output will be closer to the maximum expected requirement, but still higher. If it's a no-name or assembled by a friend then there's a good chance the power supply is rated way above what the computer ever needs (this isn't a good thing, but it's common). Commonly people add up the maximum possible draw of every component in the computer, except that this situation usually never actually occurs in practice. Secondly, most cheap, no-name power supplies are rated much higher than they can actually reliably supply. In other words, even if your power supply is rated 600W, it will often not be able to reliably deliver close to that before having major problems. Thirdly, as I have already hinted at, this is at maximum load. If you have a modern CPU and a modern GPU, and even to some extent a modern motherboard, these are designed so they will draw a lot less when idle or close to idle (old ones too, but not to such an extent). With CPUs, make sure the power saving technology (probably SpeedStep for Intel and Cool'n'Quiet for AMD) is properly enabled and working. With GPUs, try to get an updated driver from the manufacturer (Nvidia or AMD/ATI) and look for any power saving options. Note that if you have an IGP these don't use much, but there may still be some power saving options. If everything is set up right, I would expect even quite a good computer to use about 100W or so when idle (a very rough estimate) and 200-300W at maximum load. Even with something like SLI/Crossfire and a quad core, 500W would be a high maximum. In other words, the power used is likely less than you may think (but that doesn't mean it makes sense to waste it). As has already been mentioned, if you're really interested you can buy a power meter, plug your computer into it and see how much it draws. (This is the only reliable way to work out how much a computer uses; anyone who tries to estimate it by adding up components is coming up with a highly speculative number, and if looking at reviews I would ignore such figures and maybe even the whole review.) The only other thing I can say is you may want to consider a new PSU. As I've stated, it's likely more than you need. If it's some no-name POS (as I've suggested, if it's a decent branded computer, e.g. Dell, HP-Compaq etc, then the PSU is probably an okay one), this is highly problematic because many crappy PSUs are extremely inefficient, particularly at low loads, which is probably the case for you presuming your computer is idle most of the time (browsing the internet or running Office is usually close to idle). A high quality, good brand PSU which meets the 80 Plus certification would likely save quite a bit of energy. Plus cheap POS PSUs are notorious for dying and taking some of the components with them, so it may save you some headaches there too. It may cost a bit, but I would look into whether you really need a 600W PSU. As I've said, ignore anyone who tells you to add up the maximum theoretical load of each component. A good place to start would be this SilentPC article (sadly a bit outdated) and the forums.
Obviously if you get a power meter then you will have a rather accurate idea of how much your PC draws, and while you'd want some headroom, I wouldn't give it too much. This is one of the examples where bigger is not necessarily better. (This isn't perhaps quite as much an issue nowadays as it was before the 80 Plus days, but it's still the case that you don't really want to be using only a tiny percentage of your PSU's maximum most of the time.)
Leaving your computer on while you don't even use it 75% of the time probably won't save energy compared to turning it off. Computers usually have a software shutdown option, so you can turn them off without pressing the power button, or they will tell you when it's safe to turn off the power. I even turn off my computer straight with the power button every time, with no ill effects. ~AH1(TCU) 19:48, 7 June 2009 (UTC)
Also, some computers will consume several watts of power while they are turned off. ~AH1(TCU) 19:49, 7 June 2009 (UTC)

Satellite photos of London from today (5th June 2009)

Hi,

Is there any way of getting these? I realise live satellite feeds probably aren't available but what about very quickly archived pictures? --Rixxin (talk) 11:26, 5 June 2009 (UTC)

Photos from weather satellites, maybe. What kind of resolution are you looking for? --Tango (talk) 13:28, 5 June 2009 (UTC)
Honestly? This is sure gonna sound lame. I was interested to see if there was a big ol' dark cloud over The Palace of Westminster today, 'cos it sure felt like there should be!--Rixxin (talk) 16:43, 5 June 2009 (UTC)
Well, there was a big dark cloud over the whole of England, pretty much... See here for visible light satellite images of London and the South East of England over today (one per hour). I can't find a close up of Westminster, but the weather was pretty consistent so it doesn't really matter. --Tango (talk) 18:51, 5 June 2009 (UTC)
The day will undoubtedly come when we will be able to view any part of the earth in real time, and zoom in on places of interest. Wars could be watched as they took place. Imagine what it would have been like to watch the D-Day invasion in real time. – GlowWorm. —Preceding unsigned comment added by 98.21.107.157 (talk) 18:44, 5 June 2009 (UTC)
Actually, you could have watched it in real-time, with the right equipment. --98.217.14.211 (talk) 19:57, 5 June 2009 (UTC)
It's very unlikely that such a thing would ever happen, however nice an idea it may be. Think about it. If satellite pictures of battles were freely available on the internet as the battles were taking place, they would be available to both sides and would present serious problems (and of course, advantages) to both sides. --KageTora - (영호 (影虎)) (talk) 19:11, 5 June 2009 (UTC)
You would need millions of satellites, and much more bandwidth than the entire Internet has today. --131.188.3.20 (talk) 22:34, 5 June 2009 (UTC)
Half a century ago it would have sounded ridiculous that everyone was walking around with their own personal mobile phone. You could not build enough transmitters, and where would all the bandwidth come from? Technology will continue to improve apace and if you cannot think of a fundamental theoretical reason to stop such things happening, then they probably will if people want them to. SpinningSpark 00:04, 6 June 2009 (UTC)
Indeed. If you are willing to wait 50 years, almost anything is possible unless it is explicitly impossible (and maybe even then!). I believe three satellites in geostationary orbit with cameras with amazing resolution could cover most of the Earth, all it needs is for someone to invent the cameras and that could easily happen in the next 50 years. --Tango (talk) 00:08, 6 June 2009 (UTC)
It's not the satellites or the cameras - providing whole-earth, realtime coverage (as we've discussed recently) requires insane amounts of hardware - but it's not disallowed by the laws of physics. HOWEVER - what is disallowed is the bandwidth you need to get all of that data back down to earth. 500 million square kilometers - 500 trillion square meters. 50 quadrillion pixels (at 10cm-per-pixel resolution). About 500 quadrillion bits with lossy data compression. For a 'realtime' update - you need to send that between maybe 10 and 60 times a second. So 5 quintillion bits per second. That's a truly outrageous amount of data - it would fill a million hard disk drives every second. It's more data than is available on the whole of the internet...every second. This is not merely an engineering difficulty - it's a serious "fundamental laws of physics" kind of a problem. If we had a million satellites - all with tight-beam laser links to the ground so they didn't interfere with each other - then it would be pretty much impossible - but to do it with three satellites! That's just impossible. SteveBaker (talk) 02:52, 6 June 2009 (UTC)
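SteveBaker's numbers can be reproduced as follows; note that the 10 cm ground resolution and 10 bits per compressed pixel are assumptions inferred to make his figures come out, not quoted specifications:

    earth_area_m2 = 500e6 * 1e6    # 500 million km^2 in square metres
    pixel_m = 0.1                  # assumed ground resolution: 10 cm per pixel
    bits_per_pixel = 10            # assumed size after lossy compression
    fps = 10                       # lower end of the 10-60 updates per second above

    pixels = earth_area_m2 / pixel_m ** 2    # 5e16: "50 quadrillion pixels"
    frame_bits = pixels * bits_per_pixel     # 5e17: "500 quadrillion bits"
    print(frame_bits * fps)                  # 5e18: "5 quintillion bits per second"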
a) What law of physics are you suggesting is being violated? b) you're making the same mistake you made last time - there is no need to transmit all the data, just the data that someone wants to receive and c) you are making meaningless comparisons - what do hard drives have to do with anything? No-one is asking you to store the data. Really, Steve, you should know better than to cavalierly claim something is physically impossible. --Tango (talk) 12:29, 6 June 2009 (UTC)
Nanotechnology could be a way to do it. Millions of tiny satellites could be put into orbit. Each satellite would consist of a group of molecules forming a camera and a radio transmitter having a broad beam. Each satellite would be solar powered, with a rechargeable battery for nights. Each camera would be aimed at a certain place on the earth and thus would need only a narrow transmitting bandwidth. The camera would be kept in its earth-aim by viewing certain stars and automatically making aim corrections using solar power. The viewer on earth would select the radio frequency of the satellite aimed at the part of the world he is interested in. He would do this by selecting a square on a grid map on his monitor. He could enlarge the picture received, or combine the adjacent pictures of several satellites to cover a larger area. The satellites would not have to be in geostationary orbits, thus preventing overcrowding of that orbit, and permitting good near-polar coverage by using polar orbits with some satellites. Automatic switching would transfer reception from satellite to satellite to maintain the selected view of the earth as the satellites move. Automatic viewing-angle correction would also take place to provide an overhead view or angle view. The satellites could be manufactured cheaply by the millions in a nanofactory. They would be so small and light that they could be put into orbit and replaced at low cost. – GlowWorm.
Steve is being a bit negative here. Updates 60 times a second are not necessary for still pictures to be considered real-time, that's only necessary if you want video. Once a minute might do, which would take three or four orders of magnitude off Steve's figures. Even if the aggregate data rate really is ~10^18 b/s, each individual user does not want all of that at the same time on one channel, nor is each individual satellite supplying all of that on one channel. This is what networks are for, they are composed of many channels. As for the number of hard drives required to store the data, well you only need them for the data you want to store, you don't have to store any of it if you don't want to. SpinningSpark 12:27, 6 June 2009 (UTC)
Hm, what about not archiving everything, just sending the data on demand? For example, for a city of 1000 people you don't need 499,500 phone cables (one for each of the 1000×999/2 pairs) just to connect them with each other, because they will never all use the system at the same time, talking to everyone at once. In fact, around 30 or so lines are enough to assure 99.9% service. Not every person on Earth will look at the video at the same time, and even if they did, they wouldn't see the whole surface of the Earth at the same time. Even so, we would require a ridiculous amount of satellites, bandwidth and storage space by today's standards, but significantly less than the numbers discussed so far. --131.188.3.21 (talk) 15:02, 6 June 2009 (UTC)
The MODIS Rapid Response website provides "near real time" (perhaps a pass per day or so) satellite coverage of most of the planet. The website is here. It takes some work to find what you want, and odds are good it will be cloudy. The images are fairly raw with edge distortion and the like, and the maximum resolution is 250 m per pixel. Still, the OP didn't specify the resolution required, and these are near daily images, so... Pfly (talk) 08:51, 7 June 2009 (UTC)
Yes - but that's hardly what the OP is asking for. Photos of "London" would be little more than a grey blur with a pixellated river running through it. 250m resolution means that streets and buildings cannot be seen - perhaps the larger parks could be - but there would be very little point in updating those pictures that frequently since nothing much changes at that scale from day to day - or even month to month!
This is London at 25 meters per pixel:
[image: satellite view of London at 25 m per pixel]
But this is all you see at 250 meters per pixel:
[image: the same view at 250 m per pixel]
Pretty much a grey/green blur with a suggestion of a river.
SteveBaker (talk) 18:01, 7 June 2009 (UTC)
Also, the USGS Global Visualization Viewer gets you images from different satellites near specific geographical coordinates. However, it's unlikely to let you see images at very high resolution, although it does usually have recent (weeks to a few months old) images, depending on cloud cover. ~AH1(TCU) 19:42, 7 June 2009 (UTC)

Is it really true that Planes which cruise at high altitude get hit by lightning?

Hi All,

I just keep wondering if the latest French aeroplane accident is really due to lightning. As I understand it, at heights of over 28k feet there are no rain-bearing clouds; in fact (again within my limited knowledge) these clouds are much lower. As such there shouldn't be lightning either. Am I correct in my understanding? Would appreciate it if people could enlighten me on the same.

Warm Regards, Rainlover_9 Rainlover 9 (talk) 14:22, 5 June 2009 (UTC)

See upper-atmospheric lightning. Gandalf61 (talk) 14:26, 5 June 2009 (UTC)
Planes DO get hit by lightning - but not by "Upper Atmospheric Lightning"! Upper-atmospheric lightning is WAY above where a plane flies. A plane will fly at about 35,000 feet - only about 10 km above the ground. The phenomena referenced in upper-atmospheric lightning are more appropriately called "Transient Luminous Events" because they are neither lightning, nor do they take place in the "atmosphere" (largely being above the stratosphere and in the lower boundary layers of the ionosphere). There is no way even a military aircraft would fly at those altitudes (80 or 100 km above the ground). Airplanes can and do get hit by tropospheric lightning when they fly close to or through a storm - and some cumulonimbus clouds might have storm tops as high as 60,000 feet above ground. It is this type of lightning - typically a cloud-to-cloud strike - that results in the occasional airplane strike. Nimur (talk) 14:59, 5 June 2009 (UTC)
I have seen a photo of lightning striking up from a cloud as well, so that could be a possibility. Also, an airplane flying along might build up its own static charge, and may become a lightning target by itself, similar to how lightning will go from cloud to cloud. 65.121.141.34 (talk) 15:03, 5 June 2009 (UTC)
As I understand it, lightning has been ruled out as a cause. Planes, in theory, should be safe from lightning for the same reason cars are safe from lightning: the metal skin conducts the charge safely around the passengers and sensitive electronics. This is not always the case, as I know lightning has brought down planes in the past, but I'm fairly certain that all modern planes have protection against lightning. (ref: http://blog.seattlepi.com/aerospace/archives/169983.asp) -RunningOnBrains(talk page) 18:15, 6 June 2009 (UTC)
I don't think lightning has been ruled out entirely, but it is pretty clear that more than one thing went wrong. These kinds of accidents always have a sequence of causes. There are all kinds of safety features on modern planes so they can survive pretty much anything that can happen, but sometimes they can't survive multiple such things happening at once. --Tango (talk) 18:33, 6 June 2009 (UTC)
The deadliest plane crash caused by lightning was LANSA Flight 508, with 91 deaths. Planes today have better safety features, but that does not completely prevent them from being downed by lightning. Some features include needle-like structures on the back of the wings and tail to dissipate static charge away from the aircraft. ~AH1(TCU) 19:38, 7 June 2009 (UTC)

Is 2 + 2 = 4 in other universes

Hello, while wandering about on the Internet I came across this interesting question on Yahoo Answers:

"Is 2 + 2 = 4 in other universes? Assuming that parallel universes exist, would their mathematics be the same as ours? I.e. could there be a universe in which, for example, 2 + 2 = 5?"

People may be interested in the answers posted: http://au.answers.yahoo.com/question/index;_ylt=AvNVaIojr7pEw1Bn_IyyzxoJ5wt.;_ylv=3?qid=20090603213159AAKkQK1

Since the theory of multiple universes was derived using the maths that we understand, it follows that the maths in parallel universes MUST be the same; otherwise the theory would be inconsistent. That's my guess, I was just wondering what others might think about this.

203.206.250.131 (talk) 14:29, 5 June 2009 (UTC)

Unless they use different numbers, it's probably going to be the same. If you have 2 things and add 2 more, it's going to be 4 every time. --Abce2|AccessDenied 14:31, 5 June 2009 (UTC)

Actually I would disagree and say there definitely could be universes where 2+2=5 or even 2=3. The whole idea behind a theoretical multiverse is that there are an infinite number of universes in which the laws of physics, logic and math can be different. Time could run backwards, accelerate, or not exist at all. Keep in mind math has never been proven to be the basis of reality (maybe once we get a grand unification theory it will be) but is simply a representation which we've created. But either way, even with a G.U.T. this would only prove math is immutable in OUR universe; others could be entirely different. TheFutureAwaits (talk) 14:56, 5 June 2009 (UTC)

A better way to phrase this question is, "Suppose a hypothetical universe existed where 2+2 != 4. What would be the consequences?" I do not think there are any verifiable consequences of such a statement, so it is a non-scientific question. I think I'll leave my response at that, and we can just wait for SteveBaker to show up and remind everyone that needless philosophizing is wasting quarks... Nimur (talk) 15:03, 5 June 2009 (UTC)
(ec) Personally I think maths evolved from what you see. If you have two balls and someone else gives you two more balls, you find out that you have four balls. This process is called addition. So, maybe, just maybe, there might be a universe where if you have two balls already and I give you two more, another one automatically materializes and you get 5. This is a physical law in the universe that whenever you give two more balls, an additional one should materialize. In this particular universe, I am inclined to believe that 2+2 will indeed be 5, and hence all our mathematics breaks down. So, if there can be universes with different physical laws, this might well be possible, so I think it cannot be said with certainty that 2+2 is always 4. It can just as well be any other number, though it is hard to visualize in this example how it can be a fraction (half a ball? pi times a ball?). I am assuming decimal systems are used in all universes. Rkr1991 (talk) 15:09, 5 June 2009 (UTC)
It depends on your definitions of 2, +, = and 4. If you define them the same way as we define them, which is not dependent on the universe, then you'll get the same answers. However, life in other universes may well define them differently if different definitions are useful for them. Mathematics and universes are separate things linked by models. We model parts of the universe on mathematical things. To use Rkr1991's example, balls are well modelled in our universe by natural numbers; they might not be well modelled by natural numbers in other universes. That doesn't mean natural numbers are any different, just that they wouldn't be used. There are all kinds of possible number systems in mathematics but we generally only use the ones which are useful for describing our universe. --Tango (talk) 15:20, 5 June 2009 (UTC)
I disagree with Tango. Even if you define 2, +, = and 4 exactly the same way in our system, the problem posed in my example remains unresolved. As I said earlier, I think maths was derived from what we observe, not hard solid facts. Maths and our universe cannot be separated, and it cannot exist independently. So if we see 2 balls and 2 balls make 5 balls in a universe, then I think our maths indeed breaks down there. Rkr1991 (talk) 15:38, 5 June 2009 (UTC)
I agree with Tango. If we see 2 balls and 2 balls make 5 balls, we will reasonably deduce that our definitions of 2, +, = and 4 are no longer useful, and we will redefine one of them. But unless we do that, if instead we keep 2 defined (as is most common) as the set {{}, {{}}} and + through Peano arithmetic, 2 + 2 cannot suddenly become 5. It's not like mathematics defines 2 + 2 as "the number of physical objects in the collection resulting from putting two physical objects together with two other physical objects". Correspondence with that physical result is of course the reason for our definitions (that's why Peano chose the axioms he did, etc), but it is not the definition. —JAOTC 15:54, 5 June 2009 (UTC)
Our definitions of arithmetic are axiomatic, not empirical. Observations don't come into it other than to determine whether those definitions are useful. --Tango (talk) 16:29, 5 June 2009 (UTC)
Um, no. The formal axioms of arithmetic are chosen because they're true, not the other way around. How we come to know this truth, and whether that can be called "empirical", is a difficult question. But you can't seriously maintain that there's no notion of the truth of a statement like 2+2=4 apart from its provability in a formal system, when discussing provability in that system requires understanding the truth or falsity of more complicated statements. --Trovatore (talk) 16:48, 6 June 2009 (UTC)
We use those particular axioms for things like counting everyday items because they are useful axioms for doing that - they accurately model the real world (you can call that "truth" if you like, I prefer to avoid the word because it has so many different meanings). We could still consider the axioms in some universe where they didn't accurately model the real world, but it would be a largely academic exercise. If we chose to carry out that exercise, though, we would still get 2+2=4. --Tango (talk) 03:07, 7 June 2009 (UTC)
We use those axioms because they're true. They accurately describe the behavior of the mathematical objects they are intended to describe, which are part of the "real world", yes, but not the physical world.
The claim in your penultimate sentence relies essentially on the existence of real, though abstract, mathematical objects, which are the same in the hypothesized other universe. If you don't have those objects, you have no way of even formulating what it would mean to "carry out that exercise" in that other universe. --Trovatore (talk) 06:27, 7 June 2009 (UTC)
It occurs to me that I should say a little more here, to head off a possible misunderstanding of my point. My point is that the axioms themselves are abstract mathematical objects, and the syntactic manipulations of them by which one codes proofs are more complicated relationships between those abstract objects, than the relationship between the abstract objects 2 and 4.
If you don't accept abstract objects of even this simplicity, then you have no grounds whatsoever to claim that it's even meaningful to discuss "carrying out the exercise" in a hypothetical universe in which the physical objects may bear no relationship whatsoever to those we're used to. Hope that makes my point clearer, or at least not less clear. --Trovatore (talk) 06:32, 7 June 2009 (UTC)
I'm afraid you have a misunderstanding of how mathematical logic works. There is no single mathematical object called "2". Anything which satisfies the axioms for being "2" can be considered to be "2". The axioms come first, then you find a model which satisfies those axioms. The most common model for the natural numbers is to let zero be the empty set (∅) and then the successor of each number is the set containing all the numbers that come before it, so 2 would be {∅, {∅}}, but there are various other models you could use, all of which have an object that can legitimately be called "2". The axioms were chosen because they accurately model the way items in the real world behave (there is an obvious real world concept of there being none of something, so we need a zero; then you can always place another item in a group and get a new number of items, so every number needs a successor, etc. See Peano axioms for details). --Tango (talk) 16:11, 7 June 2009 (UTC)
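As a concrete illustration of the model Tango describes, here is a short sketch (the names and helper functions are mine, for the example only) building the von Neumann naturals in Python and defining addition by the Peano recursion; 2 + 2 = 4 then falls out of the construction:

    def succ(n):
        # successor: S(n) is the union of n and {n}
        return n | frozenset([n])

    ZERO = frozenset()    # 0 is the empty set
    ONE = succ(ZERO)      # 1 = {0}
    TWO = succ(ONE)       # 2 = {0, 1}
    THREE = succ(TWO)
    FOUR = succ(THREE)

    def add(m, n):
        # Peano recursion: m + 0 = m and m + S(k) = S(m + k)
        if n == ZERO:
            return m
        pred = max(n, key=len)    # a von Neumann natural contains its predecessor as its largest element
        return succ(add(m, pred))

    print(add(TWO, TWO) == FOUR)  # True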
My Ph.D. is in mathematical logic, which of course does not in itself refute your claim that I misunderstand it (in fact, I would even agree that it is practically guaranteed that I misunderstand it), but at least recognize that such misunderstandings as I have are not ones uninformed by knowledge of the currently-understood alternatives.
There are two separate points to be addressed here — structuralism versus unique representation on one axis, and axiomatics versus realism on the other. The bit about "no single mathematical object called 2" addresses only the structuralism-related part, and that part I'm not interested in arguing. Sure, any object could be thought of as "2", if it bears the correct relationship to other objects thought of as other naturals. But that is not the same as saying the naturals are characterized by axioms, only that they are characterized by the relationship to other objects. These cannot be captured by (first-order) axioms.
The claim about axiomatics being primary in arithmetic, though, is simply false. Just plain not true. This is easily seen — the ancients had no axioms for the naturals at all, but they still knew what they were. --Trovatore (talk) 20:58, 7 June 2009 (UTC)
(e/c) Math was originally derived from experimentation, but this is no longer the case. A mathematical theory is made by a group of axioms and is supposed to be interesting. People will find parallels between the mathematical theory and the universe, and use that to make scientific theories. You could have a universe where the strength of gravity is not inversely proportional to the square of the distance, but that just means gravity will be modeled with different mathematics. All that being said, there could be universes where it's possible to build a hypercomputer, and it would be possible to do more sophisticated math. It's not that they'd get different theorems for the same theory (2+2=4 would still be true), it's that they'd be able to use axiom schemas that we cannot. For example, they might be able to find out if every even number above four is the sum of two odd primes by checking every case, something physically impossible in our universe. — DanielLC 16:03, 5 June 2009 (UTC)
I would like to point out that there are things that appear axiomatic that are in fact dependent on unstated assumptions. Euclid teaches, and people agreed for quite some time, that the interior angles of a triangle always add up to 180º. That's been a basic theorem of geometry for hundreds of years, and is as easy to demonstrate as 2+2=4. Of course, unstated is that it only works if you are talking about a triangle on a flat plane—if you are sketching the triangle on a curved surface, it isn't true at all. A triangle on a sphere can have interior angles that add up to more than 180º. There's an unstated limiting factor in Euclidean geometry—that it only applies to flat surfaces—that becomes clear if you actually start, say, drawing large triangles on the ground (and you happen to live on a sphere).
I don't know enough about number theory to say whether there could be something analogous with arithmetic, but it's worth considering. --98.217.14.211 (talk) 17:21, 5 June 2009 (UTC)
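For what it's worth, the spherical case is quantified by Girard's theorem: a triangle of area A on a sphere of radius R has angle sum (in radians)

    α + β + γ = π + A/R^2

so triangles that are small compared with the sphere are very nearly Euclidean, which is why the discrepancy only shows up once you draw large triangles on the ground.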
No good mathematician would leave those assumptions unstated, at least in formal work. Everything in mathematics is based on assumptions. Results are always of the form "A implies B" never simply "B". One of the key things that is drilled into any mathematics student, however, is to always state your assumptions. --Tango (talk) 18:01, 5 June 2009 (UTC)
If 2+2 *always* equalled 5, then the alternate universe's laws of mathematics would likely look a lot like ours (though perhaps a bit more cumbersome). However, consider a universe in which 2+2 sometimes was 4, sometimes 3, sometimes 5, etc. In a sense, we live in such a universe now. The math we use to deal with this is statistics. Two random variables, when added together, can give you different answers each time you observe them. Wikiant (talk) 18:06, 5 June 2009 (UTC)
In which case they wouldn't model items as numbers but rather as random variables. Mathematics includes all kinds of tools, you need to use the right one for the right job. Just because one tool isn't right for a particular job doesn't mean that it is wrong mathematically. --Tango (talk) 18:10, 5 June 2009 (UTC)
I would like to leap to the defence of Euclid on this one. Euclid realised perfectly well that the angles in a triangle summing to 180º was not axiomatic but was dependent on the parallel postulate and clearly states this as a postulate in his Elements (postulate 5). It was later mathematicians who through the centuries refused to believe that the parallel postulate could not somehow be proved from other basic postulates. It was only finally admitted that Euclid was right when Einstein discovered that the universe was not, in fact, Euclidean and that other geometries were possible in the real world as well as in the minds of mathematicians. SpinningSpark 23:06, 5 June 2009 (UTC)
Einstein used non-Euclidean geometry, he didn't invent it. The geometry Einstein used was mostly constructed by Bernhard Riemann, I believe, although several people had done work on the subject before him. Non-Euclidean geometry was known to be relevant to the real world before Einstein came on the scene - we live on the surface of a sphere, which is non-Euclidean. It doesn't matter anyway, you don't need a real world application to prove that the parallel postulate is independent of the preceding four. --Tango (talk) 12:08, 6 June 2009 (UTC)
I didn't say Einstein invented non-Euclidean geometry, I quite agree, it was around long before that. What his work did do was put an end to the cottage industry of trying to prove the parallel postulate, which at one time occupied the place that attempts to invent a perpetual motion machine occupy now. Very little of the mathematics of relativity is due to Einstein himself, but relativity established once and for all that as well as being unprovable, the parallel postulate was not even true in the real universe (at least on a large scale). SpinningSpark 14:21, 6 June 2009 (UTC)
The discovery of non-Euclidean geometries put an end to attempts (by legitimate mathematicians) to prove the 5th postulate from the other 4. The real world has nothing to do with mathematics. --Tango (talk) 14:28, 6 June 2009 (UTC)
Ooooh, well that would depend on when you date the discovery of non-Euclidean geometries. Some would credit it to Omar Khayyam, in which case do the attempted proofs of Giordano Vitale (1680), Girolamo Saccheri (1773) and Adrien-Marie Legendre (1794) render them not legitimate mathematicians? I think not, and if you define the beginning of non-Euclidean geometry as the point at which attempts to prove the 5th postulate ceased, well then you have a circular argument. SpinningSpark 15:12, 6 June 2009 (UTC)
I believe Khayyam contributed some of the initial ideas, but he didn't have a rigorous model which obeyed the first four axioms but not the fifth, which is what was required to prove the fifth was independent. I've not studied the history of non-Euclidean geometry in depth, so I'm not sure exactly who had the first such rigorous proof, but I do know it was a long time before Einstein. --Tango (talk) 15:55, 6 June 2009 (UTC)
We like to think of these discoveries as a sudden revelation and credit them to some one genius in particular. The reality is that it is a gradual process to which many contribute. As Wittgenstein said, the real change in beliefs comes not at the point of this or that discovery but at the point of a paradigm shift. It is made clear in this book, for instance, that Saccheri would have been credited with the discovery of non-Euclidean geometry if only he had not continued to publish "proofs" of the 5th postulate. This is an untenable logical position: he discovered non-Euclidean geometry, according to the author, but his discovery is invalidated by his later work. If all his later work was destroyed so we did not know of it, he would be the discoverer, so if I mentally imagine I have not read of his later works he becomes the discoverer? In any case, as I have already said, I do not dispute this all happened long before Einstein; my point is that it was Einstein that triggered the paradigm shift in belief, and this was because Einstein's work related to the real universe, which is where reality comes into it. This makes no difference at all to the mathematics, but it does change quite radically the way people are thinking. SpinningSpark 18:36, 6 June 2009 (UTC)
Yes, there is some confusion over exactly when the first rigorous proof was accepted by the mathematical community, but it was certainly a long time before Einstein. Einstein's discovery that Riemannian geometry was a good model for the universe has nothing to do with mathematics. Mathematicians make decisions based on logic, not empiricism. And, as I've already said, there were aspects of real life that have been modelled on non-Euclidean geometry before Einstein, such as the surface of the Earth. --Tango (talk) 01:19, 7 June 2009 (UTC)
Are we really debating this? If I have two apples on a table, and place two more apples on the same table, I now have four apples on the table. If I lived in a universe where an extra apple materialized merely because I placed two groups of two next to each other, such a universe would violate the principle of causality so as to be entirely unexplainable at any level. Seriously, why is this even being debated to this level? --Jayron32.talk.contribs 19:00, 5 June 2009 (UTC)
Your example only violates causality if it is true that 2+2=4. So, your argument boils down to the circular "2+2 must equal 4 because if it didn't, 2+2 wouldn't equal 4." For a possible counter-example, consider this universe wherein subatomic particles and anti-particles spontaneously emerge. This could be interpreted to suggest that 0+0=2. Wikiant (talk) 21:05, 5 June 2009 (UTC)
Why should another universe obey our universe's laws of causality? It isn't difficult to imagine a universe with two timelike dimensions, which would have very different principles of causality. (I think I read a paper once about such a universe and the conclusion was that complex life almost certainly couldn't evolve in it, but that's not important.) --Tango (talk) 21:15, 5 June 2009 (UTC)
This reminds me of David Deutsch's concept of "cantgotu" environments in The Fabric of Reality, but I can't remember what if anything he said about the idea of 2+2 not equaling 4. He deals with this general kind of idea, though. 213.122.1.200 (talk) 22:01, 5 June 2009 (UTC)
No, no, no. You can't use things popping into existence or whatever to envisage a scenario when 2+2 doesn't equal 4. In our universe, 0.9 times the speed of light plus 0.9 times the speed of light equals 1.8 times the speed of light (yes, it really does). However, if you're travelling at 0.9 times the speed of light relative to me and you toss a ball out in front of you at 0.9 times the speed of light, it won't be moving at 1.8 times the speed of light relative to me - it'll be more like 0.99. That's not because 0.9+0.9 doesn't equal 1.8 - it's because the arithmetic operator '+' doesn't apply to speeds as we humans always thought it did. This really shouldn't be a surprise, the '+' operator doesn't apply to a whole lot of things!
So if two subatomic particles pop out of nowhere, that doesn't prove that 0+0=2 - it proves that the total number of particles in some volume of space is not a constant - so addition is (again) not an appropriate operation to apply to them.
The point is that things like '+', '2', '=' and '4' are symbols that stand for concepts that have definitions that humans have applied to them. Our definition says that 2+2=4 - not because that necessarily represents something "real" but because that's what we've defined those symbols to mean.
Suppose I were to define the symbol '#' to mean "the sum of two numbers - except when there is a full moon, when it's the product of two numbers" - then 2#2=4 happens to be a true statement...but 3#3=6 isn't always true and 7#9=5 is never true. There isn't any deep inner meaning in what I just said - it's just words and definitions. There is probably no physical property of our universe for which the '#' operator is applicable - but my definition is still a perfectly reasonable one and 2#2=4 no matter what. Perhaps in another universe, placing apples on tables follows the '#' operator - but the '+' operator is kinda useless and considered weird to the people who live there, (who are constantly checking their almanacs to see when the next full moon will be!)
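If you want to play with the '#' operator, here's a toy version in Python (the full_moon flag stands in for checking the almanac):

```python
# The '#' operator: the sum of two numbers - except when there is a
# full moon, when it's the product of the two numbers.

def hash_op(a, b, full_moon=False):
    return a * b if full_moon else a + b

print(hash_op(2, 2), hash_op(2, 2, full_moon=True))  # 4 4   - 2#2=4 no matter what
print(hash_op(3, 3), hash_op(3, 3, full_moon=True))  # 6 9   - 3#3=6 isn't always true
print(hash_op(7, 9), hash_op(7, 9, full_moon=True))  # 16 63 - 7#9=5 is never true
```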
So this universe does whatever it does - the other universe does whatever it does - and whether our definitions for the symbols '2', '+', '=' and '4' are applicable to apples placed on tables in that other universe (or to the way speeds are combined) is a matter that we can't answer. But our definitions still apply - you can't invalidate something that's axiomatic in the system of mathematics you choose to use.
Hence, the answer is a clear "yes" - 2+2=4 everywhere - because we happen to have defined it that way. SteveBaker (talk) 22:27, 5 June 2009 (UTC)[reply]
I'm going to expand upon what's been said above, because hopefully it'll sort out some confusion. Back off a bit - say we're in a parallel universe which doesn't have the same laws of math. First let's start with a quantity. Names are arbitrary, so let's call it "@". That's going to be boring by itself, so we'll make another quantity, and we'll call it "!". Independent quantities by themselves aren't interesting, so let's add an operation: "%". How does this operation behave? It might be nice to have an identity, that is, if we 'percent' a quantity with the identity, we get that quantity back unchanged. It's arbitrary which one we choose, as the two are so far interchangeable, so let's pick '@'. So we have '@ % @' => '@' and '! % @' => '!'. Okay, but now what about '! % !'? Well, we could say that '! % !' => '@', or even '! % !' => '!', but we can also introduce a new symbol, so let's do that and call it '&'. So '&' is defined as '! % !'. Now what is '& % !'? Keeping it open ended, we define it to be '$'. And '$ % !' => '#'. Now, let's figure out what '& % &' is. Since '&' is the same as '! % !', we see that '& % &' is '(! % !) % (! % !)'. If we say that the way we group the 'percent's doesn't matter (the associative property), we can rewrite that as '((! % !) % !) % !', or '(& % !) % !', or '$ % !', which we defined earlier as '#'. As long as '! % !' => '&', '& % !' => '$', '$ % !' => '#', and the '%' operation has the associative property, '& % &' => '#'. Hopefully you can see where I'm going with this; the names I gave them were arbitrary. I could just as easily have said 0 instead of @, 1 for !, 2/&, 3/$, 4/# and +/%, and you have your situation. If we have defined 1+1=2, 2+1=3, 3+1=4, and addition as associative, then 2+2=4. If 2+2=5, then either 3+1=5, 1+1≠2, 2+1≠3, or addition is not associative. We certainly could call 3+1 '5' if we wanted, but it would behave exactly the same way 4 does now. It would be the equivalent of writing 'IV' instead of '4' - the name changes, but the properties stay the same. Conversely, with addition being non-associative, why call it "addition"? Associativity is part of what defines addition - if you change that, you have something else.
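Here's the same construction spelled out as a short Python sketch (symbol names as above; the dictionary just records the '% !' successor steps, and associativity is what justifies peeling one '!' off at a time):

```python
# The construction above, as code: '@'=0, '!'=1, '&'=2, '$'=3, '#'=4.
# Only these five symbols are defined, just as in the text.

successor = {'@': '!', '!': '&', '&': '$', '$': '#'}   # x % '!' for each symbol x
predecessor = {v: k for k, v in successor.items()}

def percent(a, b):
    """Compute a % b using only the successor steps and associativity:
    a % (b' % !) == (a % b') % !  lets us strip one '!' from b at a time."""
    while b != '@':            # '@' is the identity, so a % '@' == a
        a = successor[a]
        b = predecessor[b]
    return a

print(percent('&', '&'))  # '#', i.e. 2 + 2 = 4
```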
"Non-standard" arithmetic are used all the time, however. That case earlier, where '!%!'=>'@' (er, 1+1=0)? That's modulo 2 arithmetic. We also have the case where '!%!'=>'!', except we call that multiplication, and we substitute 1 for @ and 0 for ! instead of vice-versa. We also frequently encounter physical situations where 2+2≠4. Take, for example, adding two liters of water to two liters of sand. 2+2≠4 in that case. Mix 2 cups vinegar to 2 cups baking soda, and the result takes up much more than 4 cups. Physical reality doesn't have to match with abstract mathematical constructs - however, instead of redefining 2+2=3 because the sand and water don't measure 4 liters afterward, or saying that 2+2=8 because of the carbon dioxide gas evolved with baking soda and water, we leave mathematics as a "pure" ideal, with 2+2=4, and realize normal addition doesn't apply to those situations. Likewise, the fact that interior angles of a triangle don't add up to 180 on the surface of the earth didn't invalidate Euclidean geometry - it still exists theoretically, we just realize that when doing surveying, we need to use a non-Euclidean geometry. -- 128.104.112.106 (talk) 01:28, 6 June 2009 (UTC)[reply]
Physicist Max Tegmark has proposed a Mathematical universe hypothesis, according to which every mathematical structure exists physically in some universe. But from a mathematical point of view, it seems to go a bit overboard (ask in the math RD for more explanation). Note that non-standard arithmetic (in mathematical logic) means something different from what 128.104.112.106 seems to think. 207.241.239.70 (talk) 03:41, 6 June 2009 (UTC)[reply]
I believe the anon put "non-standard" in quotes in order to clarify that they weren't using it in the technical sense but just in the dictionary sense. --Tango (talk) 12:14, 6 June 2009 (UTC)[reply]
I think I can clarify this by answering a similar question: Is there a universe in which you cannot cut a ball into five pieces and reassemble them into two balls identical to the first (the Banach–Tarski paradox)? Of course there is: you can't do that in this universe. It's not that the math is wrong. It's simply that our universe isn't set theory. It has parallels with it, but if you want to make a perfect model of our universe, it's going to be a bit more complex than that. 67.182.169.172 (talk) 17:21, 6 June 2009 (UTC)[reply]
That depends on your definition of "is"... but we are well into the realms of philosophy there, and we know how Steve feels about that! --Tango (talk) 18:31, 6 June 2009 (UTC)[reply]

Peacock and Peahen edit

Somebody told me that the peahen gets her eggs fertilized by orally swallowing the semen of the peacock. Is it true? —Preceding unsigned comment added by 202.70.74.155 (talk) 15:32, 5 June 2009 (UTC)[reply]

No. --Stephan Schulz (talk) 16:11, 5 June 2009 (UTC)[reply]
Fowlatio? Edison (talk) 01:59, 6 June 2009 (UTC)[reply]

Egg protein edit

Since eating raw eggs is not better than eating cooked eggs, what way should I cook them so that minimum protein is lost in the process of cooking (i.e. boil, fry etc.)? —Preceding unsigned comment added by 116.71.42.218 (talk) 18:00, 5 June 2009 (UTC)[reply]

This is a confusing question! If raw is not better than cooked, then why are you concerned about loss during cooking? As for outright loss during cooking, everything is still there except for maybe some small amounts that get stuck to the pan or leached out into the grease or other cooking medium, so something like hard-boiled would keep everything still in there. DMacks (talk) 18:04, 5 June 2009 (UTC)[reply]
The only protein loss during cooking would be the little bits that brown along where the egg meets the pan (see Maillard reaction). These are likely so small as to be insignificant, so you would be safe frying the eggs. As far as I am concerned, hard-boiled eggs serve little purpose except perhaps to make deviled eggs or egg salad; I find them pretty bland and unpalatable. But if your concern is making sure you get every molecule of protein out that was there originally, taste be damned, then go with the hard-boiled method. --Jayron32.talk.contribs 18:28, 5 June 2009 (UTC)[reply]
Cooking will denature protein, but you will not lose the protein (specifically the amino acids). -- kainaw 18:33, 5 June 2009 (UTC)[reply]

To DMacks: I asked a question a couple of weeks back about whether raw eggs have any advantage over cooked ones, and I was told no. And so if I work out and want max protein from the eggs, I should hard-boil them? I don't care about taste. —Preceding unsigned comment added by 116.71.59.87 (talk) 19:45, 5 June 2009 (UTC)[reply]

Kainaw already answered the above question, didn't he? Tempshill (talk) 22:10, 5 June 2009 (UTC)[reply]

Exit polls edit

Has anyone actually got any for the Euro elections? —Preceding unsigned comment added by 86.128.217.5 (talk) 22:43, 5 June 2009 (UTC)[reply]

I'm guessing you're interested in UK results: European Parliament election, 2009 (United Kingdom) should gradually accumulate that information - our editors usually get results up within minutes of their being announced... for an encyclopedia, we make a better news site than most proper news sites! There seem to be some results and recent polls there - but you'll probably want to hit reload every now and again. On the remote off-chance that it's not the slowly unfolding spectacular train wreck in the UK that you're interested in, there are links to results for the other countries at European Parliament election, 2009. My brother-in-law (Tony Goldson) just got elected to the Suffolk County Council with a landslide result - but no sign of that result here! SteveBaker (talk) 02:40, 6 June 2009 (UTC)[reply]
They are avoiding publishing any results (which seems to include exit polls) until Sunday evening to avoid influencing the votes in other countries (the Netherlands, for the second time in a row, has chosen not to do this, and may end up in a bit of trouble over it...). --Tango (talk) 13:17, 6 June 2009 (UTC)[reply]
Just out of interest, who is the "they" who are avoiding publishing, what, if anything, is enforcing this, and under what statutes, European or Dutch, may the Netherlands end up in trouble? Tonywalton Talk 22:35, 6 June 2009 (UTC)[reply]
You're not supposed to ask questions I don't know the answers to... There are some kind of European laws about how these elections are supposed to take place. Of course, the Netherlands is a sovereign country, so there isn't a great deal anyone can do about it, but they'll get told off by the European Commission and that sort of thing. I don't know if those laws apply to the media (who handle the exit polls), but I would guess it's just a gentleman's agreement that stops the exit polls being published. --Tango (talk) 23:27, 6 June 2009 (UTC)[reply]
So what's the point of the exit polls? Axl ¤ [Talk] 18:16, 7 June 2009 (UTC)[reply]
They are quicker, so you can get results out straight away once the embargo is removed, rather than having to wait for the count to finish. I'm not sure there were any exit polls in the UK, though. --Tango (talk) 00:51, 8 June 2009 (UTC)[reply]

work edit

Why would work be defined as a scalar rather than a vector? —Preceding unsigned comment added by 70.52.45.55 (talk) 23:29, 5 June 2009 (UTC)[reply]

Work done is synonymous with energy transferred. The energy required to do something is equal to the sum of the energies required to do each part; the parts don't cancel out. --Tango (talk) 23:39, 5 June 2009 (UTC)[reply]
It does seem weird - if your "work done" is pushing a block up an inclined plane or something you kinda feel like it ought to be a vector...but remember that you could be doing work to increase the air pressure in a balloon - what would the direction be in that case? Suppose it was electrical work - charging a battery or something - again, no direction. SteveBaker (talk) 02:22, 6 June 2009 (UTC)[reply]
Let's just assume work is a vector. Say you're pushing a block of wood up an inclined plane, but the catch is that you're not applying the force along the inclined plane - say you're applying the force directly horizontally, yet the block still moves up. So now, what would be the direction of your work? Would it be along the inclined plane, where the block actually moves, or horizontal, in the direction where you actually pushed? If we assign a direction to work, it results in more confusion, and essentially a loss of its physical significance as being equivalent to energy. So it is best to keep work as a scalar. Rkr1991 (talk) 04:26, 6 June 2009 (UTC)[reply]
Or as one of my college professors would put it: "Because that's the way the math works out". Work is defined as the dot product of two vectors (force and displacement), which always results in a scalar. -RunningOnBrains(talk page) 18:20, 6 June 2009 (UTC)[reply]
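To put numbers on Rkr1991's incline scenario (the figures are invented for the example):

```python
# Work as the dot product W = F . d: a single number, with no direction
# attached, whatever direction the force happens to point.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

F = (10.0, 0.0)   # push horizontally with 10 N
d = (3.0, 1.0)    # the block slides 3 m along and rises 1 m up the incline
print(dot(F, d))  # 30.0 joules; only the component of F along d does work
```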
Yes, but that just changes the question to "why is it defined as the scalar product?". --Tango (talk) 18:29, 6 June 2009 (UTC)[reply]
If work (and by extension, energy) were a vector, there'd be no law of conservation of energy. When you fire a gun, more energy goes with the bullet than with the gun, so there would be net energy in the direction of the bullet. If you used the same gunpowder in a grenade, the energy would move in all directions and cancel itself out. Thus, more energy would be extracted from the gunpowder one way than another. Also, heat involves particles moving in all directions, so it would have zero energy. That would mean that anything that turns energy into heat destroys energy. — DanielLC 15:36, 7 June 2009 (UTC)[reply]
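Putting numbers on the gun example (masses and speeds invented for illustration):

```python
# Momentum, a vector, sums to zero when a gun is fired; kinetic energy,
# a scalar, does not cancel no matter which way the pieces fly.

m_gun, m_bullet = 4.0, 0.01           # kg (made-up values)
v_bullet = 400.0                      # m/s
v_gun = -m_bullet * v_bullet / m_gun  # recoil speed from momentum conservation

momentum = m_gun * v_gun + m_bullet * v_bullet
energy = 0.5 * m_gun * v_gun**2 + 0.5 * m_bullet * v_bullet**2

print(momentum)  # 0.0   -- the vector contributions cancel
print(energy)    # 802.0 -- the scalar contributions add up (joules)
```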