Wikipedia:Reference desk/Archives/Science/2012 March 6

Science desk
Welcome to the Wikipedia Science Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


March 6

Closest possible planetary alignment?

What's the closest possible planetary alignment of the 6 naked-eye planets? What if we include all 8? --140.180.6.168 (talk) 00:06, 6 March 2012 (UTC)[reply]

Do you mean like if Mercury, Venus, Earth, Mars, etc., all lie along a straight line with respect to the sun? That would be the "closest", I would think. As to whether it could happen... well, find the least common multiple of each planet's "year" in Earth years, and that will tell you how often it could maybe possibly happen. ←Baseball Bugs What's up, Doc? carrots→ 00:13, 6 March 2012 (UTC)[reply]
This is a very common question here, and it's a little more complicated than just saying "straight line"; it really depends on how close to the line you allow to count as "close enough". The general term for astronomical alignments is syzygy. There are also many articles online on the subject, like this and this. Vespine (talk) 00:34, 6 March 2012 (UTC)[reply]
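To make the "how close is close enough" point concrete, here is a minimal toy sketch (my own, not from any of the articles linked above): it treats the six classical planets as moving at their mean rates on circular, coplanar orbits and finds how tightly their heliocentric longitudes can cluster over a sample window. The starting longitudes are placeholders; real values for a chosen epoch would have to come from an ephemeris, and real orbits are eccentric and inclined, so this only illustrates the idea.

import numpy as np

# Approximate sidereal orbital periods in years: Mercury, Venus, Earth, Mars, Jupiter, Saturn.
periods_yr = [0.241, 0.615, 1.000, 1.881, 11.86, 29.46]
# Placeholder starting longitudes (degrees); real epoch values would come from an ephemeris.
lon0_deg = [0.0, 0.0, 0.0, 0.0, 0.0, 0.0]

def spread_deg(t_yr):
    """Smallest arc (in degrees) containing all six mean longitudes at time t."""
    lons = np.sort([(l0 + 360.0 / p * t_yr) % 360.0
                    for l0, p in zip(lon0_deg, periods_yr)])
    gaps = np.diff(np.append(lons, lons[0] + 360.0))
    return 360.0 - gaps.max()          # the whole circle minus the largest empty gap

times = np.arange(0.5, 1000.0, 0.01)   # sample roughly 1000 years at ~3.7-day steps
spreads = np.array([spread_deg(t) for t in times])
i = spreads.argmin()
print(f"tightest grouping: {spreads[i]:.1f} degrees, at t = {times[i]:.1f} years")
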
I wonder what the answers were the other times it was asked? Anyway, as your article citation points out, it's virtually impossible for the planets to be all lined up as viewed from the sun, due to the fairly eccentric orbit of Pluto (and possibly of Eris, but I don't know). But in any case, the OP needs to clarify his question a bit. ←Baseball Bugs What's up, Doc? carrots→ 01:32, 6 March 2012 (UTC)[reply]
Or looked at another way, planetary alignments are much easier now that Pluto isn't a planet anymore. Dragons flight (talk) 02:13, 6 March 2012 (UTC)[reply]
Looked at another way, all the planets have different orbital inclinations, so three planets hardly ever lie along a true straight line. (Syzygy conveniently ignores celestial latitude.) In any case, how do you measure "closest"? In miles? In angular displacement? And from what point of view? A true alignment is called an occultation. There is an interesting chart showing some planetary occultations in that article which might go some way towards answering your question.--Shantavira|feed me 08:38, 6 March 2012 (UTC)[reply]
I'm certainly not a scientist, but doing a search on "Interactive Solar System Map" yielded SolarSystemScope; and if you "fast forward" to July of 2020, the six planets closest to the sun come into pretty good alignment.... Kingsfold (Quack quack!) 18:20, 6 March 2012 (UTC)[reply]
Wikipedia is a bit sparse on the subject, but the term in astronomy is Syzygy (astronomy). --Jayron32 19:56, 6 March 2012 (UTC)[reply]

Destruction of planets by another

How long could an Earth-sized terrestrial planet remain intact after plunging into a Saturn- or Jupiter-sized gas giant? SkyMachine (++) 01:50, 6 March 2012 (UTC)[reply]

1) How fast is it going? That would have a strong influence on how long it remained intact. 2) How much distortion of the planet, and how much china jarred off shelves, cracked plaster, broken water mains, earthquakes, volcanic eruptions, tsunamis, and inundations of continents are allowed before you no longer consider it "intact"? Edison (talk) 01:58, 6 March 2012 (UTC)[reply]
The outer layers would go immediately, but what about the mantle and core? As for speed, assume a comet-like velocity or lower, down to something like Jupiter's orbital velocity for the infalling Earth. SkyMachine (++) 02:23, 6 March 2012 (UTC)[reply]
I suspect the answer is it wouldn't. A picture like this might put things into perspective, and Roche limit is a relevant reference. I believe the Earth would be torn to shreds before it even reached the "surface" of Jupiter. Vespine (talk) 02:32, 6 March 2012 (UTC)[reply]
Agreed. It might even break up before it hits the atmosphere, due to gravitational effects, like Comet Shoemaker–Levy 9. StuRat (talk) 02:36, 6 March 2012 (UTC)[reply]
So you expect it to break up and be vaporized in the upper atmosphere? Would the cores not merge together then? I am also thinking here of the suspected binary companion of Betelgeuse orbiting within that star's stellar envelope. From the Roche limit article: "If the primary is less than half as dense as the satellite, the rigid-body Roche Limit is less than the primary's radius, and the two bodies may collide before the Roche limit is reached" SkyMachine (++) 02:46, 6 March 2012 (UTC)[reply]
Break up, yes. Be vaporized, no. I'd expect the chunks to create massive explosions as they struck Jupiter, flinging smaller chunks all over Jupiter (and perhaps some into orbit). Most of the cores would eventually merge, yes. StuRat (talk) 03:40, 6 March 2012 (UTC)[reply]
Actually, for the scenario proposed above they don't break up either. The original poster specifically suggested an Earth-like planet plunging into a gas giant. The Roche limit depends on the ratio of the densities of the two objects. If the plunging object is a rigid body more than twice as dense as the primary object then it doesn't cross its Roche limit before hitting the surface of the primary. In other words a dense planet like Earth would be expected to hit the atmosphere of a low density planet like Saturn while still intact. Dragons flight (talk) 04:54, 6 March 2012 (UTC)[reply]
That's what I thought it was saying, but aren't ALL of Jupiter's moons "at least twice as dense" as Jupiter, including the ones which have been pulverized into rings? And they still have a Roche limit, so what am I missing? This page says that for an asteroid of density 3 (Jupiter's density is only 1.33) the Roche limit is 133,000 km. I'm pretty hopeless at maths so I might have this wrong, but I've tried sticking Earth's density of 5.52 into the equation on that page and I get a Roche limit for Jupiter/Earth of 108,000 km. The combined radius of the two is under 78,000 km. The difference is still over twice the diameter of the Earth. Vespine (talk) 21:50, 6 March 2012 (UTC)[reply]
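For anyone checking the arithmetic, here is a quick back-of-envelope sketch of both limits, using the standard coefficients from the Roche limit article (about 2.44 for a fluid satellite, about 1.26 for a rigid one) and the round figures quoted above:

# Back-of-envelope check, using mean densities in g/cm^3 and radii in km.
R_JUPITER = 71_492
R_EARTH = 6_371
RHO_JUPITER = 1.33
RHO_EARTH = 5.52

ratio = (RHO_JUPITER / RHO_EARTH) ** (1.0 / 3.0)
d_fluid = 2.44 * R_JUPITER * ratio   # satellite treated as a deformable fluid
d_rigid = 1.26 * R_JUPITER * ratio   # satellite treated as a rigid sphere

print(f"fluid Roche limit: {d_fluid:,.0f} km")            # about 108,000 km
print(f"rigid Roche limit: {d_rigid:,.0f} km")            # about 56,000 km, inside Jupiter's 71,492 km radius
print(f"sum of the radii : {R_JUPITER + R_EARTH:,} km")   # about 78,000 km

The fluid figure reproduces the ~108,000 km quoted just above, while the rigid figure falls inside Jupiter itself, which is Dragons flight's point.
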
Perhaps those rings never were moons, and the limits are different for forming new moons than for breaking up existing moons? StuRat (talk) 21:57, 6 March 2012 (UTC)[reply]
5.52 is the average density of the Earth, but the Earth is differentiated; its core is denser and could perhaps survive until impact even if the less dense outer layers are torn away. SkyMachine (++) 22:37, 6 March 2012 (UTC)[reply]
The formula on that page is for a fluid body; the "twice the density" rule is for a rigid body. Our article, Roche limit, explains the difference. --Tango (talk) 22:58, 6 March 2012 (UTC)[reply]
Sooo, maybe the answer is not so certain, since "Most real satellites would lie somewhere between these two extremes, with tensile strength rendering the satellite neither perfectly rigid nor perfectly fluid." I think it's safe to say the Earth can't be described as a "perfectly rigid body", in which case whether it breaks up or not really depends on its tensile strength. Vespine (talk) 23:28, 6 March 2012 (UTC)[reply]
How much more rigid do they get than the Earth? SkyMachine (++) 00:07, 7 March 2012 (UTC) [reply]
Quite a bit more, since the Earth is largely molten magma. StuRat (talk) 00:39, 7 March 2012 (UTC)[reply]
As Dragons Flight pointed out in the question above about a tunnel through the Earth, only a very small portion of Earth is magma, as the mantle is kept solid by the pressures at most depths. The core is molten, though it's iron, not magma. 203.27.72.5 (talk) 02:08, 7 March 2012 (UTC)[reply]
I don't think it is even the "molten" bit that makes it non-rigid. I mean, by our definition, CLIFFS are rigid, right? Except mountain ranges tell us that they aren't quite as rigid as they appear, because they are literally bent into shape. The "apparent" planetary rigidity of the Earth seems to me to be mainly a factor of the gravity (and therefore pressure) exerted on its inner parts. But start a tug of war on those parts with another planet which has over 300 times the mass, and I reckon I'd put my money on Jupiter to win. Vespine (talk) 01:19, 7 March 2012 (UTC)[reply]
Maybe what we CAN say with some certainty is that WE would be stuffed well before the Earth hit Jupiter. Vespine (talk) 23:30, 6 March 2012 (UTC)[reply]
So, to sidestep these Roche limit questions, assume an iron planet the size of the Earth plunging into Jupiter or Saturn. What is its fate? Does it explode and vaporize in the atmosphere, or spiral down and collide/merge (as a single body, or as smaller broken-up bodies) with the denser inner layers and core of the gas giant? SkyMachine (++) 00:07, 7 March 2012 (UTC)[reply]

http://qntm.org/destroy#sec3 See section 11. Seems like throwing Earth into Jupiter is the most effective way to destroy the planet. 24.186.108.251 (talk) 01:04, 7 March 2012 (UTC)[reply]

As for an Earth-sized iron planet impacting Jupiter, I think you'd need some pretty fancy modelling to work out exactly what would happen. Given the above discussion, I think you could assume it would remain intact until it impacts Jupiter, but what happens next is probably quite complicated. What I do not doubt is that such an event would be spectacular! Just read what Comet_Shoemaker-Levy_9 did to Jupiter, and it was only about 5 km across before it broke up. Vespine (talk) 05:45, 7 March 2012 (UTC)[reply]

Stovetop power

I'm contemplating the purchase of a new electric range. I'm looking at two otherwise identical models. The sole difference is that one has a feature called "turbo boil" that boils water faster. It does this by operating at 3200W rather than 3000W. How quickly will each stove boil a liter of water? I don't remember the math or the physics. May as well assume STP; our well water is sometimes quite close to freezing... --jpgordon::==( o ) 05:03, 6 March 2012 (UTC)[reply]

The specific heat capacity of water is about 4 joules per gram per degree Celsius. If 1 L (1000 grams) of water starts at room temperature (20°C) and is brought to a boil, you'll need a minimum of 4 J/g/°C * 1000 g * 80°C = 320 kJ of heat. 3200 W = 3200 J/s = 3.2 kJ/s; so a minimum of 100 seconds to reach a boil with the 'turbo' element, and a bit longer (about 7 seconds more) for the 'regular' element. Times will be roughly 25% longer for water that starts at 0°C instead of room temperature.
In practice, of course, the process will take a lot longer than the ideal with either element. Energy is required to heat the element itself, and much heat escapes to warm up the stovetop and the air in the kitchen. An element that consumes 7% more power (and therefore releases 7% more heat) may not be as important as an element that conducts that heat efficiently into the pot of water sitting on top. TenOfAllTrades(talk) 05:20, 6 March 2012 (UTC)[reply]
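As a worked version of the arithmetic above (a sketch only; the 70% efficiency figure is a made-up placeholder to stand in for the losses just described):

C_WATER = 4.18  # specific heat of water, J per gram per degree C (rounded to 4 in the estimate above)

def time_to_boil(power_w, litres=1.0, t_start_c=20.0, efficiency=1.0):
    """Seconds to bring `litres` of water from t_start_c to 100 C at the given power."""
    heat_j = C_WATER * litres * 1000.0 * (100.0 - t_start_c)
    return heat_j / (power_w * efficiency)

for watts in (3000, 3200):
    print(f"{watts} W: {time_to_boil(watts):.0f} s ideal, "
          f"{time_to_boil(watts, efficiency=0.7):.0f} s at an assumed 70% efficiency")
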
And, no matter what, electric stoves are never as quick as gas, since they need time to warm the element before they can warm the pot, while the gas is full temperature as soon as it ignites. Then there's the issue of the element only touching the pot in a few points, while the flames touch the pot over a much wider area, and can also heat the sides of the pot. For these reasons, you won't see many electric stoves in professional kitchens. StuRat (talk) 06:10, 6 March 2012 (UTC)[reply]
An induction element doesn't need time to warm the element and is a type of electric stove. (It does need time to warm the pot, but so too does gas.) Nil Einne (talk) 15:41, 6 March 2012 (UTC)[reply]
Your choice of cookware will probably make more difference than a less-than-ten-percent change in wattage. We have two steamers: a cheap, flimsy aluminum steamer and a heavy All-Clad steamer. The All-Clad gets off to a slower start, but quickly catches up and is a much faster performer overall. Of course a full suite of All-Clad costs more than a lot of ranges, so we content ourselves with a couple of pieces. Acroterion (talk) 15:53, 6 March 2012 (UTC)[reply]
100 seconds vs. 107 seconds to boil a liter of water...a minute longer or so for a gallon...certainly not worth the $400 difference between the two appliances. About what I'd guessed; if I'm in that much of a hurry to boil water, then I probably need to recalibrate my life. (As far as time to warm the element is concerned, interesting point; I've never seen any numbers for that. I've also never seen any study of the physics of having flame go up the side of the pot; seems a tad wasteful to me.) --jpgordon::==( o ) 20:13, 6 March 2012 (UTC)[reply]
If you really want to boil the water fast, go for an induction element. I've seen them boil water in just a few seconds, and they're very safe, since the "hot plate" doesn't actually get hot; only the pot does. Of course you have to buy special cookware to work with it. 203.27.72.5 (talk) 21:17, 6 March 2012 (UTC)[reply]
As a clarification, you need cookware that is ferromagnetic; generally, if a fridge magnet sticks to the bottom it should be fine. (The reasoning is explained in our article.) While only a subset of cookware will meet this requirement, you may find some of your existing cookware will work. Generally cast-iron cookware and some forms of stainless steel will, but copper and aluminium will not (presuming this includes the base). Pyrex and similar are obviously out. Some cookware may be designed to work with induction cooking; some may simply happen to work because of its design. I believe induction cooking is more popular in Europe and parts of Asia, so cookware there is more likely to be designed to work with it, but the OP appears to come from the US. Here in NZ, when someone I know got an induction cooker, about half of their existing stovetop cookware worked. Nil Einne (talk) 04:14, 7 March 2012 (UTC)[reply]
In the UK, we boil water in 3000W electric kettles - much faster than any saucepan on a stove (just very slightly longer than the theoretical 107 seconds Ten calculated). I usually put the stove on with just a tiny bit of water in the bottom of the pan so the pan can get hot while the kettle is boiling; that way you can end up with a pan full of boiling water very quickly. So, my advice: move to a country with decent power sockets (we have 230V at 13A)! --Tango (talk) 23:04, 6 March 2012 (UTC)[reply]
If you want to bring a given quantity of water to a boil, I don't see how starting with less than that quantity will speed things up. To the contrary, the cooling from boiling off the smaller amount should cool the system down more than if you waited to bring the entire quantity to a boil together. The only reason for starting with less than the total quantity of water is if the smaller amount of boiling water is useful to you. For example, when making cocoa it's helpful to put a little boiling water into the mug first, into which you can then dissolve the cocoa, and later add the remaining boiling water. StuRat (talk) 23:09, 6 March 2012 (UTC)[reply]
I think the unstated premise causing the confusion here is that this method works when you need a pan full of boiling water for cooking, not when you are making a cup of tea. I do the same thing. Vespine (talk) 23:52, 6 March 2012 (UTC)[reply]
Um, yes. The kettle method is faster - and probably more efficient too, given that the element is in the water, whereas a (non-induction) stove inevitably wastes a lot of power in general radiant heating of everything in the vicinity, regardless of whether you want it to or not. AndyTheGrump (talk) 04:23, 7 March 2012 (UTC)[reply]
Indeed, if you are making a cup of tea then you don't involve a pan at all. The reason for putting a little water in the pan isn't because it is quicker to boil that water in the pan, it's because putting a dry pan on a stove will destroy the pan (a modern stove might cut off before it got that hot, but that's still a bad thing). If you pour the water from the kettle into a cold pan then it will come off the boil, so you might as well heat the pan up while you are waiting for the kettle to boil. --Tango (talk) 12:33, 7 March 2012 (UTC)[reply]

Sodium vs. salt

This might be a dumb question... Since salt is a compound consisting of sodium and chlorine, I've always been a bit confused by how medical experts will refer to salt intake as being directly equivalent to sodium intake. Granted, I'm not sure whether the two elements are maybe split (?) by metabolism during digestion and enter the bloodstream that way, but is it actually more accurate to refer to sodium intake in connection with salt than it is to refer to hydrogen and oxygen intake in connection with water? Thanks in advance. Evanh2008 (talk) (contribs) 09:16, 6 March 2012 (UTC)[reply]

The problem is the sodium, which exists as a free ion in the bloodstream. There are other sources of sodium and chloride besides salt. For example, baking soda contains sodium, but no chloride, and salt substitutes contain chloride, but no sodium. Dominus Vobisdu (talk) 09:20, 6 March 2012 (UTC)[reply]
Right, I get that. I'm just trying to get a handle on exactly what biological mechanisms take salt and break it down into its constituent elements (sodium being the relevant one here) in order for the sodium to actually start existing in the bloodstream. Basically, my thought is -- you can't oxygenate your blood by drinking water (a compound which contains oxygen), so how do you get sodium into your blood when you eat salt (a compound which contains sodium)? Or does sodium actually act as sodium in the compound form of salt, unlike other elements which generally lose most of their unique properties when bonded to other atoms in a compound molecule? Evanh2008 (talk) (contribs) 09:33, 6 March 2012 (UTC)[reply]
The sodium and chloride dissociate into ions whenever salt is dissolved in water. And it's not sodium metal that is dissolved in the blood, but sodium ions. No offense, but this is all extremely basic introductory secondary-school chemistry, and it will be difficult to explain to you if you don't understand basic principles like ions and covalent bonds. I'm afraid that our articles here on Wikipedia presume at least a basic understanding of chemistry, so they are unlikely to be of help. Any high-school chemistry book will answer your questions. Dominus Vobisdu (talk) 09:50, 6 March 2012 (UTC)[reply]
I know what ions are (did quite well in high school chemistry, actually). Anyway, I think the first sentence of your reply answers my question perfectly. Thanks. :) Evanh2008 (talk) (contribs) 10:03, 6 March 2012 (UTC)[reply]
Medical experts are not the same as chemistry experts, who would object to equating salt intake with sodium intake for several reasons:
1) The number of grams of table salt does not equal the number of grams of sodium, since the chlorine adds significant weight to table salt (see the quick calculation below).
2) Table salt must then be distinguished from other salts, like potassium chloride. Just calling them all "salt" isn't good enough.
3) As noted above, you can also get sodium from other sources, like sodium bicarbonate. StuRat (talk) 22:56, 6 March 2012 (UTC)[reply]
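To make point 1 concrete, a quick mass-fraction calculation using standard atomic weights:

M_NA, M_CL = 22.99, 35.45   # atomic weights, g/mol
frac_na = M_NA / (M_NA + M_CL)
print(f"sodium fraction of NaCl by mass: {frac_na:.0%}")                          # about 39%
print(f"1 g of sodium corresponds to about {1.0 / frac_na:.1f} g of table salt")  # about 2.5 g

So figures quoted as "salt" and as "sodium" differ by roughly a factor of 2.5.
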
In case anyone misreads StuRat's answer, I'd just like to point out that any medical professional is well aware that 'Salt', 'Sodium' and 'Sodium Chloride' are different things. For an example, see our article Recommended Daily Intake. However, the component of table salt that confers the risk of hypertension is the sodium ions it contains, and the easiest way to communicate the message to the layman is to recommend reducing salt intake. See this example from the Mayo clinic, which uses the wording that "Too much salt (sodium) in your diet [is a risk factor for hypertension]". --NorwegianBlue talk 20:39, 7 March 2012 (UTC)[reply]
To be clear, the reported sodium content of foods will indeed be the weight of the sodium, not the sodium+chloride. While it is obviously very important whether this is sodium in a +0 (metallic) oxidation state or a +1 (ion) oxidation state or some other state not generally (ever?) seen, in practice no one expects to find metallic sodium in food, so the total sodium content is equal to the sodium ion content. (Technically, the absence of the electron makes a tiny difference in mass, but this I am sure is neglected by all people in the context of nutrition, especially as you can't really consume the ion without getting the electron one way or another) For other metals this distinction is much more serious - for example, finding out your total chromium intake is much less useful than knowing your chromium(III) and chromium(VI) intakes individually! Wnt (talk) 22:57, 7 March 2012 (UTC)[reply]
Na− (the sodide anion) is somewhat stable and several salts of it are known. Freaky, eh? DMacks (talk) 14:52, 8 March 2012 (UTC)[reply]

Followup question

... from someone who didn't listen during secondary school chemistry: so if I put a spoonful of table salt into a pan of water, there isn't any "salt" any more; all there is in the water are sodium ions and chloride ions? And if I boil away the water, is what is left at the bottom of the pan the ions, or do they get back together and make "salt" again? Thanks in advance for your patient answer -Lgriot (talk) 10:01, 7 March 2012 (UTC)[reply]

I think both "what is left at the bottom of the pan are the ions" and "they get back together and make 'salt' again" are more or less true statements. Salt is just a bunch of ions. The ions stick together because some are negatively charged (anions) and some are positively charged (cations), in this case Cl- and Na+. Dissolved in water, the ions are free to float around separately from each other because water itself can split into H3O+ and OH- ions (see Self-ionization of water). The cations in the salt can hang out with OH- and the anions with H3O+, instead of having to stay stuck to one another. Once the water is gone again, they have no other ions of the opposite charge to be with but each other. Rckrone (talk) 15:53, 7 March 2012 (UTC)[reply]
With common sodium chloride, all the water goes back out and you get NaCl again. With other salts, sometimes the last of the water is particularly difficult to get rid of, and is known as water of hydration, forming some regular crystal structure with the ions. The key thing to recognize is that usually there is some sort of regular crystal structure, which has the property of keeping the + and - quite well balanced out; the water doesn't usually leave haphazardly one molecule at a time throughout the substance after it is no longer present as a solvent in large excess. (In other words, things don't have unlimited solubility in water; they precipitate) Wnt (talk) 03:45, 8 March 2012 (UTC)[reply]

How can bleach survive?

I realize I might have some misconceptions or this may be a strange question, but... how can bleach survive (itself)? I'm asking because bleach will remove just about anything, so how can it survive? If not bleach, is there anything so strong that it removes even itself, so that if you synthesize any of it or it gets created, it just removes itself?

Sorry if I misunderstand something, but I'm genuinely curious. Obviously there are even stronger acids that will burn anything off, so are there any that are so strong they don't survive (themselves)? --80.99.254.208 (talk) 09:29, 6 March 2012 (UTC)[reply]

Bleach does actually decompose over time. There are plenty of corrosive substances that are inherently unstable in that respect. Plasmic Physics (talk) 10:10, 6 March 2012 (UTC)[reply]
Rhetorical question: if you put bleach solution on a shirt, does the shirt disappear? SkyMachine (++) 09:08, 7 March 2012 (UTC)[reply]
The simple answer is that any decomposing agent acts by eliciting a chemical reaction between itself and its target (this reaction can be catalytic and last a very long time, such as between catalase and hydrogen peroxide, but let's ignore that). In the case of bleach, the polyatomic ion hypochlorite is the main damage dealer. It functions by oxidizing target molecules, hopefully converting them to a form that can just be rinsed away or at least does not affect the color of what's being cleaned. The reason bleach doesn't simply clean itself away is that bleach is not a great chemical target for itself. Hypochlorite does indeed react with itself, destroying itself, but it only does this relatively slowly. In the case of a simple acid, you have a similar situation. The acid reacts with target molecules to destroy them, but won't react with itself. It's important to note this is not some magical property of cleaning agents, just chemistry and human selection. There are dangerous chemicals that decompose rapidly through self-reactions, and those are not found in the cleaning products aisle of your favorite grocery store because they suck at such a task. Someguy1221 (talk) 10:51, 6 March 2012 (UTC)[reply]
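(For the curious, and from memory rather than from the thread above: the main self-reactions of hypochlorite are disproportionation to chlorate, 3 OCl− → ClO3− + 2 Cl−, and decomposition to chloride and oxygen, 2 OCl− → 2 Cl− + O2. Both are slow at room temperature but speed up with heat, light and traces of transition metals, which is why household bleach gradually loses strength in storage, as noted above.)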
They may well suck, but a cleaning agent you have 30 seconds to use before it destroys itself and whatever dirt or scum you're trying to eliminate would be pure awesome. Kamikaze Kleaner. aiiiiiiiiiiiiiiiiiiiiiiiieeeee!!!!!! 78.92.82.6 (talk) 15:20, 6 March 2012 (UTC)[reply]
For even more awesome, a universal solvent would also dissolve the container you carry it around in... Comet Tuttle (talk) 18:25, 6 March 2012 (UTC)[reply]
No dude, it comes in TWO tubes that you mix together INTO awesome. The question is how you get the awesome to come together only once it's not touching the apparatus. Like, it would be, one tube drops A, waits, other tube drops B, waits, first tube drops A again, and you have to be 10 centimeters away from the surface, like this
A
  B
A
  B
A 
  B

with the a's and b's at sufficient distance that the awesome can't swim up the BA stream. then they drop

  B

then

AB

then

 B
AB

then

AB
AB

and so on, until you have a pool of awesome that eradicates dirt and anything except porcelain, wool, dyes, or fine cashmere, finally eradicating itself. I need to go into materials science STAT. --80.99.254.208 (talk) 19:53, 6 March 2012 (UTC)[reply]

"I would imagine the inside of a bottle of cleaning fluid is f*ing clean." -Mitch Hedberg --Rallette (talk) 11:18, 7 March 2012 (UTC)[reply]

Except for all that gloop. SkyMachine (++) 12:25, 7 March 2012 (UTC)[reply]

This question deserves a better answer. Understand that bleach is sodium hypochlorite, which is the salt of hypochlorous acid, which is like a "hybrid" of hydrogen peroxide and chlorine gas (i.e. mix and match Cl-Cl and HO-OH to get HO-Cl). What these compounds have in common is that they are strong oxidizers, the opposite of reducing agents. As I recall, how bleach works is to add oxygen to aromatic compounds and other unsaturated carbons, and the oxygen makes them more water soluble - pretty much the same approach the body takes using hydrogen peroxide and other oxygen-based compounds when the immune system attacks an invader or the cytochrome P450 system breaks down an organic compound that has been ingested. Now bleach can't bleach itself, because you can't really oxidize an oxidizer. The breakdown of bleach is more of a slow rearrangement of its parts, though I'm not presently quite sure which mechanism prevails. Wnt (talk) 23:53, 7 March 2012 (UTC)[reply]

Do gravitational microlensing studies rule out compact objects as dark matter?

Recently I noticed that there is quite a controversy about the composition of dark matter (see Talk:Dark matter#Draft table for instance) and while looking into it, I found that there is some disagreement about the extent to which gravitational microlensing studies have ruled out compact objects as dark matter. I noticed that [1] specifically says that the small numbers of microlensing events observed by the many searches "does not allow us to draw definite conclusions on the content of compact halo objects" as dark matter. That paper cites [2], which is by authors famous for mapping dark matter in the universe. It has this to say:

"There have been extensive and sustained efforts to characterise the number of MACHOs in the halo of the Milky Way, its satellites the Large and Small Magellanic Clouds, and our neighbouring galaxy Andromeda (M31). Even though MACHOs are not visible themselves, whenever one passes in front of a star its gravitational microlensing briefly brightens the star. Since the volume of space along lines of sight that would cause microlensing is tiny, many millions of stars need to be continually monitored. Looking towards 12 million stars in the Magellanic Clouds for 5.7 years, the MACHO survey [306] found only 13–17 microlensing events (and some of these have been challenged as supernovae or variable stars). At 95% confidence, this rules out a model in which all of the Milky Way’s dark matter halo is (uniformly distributed) MACHOs. However, if all events are real, the rate is still ∼ 3 times larger than that expected from a purely stellar population, indicating either that they contribute up to 20% of the Milky Way halo’s mass [307], or a larger fraction of the Magellanic Cloud halo, in less massive bodies [308]. Also looking towards the Magellanic Clouds, the Experience pour la Recherche d’Objets Sombres (EROS) project [309] found only 1 event in 6.7 years of monitoring 7 million stars, compared to the 39 expected were local dark matter composed entirely of 0.6 × 10−7–15 M⊙ MACHOs. Looking towards the Magellanic Clouds and the densely populated central bulge of the Milky Way, the Optical Gravitational Lensing Experiment (OGLE) [310, 311, 312] detected only 2 microlensing events in 16 years, and even these events are consistent with self-lensing by stars, rather than MACHOs [313, 310]. The OGLE results conclude that at most 19% of the mass of the Milky Way halo is in objects of more than 0.4 M⊙, and that at most 10% is in objects of 0.01–0.2 M⊙. The POINT-AGAPE experiment [314, 47] observed unresolved (pixel) microlensing in the more distant Andromeda galaxy, and found that at most 20% of its dark matter halo is in 0.5–1.0 M⊙ mass objects (at 95% confidence)."

Those seem like relatively narrow ranges, but my question is about conclusions regarding mass ranges which are open-ended upwards. If dark matter were composed of objects which were, say, 100,000 M⊙ on average, wouldn't that result in far fewer microlensing events -- because there would be so many fewer total MACHOs -- than could ever have been expected to be observed in those studies? What is the reasoning involved in ruling out any compact objects larger on average than a couple dozen solar masses with any of these studies? Npmay (talk) 20:54, 6 March 2012 (UTC)[reply]
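A rough sketch of why the published limits fade out at high lens masses (this is just the standard point-lens scaling, not anything taken from the cited papers): the Einstein radius, and hence the event timescale t_E, grows as the square root of the lens mass, while at fixed halo mass density the event rate per monitored star falls roughly as 1/sqrt(M), so very massive lenses give rare events lasting far longer than any of the surveys. The distances and transverse speed below are round numbers assumed for lenses about halfway to the LMC.

import math

G, C = 6.674e-11, 2.998e8          # SI units
M_SUN, KPC = 1.989e30, 3.086e19    # kg, metres

def einstein_radius_m(m_solar, d_lens_kpc=25.0, d_source_kpc=50.0):
    """Physical Einstein radius at the lens for a point lens part-way to the source."""
    d_l, d_s = d_lens_kpc * KPC, d_source_kpc * KPC
    return math.sqrt(4.0 * G * m_solar * M_SUN / C**2 * d_l * (d_s - d_l) / d_s)

V_TRANSVERSE = 200e3               # assumed transverse speed of the lens, m/s
for m in (0.5, 1.0, 1e5):
    t_e_days = einstein_radius_m(m) / V_TRANSVERSE / 86400.0
    print(f"lens mass {m:>9} Msun: event timescale ~ {t_e_days:,.0f} days")

On these assumptions a 0.5-1 M⊙ lens gives events lasting weeks to months, while a 10^5 M⊙ lens gives an "event" lasting decades, far longer than the 6-16 year surveys quoted above.
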

Off the top of my head, if the (or some of the) hypothetical MACHOs were dozens of solar masses or more, it's difficult to see what they could be other than stars (in which case they wouldn't be dark – we'd see at least some of them) or black holes, in which case one might expect to see some (perhaps intermittent) recognisable radiation from any infalling gas and/or dust they surely would encounter occasionally. {The poster formerly known as 87.81.230.195} 90.197.66.254 (talk) 02:22, 7 March 2012 (UTC)[reply]
Does anyone know where the numbers on the expected radiation from infalling gas are compared to observation? Certainly accretion disks do not last forever, and once a black hole has cleared out its immediate vicinity, it is not clear to me how often matter would be likely to wander in to replenish it. Surely someone has calculated this? Npmay (talk) 02:49, 7 March 2012 (UTC)[reply]

For black hole MACHOs, Carr et al. is a good start. For baryonic MACHOs, the strongest constraints come from light element abundances etc. that rule out baryonic dark matter of any type. Waleswatcher (talk) 16:42, 7 March 2012 (UTC)[reply]

I'm looking through the references cited by Carr et al., which on page 3 says, "there are no constraints excluding PBHs in the sublunar range ... or intermediate mass range 10^2 M⊙ < M < 10^4 M⊙", but where can I find a discussion of light element abundances? I don't see why hiding matter in black holes or any other kind of MACHOs would imply a change in nucleosynthesis ratios, but I don't know where to start to read about that. Npmay (talk) 22:11, 7 March 2012 (UTC)[reply]
If you change the ratio of non-baryonic, non-electromagnetically interacting dark matter to baryonic matter, you mess up all sorts of things in early universe cosmology. You can read about that in any modern cosmology textbook, like Weinberg or Mukhanov. Basically, those constraints rule out the possibility that dark matter is baryonic. As for primordial black holes in those particular mass ranges, I'm not sure what the other constraints might be. One thing I do know is that there is no plausible mechanism to create them. Also, if they get too light they would have evaporated by now. Waleswatcher (talk) 05:14, 8 March 2012 (UTC)[reply]
Where should I look in Mukhanov (2005) for this? The discussion on page 70 ("The CMB fluctuations imply that at present the total energy density is equal to the critical density. This means that the largest fraction of the energy density of the universe is dark and nonbaryonic") doesn't exactly explain what would get messed up with baryons. What is the mechanism by which supermassive black holes are thought to have been created? Npmay (talk) 06:34, 8 March 2012 (UTC)[reply]
Deuterium abundances are probably the place to start. That's very sensitive to the baryon density (because a proton and a neutron have to collide to produce it), and its measured value rules out the hypothesis that there are enough baryons to be dark matter. As for supermassive BHs, are you asking an entirely new question now (about the black holes at the center of galaxies), or about MACHO dark matter? If it's the latter, the answer is there isn't one. Waleswatcher (talk) 02:16, 9 March 2012 (UTC)[reply]
I took a close look at that section, and honestly I am having difficulty finding the line of reasoning which eliminates the possibility of greater numbers of baryons in the same ratios. If you could walk me through that or point me to the page numbers, I'd appreciate it. My question about black holes is: Since it is widely accepted that supermassive black holes exist in about the same abundance as galaxies, are there any reasons that smaller black holes comprising some substantial portion of dark matter may have also formed in the same manner? Also, since we don't have any information about the composition of black holes, how do we know they are baryonic at all, and not mostly composed of electrons or mesons, or perhaps even the missing proportion of antimatter? Supermassive_black_hole#Formation cites [3] supporting the existence of a large population of IMBHs. Npmay (talk) 10:55, 9 March 2012 (UTC)[reply]
Walking you through is too much to ask. The basic reasoning is simple - light elements form at a certain phase of the universe (when it's at the right temperature). The density of protons at that time determines their abundance (the greater the proton density, the more collisions and therefore the more elements other than hydrogen). So the abundance tells you the density of protons at that time, and it's much too small to account for the total matter density as measured today. Waleswatcher (talk) 15:25, 10 March 2012 (UTC)[reply]
As for supermassive black holes in galactic centers, there are two theories.

1) Many individual mass concentrations (or density fluctuations, if you like) in the very first clouds that eventually became galaxies resulted in rather massive, though still stellar-size, primordial black holes, which collided while the galaxy was still forming.
2) After the big bang (or rather in the course of it), a significant amount of dark matter (in theory according to supersymmetry, sparticles) formed in the first instants of the creation of the first particles (before protons and neutrons were formed) and joined to form both supermassive black holes and the first structure filaments of the universe. There is a string-theoretical description of black holes, named "fuzzballs". When you look down the article and look at the sources, there are four lectures on the subject by Samir Mathur (held at CERN), the third being the calculation of a stellar black hole, the fourth an extrapolation of these findings to the big bang and the likely development of the universe in string terms. Hope this helps! 87.184.27.249 (talk) 13:37, 9 March 2012 (UTC)[reply]

Can aluminium undergo combustion by itself in the presence of oxygen?

Topic says it all. ScienceApe (talk) 20:57, 6 March 2012 (UTC)[reply]

Yes, if it has sufficient surface area and is under sufficient partial pressures of oxygen.[4][5] Npmay (talk) 21:06, 6 March 2012 (UTC)[reply]
Yes. In fact it can be pyrophoric if it's finely powdered. 203.27.72.5 (talk) 21:10, 6 March 2012 (UTC)[reply]
Interesting. What's the formula for the chemical reaction? ScienceApe (talk) 21:13, 6 March 2012 (UTC)[reply]
4Al + 3O2 = 2Al2O3. Atmospheric carbon dioxide and water vapor may also provide for a minor amount of combustion with aluminum, compared with diatomic oxygen. Powdered aluminum sprayed into 100% oxygen would be the most explosive, with ozone (O3) being even more reactive than regular O2. StuRat (talk) 21:22, 6 March 2012 (UTC)[reply]
The net product is alumina 4Al + 3O2 → 2Al2O3. Al2O is also produced in aluminium combustion but decomposes after combustion if the temperature is allowed to drop. 203.27.72.5 (talk) 21:31, 6 March 2012 (UTC)[reply]
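As a rough sense of the energy involved, here is a back-of-envelope sketch using the standard enthalpy of formation of alumina, about −1676 kJ/mol:

DH_F_AL2O3 = -1675.7   # standard enthalpy of formation of Al2O3, kJ/mol
M_AL = 26.98           # atomic weight of aluminium, g/mol

kj_per_gram_al = -DH_F_AL2O3 / (2.0 * M_AL)   # two moles of Al per mole of Al2O3
print(f"roughly {kj_per_gram_al:.0f} kJ released per gram of aluminium burned")

That is roughly 31 MJ per kilogram of aluminium, which is part of why finely divided aluminium is such an energetic fuel.
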
With even stronger oxidizers than air, atomized aluminum is well appreciated by pyrotechnics aficionados, for purposes such as flash powder. Wnt (talk) 23:56, 7 March 2012 (UTC)[reply]

Light dimmers

In my bedroom, I have a dimmer switch that controls the brightness of the lamp using Pulse-width modulation (PWM). This works with either traditional incandescent bulbs or halogen bulbs.

  1. Does using the switch, usually set to below maximum power, significantly reduce the lifetime of the bulbs (as compared to the case where there was no PWM switch and possibly lower-power bulbs)? Does it matter in this respect whether I use traditional or halogen bulbs?
  2. Does keeping the switch always at a dimmed setting use significantly more electrical power than the case where I had no such switch but got the same brightness from lower-power bulbs?
  3. Can such a switch be used for fluorescent tubes and LED lighting as well? (I don't mean to change the setup in my room; rather, I'm asking whether such a system could be installed in a new place.)

Thanks for your help, – b_jonas 21:32, 6 March 2012 (UTC)[reply]

3) I don't think it would work well with traditional fluorescent tubes. They either wouldn't light at all, or would flicker in an extremely annoying manner. StuRat (talk) 21:53, 6 March 2012 (UTC)[reply]
I don't believe a PWM dimmer would reduce the life of an incandescent bulb significantly. The life reduction comes from heating and cooling the filament, but dimming circuits typically work at 100 Hz or so, which is far too quick for the filament to cool down significantly between pulses.
As for the power, I can't see how. If you imagine the "waveform" of either scenario, the power is the area underneath the waveform; whether you have a continuous line at half, or a 50% PWM wave at full, the "power" is the same.
Lastly, there are specific dimming circuits designed for CFL bulbs, and I have now seen CFL bulbs sold that are specifically advertised to work with "older" dimming circuits, so there is no one answer that fits all circumstances. Vespine (talk) 23:03, 6 March 2012 (UTC)[reply]
Using a dimmer, whether PWM or a conventional phase chopper, to obtain a low light output from a larger-than-necessary incandescent bulb is always less efficient than using a just-large-enough bulb at full brightness. This is because you are operating the large bulb at a lower filament temperature. As filament temperature is lowered, light output drops faster than the waste heat output. You'll find that with typical 60W globes, you get just about zero light output when the electrical power input is still around 10 to 15W. The traditional phase chopper dimmers were always hard on bulbs at low settings, but PWM increases bulb life as flickering is eliminated. Keit121.215.37.114 (talk) 23:49, 6 March 2012 (UTC)[reply]
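To put rough numbers on that point (a sketch only; the exponents are common industry rules of thumb for incandescent lamp re-rating, not exact physics, and the dimmer is treated as if it simply lowered the effective RMS voltage):

# Rules of thumb: light output scales roughly as V^3.4, electrical power roughly as V^1.55.
for v_frac in (1.0, 0.8, 0.6, 0.4):
    rel_power = v_frac ** 1.55
    rel_light = v_frac ** 3.4
    print(f"{v_frac:.0%} of rated voltage -> {rel_power:.0%} power, {rel_light:.0%} light, "
          f"{rel_light / rel_power:.0%} of full-brightness efficacy")

On these rough figures a 60W globe dimmed to roughly a quarter of its rated power emits only a few percent of its rated light, consistent with the 10-15W observation above.
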

(edit conflict) I agree with the above answers, and would add that, years ago, I ruined a sophisticated dimmer switch by connecting it to an early CFL. Both traditional and halogen bulbs are designed to give optimum light at a particular average power. Reducing the power supplied to them with any dimmer circuitry (PWM or purely resistive) will tend to reduce the efficiency, in that a greater proportion of the power will be dissipated as heat rather than light. Reducing the power will also increase, not reduce, the lifetime of the bulb, for the reason mentioned by Vespine. In the case of LED circuits, the behaviour will depend on the circuitry controlling the LEDs. For some constant-current circuits, the LEDs will not be dimmed at all, but will just switch off at a certain dim level. If the LEDs are connected through just a rectifier, then they should dim in a similar way to incandescent bulbs, but this would be an unusual control circuit. In general, connecting two pulsed circuits (such as PWM dimmers or switch-mode transformers) together can have strange and unpredictable effects. Dbfirs 23:52, 6 March 2012 (UTC)[reply]

Just one final point: LED dimming IS also done with PWM; however, for smooth LED lighting you need PWM closer to thousands of Hz. Even at a hundred Hz you can easily see LED flicker, especially at <50% duty cycle and especially if the LEDs move through your field of vision. (EDIT: that's also neglecting that an LED bulb is NOT just going to be a bunch of LEDs plugged straight into a socket; it will have some sort of "control electronics", and it may not be possible to predict how that would behave if you fed it a PWM waveform.) Vespine (talk) 01:05, 7 March 2012 (UTC)[reply]
Okay, thanks for the answers to all. – b_jonas 11:58, 7 March 2012 (UTC)[reply]