Wikipedia:Reference desk/Archives/Science/2012 April 1

Science desk
Welcome to the Wikipedia Science Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


April 1

exponential power of 2 and number of ancestors 30 generations ago

I'm sure the answer to this is simple, but for some reason I just can't think of it.

A person has 2 parents, 4 grandparents, 8 great-grandparents, and so on. So it's 2 to the power of the number of generations. Yet it wasn't until 1800ish that there were a billion people on earth. All you have to do is go back 30 generations for 2-to-the-30th to equal a billion; that's at least 600 years, conservatively. So how could you have a billion great-great-great-to the thirtieth-power-grandparents, when there weren't that many people on earth?

I will feel really stupid when someone explains this, because it must be obvious.

It's March 31st where I am, so, no, this isn't some kind of April Fool's trick.76.218.9.50 (talk) 00:48, 1 April 2012 (UTC)[reply]

Inbreeding. At some point in the past, the same person occupies multiple places in your family tree; i.e. one man is your great-9x-grandfather in multiple ways (e.g. you descend via his son on one side of your family and via his daughter on the other). The further back you go, the more likely this is to happen. It is unlikely within 3-4 generations, where people generally know each other and try to avoid marrying close cousins. But by the time you get back 9-10 generations, there's no way you know everyone you are related to, so it is quite likely that your parents shared some many-centuries-old ancestor -- probably several. --Jayron32 00:52, 1 April 2012 (UTC)[reply]
Actually, I thought of this right after I posted -- by taking the question from the other direction. I thought, let's say in year A.D. XXXX there were 100 million people on earth. By definition, everyone alive now descends from those 100 million; yet there are many times more "slots" in the family tree than 100 million; therefore, many people occupy many different slots in the genealogical table. Somehow, this is still pretty mind-boggling, but it basically comes down to lots of people marrying their own (say) third- or fourth- (or tenth- or (gasp) first-)cousin. It would be interesting to know at what point the number of people on earth equaled exactly as many as, on average, we all have "slots" in our tree AS OF that date; clearly, it was sometime between 30 generations ago and 1800ish. That would be the tipping point toward less fourth-cousin-marriage, wouldn't it?76.218.9.50 (talk) 01:02, 1 April 2012 (UTC)[reply]
Jayron, your link led me to Pedigree collapse. Very interesting!76.218.9.50 (talk) 01:07, 1 April 2012 (UTC)[reply]
The classic example of pedigree collapse is Charles II of Spain, who I believe only had two great-great-grandparents (i.e. all 16 possible positions in the family tree are taken up by one man and one woman.) --Jayron32 01:09, 1 April 2012 (UTC)[reply]
 
[Image: Family tree of Charles II of Spain]
Our article says "The inbreeding was so widespread in his case that all of his eight great-grandparents were descendants of Joanna of Aragon and Duke Phillip of Austria." That doesn't mean he only had two great-great-grandparents, though. His inbreeding involved a lot of uncle-niece marriages, rather than just cousin-cousin marriages, so the generations are all messed up. Joanna and Phillip will have occupied a lot of slots, but they won't all have been in the same generation (they were his great-great-great-grandparents and his great-great-great-great-great-grandparents; I haven't checked the other lines). This is his family tree. --Tango (talk) 01:44, 1 April 2012 (UTC)[reply]
In other words, FUBAR. --Jayron32 01:50, 1 April 2012 (UTC)[reply]
(edit conflict) All his eight great-grandparents were descendants, not children, of Joanna of Aragon and Duke Phillip of Austria; in fact, none were their children, and Charles II had 14 distinct great-great-grandparents. By my best count, Joanna of Aragon and Duke Phillip of Austria did each occupy 14 positions in his family tree: as 2 of his possible g-g-g-grandparents, 6 of his g-g-g-g-grandparents, and 6 of his g-g-g-g-g-grandparents. (Is there an easy way of doing this, say by counting the cycles in the tree?) -- ToE 02:17, 1 April 2012 (UTC) (corrected from 2, 6, 5, & 1. -- ToE 02:35, 1 April 2012 (UTC))[reply]
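One way to make that counting concrete (a minimal sketch, not necessarily how ToE counted): the number of positions an ancestor occupies k generations up equals the number of distinct length-k paths from the individual to that ancestor in the pedigree. The tiny pedigree below is hypothetical, not Charles II's actual tree.

```python
from collections import Counter

# child -> (father, mother); a hypothetical pedigree in which the two
# parents, F and M, share a mother (GM), so GM fills two slots.
parents = {
    "I": ("F", "M"),
    "F": ("GF", "GM"),
    "M": ("F2", "GM"),
}

def positions(person, depth=1):
    """Count how many ancestor slots each person fills, grouped by generation."""
    counts = Counter()
    for parent in parents.get(person, ()):
        counts[(parent, depth)] += 1
        for key, n in positions(parent, depth + 1).items():
            counts[key] += n
    return counts

print(positions("I"))
# GM appears with a count of 2 at depth 2: she occupies two of I's four
# grandparent slots, which is exactly the pedigree collapse being counted.
```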
You might also enjoy most recent common ancestor. Dragons flight (talk) 01:10, 1 April 2012 (UTC)[reply]
If you go back 29 generations, you have 2^29 = 537 million "slots". If we assume 20 years per generation (i.e. women on average have children when they are aged 20, which is a fairly standard assumption and was about right until recently), that is 29*20 = 580 years ago, or about 1430. According to World population, the population was around 450 million in 1400, but it dropped significantly due to the Black Death and didn't recover for 200 years. That means the world population was more than 2^28 and less than 2^29 for around 300 years or so, so 29 generations is the closest we'll get. I don't think there is anything particularly significant about that, though, especially since those 450 million people span multiple generations - you have to fill slots in the 29th, 30th and some of the 31st generations from people alive at that time (and don't forget generations will be different lengths in different parts of your family tree, so someone 20 generations above you and someone 25 generations above you could easily have been alive at the same time). It's an interesting bit of trivia, but that's all! --Tango (talk) 01:30, 1 April 2012 (UTC)[reply]
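A quick sketch of that arithmetic, assuming the same 20 years per generation and the rough population figure (about 450 million around 1400) quoted above:

```python
for generations in (10, 20, 29, 30):
    slots = 2 ** generations           # ancestor "slots" double each generation
    years_back = 20 * generations      # assumed 20 years per generation
    print(f"{generations:>2} generations (~{years_back} years ago): {slots:,} slots")

# 10 generations (~200 years ago): 1,024 slots
# 20 generations (~400 years ago): 1,048,576 slots
# 29 generations (~580 years ago): 536,870,912 slots -- already above ~450 million
# 30 generations (~600 years ago): 1,073,741,824 slots
```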
Also significant is that, until quite recently, there was very little mixing between populations. The number of generations to actual pedigree collapse needs to be judged within a specific population, not across all humanity. Until fairly recently, for example, an Australian Aborigine and an Aztec would have been unlikely to share any ancestor more recent than about 10,000 years before either of them; so any calculation made across all humanity likely overshoots the actual result by many generations. --Jayron32 01:40, 1 April 2012 (UTC)[reply]
I suspect the average generation is greater than 20 years, since women tended to have lots of kids, so their "average" age of childbirth would probably have been somewhat higher. But to return to my question: "It would be interesting to know at what point the number of people on earth equaled exactly as many as, on average, we all have 'slots' in our tree AS OF that date; clearly, it was sometime between 30 generations ago and 1800ish. That would be the tipping point toward less fourth-cousin-marriage, wouldn't it?" It is interesting to think that the "coefficient of inbreeding" (so to speak) would be decreasing with each passing moment from some date in the past, particularly since the invention of the car plus decent roads. This must have some sort of implication greater than trivia? And particularly so after a few more centuries? A good thing, I'd guess, for those of us who love our common race's diversity.76.218.9.50 (talk) 01:57, 1 April 2012 (UTC)[reply]
Pedigree collapse is indeed the term that's used. If you know your tree back a ways, you may be surprised at how many instances you'll discover of first-cousins marrying, for example - a common occurrence, at least until around 1900 when genetic issues finally came to be noticed. ←Baseball Bugs What's up, Doc? carrots→ 01:14, 1 April 2012 (UTC)[reply]

I was kidding when I used the phrase "coefficient of inbreeding" above -- but it turns out that's exactly what it's called! (Talk about successfully faking it.) Check this out, about Charles II of Spain -- an article that should be added to several different WP articles, but I'm too lazy: http://blogs.discovermagazine.com/gnxp/2009/04/inbreeding-the-downfall-of-the-spanish-hapsburgs 76.218.9.50 (talk) 02:26, 1 April 2012 (UTC)[reply]

From that link: Charles II's coefficient of inbreeding was F = 0.254, where the higher the number, the more inbreeding. "What [you are] doing here is summing up through all of the distinct paths to common ancestors. You weight this by the number of individuals between the common ancestor and the individual whose inbreeding coefficient you are calculating (note that, for example, some of Charles' lines of ancestry back up to his common ancestors have different numbers of generations). Finally, you have to factor in the inbreeding coefficient of that common ancestor. Let's play this out for a brother-sister mating, assuming that the grandparents, of whom there are only two, are unrelated. So F_A = 0. Then it simplifies to: F_I = (0.5)^3 × (1) + (0.5)^3 × (1) = 0.25. In other words, Charles II was moderately more inbred than the average among the offspring from brother-sister matings!" Please, somebody, start the inbreeding coefficient article! (And again, this isn't a joke -- just a coincidence; I didn't notice the April 1 thing till after I first posted above, and the link is a real one.) Start here!: http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0005174 76.218.9.50 (talk) 02:38, 1 April 2012 (UTC)[reply]
I don't think anyone needs to start a new article: see Coefficient of relationship - Nunh-huh 03:14, 1 April 2012 (UTC)[reply]
... to which Inbreeding coefficient redirects. -- ToE 03:19, 1 April 2012 (UTC)[reply]
Well, thanks for patronizing me ... that's always fun. But where exactly in that article do you find THIS?: F_I = Σ (0.5)^i × (1 + F_A) 76.218.9.50 (talk) 03:28, 1 April 2012 (UTC)[reply]
Nowhere, until you add it. Wikipedia never gets better until you make it better. --Jayron32 15:24, 1 April 2012 (UTC)[reply]
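For the curious, the quoted path-counting formula is easy to evaluate by hand or in a few lines of code: each common-ancestor path contributes (0.5)^i × (1 + F_A), where i is the number of individuals in the path linking the two parents through that ancestor and F_A is that ancestor's own inbreeding coefficient. A minimal sketch of exactly the brother-sister example quoted above:

```python
def inbreeding_coefficient(paths):
    """paths: one (i, F_A) pair per distinct common-ancestor path."""
    return sum(0.5 ** i * (1 + f_a) for i, f_a in paths)

# Brother-sister mating: two shared grandparents, each reached through a path
# of 3 individuals (parent, grandparent, parent), both assumed non-inbred.
print(inbreeding_coefficient([(3, 0.0), (3, 0.0)]))  # 0.25, as in the quote above
```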
Our current theory of Common descent basically means that if you go back far enough, all life can trace its ancestry back to just one common progenitor. That article actually touches on the OP's question. Vespine (talk) 04:48, 2 April 2012 (UTC)[reply]

fine structure

Why is it important to know the fine structure of an element when its wavelengths are already unique enough to identify it? — Preceding unsigned comment added by 197.255.118.206 (talk) 01:30, 1 April 2012 (UTC)[reply]

Fine structure has the technical details, but basically it allows one to probe the detailed organization of the electrons in an atom - in general, all of the quantum numbers more specific than the principal quantum number. In other words, though you can identify the element by its gross structure, the fine structure tells you how the electrons of that particular atom are organized. It lets you pin down the specific state the atom is in, for example distinguishing between triplet oxygen and singlet oxygen. --Jayron32 01:32, 1 April 2012 (UTC)[reply]
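As a concrete example (the standard textbook result for hydrogen, not something specific to the reply above): each gross level E_n is split according to the total angular momentum quantum number j, with the correction suppressed by the square of the fine-structure constant α ≈ 1/137,

$$E_{n,j} = -\frac{13.6\ \text{eV}}{n^2}\left[1 + \frac{\alpha^2}{n^2}\left(\frac{n}{j+\tfrac{1}{2}} - \frac{3}{4}\right)\right]$$

so lines that coincide in the gross spectrum separate into closely spaced components, and measuring that splitting tells you which j states are involved.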

In Situ Generation of Acid Catalyst in Esterification

Hello. Why does the acid catalyst in esterifications need to be generated in situ? If I directly add the acid, what side reactions can occur? Thanks in advance. --Mayfare (talk) 03:58, 1 April 2012 (UTC)[reply]

It doesn't need to be generated in situ. What makes you think it does? 203.27.72.5 (talk) 04:18, 1 April 2012 (UTC)[reply]
(edit conflict) Most acid catalysts used for things like Fischer esterification are common mineral acids, like sulfuric acid. Those acids are hygroscopic enough that the water they bring to the party can mess up the slow, and highly equilibrium-dependent, esterification process. Remember that ester hydrolysis directly competes with esterification: both run in acidic environments, and what controls the equilibrium shift is the ratio of alcohol to water; too much water and the reaction system won't produce good yields of ester. Indeed, esterification produces water as a byproduct, and that water itself has to be dealt with. Introducing additional water -- say, the water always present in a bottle of sulfuric acid sitting on the shelf (so-called "concentrated sulfuric acid" is 98% acid and still has 2% water in it; anything with less water spontaneously loses excess SO3 until it settles back to that 98/2 ratio) -- only makes the problem worse. You can use enough desiccant in the reaction to deal with the water produced, and I suppose that can also deal with the water present in the acid; but generation in situ removes one of the sources of water, which helps drive the reaction to make more product more efficiently. --Jayron32 04:22, 1 April 2012 (UTC)[reply]
Considering how exothermic the reaction of concentrated sulfuric with water is, I doubt using sulfuric acid is going to add any significant free water to the reacting mixture. According to Esterification#Preparation, sulfuric acid helps the reaction by sequestering water. 203.27.72.5 (talk) 04:51, 1 April 2012 (UTC)[reply]
Indeed. Good point. --Jayron32 04:53, 1 April 2012 (UTC)[reply]
So coming back to the OP's question, you can use externally sourced acids provided they are either also a dehydrating agent or aren't in aqueous solution. Sometimes those acids aren't suitable, e.g. if acid-sensitive functional groups are present, so in that case you can get a mild acid without adding any aqueous solution by in situ generation. If you want to get an idea of the side reactions, look at the reactants' functional groups. 203.27.72.5 (talk) 05:57, 1 April 2012 (UTC)[reply]
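To put rough numbers on the water argument above, here is a small sketch that solves the esterification equilibrium for the ester yield. The equilibrium constant of about 4 is the commonly quoted figure for acetic acid plus ethanol; the mole amounts are illustrative assumptions.

```python
def ester_yield(K=4.0, acid=1.0, alcohol=1.0, water0=0.0):
    """Equilibrium moles of ester for acid + alcohol <-> ester + water."""
    # Solve K = x*(water0 + x) / ((acid - x)*(alcohol - x)) for x by bisection.
    lo, hi = 0.0, min(acid, alcohol)
    for _ in range(100):
        x = (lo + hi) / 2
        if x * (water0 + x) > K * (acid - x) * (alcohol - x):
            hi = x
        else:
            lo = x
    return x

print(round(ester_yield(water0=0.0), 2))  # ~0.67 mol ester from 1 mol acid + 1 mol alcohol
print(round(ester_yield(water0=1.0), 2))  # ~0.54 mol once a mole of extra water is present
```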

Say I had a simple carboxylic acid and a simple alcohol. --Mayfare (talk) 17:08, 1 April 2012 (UTC)[reply]

Regarding Common Emitter BJT configuration.

I recently came across a paragraph in an engineering textbook regarding BJT transistor configuration. According to the text, when a BJT is in the common emitter configuration and operating within the active region of the output characteristic curves, the current at the output (collector) terminal is not affected by any resistor placed in series with the output. Instead, the collector current is only affected by a change in the current at the base terminal. However, according to my logic, if the resistance at the collector is increased to a large enough value or decreased to a small enough value, the collector current should change. I am really confused on this one; I just don't know how changing the resistance of a path doesn't change the current through it. Please help. — Preceding unsigned comment added by 210.4.65.52 (talk) 09:11, 1 April 2012 (UTC)[reply]

In the active region, the transistor collector is a high impedance terminal and thus acts like a current source. The collector current will be relatively constant (assuming Vbe is constant) as long as the collector resistor does not go too high. The collector resistor and the output resistance at the collector (1/hoe in h-parameter terms) can be considered to be in parallel as far as the signal is concerned, so there will actually be a division of current between the two resistances; but as long as Rc is small compared with 1/hoe, essentially all the current from the internal current source will flow through the load resistor. If the load resistor goes high enough, the transistor will enter the saturation region. In the case of very low resistances, down to zero, most of the current from the internal current source will go through the load, as it is much smaller than 1/hoe. --92.28.78.70 (talk) 13:26, 1 April 2012 (UTC)[reply]
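A toy numeric sketch of that behaviour, using assumed values (12 V supply, β = 100, 20 µA of base current) and a deliberately crude model: in the active region the collector current is fixed by the base current, and only once the collector resistor is large enough to saturate the transistor does the resistor take over.

```python
VCC = 12.0       # supply voltage (assumed)
BETA = 100.0     # current gain (assumed)
IB = 20e-6       # base current, 20 uA (assumed)
VCE_SAT = 0.2    # typical saturation voltage

def collector_current(rc):
    ic_active = BETA * IB                 # current-source behaviour, independent of Rc
    ic_saturated = (VCC - VCE_SAT) / rc   # what the resistor allows once saturated
    return min(ic_active, ic_saturated)

for rc in (100, 1_000, 4_700, 10_000):
    print(f"Rc = {rc:>6} ohm -> Ic = {collector_current(rc) * 1000:.2f} mA")
# Ic stays at 2.00 mA for the first three resistors; at 10 kOhm the transistor
# saturates and the collector current falls to about 1.18 mA.
```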
FYI, the above user 92-something has been blocked as a sock of a banned user. ←Baseball Bugs What's up, Doc? carrots→ 23:47, 1 April 2012 (UTC)[reply]
Is the information he provided accurate? It sounds plausible if memory serves. Edison (talk) 00:09, 3 April 2012 (UTC)[reply]
Banned users are not allowed to edit, regardless of the alleged quality of their edits. In the case of 92-something, he won't be back under the current address for awhile. ←Baseball Bugs What's up, Doc? carrots→ 03:05, 3 April 2012 (UTC)[reply]

Canning timing

Hi. I'm making a mango-lime chutney. The recipe fills 8 jars. When it comes to the processing part of the recipe, I can only fit 5 jars into my big pot of boiling water. The processing takes 15 minutes. I need to find out whether it's OK to fill the other three jars now and let them wait 15 minutes for their turn to process, or whether I should wait to fill them until it's their turn, or do something else. I need a source that answers this question. Thank you for your help.

I would fill 4 jars, process them and keep the chutney warm while that's happening. Then process the others. However, I wouldn't bother processing a chutney mix. When I make chutney, I sterilise the jars using a microwave method and fill them, seal and store. The vinegar and sugar in the mix preserves and prevents mould forming. My guide in all things preserving is Marguerite Patten's books. You must have a recipe there for your chutney: if it's in a book, what does the book say? --TammyMoet (talk) 12:43, 1 April 2012 (UTC)[reply]

Thank you for your advice. I appreciate that it is sourced. I have no book; the recipe was in my aunt's papers.

I would appreciate any other sources that I could read. Thank you again. — Preceding unsigned comment added by 184.147.123.69 (talk) 17:53, 1 April 2012 (UTC)[reply]

This might help; it looks worth reading the linked page and the other linked articles. --TammyMoet (talk) 20:09, 1 April 2012 (UTC)[reply]

Thank you for your efforts! It seems this is not an issue that comes up often. Maybe other people have bigger pots. — Preceding unsigned comment added by 184.147.123.69 (talk) 11:43, 2 April 2012 (UTC)[reply]

Metallicity of stars

Will the increasing metallicity of stars mean the formation of earthlike planets, and eventually life, is inevitable? — Preceding unsigned comment added by 99.146.124.35 (talk) 14:27, 1 April 2012 (UTC)[reply]

It has already happened once. Depending on how you read the Drake equation, it may have already happened many times. --Jayron32 15:22, 1 April 2012 (UTC)[reply]
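For reference, the Drake equation is just a product of factors, N = R* · fp · ne · fl · fi · fc · L. The sketch below evaluates it with purely illustrative values; every number is an assumption, and reasonable people plug in wildly different ones.

```python
R_star = 1.5    # new stars formed per year (assumed)
f_p    = 0.9    # fraction of stars with planets (assumed)
n_e    = 0.5    # potentially habitable planets per such star (assumed)
f_l    = 0.1    # fraction of those on which life arises (assumed)
f_i    = 0.01   # fraction of those that develop intelligence (assumed)
f_c    = 0.1    # fraction that become detectable (assumed)
L      = 1e4    # years a detectable civilisation persists (assumed)

N = R_star * f_p * n_e * f_l * f_i * f_c * L
print(f"N ~ {N:.2f} detectable civilisations")  # ~0.68 with these made-up numbers
```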

Sound cards

Are we really to believe that an ordinary sound card costing, say, $100, with some cheap spectrum analyser software, can outperform an expensive audio analyser like the Audio Precision 1? --92.28.78.70 (talk) 16:24, 1 April 2012 (UTC)[reply]

As always, one needs specific data and details of the experiment to support any claim. Look at exactly what was compared and what the specific results were to see if "outperform" is a valid conclusion or is specific to certain types of performance in certain situations. But note well that cost is not a good measure of quality (or, more precisely, it is not true that only expensive things can be good or that cheaper things cannot also be good). To a certain approximation, Red Hat has made a fortune selling software that is available for free:) DMacks (talk) 16:57, 1 April 2012 (UTC)[reply]
Another important point to keep in mind: audio electronics are quite simple compared to other domains of electronic engineering. Audio signals have very narrow bandwidths - only a hundred kilohertz of bandwidth provides ample oversampling. The signal center-frequency is at 0 hertz... when digitized, a low bit-rate is sufficient... even in the analog domain, the signal-to-noise ratio need not be any better than the perceptual noise sensitivity of the human ear - only maybe 40 dB or so, comfortably covered by 10 or 12 bits for the digitally-minded (though CDs and others use 14- or 16-bit samples)... so from this standpoint, every specification is trivial to meet. It doesn't take cutting-edge analog or digital hardware; and with today's computer horsepower, even naively-implemented software can trivially process the entire data representation of the human audible range. A general-purpose, low-end $200 oscilloscope will probably outperform an "audio analyzer" on most technical specifications.
The "value-add" of professional audio equipment tends to come in the form of other intangible goods: reliability; brand-recognition, and the consequent perceived status for the bearer. In the case of audio processing systems, usability and user interface may be better on a high-end system. While I know how to perform any arbitrary signal processing on my audio-card in C, most people find the process a bit convoluted, and prefer user-interfaces with things like sampling loops, sliders, and one-button effects processors. So, there's a market for expensive audio hardware. Even I have been known to spend more money than I should on a tube-amp, whose complicated, "nonlinear" analog system-response is easily modeled by a cheap DSP; I bought my tube-amp "because it was awesome," not really because its tone was any better or different than my mini DSP-driven practice amp. It's 2012; the MOS transistor is, for all purposes, perfectly ideal in the kilohertz range; and the microprocessor can run arbitrarily complex software on each sample; so these facts really invalidate the old-fogey claims about tube tone quality. The same logic applies to sound-cards. Nimur (talk) 17:50, 1 April 2012 (UTC)[reply]
FYI, the OP has been blocked as a sock of a banned user. ←Baseball Bugs What's up, Doc? carrots→ 23:44, 1 April 2012 (UTC)[reply]

Dogs who howl at the phone

Why do dogs sometimes have a tendency to howl when the phone rings? 98.235.166.47 (talk) 22:14, 1 April 2012 (UTC)[reply]

Because it startles them. --Jayron32 22:55, 1 April 2012 (UTC)[reply]
Cats and dogs have much better hearing than humans, and as annoying and ear-piercing as the phone may be to us, it can be downright scary to the animal whose ears it has assaulted. ←Baseball Bugs What's up, Doc? carrots→ 23:45, 1 April 2012 (UTC)[reply]
According to an urban legend, some dogs who are tied to (or standing in a puddle of water adjacent to) telecommunications poles, will bark or yelp when the phone is ringing to indicate an incoming call, because the electricity involved stimulates them in a painful way. In theory, this should only be the case if the telecommunications equipment is faulty or unsafe in some way. --Demiurge1000 (talk) 23:50, 1 April 2012 (UTC)[reply]
I have a cat who hates it when my wife's cell phone is on speaker. I had a dog who went absolutely berserk if the smoke alarm went off. There's something about those sounds that the animals react to. Humans have similar reactions, such as to fingernails on a chalkboard: some people hate it, while it doesn't bother others. Dismas|(talk) 10:56, 2 April 2012 (UTC)[reply]
As this is a Reference Desk, here is a reference from Canada's Pet Information Centre; "Sometimes dogs will howl when they hear sirens or other loud higher pitched sounds like clarinets and flutes. These sounds may even come from a television set. Dogs do this as an instinctive response to hearing what they interpret to be another howl (dog in the distance). They are not doing this because it hurts their ears." This site, ALL ABOUT DOGS and CATS - Resource Center for Canine & Feline Lovers, agrees; "Howling may be triggered by sirens, singing or other noises the dog finds similar to howling, says Dan Estep, Ph.D., a certified applied animal behaviorist in Colorado and co-author of Help! I'm Barking and I Can't Be Quiet (Island Dog Press 2006)". Alansplodge (talk) 19:04, 2 April 2012 (UTC)[reply]

Subjective biking effort

Is there any rough metric of the subjective effort required to bike X meters up a slope of Y degrees? From experience, I know the metric must be highly nonlinear in Y, because a 20 degree slope feels much more than twice as hard as a 10 degree slope. --140.180.39.146 (talk) 22:36, 1 April 2012 (UTC)[reply]

Subjective effort is unique and subjective, by definition. It sounds like you want us to quantify your unique experience. Simple trigonometry can be used to calculate the effect of different slopes on the objective (i.e. actual) force you need to exert in order to move your bike at a certain speed. It sounds like you're asking about the effect on how hard it feels to you; quantifying your "feelings" is only something you can do, since this is a highly individualized effect. --Jayron32 22:54, 1 April 2012 (UTC)[reply]
It's partly a function of gearing. My experience is that with a normally geared road bike, I can handle a slope of up to 6 degrees with the same effort as flat riding, by going slow enough. If the slope is steeper, I can't go slow enough to equal the effort level of flat riding without having trouble staying upright. A slope of 10 degrees is hard work, and 15 degrees takes major fitness to sustain for more than a few minutes -- 20 degrees is deadly. With a mountain bike, though, the equation changes, because the gearing is different and it's easier to stay upright. Looie496 (talk) 00:31, 2 April 2012 (UTC)[reply]
The question as you posed it, "the subjective effort required to bike X meters up a slope of Y degrees", can't be answered, because that subjective effort depends on speed and gearing. A much more answerable question would be: what is the subjective effort (or time to exhaustion) required to ride at a cadence of X RPM and a power output of Y% of your "functional threshold power"?
You can compute the power output needed to go at a given speed up a given slope using this calculator. Your functional threshold power can be estimated using a stationary bike or the same calculator. For an untrained adult male it should be on the order of 150 W. At high percentages of FTP, subjective effort is minimized at a cadence of 70 to 90 RPM. Power output below 100% of FTP at a cadence in this range can be sustained for many hours (essentially, until you deplete your carbohydrate reserves.) If the cadence is too low or the attempted power output is too high, you eventually run into exhaustion.
Road bicycles are typically geared in such a way that an untrained, non-overweight rider can pedal on grades of up to ~5% without getting out of the optimal region. Any steeper than that and there's no gear low enough to maintain even 70 RPM. Mountain bicycles have more generous gearing. --Itinerant1 (talk) 08:39, 2 April 2012 (UTC)[reply]
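To connect the slope to the power figures above, here is a rough sketch of the physics behind calculators like the one linked; the rider mass, rolling-resistance and drag numbers are assumptions for illustration.

```python
import math

MASS = 85.0   # rider plus bike, kg (assumed)
G = 9.81      # m/s^2
CRR = 0.005   # rolling-resistance coefficient (assumed)
CDA = 0.4     # drag area, m^2 (assumed)
RHO = 1.2     # air density, kg/m^3

def climbing_power(speed_kmh, grade_deg):
    """Power in watts needed to hold a steady speed on a constant grade."""
    v = speed_kmh / 3.6
    theta = math.radians(grade_deg)
    gravity = MASS * G * math.sin(theta) * v
    rolling = MASS * G * CRR * math.cos(theta) * v
    drag = 0.5 * RHO * CDA * v ** 3
    return gravity + rolling + drag

for grade in (0, 5, 10, 20):
    print(f"{grade:>2} deg at 10 km/h: {climbing_power(10, grade):.0f} W")
# roughly 17 W, 220 W, 420 W and 810 W: the required power grows only about
# linearly with the slope, but subjective effort blows up once it exceeds the
# power a given rider can actually sustain.
```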