Wikipedia:Reference desk/Archives/Science/2007 November 8

Welcome to the Wikipedia Science Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


November 8

cells

What would happen if you placed a blood cell in a hypertonic solution? —Preceding unsigned comment added by Dmx123 (talkcontribs) 00:37, 8 November 2007 (UTC)[reply]

The reference desk is not for answering homework questions, but if you can't figure this one out on your own, you should read osmotic pressure. Someguy1221 00:49, 8 November 2007 (UTC)[reply]
... or Osmosis or Diffusion. --slakrtalk / 04:35, 8 November 2007 (UTC)[reply]
... or better yet, hypertonic. (The article answers this very question.) -- 20:21, 8 November 2007 (UTC) —Preceding unsigned comment added by 128.104.112.105 (talk)

navel

Hey, how come for some people, when they're fingered in their belly button, it hurts, and for others it tickles them? Jwking 01:00, 8 November 2007 (UTC)[reply]

Because some people don't know how hard to poke, and just stab you with their finger. HYENASTE 01:17, 8 November 2007 (UTC)[reply]

Reception of off air frequency standards

Maybe a stupid question, but why, when receiving, do you need a local oscillator to phase lock to the incoming signal? The only reason I can think of is that the transmitted signal is not constant in amplitude. Why can't you use the incoming frequency directly? Also, why do you need a quartz Xtal osc to be locked to the incoming frequency; won't a normal VCO do? —Preceding unsigned comment added by 79.76.246.62 (talk) 01:42, 8 November 2007 (UTC)[reply]

It's for FM reception. The radio signal is varying slightly up or down in frequency depending on the amplitude of the sound wave it's trying to transmit to you (Frequency Modulation). You tune the radio's local oscillator to the nominal center frequency and it's easy to produce an audio signal that's proportional to the difference in frequency of the local oscillator and the radio signal. SteveBaker 01:59, 8 November 2007 (UTC)[reply]
Er, no, it's AM. [1] —Preceding unsigned comment added by 79.76.246.62 (talk) 02:20, 8 November 2007 (UTC)[reply]


You need a very low bandwidth. Typically the signal will be 5 kHz wide, at a frequency of 10 MHz. A VCO is nowhere near stable enough for this and will drift off in a few minutes. You can see this on old cheap shortwave radios, which will need retuning every so often. The crystal oscillator is much more stable. A VCO locked to a crystal is one way to get flexibility. Another important thing for a frequency standard is low phase noise. The best way would be to have a narrow crystal filter at 10 MHz, but even so the ionosphere causes fading and phase shifts. For a 60 kHz standard an LO would not be needed. Graeme Bartlett 05:57, 8 November 2007 (UTC)[reply]
So you can use the ultra-stable 60 kHz frequency directly (or multiplied up to 1 MHz or 10 MHz or whatever)? Is that what you're saying? If so, why do most designs use a local oscillator locked to the incoming frequency? —Preceding unsigned comment added by 79.76.246.62 (talk) 12:06, 8 November 2007 (UTC)[reply]
See Superheterodyne receiver. Changing the frequency of the local oscillator is what tunes the radio to a station. Its frequency is beat against everything coming in from the antenna. The resulting harmonics are filtered for the intermediate frequency, 455kHz in the case of AM. A big advantage to this system is that from there on the amplifiers need only pass the one relatively low frequency. I don't know what you mean by a crystal oscillator locked to the incoming frequency, but it has been a long time, so please clarify. --Milkbreath 03:33, 8 November 2007 (UTC)[reply]
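As a rough illustration of the mixing arithmetic just described (a sketch of my own, not part of the original discussion; the 455 kHz IF is the figure quoted above, and high-side injection is the conventional choice for AM broadcast receivers):

```python
# Superheterodyne tuning arithmetic: the station is moved to the fixed IF by
# choosing the local oscillator (LO) frequency, here assumed to sit above the
# station ("high-side injection").
IF = 455e3  # intermediate frequency, Hz

def tuning(station_hz):
    lo = station_hz + IF           # LO setting that mixes this station down to the IF
    image = station_hz + 2 * IF    # unwanted image frequency that also lands on the IF
    return lo, image

for station in (540e3, 1000e3, 1600e3):
    lo, image = tuning(station)
    print(f"station {station/1e3:.0f} kHz -> LO {lo/1e3:.0f} kHz, image at {image/1e3:.0f} kHz")
```

The image frequency is why a tuned front-end filter is still needed ahead of the mixer, even though most of the selectivity comes from the fixed IF stages.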
Crystal control is absolutely not needed in an AM radio. I have owned several old AM radios which would stay tuned to a station for a year or more without re-tuning. Older car radios had pushbuttons which mechanically rotated a tuning cap to the desired stations, and did not need re-tuning for years at a time. Edison 13:10, 8 November 2007 (UTC)[reply]
For a frequency standard, a local oscillator is not stable enough. You would need to down convert to the intermediate frequency, filter, and then up convert to the original stable frequency to get the reference. On HF frequencies around 10 MHz, drift is ten times bigger than it is on the AM band around 1 MHz that you get on an old car radio. If you just want to listen to the time pips, all this extra stability is not needed; you just need to keep the radio tuned to the station. Graeme Bartlett 20:38, 8 November 2007 (UTC)[reply]
I think the answer is that the off-air frequency references have excellent long-term stability, but are subject to the vagaries of radio reception such as noise, timing uncertainty due to reflections from the ionosphere, unwanted amplitude modulations, breaks in reception, etc. OTOH, local crystal oscillators can be made to have excellent short-term stability and low phase noise, but are subject to long-term drift from component aging. Put the two together and get the best of both worlds!
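To illustrate that "best of both worlds" idea, here is a minimal simulation sketch of disciplining a clean but slowly drifting crystal oscillator to a noisy but long-term-accurate off-air reference, by averaging the phase comparison over a long window and applying a gentle correction. This is my own illustration, not something from the discussion above, and every parameter (window length, jitter, ageing rate, loop gain) is a made-up value chosen only to show the behaviour:

```python
import numpy as np

f_nominal = 10e6          # 10 MHz local standard
window = 1000.0           # seconds of averaging per correction step
n_windows = 200
gain = 0.5                # apply half of each measured error

rng = np.random.default_rng(0)
xtal_offset = 1e-7        # fractional frequency error: crystal starts 0.1 ppm high
aging = 1e-11             # assumed fractional drift per window (crystal ageing)

history = []
for _ in range(n_windows):
    xtal_offset += aging
    # Phase slip against the reference over the window, in carrier cycles,
    # plus a couple of cycles of propagation/receiver jitter on the comparison.
    phase_cycles = xtal_offset * f_nominal * window + rng.normal(0.0, 2.0)
    # Average fractional frequency error inferred from that phase slip.
    measured_offset = phase_cycles / (f_nominal * window)
    xtal_offset -= gain * measured_offset          # steer the crystal
    history.append(xtal_offset)

print(f"fractional frequency offset after disciplining: {history[-1]:.2e}")
```

The long averaging window is what does the work: the reference's second-to-second jitter is averaged away, while its long-term accuracy is what the crystal ends up following.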

Minerals

Is it possible to make a mineral (in this case a molecule containing iron) magnetic by running an electric current through it, or around it? I refer here to the specific mineral asbestos, or one of its several 'subspecies'. 76.182.3.188 01:56, 8 November 2007 (UTC)[reply]

Very few minerals can be magnetized in a way such that they remain magnetic after the inducing field is removed. See Magnetization. A good many minerals, including some without iron, such as salt, can be made to give a diamagnetic response while the inducing field is present, but I do not think asbestos is one of them (but I'm not certain about that). Cheers Geologyguy 03:03, 8 November 2007 (UTC)[reply]
Most minerals do not conduct, so you cannot easily run a current through them. In the case of asbestos, it is a good insulator, so it will not conduct at any reasonable voltage. If you ran a current around it you would have an electromagnet. The atoms in the mineral would respond in some way, but most have no strong response. A few iron minerals may respond with ferromagnetism and even be magnetised, as in magnetite. Graeme Bartlett 05:52, 8 November 2007 (UTC)[reply]
See Lodestone, a mineral (Fe3O4) some samples of which are found in the ground as natural magnets. The Wikipedia article does not say it, but other sources say the magnetism may result from lightning striking the mineral [2] [3] [4]. Edison 13:07, 8 November 2007 (UTC)[reply]

microwave hyperthermia

any reason not to heat someone up (their whole body to 104 F - 107 F) with a very high tech microwave (in order to generate a healing fever)? It penetrates and is the same frequency as cell phones... —Preceding unsigned comment added by 76.168.69.208 (talk) 02:02, 8 November 2007 (UTC)[reply]

Microwaves do not heat everything uniformly, and in particular they will heat organs with a high water content more rapidly. Of special concern are the eyes where high levels of microwaves can promote cataracts and other damage.
On the more general point, I'm not sure that heat is necessarily helpful in fighting disease. Fever is one of a myriad of reactions the body produces to combat disease, but an externally produced hyperthermia might be more detrimental than helpful since it wouldn't necessarily be accompanied by the same suite of immune responses as a natural fever. Dragons flight 02:25, 8 November 2007 (UTC)[reply]
Also note: Wikipedia does not give medical advice. Anyway, the Hyperthermia article states that temperatures above 104 degrees Fahrenheit are "life threatening". I wouldn't recommend it either, unless you know what you are doing, and I would get a second opinion (from a qualified professional) either way. I also have never heard of inducing a fever to heal by this method (or any), as fevers are caused naturally by the body as part of the immune response. SmileToday☺(talk to me , My edits) 02:28, 8 November 2007 (UTC)[reply]
He might have gotten this from last week's episode of House, in which a portion of a treatment for an individual was artificially raising his body temperature. Someguy1221 02:39, 8 November 2007 (UTC)[reply]
Microwave heating of smaller regions of the body to fever-range temperatures has been tested for various therapeutic purposes. If you're envisioning putting the entire body in a microwave oven (even a high-tech one) to heat the entire patient at once, you're out of luck. Per Dragons flight, you would get dangerous local heating effects that are very difficult to control. There are other, lower-tech methods that are just as effective. Where I have seen microwave heating employed is to do rapid, local thermal ablation of smaller volumes—a microwave antenna is inserted into a solid tumour, and the temperature elevated high enough to 'cook' the tissue.
If you go to ClinicalTrials.gov and search on the term hyperthermia, you'll find a number of trials – mostly for cancer – that are testing the use of whole-body hyperthermia as a way to sensitize the body to radiation or chemotherapy or to potentiate the immune system's response to malignant tissue (e.g. [5], [6], [7]). Techniques that have been used to achieve hyperthermia include induction heating, warm wax immersion, hot water blankets, and radiant infrared heating. Patients under general anaesthesia can also be treated using extracorporeal hyperthermia—blood can be drawn from the body, warmed externally, and returned to circulation. TenOfAllTrades(talk) 12:45, 8 November 2007 (UTC)[reply]
I'll note that in non-medical contexts, there have been various suggestions to replace a home's heating system with a (low powered) microwave generator. Instead of heating the air, you heat the body directly. Supposedly, this would save on energy costs. A quick Google search turns up [8]. -- 20:25, 8 November 2007 (UTC) —Preceding unsigned comment added by 128.104.112.105 (talk)

Why is water transparent?

Why is water transparent? I did some searching, and the answer I found is that water is transparent to the visible spectrum of light. But why does it exhibit this property? Does it have something to do with its hydrogen bonds, which are responsible for so many of its other special properties? Acceptable 02:37, 8 November 2007 (UTC)[reply]

There’s some explanation of this in absorption spectrum. Which frequencies a water molecule can absorb depends on what possible quantum states the molecule has. The frequencies of photons that the molecule can absorb correspond to the possible differences in energy between pairs of states. The ratio of a difference in energy level to the corresponding frequency of light is known as Planck's constant. MrRedact 03:20, 8 November 2007 (UTC)[reply]
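As a short worked illustration of that relation (my own sketch, using standard constants, not something from the original reply), the photon energies across the visible range come out between roughly 1.8 and 3.1 electronvolts:

```python
# Photon energy E = h*f, or equivalently E = h*c/wavelength.
h = 6.626e-34      # Planck's constant, J*s
c = 2.998e8        # speed of light, m/s
eV = 1.602e-19     # joules per electronvolt

for wavelength_nm in (400, 550, 700):          # roughly the visible range
    f = c / (wavelength_nm * 1e-9)             # frequency, Hz
    E = h * f                                  # photon energy, J
    print(f"{wavelength_nm} nm: f = {f:.2e} Hz, E = {E/eV:.2f} eV")
```

Whether liquid water has allowed transitions at those particular energies is what decides whether visible light gets absorbed or passed through.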
I should point out, water is slightly blue. Malamockq 03:28, 8 November 2007 (UTC)[reply]
Consider checking out Color of water. --slakrtalk / 04:32, 8 November 2007 (UTC)[reply]
Plasmon frequency? —Preceding unsigned comment added by TreeSmiler (talkcontribs) 03:15, 9 November 2007 (UTC)[reply]
Eyeballs are mostly made of water. This would affect what light could be seen.Polypipe Wrangler 21:24, 13 November 2007 (UTC)[reply]

Tectospinal tract

Is it safe to conclude from this picture that the superior colliculus is connected with the inferior colliculus through the tectospinal tract? Lova Falk 10:29, 8 November 2007 (UTC)[reply]

There are some axons that go from the superior colliculus to various brain locations (tectobulbar axons) but I think the vast majority of the axons in the tectospinal tract first go anterior (ventral) and then cross the midline before descending past the level of the inferior colliculus. A good neuroanatomy textbook will have a figure for the tectospinal tract showing a series of brain cross-sections, one at the level of the superior colliculus, one at the level of the inferior colliculus and several more going down to the spine. This is the best I could find online (and it is not very good). --JWSchmidt 18:51, 8 November 2007 (UTC)[reply]

Point and shoot digital camera

Is it true that camera manufacturers deliberately introduce shutter lag into point-and-shoot digital cameras to "encourage" their clients to buy the much, much more expensive DSLR cameras instead? 220.237.184.66 12:06, 8 November 2007 (UTC)[reply]

Unlikely, as not all point-and-shoot manufacturers have a DSLR in their lineup. One reason that comes to mind is that contrast-detection autofocus is slower than phase-detection autofocus. It's also possible that "something" has to be done to the CCD or CMOS sensor before taking the shot, if the sensor has been used for a live preview. For example, CCDs may accumulate charge that needs to be cleared. Since DSLRs generally do not have a live preview, they can keep the sensor ready to shoot. Maybe people know of other factors that I'm not thinking of. -- Coneslayer 13:31, 8 November 2007 (UTC)[reply]
Point-and-shoot cameras are slower because they use contrast detection rather than phase detection autofocus, and because they usually have a less-powerful focus motor to increase battery life. You can test this by pre-focusing a point-and-shoot: once it's focused, it's actually faster than a DSLR at taking the picture, because the DSLR needs to get the viewfinder mirror out of the way. --Carnildo 22:46, 8 November 2007 (UTC)[reply]

Inelastic collision

About inelastic collisions: as we know, in an inelastic collision the initial and final momentum and the total energy are conserved, but kinetic energy is not conserved. Why is kinetic energy not conserved? Please explain with the example of a collision of cars. Also, given that after the collision the two cars come to rest, how are the initial and final momentum the same, since they were moving with a speed before the collision? Thanks ........usman —Preceding unsigned comment added by Star33 2009 (talkcontribs) 13:15, 8 November 2007 (UTC)[reply]

It may be useful to compare inelastic collisions with elastic collisions, whereby kinetic energy is conserved (see the first two sentences of that article, then contrast with the lede in inelastic collision). As for the car collision, bear in mind that physics is not nearly so concerned with "speed" as it is with velocity, and consider that the momentum of the two-car system is what's being conserved, not the momentum of the two individual cars. — Lomn 14:00, 8 November 2007 (UTC)[reply]
I actually ran a little experiment on this. Kinetic energy is always conserved as a physical law: if it's not, you haven't included everything in the system. So we have to know where the energy of collision goes. I modelled this with a spring system, as springs are a good approximation for any interaction and well-established in the interaction of particles. So then we have energy initial = energy final, or
1/2 m1 v1i² + 1/2 m2 v2i² = 1/2 m1 v1f² + 1/2 m2 v2f² + E_spring.
Now note that the energy of the spring includes both the spring potential energy, 1/2 k x², and the kinetic energy of particles 1 and 2 oscillating against the spring. It is what is called a simple harmonic oscillator. So now
E_spring = 1/2 k x² + (the kinetic energy of the two masses oscillating on the spring),
and so energy is conserved. The spring model has some consequences: it implies both that there are oscillations between the two particles after collision and that there is a non-zero collision time with finite acceleration. In the intro physics lab which I TA, the data showed the finite acceleration which agreed to good approximation with the theoretical acceleration of the spring (a linear differential equation - if you want me to show you how it is solved, please let me know), but no oscillations after collision. This is because the oscillations are quickly damped out as heat (another differential equation), which should also be measurable with a calorimeter, but I haven't tried this experiment. Energy is still conserved, as heat is another form of energy for which we can account.
For your second question, note that momentum is a vector quantity, so it has both magnitude and direction - that should put you on the right track. SamuelRiv 14:11, 8 November 2007 (UTC)[reply]
Momentum is a vector - but kinetic energy isn't.
If you crash two very stiff objects together (two billiard balls, for example) then they bounce off and the sum of the kinetic energies of the two balls is almost exactly the same before as after. When you whack two cars together, the energy is absorbed in bending and tearing metal and plastic - so they don't bounce off much - there is no kinetic energy left; it all turned into heat. If you bounce two rubber balls together, the result is somewhere between the two extremes. KINETIC energy is not conserved in real-world collisions - but TOTAL energy is. SteveBaker 03:38, 9 November 2007 (UTC)[reply]
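As a small worked example of that point (the masses and speeds here are my own illustrative numbers): in a perfectly inelastic head-on crash between two identical cars, the total momentum before and after is zero, so momentum is conserved even though all of the kinetic energy ends up as heat and deformation.

```python
def perfectly_inelastic(m1, v1, m2, v2):
    """Two bodies that lock together: momentum conserved, kinetic energy not."""
    v_final = (m1 * v1 + m2 * v2) / (m1 + m2)          # conservation of momentum
    ke_before = 0.5 * m1 * v1**2 + 0.5 * m2 * v2**2
    ke_after = 0.5 * (m1 + m2) * v_final**2
    return v_final, ke_before, ke_after

# two 1500 kg cars in a head-on crash, each doing 15 m/s in opposite directions
v_f, ke_b, ke_a = perfectly_inelastic(1500, 15.0, 1500, -15.0)
print(f"final velocity: {v_f} m/s")                         # 0.0: the wreck stops
print(f"kinetic energy lost to heat and deformation: {ke_b - ke_a:.0f} J")
```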

camera pixel resolution

What is the engineering standard for stating the number of pixels that make up a camera? Is it the actual number of individual sensors, i.e., the number of rows of sensors times the number of columns of sensors on the chip, or the number of different areas of a picture that are focused onto a single sensor or a small group of individual sensors in sequence to build the whole picture? Clem 13:54, 8 November 2007 (UTC)[reply]

Megapixel ratings are just a description of the number of sensors. The fact that more goes into a good picture than that rating alone is one of the reasons the rating system is seen as being somewhat deceptive. As the page points out, in cameras this is even more deceptive, since each sensor generally registers only one color, and so the final image resolution can easily be a third less than the MP rating. --24.147.86.187 15:03, 8 November 2007 (UTC)[reply]

Here is an example of a sensor with three layers, one for red, one for green and one for blue,[9] along with the PowerPoint presentation that outlines the old technology and explains this new technology.[10] It is marketed as a 4.5-megapixel CMOS direct image sensor with a maximum picture size of a 1420 x 1064 x 3 matrix, as seen in the HanVision HVDUO-5M digital camera. David D. (Talk) 23:05, 8 November 2007 (UTC)[reply]


The megapixel count on a camera is the number of pixels in the image it produces. The actual number of light-sensing elements depends on the sensor technology used and the relative influences of the marketing and engineering departments. Sensors using the Bayer pattern and the related CMYK pattern will typically have as many single-color sensor elements as there are pixels in the output image. Cameras using the Foveon sensor pattern can have one-third as many full-color sensor elements as there are pixels; cameras using Fuji's Super CCD pattern have one-half as many single-color sensor elements as there are pixels. Cheap high-megapixel cameras will use a small sensor and scale the image up: a 20-megapixel camera might use a 5-million-element sensor and use interpolation to produce an image with more pixels. --Carnildo 23:35, 8 November 2007 (UTC)[reply]

Old cars carbon emissions

How does the average carbon output of a 1980 1.4l petrol engine compare to a recent one? I heard statistics saying that 20% of the oldest cars represent 60% of total emissions. Is this correct? In view of these statistics some countries want to tax (or are already doing it) old cars. If we add the carbon output of producing a new car and disposing of the old one to the equation, is it still so favorable to buying a new car vs. keeping an old one running longer? Should we add to this the carbon output necessary to produce enough wealth to buy the new car or is this irrelevant? Thank you. Keria 14:06, 8 November 2007 (UTC)[reply]

Please be sure to distinguish pollution from carbon emissions. Older engines emitted far, far more pollution (unburned hydrocarbons, carbon monoxide, and the like) but the amount of carbon ultimately emitted is a strict function of the fuel economy of the car. Your 1980 1.4L car is probably emitting much less carbon per mile than my 2003 4.2L Audi (which averages about 25 miles/US gallon).
Atlant 17:20, 8 November 2007 (UTC)[reply]
You only have to compare fuel consumption numbers - that's a pretty fair comparison because the amount of CO2 you get out of a gallon of gas is about the same no matter how you burn it. My 2007 MINI Cooper S with a 1.6l engine manages 7.5l/100km city and 5.3l/100km highway. A 1986 Honda Integra (pretty similar in size) also had a 1.6l engine and managed 7.8l/100km city, 6.8l/100km highway. So what gives? We don't do much better now than we did then...well, the MINI manages 170hp - the Integra managed about 108hp. What's happening is that we're getting more power out of engines than we used to. So whilst the consumption of a 1.6l engine isn't much better than it always was - we're able to stick a 1.6l engine into a car that would have required a 2.5l engine 20 years ago. Having said that, my old 848cc 1962 Mini manages 5.1l/100km on the highway - fractionally better than the 2007 version - but the 1962 car only has 37hp and could only manage a top speed of 72mph - the 2007 car goes MORE THAN TWICE AS FAST on the same amount of gas. SteveBaker 03:24, 9 November 2007 (UTC)[reply]
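To put rough numbers on that comparison (a sketch of my own; the figure of about 2.3 kg of CO2 per litre of petrol is a standard approximation, not a number quoted above):

```python
# Convert fuel consumption (litres per 100 km) into grams of CO2 per km,
# assuming roughly 2.3 kg of CO2 per litre of petrol burned.
CO2_PER_LITRE_KG = 2.3

def co2_g_per_km(litres_per_100km):
    return litres_per_100km * CO2_PER_LITRE_KG * 1000 / 100

for car, consumption in [("2007 MINI Cooper S (highway)", 5.3),
                         ("1986 Honda Integra (highway)", 6.8),
                         ("1962 Mini 848cc (highway)", 5.1)]:
    print(f"{car}: about {co2_g_per_km(consumption):.0f} g CO2/km")
```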

Ladder to Space

There's this infinitely tall ladder in my backyard. When I climb three or four feet and let go of the rungs I obviously fall back down to the ground. What is the minimum distance I would have to climb so that when I let go of the rungs I would never fall back down to the ground? (I think the answer is 22,240 miles, where I would join all the geostationary artificial satellites of the Clarke Belt, but I'm not sure. Orbits, and rocket science in general, confuse the bejeezus out of me.) Sappysap 14:08, 8 November 2007 (UTC)[reply]

You might want to check out Space elevator. -- Coneslayer 14:17, 8 November 2007 (UTC)[reply]
It'd be geostationary orbit altitude, but only if you're also at the equator. Climbing a ladder ascending vertically from New York will not put you in a stable orbit at that altitude; you'd have to go higher. Climbing a ladder from the North Pole confers you no velocity and you'll drop from any altitude. I'm guessing the tangent of latitude is relevant to exactly how high you've got to go, but I'm not sure. — Lomn 14:24, 8 November 2007 (UTC)[reply]
Irrelevant Post Ahead: I just have to say, that Space Elevator article is extremely fascinating. Beekone 14:49, 8 November 2007 (UTC)[reply]
That is true. See Coriolis effect. We can calculate the height needed to get you in orbit - the thing about a ladder is that it's rigid, so your angular velocity at the top of the ladder is equal to the angular velocity of the earth at that latitude, so your orbit will only occur at a geosynchronous distance: see Geosynchronous orbit derivation. SamuelRiv 14:52, 8 November 2007 (UTC)[reply]
Where's a mathematician when you need one? Say your ladder is at 40 degrees north latitude. You will have to climb up to where your speed is the geostationary orbital speed. This will be higher than if your ladder were at the equator. When you let go, you will move toward the earth until you reach the geostationary altitude where you'll stay until perturbations mess things up. I'm sorry, but I don't feel like deriving the formula for all that, the relationship between angular velocity, altitude and latitude. Where's that mathematician? There are guys who can do this sort of thing in their head. --Milkbreath 17:13, 8 November 2007 (UTC)[reply]
Perhaps they're all here?
Atlant 17:17, 8 November 2007 (UTC)[reply]
Right. I had a chance to think about this just now while watching a Labrador retriever do his business (hardly a Newtonian anecdote, eh?). The only way you'll be able to let go of the ladder and just stay there is if the ladder is at the equator. So, the nerds are off the hook. (I suspect that the answer to my pointless question above is a straight line tangent to the earth at a geostationary point above the equator.) --Milkbreath 17:41, 8 November 2007 (UTC)[reply]
Not quite - there are important consequences when you get far enough out where the rotation of the Earth is too fast for gravity and you get vertical components of the coriolis effect. Integrate across the length of the ladder accounting for atmospheric effects and you'll get a mess. The ladder itself needs a counterweight to make it stable, which is why the space elevator needs such a large counterweight mass. Oh, and the math for latitude at angle phi just needs a sine term, sin(phi), to multiply through. See the formula at Coriolis effect. SamuelRiv 20:43, 8 November 2007 (UTC)[reply]
Blaise Gassend computed whether, where and how hard an object falling from any level on an equatorial elevator will hit the ground. —Tamfang 18:49, 9 November 2007 (UTC)[reply]

Answering the actual question

We were not asked how high you would have to climb on the ladder in order to enter geostationary orbit. We were asked how high you would have to climb in order to never fall back down to the ground. Which means that any stable orbit would do.

Since the atmosphere has no sharp outer boundary, there is no specific altitude above which you have to orbit in order for orbital decay due to atmospheric drag to become negligible. However, I will assume for simplicity that a distance of 8,000 km above the Earth's center (that's roundly 1,600 km or 1,000 miles above the surface) is what you need to achieve. So you want to be in an orbit with its perigee at that distance and its apogee at the point where you jump off the ladder.

The answer clearly depends on your latitude. If the ladder is at the North or South Pole, it doesn't matter how high you climb: the ladder is simply rotating around its own axis and that doesn't put you in orbit, so you'll always fall back to Earth. But if the ladder is at latitude L and you climb to a distance A above the Earth's center, then you are moving horizontally in a circle of circumference 2 pi A cos L, completing one circle per sidereal day. From here on I'll suppress units; numerical values are based on distances in km, times in seconds, speeds in km/s, etc. Then your speed is V = KA where K = (2 pi/86164) cos L.

Now, using the formulas on this page with some changes of variable names, the orbit's apogee and perigee distances A and P (from the center of the Earth), and the apogee speed V, are related by

V² = G M (2P/(A(A+P)))

or in other words

V² A (A + P) = 2P G M

where G is the gravitational constant and M is the Earth's mass, and the value of GM is known to be 398,600. But in this case we also know that V = KA, so we have

K²A³ (A + P) = 2P G M

and since all the other values are known (for a specific latitude), we merely have to solve this for A. As it reduces to a quartic equation, this is not easily done by algebra, but it can be solved numerically by a simple computer program.
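
A minimal sketch of such a program (my own illustration, using only the quantities already defined above; the function and variable names are mine):

```python
import math

GM = 398600.0            # km^3/s^2
P = 8000.0               # target perigee distance from the Earth's centre, km
SIDEREAL_DAY = 86164.0   # seconds

def apogee_distance(latitude_deg):
    """Solve K^2 * A^3 * (A + P) = 2*P*GM for A by bisection."""
    K = (2 * math.pi / SIDEREAL_DAY) * math.cos(math.radians(latitude_deg))
    target = 2 * P * GM / K**2              # A^3 (A + P) must equal this
    f = lambda A: A**3 * (A + P) - target
    lo, hi = P, 1e6                         # bracket the root, in km
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# approximate surface radii, used only to turn A into a "climb" figure
for lat, radius in [(45.0, 6370.0), (0.0, 6378.0)]:
    A = apogee_distance(lat)
    print(f"latitude {lat:.0f} deg: jump off at A = {A:,.0f} km from the centre, "
          f"i.e. climb about {A - radius:,.0f} km")
```

For the two cases worked out below, this gives an A of roughly 37,500 km at latitude 45° and roughly 31,260 km at the equator.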

For example, suppose your backyard is at latitude L = 45°. Then we have K = (2 pi/86164) cos L = 0.000051563 and K² = 2.6587e-9. We are assuming P = 8,000, and we know GM = 398,600. So 2GMP = 2 x 8,000 x 398,600 = 6.3776e9, and we have

(2.6587e-9) A³ (A + 8,000) = 6.3776e9

so

A³ (A + 8,000) = 2.3987e18

with the numerical solution that A = 37,500 km to 4 significant digits. Taking the Earth's radius at your latitude as 6,370 km, the answer is that you would have to climb about 31,130 km or say 19,350 miles in order to jump off and reach a stable orbit with the perigee mentioned above.

If your backyard is on the equator, you're in a much better position. In that case K² is larger by a factor (cos 0 / cos 45°)², which conveniently is exactly 2, which makes A³ (A + 8,000) smaller by the same factor, i.e.

A³ (A + 8,000) = 1.1994e18

This has the numerical solution A = 31,260 km. If your ladder is at the equator, you must climb a mere 24,880 km or 15,460 miles to jump into a stable orbit. And if you wanted to enter a geostationary orbit (not a bad idea if you ever intend to come back down using the ladder!), then on the equator it would be possible by climbing to the height mentioned above. Anywhere else, of course, it would not be.

--Anonymous, edited 04:49 UTC, November 9, 2007.

The effect of stars on Earth

What if a genie were to withdraw all of the stars in the universe except the sun, and the photons in transit to Earth were taken away as well. Would there be a gravitational effect? Would there be a climate change on Earth? Essentially, do the stars in the night sky play any describable role in Earth's affairs? —Preceding unsigned comment added by 150.167.179.111 (talk) 17:00, 8 November 2007 (UTC)[reply]

Well, stars provide a useful amount of light on a clear night (see night vision), but I think there's no other routinely discernable effect. The opinions of astrologers will differ, of course.
Note: It is thought that a sufficiently close supernova would emit enough gamma radiation to toast us all, but I think we all hope that won't occur any time soon, so I'm excluding that as a current effect.
Atlant 17:11, 8 November 2007 (UTC)[reply]
The stars are all far enough away that gravity is not an issue; starlight doesn't make up an appreciable amount of radiation reaching Earth so it shouldn't have any effect on the temperature, climate, etc. My bet would be "no". The stars don't play any real physical role in Earth's affairs. --24.147.86.187 18:57, 8 November 2007 (UTC)[reply]
For that matter, said genie could remove everything but Earth/Moon/Sun and we'd see no appreciable difference apart from the view in the nighttime sky. — Lomn 19:10, 8 November 2007 (UTC)[reply]
In terms of climate, the effect of the loss of every single star is essentially nil over any short or medium time scale. (Over an extremely long period of time – hundreds of millions of years – there's the risk Atlant notes of a nearby supernova explosion.) This web page talks about using single stars as light sources of known, carefully-measured intensity for evaluating the sensitivity of digital camera sensors. Interestingly, it also provides the relative illumination provided by starlight (0.001 lumens per square meter) compared to full daylight (10,000 lumens per square meter). Making the reasonable assumption that most of the energy we receive from starlight will be at visible and near-visible wavelengths, distant stars contribute less than a millionth of the incoming radiation to Earth.
As for gravity, the effect is again negligible unless a massive star passes extremely close to the Earth-Sun system. (This would be a very rare event.) Since gravitational force follows an inverse square relationship, a star the size of the Sun only one light year away will pull on the Earth less than one-billionth as strongly as the Sun does. TenOfAllTrades(talk) 19:52, 8 November 2007 (UTC)[reply]
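For concreteness, here is a small sketch of those two ratios (my own illustration, using standard values for the astronomical unit and the light year plus the illumination figures cited above):

```python
AU = 1.496e11          # metres
LIGHT_YEAR = 9.461e15  # metres

starlight_lux = 0.001
daylight_lux = 10_000
print(f"starlight / daylight illumination: {starlight_lux / daylight_lux:.1e}")

# Gravity from a Sun-like star one light year away, relative to the Sun at 1 AU:
# same mass, so the ratio is just the inverse square of the distance ratio.
ratio = (AU / LIGHT_YEAR) ** 2
print(f"its pull on the Earth relative to the Sun's: {ratio:.1e}")
```

This prints about 1e-07 for the light and about 2.5e-10 for the gravity, consistent with the "less than a millionth" and "less than one-billionth" figures above.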
On the other hand, the effect could be catastrophic. See Ice_age#Causes_of_ice_ages. While I don't believe this theory, just remember that we are in some kind of orbit around a galactic center, and therefore there is a very real gravitational effect on the Earth and Solar System. The planets and outer solar system all have enormous influence regarding the slinging of comets and asteroids into Earth's path, and there are clear measurable gravitational effects from Venus, Mars, and Jupiter. The moral of the story is that in a chaotic system like climate and ecology, you cannot just ignore the small variables. SamuelRiv 20:28, 8 November 2007 (UTC)[reply]

On the one hand, the physical effects on the planet are described above. On the other hand, there are the potential effects on society and humanity. Consider the espers' trick on Ben Reich in The Demolished Man, multiplied by six billion, or the end of "Nightfall" in reverse... the sudden disappearance of every single star in the universe would be quite traumatic. The most calm would be rightly troubled, and the least calm would revert to base fears, thoughts of religiously inspired (or even very real) armageddon. When people panic on large scales, Bad Things Happen. If everyone in the world is exposed to the same instantaneous trauma (and even something as simple as the stars disappearing can be quite affecting, I'd imagine), it's a safe bet that human society as a whole would have a fairly hard time coming through and recovering from such a scenario. --Jeffrey O. Gustafson - Shazaam! - <*> 10:54, 12 November 2007 (UTC)[reply]

growing crystals of copper sulfate

how would you make a hot, concentrated solution of copper sulfate? —Preceding unsigned comment added by 86.42.210.0 (talk) 17:24, 8 November 2007 (UTC)[reply]

You would boil some water (perhaps in a kettle), pour it in a heat resistant glass or ceramic container, then add copper sulphate crystals and stir. Do not use aluminium or steel containers as copper will plate on to their surfaces. Then you decant the solution, leaving any undissolved stuff behind. Commercial copper sulphate probably has ferrous sulphate as well, so it may not be pure. As the solution cools you will get a growth of crystals. If you can hang a little crystal from a thread, you can make it grow into a big crystal. Other experiments you can do with copper sulfate solution are: add ammonia to get a dark blue solution which can dissolve cotton, add biuret to get a different dark blue solution, add a base like sodium bicarbonate to make a precipitate. Graeme Bartlett 20:14, 8 November 2007 (UTC)[reply]

Viruses

A virus is DNA with a protein coat protecting it. Can a virus be destroyed if the protective protein coat is damaged so that it can't protect the DNA anymore? If yes, can an enzyme such as a protease be used to digest the protein coat, thus destroying the virus? —Preceding unsigned comment added by 212.71.37.97 (talk) 18:04, 8 November 2007 (UTC)[reply]

Proteases generally act, well, in a general way. They either consume a protein at one of its ends, or they cleave proteins at specific amino acids. So as you can see, a protease based antiviral measure would cause quite a bit of collateral damage if used to "carpet bomb" infected tissue. It could destroy or inactivate the virus; however, some viruses are even evolved with this in mind, and cleavage of viral proteins upon entering the cell can actually activate the virus (I can't for the life of me remember what article this is in, but it came up in a previous ref desk question I can't find in the archives). Far easier to target them with antibodies. Someguy1221 19:39, 8 November 2007 (UTC)[reply]
Something like http://dx.doi.org/630030? DMacks 21:19, 8 November 2007 (UTC)[reply]
Also, your definition of "virus" is not quite right. There are many RNA viruses. And the RNA of many of those viruses can infect cells as "naked RNA"; no protein coat is required for the virus to infect cells and produce new virions. - Nunh-huh 21:18, 8 November 2007 (UTC)[reply]

ecosystem

can you show me a picture of an ecosystem (example) that a 4th grader could use to help them do a project? —Preceding unsigned comment added by 72.18.102.36 (talk) 20:16, 8 November 2007 (UTC)[reply]

I like the images and text here. But there were other suitable examples when I did a google image search. Man It's So Loud In Here 21:08, 8 November 2007 (UTC)[reply]

nuclear energy: a given

Let's face it: we are running out of oil. The federal "government" doesn't have a plan for the event known to the public (or is the coal industry now going to move in for what it's worth?). We are in a bad spot, so we quickly fall back to our former nuclear technology, which could be a rescue except for the problem of "spent" nuclear fuel. So has there been any design for a facility that can "speed up the process of nuclear decay" of spent fuel on-site? Can a half-life be made into a quarter-life? There's energy there. LShecut2nd 23:29, 8 November 2007 (UTC)[reply]

Radioactive decay is a quantum process and as far as I know there's no way to affect it one way or another. That being said, there are plenty of other ways that one can imagine dealing with the waste problem. Unfortunately the stakes are quite high and the need for government intervention quite high as well, so as a result it is a rather toxic bureaucratic issue, so to speak, and progress has been pretty slow and problematic. --24.147.86.187 00:21, 9 November 2007 (UTC)[reply]
Breeder reactors are one technology that affects the quality and quantity of waste by transmuting some of the waste into other substances. The drawback is that this technology produces plutonium which is much easier to use as the core of a nuclear bomb than ordinary reactor materials. Dragons flight 01:39, 9 November 2007 (UTC)[reply]


It is a fundamental mistake to say that we are running out of oil. If (hypothetically) we were to continue consuming it at the rate we do now then we would indeed run out in 40 to 150 years (depending on the economics of pulling oil from sands and shales as the price inevitably rises). However, if we burn oil at this rate for even 20 more years, the planet will die. So given that we don't intend to kill the planet - we WILL cut our consumption. So - the problem remains - somehow we have to stop using oil. Certainly there are alternatives - nuclear isn't wonderful - but the difficulties of safely storing nuclear waste are a lot more tenable than the problems of collecting and storing millions of tons of CO2 gas. Nuclear waste will trash the environment wherever we put it - but it's a lot better than trashing the entire planet. So let's pick a place (right in the middle of a desert someplace might be good) and dump all of the stuff there. Sure, the consequences will be nasty - but a heck of a lot less than melting ice caps, rising sea levels, increasing human misery - annihilation of some terrifying percentage of the species of life.
But you can't speed up the rate that the low level waste decays. Sure, it contains energy - but at a level that can't be economically recovered. There really isn't much you can do but let it decay. The plan ought to be to use nuclear as an emergency stop-gap. We URGENTLY need to get away from fossil fuels - and the efforts with wind, solar, wave, etc really aren't cutting it - and the one renewable resource that did anything (hydroelectric) has proven to be a problem: the dams silt up and the downstream environment suffers...argh! So we need to build a bunch of nuclear power plants - shutting down the coal, oil and gas plants as we do so. Then research - LOTS of SERIOUS research. Whatever happened to huge orbiting solar power plants with microwave downlinks? Why aren't there VAST numbers of windmills everywhere? Fusion power - perpetually "25 years away" from getting something working...we need a 'Manhattan Project' for fusion. For vehicles, we have other problems - cheap electricity could solve it - but I think ethanol may be the more likely answer.
SteveBaker 02:42, 9 November 2007 (UTC)[reply]
It is quite easy to speed up nuclear decay. First, you need to chemically separate the waste into elemental components, since they must be treated differently. Then, use a neutron source to bombard the correct set of your elements. You can get the neutrons from a fusion reactor. The process can be engineered as a net power source. None of this can be done today: the science is there, but the engineering is not. This is however a way that our grandchildren can eliminate the low-level nuclear waste that we must generate to avoid killing ourselves with oil. -Arch dude 03:04, 9 November 2007 (UTC)[reply]
Well, have a look on Fusion power and ITER. Thermonuclear reactors are the next-generation-nuclear-device which would produce far less waste while consuming heavy water (D2O, more precisely, heavy isotopes of hydrogen: deuterium and tritium), i.e. they will be able to provide Humankind with cheap clean energy. —Preceding unsigned comment added by 62.63.76.14 (talk) 09:51, 9 November 2007 (UTC)[reply]
Is fusion energy really all that clean? The fusion reactor itself becomes highly radioactive. "...most of the radioactive material in a fusion reactor would be the reactor core itself, which would be dangerous for about 50 years, and low-level waste another 100. By 300 years the material would have the same radioactivity as coal ash..." If fusion energy became viable on a world scale to power everything, how bad of a nuclear waste problem would we have? Sappysap 14:24, 9 November 2007 (UTC)[reply]
Probably very little; there isn't anything important under the Sahara and Gobi deserts, or the Great Basin of Nevada. These would hold massive quantities of waste. —Preceding unsigned comment added by 98.196.46.72 (talk) 16:12, 11 November 2007 (UTC)[reply]
Also see Integral Fast Reactor, and this Q&A here. Too bad the project was killed, eh? grendel|khan 15:36, 9 November 2007 (UTC)[reply]

Reichsbrücke

The Reichsbrücke article does not state why the bridge collapsed, nor whether anybody was found guilty. Does anybody know any details concerning that collapse? Mieciu K 23:58, 8 November 2007 (UTC)[reply]

Following the third link in the References section, I find this paragraph:
Ursachen. Nachdem zunächst Gratz seinen Rücktritt angeboten hatte, übernahm der Wiener SP-Planungsstadtrat Fritz Hofmann die politische Verantwortung für den Einsturz und schied wenige Tage nach der Katastrophe aus dem Amt. Eine Expertenkommission gab kurz darauf bekannt, dass der linke Pfeiler der nach Ende des Zweiten Weltkrieges sanierten Brücke zum Teil mit Sand und "unverdichtetem Beton" gefüllt gewesen war. Durch das schlechte Material sei Wasser eingedrungen, was schließlich zu dem Einsturz führte.
Combining pieces from two machine translations of this (Google Language Tools and Babelfish) and putting the word order into something more like English myself, I figure that this says:
Causes. After Gratz [the mayor] initially offered his resignation, the Viennese SP planning town councillor Fritz Hofmann took political responsibility for the collapse and resigned from office a few days after the disaster. An expert commission announced shortly afterwards that the left pier of the bridge, which had been rehabilitated after the end of the Second World War, had been partially filled with sand and "uncompacted concrete". Water penetrated through the poor material, which ultimately led to the collapse.
--Anonymous, 03:29 UTC, November 9, 2007.