Wikipedia:Reference desk/Archives/Science/2012 November 30

Science desk
Welcome to the Wikipedia Science Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


November 30

Can someone familiar with nuclear physics comment on whether polywell fusion reactors are actually feasible or not? To a layman it sounds an awful lot like a free-energy device scam. There's been very little peer review on this since the research team is under a publishing embargo. Dncsky (talk) 00:56, 30 November 2012 (UTC)[reply]

It's not a free-energy device scam — fusion is real physics — but with all fusion programs there are three major hurdles: 1. Can it reach ignition? That is, can it generate more energy through the reactions than it takes to start the reactions? So far none of the methods pursued has done this, with the exception of thermonuclear weapons. 2. Can it produce useful net energy? That is, can you get electricity out of it? This requires both considerable efficiency and, in some cases, elaborate means of extracting the energy from the fusion reaction. 3. Can it produce economically viable net energy? That is, can a fusion plant be economically competitive with the other fuels out there? Another big unknown. The rub is that, as yet, we're still trying to find a method that satisfies #1. NIF was supposed to do it; it hasn't so far and it's not clear that it ever will. ITER is supposed to do it; we'll see. Once someone has found a way to do #1, it should be relatively easy to figure out a way to scale it up for #2. As for #3, it really depends on technical constraints we don't know yet, as well as on issues unrelated to fusion technology (e.g. the price of natural gas).
I'm not a fusion scientist but my read of the Polywell article suggests that the people working on it really think it will work, and that they have been able to convince other experts at funding agencies that it might work. I don't think it's a scam, but that doesn't mean it will actually work. The fusion quest is nothing if not a field of failed dreams. So far. --Mr.98 (talk) 03:25, 30 November 2012 (UTC)[reply]
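To see why hurdle #1 alone doesn't settle hurdle #2 above, here is a minimal back-of-the-envelope sketch in Python; the efficiency figures are illustrative assumptions, not data from any actual fusion program:

```python
def net_electrical_gain(q_plasma, eta_heating=0.5, eta_electric=0.4):
    """Electrical energy out per unit of electrical energy in.

    q_plasma     -- fusion power out / heating power into the plasma (hurdle #1)
    eta_heating  -- wall-plug efficiency of the plasma heating system (assumed figure)
    eta_electric -- efficiency of converting fusion heat to electricity (assumed figure)
    """
    return q_plasma * eta_heating * eta_electric

# Even at scientific breakeven (Q = 1) the plant loses energy overall;
# with these assumed efficiencies it needs roughly Q = 5 just to break even electrically.
for q in (1, 5, 10, 30):
    print(f"Q = {q:>2}: electricity out per unit in = {net_electrical_gain(q):.1f}")
```

The point is simply that satisfying #1 is necessary but nowhere near sufficient for #2, because every conversion step eats into the gain.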
I am familiar with the current state of fusion research and I support every cent of the 10 billion spent on ITER. Regardless of whether it will eventually work, that money still needs to be spent. I am asking because the only publication I can find on the polywell reactor [1] seems to have been discredited more than a decade ago[2][3]. Yet they are still receiving funding from the Navy. Dncsky (talk) 04:27, 30 November 2012 (UTC)[reply]
The "References" list in the article is quite long and includes lots of scientific publications. I guess I'm not understanding why you think there is not much information published on it. The article shows it going through many funding reviews, many publications, and so on. The Navy has given them a few million — not chump change, but not a huge amount by research standards. Again, I can't evaluate the technical merits, but I don't see anything here that's against the laws of physics. The question is just whether it'll work or not, and that's not an easy thing to answer usually without spending some money on it. --Mr.98 (talk) 14:08, 30 November 2012 (UTC)[reply]
The References list is quite long, but it doesn't include a whole lot of peer-reviewed publications; it's mostly an assortment of press clippings, contractor and company 'reports' of various types, and patents. Conceptually, 'polywell' devices are fairly well understood. Like their cousins, the Farnsworth fusors, they're able to confine and fuse small amounts of hot ions. In that respect, polywell devices are 'real' fusion devices that actually 'work'—unlike cold fusion setups, there's no question that fusor and polywell devices really do fuse hydrogen atoms. Also like the fusors, however, polywell devices aren't terribly efficient at it. The 'true believers' think it's possible to eventually engineer around the inefficiencies of the technique and create a workable energy source; most physicists remain (highly) skeptical.
The Navy has a history of being a sucker for pie-in-the-sky fusion research of all flavors; they're one of the few stable sources of funding for cold fusion research as well. TenOfAllTrades(talk) 20:24, 1 December 2012 (UTC)[reply]

What are the global/general/basic ingredients of sand / "regular" earth?

(Sorry for possible mis-terminology; I don't have basic knowledge of geology.) Thanks 79.182.153.70 (talk) 04:34, 30 November 2012 (UTC)[reply]

Sand regular earth? Are you talking about plain old Sand? As our article discusses, the chemical makeup of "sand" is highly variable, as it is defined based on its gross properties rather than its chemical composition. Our article gives some of the more common types of sand and what they are made of. Since you mentioned "earth", you might also be interested in soil. Someguy1221 (talk) 04:42, 30 November 2012 (UTC)[reply]
OK, it's becoming interesting. What do you mean by "it is defined based on its gross properties rather than its chemical composition"? This sentence is very abstract to a layman like me. I ask what the most basic, general, and typical ingredients of sand (and soil) are just as a little boy, a curious boy, would ask about them; I really am totally ignorant in this matter. You are doing holy work by lifting me out of this mud of ignorance (nice example, eh?). — Preceding unsigned comment added by 79.182.153.70 (talk) 06:59, 30 November 2012 (UTC)[reply]
In layman's terms, sand is pretty much any finely ground rock or mineral. If it looks like sand and it feels like sand, it's sand, no matter what it's really made of. In the same way, wood is wood no matter what tree you cut it from. Someguy1221 (talk) 11:17, 30 November 2012 (UTC)[reply]
The most common material is quartz, which would make a white sand. Also common are rock-forming minerals such as feldspar and pyroxene. Other sand may be made from fragments of rock like basalt, or pieces of shell or coral. Black sand may contain ilmenite or particles of wood or charcoal, or be discoloured by iron sulfide. Graeme Bartlett (talk) 10:27, 30 November 2012 (UTC)[reply]
There is a nice table of the chemical composition of the Earth's crust at Composition_of_the_Earth#Chemical_composition. The most abundant substances are silica, alumina, and lime, as far as I know all rock-forming materials (and hence possible sources for sand). --Stephan Schulz (talk) 12:27, 30 November 2012 (UTC)[reply]
Our article mentions quartz is common for sand in 'inland continental settings and non-tropical coastal settings'. It also mentions 'The bright white sands found in tropical and subtropical coastal settings are eroded limestone and may contain coral and shell fragments in addition to other organic or organically derived fragmental material'. There is a bit more useful info; I strongly suggest the OP read it if they haven't already. Nil Einne (talk) 13:47, 30 November 2012 (UTC)[reply]
As noted, 'sand' can be from just about any mineral that is ground by erosion and accumulates in one spot. I strongly recommend a visit to Hawaii to examine their...sand. In addition to 'white' sand (predominantly quartz, most common) you can find black sand (Punalu'u Beach, principally basalt), green sand (Papakolea Beach, colored by olivine), and red sand (Kaihalulu Bay, rich in iron compounds). TenOfAllTrades(talk) 13:50, 30 November 2012 (UTC)[reply]
Are you sure of the first statement? Hawaii would generally be considered a tropical region and most sand there is likely to be coastal, and as I mentioned above (perhaps with an EC) our article suggests silica or quartz is actually often not the predominant material in white sand in such settings. [4] [5] [6] seem to agree that silica or quartz sand is not common in Hawaii. Nil Einne (talk) 14:15, 30 November 2012 (UTC)[reply]
How about extremely small pebbles between X and Y mm diameter? Might it also help to identify what sand is not? E.g. clay and soil, both with significantly more organic and fine powder-like material? 165.212.189.187 (talk) 14:54, 30 November 2012 (UTC)[reply]
If the questioner goes to his neighborhood building center or hardware store and asks for "sand" for his child's sandbox, he is likely to get "play sand." This product has a Material Safety Data Sheet which says it is made of "crystalline silica," which is another name for silicon dioxide. It has been washed to reduce the dust and dirt. The other type of sand he would find in the store is "all-purpose sand," which is darker and contains more dust. It is used for making concrete. Edison (talk) 15:50, 30 November 2012 (UTC)[reply]

Without NASA we wouldn't have computers

Someone told me this, is it correct? ScienceApe (talk) 18:08, 30 November 2012 (UTC)[reply]

No, computers were already around before NASA was even founded. See e.g. the Atanasoff–Berry Computer and ENIAC. - Lindert (talk) 18:18, 30 November 2012 (UTC)[reply]
On the general topic, I have read that entire industries have come out of the space program, because sending rockets into orbit requires engineering designs that can vary by at most one part in 10,000, or something like that. I don't know exactly what we owe to the space program, although I know it isn't computers, and it isn't Teflon. IBE (talk) 18:27, 30 November 2012 (UTC)[reply]
Orange Tang. SteveBaker (talk) 18:43, 30 November 2012 (UTC)[reply]
Certainly NASA didn't produce the first computer...who precisely did depends crucially on your definition of the word "computer"...the Atanasoff–Berry Computer (circa 1942) is the most likely candidate...but there are reasons to say that it doesn't count. Arguably, NASA owned the first small "microcomputer". NASA paid IBM to build a 19" long computer (weighing in at 60 lbs!) in May 1963. It was used by NASA on the Gemini program and had a fairly respectable 16k bytes of memory. However, there is always a problem with "Without A, we wouldn't have B" arguments. Clearly, IBM had the technology to build this thing - so if NASA hadn't paid them to do it, what is to say that six months or a year later, someone else wouldn't have? Since computers were already in fairly widespread use in 1963, it was only a matter of time before someone else built a tiny one. There is no evidence whatever that NASA's machine was widely copied, or that it is somehow the progenitor of all computers that followed it; to the contrary, it was not much more than a dumb calculator with far fewer features than its contemporaries. SteveBaker (talk) 18:43, 30 November 2012 (UTC)[reply]
It seems, according to the Computer page, that the Z3 was the first computer (obviously depending on your definition of computer), in 1941. Dja1979 (talk) 20:20, 30 November 2012 (UTC)[reply]
Here in the UK, everybody knows that Colossus was the first computer. I think we should have a List of computers claimed to be the first computer. Alansplodge (talk) 20:38, 30 November 2012 (UTC)[reply]
To which we could add Charles Babbage's Analytical Engine of 1837, "the first design for a general-purpose computer that could be described in modern terms as Turing-complete." Alansplodge (talk) 20:45, 30 November 2012 (UTC)[reply]
At the Computer History Museum in Mountain View, California, the exhibition hall is set up so that the first items you see are historical computation contraptions dating to pre-history; such artifacts as abacuses and cuneiform tabulations. The exhibition hall progresses forward through more advanced mathematical machines; Babbage engines; punch-cards and time-clocks (other intricate mechanical devices that could perform domain-specific computation); Curta peppermills; and finally, after you've gone through a whole row of historic inventions, you finally see the first of the electronic and electronic-digital-machines that start to resemble what we call a computer. Like any question of history, there is much room for debate and different perspective. You can navigate an online version of the "Revolution" - the first 2000 years of computing, and the Computer History Timeline. The museum used to be free and open to the public; but now charges an admission fee. Nimur (talk) 22:02, 30 November 2012 (UTC)[reply]
Colossus was the first programmable computer. We have History of computing hardware, but it could use some work. --Tango (talk) 01:11, 1 December 2012 (UTC)[reply]
A more plausible argument is that without nuclear weapons, we wouldn't have modern computers. See e.g. ENIAC, Project Whirlwind, SAGE... It's still an historical fallacy (there's no reason to think that the computer wouldn't have been developed and funded for other reasons, and the history of computing is nothing but an endless backwards string of priority arguments over what counts as the "first computer" anyway), but it's more on target than the NASA reference — nukes, their deployment, and attempts to defend against them were much more influential in the short and long terms. --Mr.98 (talk) 00:37, 1 December 2012 (UTC)[reply]

Actually it was the Nazis, not NASA or nukes, that set America on the road to computing. You had not only the big push to break the Axis powers' codes, but also the IBM sales that helped manage the Final Solution. So it was a win-win, of sorts. :-( Hcobb (talk) 00:42, 1 December 2012 (UTC)[reply]

America? Surely you jest. The British were the ones who did the hard work on Nazi cryptography, and the only ones (I believe) who built anything that looked like computers for it. The Americans worked on Japan's and later the Soviets' codes, but they were not terribly consequential with regard to the Nazis (and I'm not too sure of the role of computing, per se, in American cryptological efforts). And while I think the topic of IBM during World War II is of great interest, I think saying that the Holocaust really advanced computing is a bit much. The Hollerith sorting machines, while useful, weren't really computers in the modern sense at all. (Neither, really, was Colossus, but it was a step along the way.) For American computing, nukes played a much bigger role than cryptography, initially. --Mr.98 (talk) 03:27, 1 December 2012 (UTC)[reply]
As has been pointed out before, Americans who read American books etc. think that Americans invented the computer as the term is understood in modern times - i.e., a machine that can be programmed, at any time after commissioning, to do a multitude of unrelated things. And the British, who read British books, like to think that the British invented the computer, even though the code-breaking apparatus and even the later university computing machines do not conform to this meaning. However, Konrad Zuse, a German, beat all of that, having a fully programmable computer in commercial use before any of them. Yep - the Germans were first. See http://en.wikipedia.org/wiki/Zuse. At a time when British and American airplane manufacturers were using "loft computers" (rooms full of dozens of junior engineers doing wing design and stress calculations manually), German airplane manufacturers were using a Zuse computer to do it. Be careful about the claims about the British cracking the German Enigma messages. British media have made much noise about it practically ever since, and good on them. But it was very specialised with little or no commercial value, and the Americans did important work too - they just kept good and quiet about it. Keit 121.221.37.153 (talk) 06:08, 1 December 2012 (UTC)[reply]
" Americans... just kept good and quiet about it" See U-571 (film). Alansplodge (talk) 12:28, 1 December 2012 (UTC)[reply]
I'd argue, without any primary sources, that advances in electronics related to the Vietnam War led to the curve of improving technology and lower prices that made digital electronics so available to all, including personal computers, cell phones, etc. Gzuckier (talk) 03:56, 3 December 2012 (UTC)[reply]
Sorry for jumping in late to the party, but here is an interesting retraction of an article [7] -- I don't know what the title of the original article was, but I've heard the anecdote before that the Apollo guidance computer led to a large demand for early integrated circuits. 192.220.135.34 (talk) 00:51, 5 December 2012 (UTC)[reply]

blocking in experimental design

  Resolved

I understand (more or less) what a block is in experimental design, but I don't get what the authors are talking about here when they say "In addition, although blocking subjects on initial interest in the target activity of course eliminated any between-groups differences in this variable,..." What is this "blocking subjects" referring to? In a block design, I thought the blocks were supposed to be arranged in advance, and furthermore, our article states "A nuisance factor is used as a blocking factor if every level of the primary factor occurs the same number of times with each level of the nuisance factor". This doesn't look like something you could arrange after the experiment. What's going on? IBE (talk) 18:23, 30 November 2012 (UTC)[reply]

I'm not sure what is confusing you. The article describes the blocking and the assignment of subjects to groups as arranged before the experiment, as far as I can see (p. 131, upper left). Looie496 (talk) 19:30, 30 November 2012 (UTC)[reply]
Perhaps it's the use of the word "subjects", meaning "people on whom we experiment" versus the more common meaning of "topics" ? StuRat (talk) 03:35, 1 December 2012 (UTC)[reply]
No, I had just missed the bit that Looie pointed out. I did read most of the article, but I would have skimmed over that bit, and my inexperience confused me. I should have at least done a ctrl-f, because the article is searchable. I was just thrown by something that seemed to come out of left field. Thanks for pointing it out. IBE (talk) 08:22, 2 December 2012 (UTC)[reply]
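For later readers of this archive, the kind of design discussed above can be made concrete with a small sketch. Here is a minimal illustration of blocked random assignment in Python; the subject IDs, pre-test scores, and condition names are all hypothetical, not taken from the paper under discussion:

```python
import random

# Hypothetical pre-test scores: subject -> initial interest in the target activity
pretest = {"s1": 7, "s2": 3, "s3": 9, "s4": 2, "s5": 8, "s6": 4}
conditions = ["treatment", "control"]

# Rank subjects on the blocking variable, then cut the ranking into blocks
# whose size equals the number of conditions.
ranked = sorted(pretest, key=pretest.get, reverse=True)
size = len(conditions)
blocks = [ranked[i:i + size] for i in range(0, len(ranked), size)]

# Within each block, randomly assign one subject to each condition, so every
# condition receives one subject from each level of initial interest.
assignment = {}
for block in blocks:
    for subject, condition in zip(block, random.sample(conditions, size)):
        assignment[subject] = condition

print(assignment)
```

Because every block contributes exactly one subject to each condition, the groups are balanced on the blocking variable by construction, which is what the sentence quoted in the question is pointing out.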

Electrical Properties of Tubes

What if instead of wires we were to use tubes (of copper, say, for indoor wiring)? Would there be any advantages/disadvantages, and are there any special electrical properties of such a configuration? It seems as if the current would travel mostly on the outer surface, but I'm just guessing, honestly. 66.87.126.32 (talk) 20:51, 30 November 2012 (UTC)[reply]

I believe this was done, in some places, specifically with the tube carrying either positive or negative, with a regular insulated wire inside carrying the opposite charge. There are several disadvantages, though:
1) Not flexible, so much harder to install and maintain, especially where bends are needed.
2) Takes up more space.
3) Requires more electrical insulation.
4) Since it's uncommon, people might not realize it's carrying current, and be electrocuted.
5) Access to the interior wires is more difficult.
Using the tube as ground/earth with both positive and negative insulated wires inside makes more sense, especially out-of-doors, where the tube provides additional protection from the environment, for the wires. The tube might also function as a structural support, say when using a flagpole as the ground/earth for a light placed on top, with wires running inside. StuRat (talk) 21:51, 30 November 2012 (UTC)[reply]
The one-conductor-inside-another layout is termed coaxial cable. As the article indicates, it's used mostly for RF signal transmission (where it has beneficial properties), although the configuration has been used for power in certain situations; attempts to search for a good reference are swamped by mentions of coaxial power connectors. - By the way, the article Skin effect shows a three-wire-bundle high-voltage power line, mentioning that because of the skin effect, they're effectively one conductor, which takes the tube-as-conductor idea one step further. -- 205.175.124.30 (talk) 22:20, 30 November 2012 (UTC)[reply]
Except that coax cable is flexible, and I think the OP means rigid pipes. StuRat (talk) 03:38, 1 December 2012 (UTC)[reply]
The OP asked about using tubes as conductors, saying he thinks most of the current flows on the outer surface, indicating he's heard about skin effect - where the magnetic field created by alternating current opposes the flow of current where the field is strongest, which is inside the conductor. However, for skin effect to be significant, the conductor diameter must be large enough and/or the frequency must be high enough. In the design of high-power radio transmitters, both factors apply and the use of tubing instead of solid wire is common. In the design of low- and medium-power electronics, another solution is used - litz wire. In the transmission of electrical power at hundreds-of-megawatts levels, the diameter is large enough for skin effect and proximity effect (the magentic field from one conductor can aid or oppose the current in an adjacent conductor) to be significant, and hollow conductors are used, as well as the grouped conductors mentioned by 205.175.124.30. Often, the conductors consist of a central steel tension member surrounded by copper strands. The steel supplies the mechanical strength without affecting the electrical properties too much. In domestic house wiring, the wire diameter is way too small for skin and proximity effects to be significant at the frequency used, and it's cheapest to just use solid wire or normal stranded wire. Keit 124.182.170.42 (talk) 01:18, 1 December 2012 (UTC)[reply]
Does the "magentic field" cause nearby objects to turn magenta? :-) StuRat (talk) 03:42, 1 December 2012 (UTC)[reply]
One for you and one for me, Stu. You can get flexible pipe, and you can get rigid coax, which is often used in professional radio equipment. I agree though, that the OP didn't mean coax. Keit 121.221.37.153 (talk) 05:58, 1 December 2012 (UTC) [reply]
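To put numbers on Keit's point about conductor size and frequency, here is a minimal sketch using the standard skin-depth formula for a good conductor, δ = √(ρ / (π·f·μ)); the copper resistivity is the usual room-temperature figure:

```python
import math

RHO_CU = 1.68e-8           # resistivity of copper at room temperature, ohm*m
MU_0 = 4 * math.pi * 1e-7  # magnetic permeability of free space, H/m

def skin_depth(freq_hz, rho=RHO_CU, mu=MU_0):
    """Depth at which AC current density falls to 1/e of its surface value."""
    return math.sqrt(rho / (math.pi * freq_hz * mu))

for f in (50, 60, 1e6, 10e6):
    print(f"{f:>12,.0f} Hz: skin depth = {skin_depth(f) * 1000:8.3f} mm")
```

At 50 or 60 Hz the skin depth (about 8-9 mm) is far larger than the roughly 1 mm radius of domestic wiring, so the current is effectively uniform; at radio frequencies only the outer few hundredths of a millimetre carry current, which is why transmitter builders can use tubing.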

Very elucidating, thanks so much! 66.87.126.32 (talk) 01:38, 1 December 2012 (UTC)[reply]

Just adding that, in the UK, older 2-wire mains supply wiring to properties is being replaced with a "co-axial" cable where the "live" is on the inside, and the neutral is a stranded "tube" around the outside. This is for safety, of course. I don't know whether it is used in other countries. Dbfirs 08:30, 1 December 2012 (UTC)[reply]
In Australia, as in most other countries, you get two applications: 1) Fireproof MIMS cabling - this has an active solid conductor in the centre, surrounded by mineral insulation, in turn surrounded by a metal tube. But the tube is not a neutral or return conductor - it is earthed, and only carries current under fault conditions. It's there because the mineral insulation must be surrounded by metal for mechanical reasons. It is only used in buildings where the supply of electricity must survive a fire. 2) Underground high-voltage street cabling is also coaxial, with one coax for each of the three phases, the outer "tube", termed the "screen", comprising strands and functioning as the neutral/earth per Multiple Earthed Neutral practice. Here the coaxial construction is used for three reasons: a) there is no external magnetic or electric field, which is important with underground cables as a significant field could cause soil heating and be a hazard for humans and animals (or cause power loss/dissipation in the outer steel wire armouring if fitted, which it usually is); b) if the cable is cut by, say, a backhoe, the screen must be penetrated first, and will always be there to carry away the fault current and avoid high voltage on the backhoe; c) it shields the outer plastic sheath from electrical stress, and avoids a touch shock/tingle hazard on the outer plastic sheath via capacitance. Keit 60.230.222.186 (talk) 10:47, 1 December 2012 (UTC)[reply]

Okay, so consider a bare, tubular conductor in air connected in series with a power circuit. Does the majority of the current travel on the outer or inner surface? What if the conductor was dipped in an insulator? What would be the differences between using a DC and AC power supply? 66.87.127.92 (talk) 20:49, 1 December 2012 (UTC)[reply]

With DC, the current flows smoothly and continuously. With a tube conductor, the current does not flow on the outside surface, nor does it flow on the inside surface. DC flows evenly distributed throughout the thickness of the conductor, just as it flows with even density throughout a conductor of any shape. To see why this is so, recall that any electrical conductor offers resistance to the flow of current. Take your tube conductor: you can consider it as consisting of any number of tightly fitting concentric thin tubes, all having the same electrical resistance per unit cross-sectional area, and each thin tube electrically in parallel with the others. Since each thin tube sees the same voltage and offers the same resistance per unit area, the current divides evenly among them.
With AC, two effects, known as skin effect and proximity effect, come into play. These effects depend on frequency - the rate at which the current alternates from one direction to the other. The alternation sets up an alternating magnetic field that causes skin and proximity effect. The higher the rate of alternation, i.e. frequency, the faster the rate of change in the magnetic field and thus the stronger the skin and proximity effects. (DC has zero rate of change, so zero skin and proximity effect.) In a wire of solid circular cross-section, the magnetic field opposes the flow of current, and the opposition is strongest where the magnetic field is strongest, which is in the centre of the conductor. So, at a high enough frequency, the current is minimal in the centre - this is skin effect. The higher the frequency, and the bigger the diameter of the wire, the greater the fraction of current forced to flow near the outer surface. If the frequency is very high, a tube works just as well, as any metal in the centre isn't doing anything - at extreme frequencies, the current flows close to the outer surface of a tube.
At power main frequencies, skin effect is negligible at the wire sizes used in houses, and the current is evenly distributed throughout the conductor cross section just as it is with DC, regardless of whether the conductor is a solid wire, a tube, or some arbitrary shape.
At frequencies high enough for skin effect, if there are two parallel-routed conductors carrying current in opposite directions (as when one conductor carries the return current from a load, the conductors insulated from each other), the magnetic field of one conductor will aid the current in the other conductor. Thus the current in each conductor tends to be concentrated in the part of the conductor cross-section nearest the other conductor. This is called proximity effect. If two parallel-routed conductors are both carrying current in the same direction, proximity effect will tend to force the current to flow in the part of the conductor cross-section furthest away from the other conductor.
As skin and proximity effects are driven by magnetic fields, the type of insulation around the conductors has no effect on them. However, the presence of any ferromagnetic material, such as steel conduit, and of other parallel-routed conductors (if earthed at both ends) does affect skin and proximity effects.
Keit 124.182.167.109 (talk) 07:35, 2 December 2012 (UTC)[reply]
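Keit's picture of a DC tube conductor as many concentric thin tubes in parallel can be checked numerically. A minimal sketch, with an arbitrarily chosen tube geometry (the dimensions are illustrative, not from the thread):

```python
import math

RHO_CU = 1.68e-8             # resistivity of copper, ohm*m
LENGTH = 1.0                 # conductor length, m
R_IN, R_OUT = 0.004, 0.005   # tube inner and outer radius, m (arbitrary example)

# Treat the tube wall as many thin concentric shells, all in parallel (DC case).
n = 100_000
wall = (R_OUT - R_IN) / n
conductance = 0.0
for i in range(n):
    r = R_IN + (i + 0.5) * wall              # mean radius of this shell
    area = 2 * math.pi * r * wall            # its cross-sectional area
    conductance += area / (RHO_CU * LENGTH)  # parallel conductances add

print(f"shell model: {1 / conductance:.6e} ohm")
print(f"rho*L/A    : {RHO_CU * LENGTH / (math.pi * (R_OUT**2 - R_IN**2)):.6e} ohm")
```

The two figures agree, which is just the statement that DC current spreads evenly over whatever cross-section is available, tube or not.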

Okay, I think I understand now. Thanks again Keit for the excellent explanation. Cheers! 66.87.126.240 (talk) 14:48, 2 December 2012 (UTC)[reply]

When did the last visible (to the naked eye) star appear in the sky?

Did Jesus look at the same sky as us? Comploose (talk) 21:08, 30 November 2012 (UTC)[reply]

Pretty much, with a few exceptions. Some stars only become visible as a result of a nova or supernova. A new star being ignited might only very slowly become visible to us, as the dust clouds around it clear. The Earth's precession also makes different stars into pole stars every so often (Polaris isn't always the North Star). There are also periodic comets which are only visible in certain years, like Halley's Comet. See List of periodic comets. StuRat (talk) 21:18, 30 November 2012 (UTC)[reply]
Timeline_of_white_dwarfs,_neutron_stars,_and_supernovae is your friend :) Dr Dima (talk) 21:26, 30 November 2012 (UTC)[reply]
Proper motion also has an effect; a star might be nearer or further away, or in a slightly different place relative to others. But 2000 years is a pretty short timescale for such things; while there were differences, most of them were subtle. I don't think any significant naked-eye stars have appeared or disappeared in that time. AlexTiefling (talk) 21:28, 30 November 2012 (UTC)[reply]
They haven't appeared or disappeared, but some of them have moved noticeably. According to our article on Alpha Centauri:
"Edmond Halley in 1718 found that some stars had significantly moved from their ancient astrometric positions.[63] For example, the bright star Arcturus (α Boo) in the constellation of Boötes showed an almost 0.5° difference in 1800 years,[64] as did the brightest star, Sirius, in Canis Major (α CMa).[65] Halley's positional comparison was Ptolemy's catalogue of stars contained in the Almagest[66] whose original data included portions from an earlier catalog by Hipparchos during the 1st century BCE"
Alpha Centauri itself has a much larger proper motion than Arcturus or Sirius, and moves by 1 degree per millennium. For reference, that's about the width of your thumb at arm's length, or twice the angular diameter of the Sun or Moon.
So in conclusion, Jesus' sky would have looked almost identical to ours, with the exception that the north celestial pole would have been between Polaris and Kochab instead of very close to Polaris (see http://en.wikipedia.org/wiki/File:Precession_N.gif). --140.180.249.151 (talk) 23:22, 30 November 2012 (UTC)[reply]
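The "1 degree per millennium" figure above is easy to check against the commonly quoted proper motion of roughly 3.7 arcseconds per year for Alpha Centauri; a minimal sketch:

```python
ARCSEC_PER_DEGREE = 3600
PM_ARCSEC_PER_YEAR = 3.7   # approximate proper motion of Alpha Centauri
MOON_DIAMETER_DEG = 0.5    # approximate angular diameter of the Moon

years = 2000  # roughly the interval being discussed
drift_deg = PM_ARCSEC_PER_YEAR * years / ARCSEC_PER_DEGREE
print(f"{drift_deg:.1f} degrees, about {drift_deg / MOON_DIAMETER_DEG:.0f} Moon widths")
```

That comes out to about 2.1 degrees over 2000 years, consistent with the figures quoted above.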

We're forgetting about something here:

The sky as seen by Jesus:

"Class 1: Excellent dark-sky site. The zodiacal light, gegenschein, and zodiacal band (S&T: October 2000, page 116) are all visible — the zodiacal light to a striking degree, and the zodiacal band spanning the entire sky. Even with direct vision, the galaxy M33 is an obvious naked-eye object. The Scorpius and Sagittarius region of the Milky Way casts obvious diffuse shadows on the ground. To the unaided eye the limiting magnitude is 7.6 to 8.0 (with effort); the presence of Jupiter or Venus in the sky seems to degrade dark adaptation. Airglow (a very faint, naturally occurring glow most evident within about 15° of the horizon) is readily apparent. With a 32-centimeter (12½-inch) scope, stars to magnitude 17.5 can be detected with effort, while a 50-cm (20-inch) instrument used with moderate magnification will reach 19th magnitude. If you are observing on a grass-covered field bordered by trees, your telescope, companions, and vehicle are almost totally invisible. This is an observer's Nirvana!"

The sky we see today:

"Class 9: Inner-city sky. The entire sky is brightly lit, even at the zenith. Many stars making up familiar constellation figures are invisible, and dim constellations such as Cancer and Pisces are not seen at all. Aside from perhaps the Pleiades, no Messier objects are visible to the unaided eye. The only celestial objects that really provide pleasing telescopic views are the Moon, the planets, and a few of the brightest star clusters (if you can find them). The naked-eye limiting magnitude is 4.0 or less."

Count Iblis (talk) 23:30, 30 November 2012 (UTC)[reply]

The sky as seen by Jesus. Merry Christmas! Thincat (talk) 12:21, 1 December 2012 (UTC)[reply]

Small angle formula

D = X · d / 206,265

What is the difference between D and d? They are both distances, so can someone explain the difference to me in an easy-to-understand way? Pendragon5 (talk) 21:52, 30 November 2012 (UTC)[reply]

It appears to me that your formula is a special case of the arc length of a circle, commonly denoted as s = r·θ; but you've got it in a form where the angle is presented in non-radian units (hence your constant 206,265). I didn't recognize that constant off the top of my head, but it wouldn't surprise me if it's related to, e.g., a special case of angular resolution. And, lo and behold, it's the number of arc-seconds in one radian: d is the radial distance to the target, D is the linear size of the object, and X is measured in arc-seconds (dividing by 206,265 converts it to radians). 206,265 is a unit conversion factor I don't have any common use for.
So, in plain English: d is the distance to the object, and D is the size of the object. This case-sensitive notation is a little ugly. Perhaps it derives from an era when ink was much more expensive. Nimur (talk) 22:27, 30 November 2012 (UTC)[reply]
What do you mean by the size of the object? Its diameter? Its radius? Its mass? Or what? Pendragon5 (talk) 23:32, 30 November 2012 (UTC)[reply]
To be perfectly pedantic, it's none of those things. It's the cross-section, determined by the appropriate projection geometry of your optical system. For simple geometrical objects, like stars and planets that are spherical, this value is well-approximated by the diameter. Nimur (talk) 00:07, 1 December 2012 (UTC)[reply]
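As a worked example of the formula under discussion, here is a minimal sketch in Python; the Moon's angular size and distance are approximate round figures:

```python
ARCSEC_PER_RADIAN = 206265  # number of arcseconds in one radian

def linear_size(x_arcsec, d):
    """Small-angle formula D = X * d / 206,265.

    x_arcsec -- angular size X, in arcseconds
    d        -- distance to the object (any length unit)
    Returns the linear size D in the same unit as d; valid only
    while the angle is small enough that sin(X) is close to X.
    """
    return x_arcsec * d / ARCSEC_PER_RADIAN

# The Moon spans about 1,865 arcseconds at a distance of about 384,400 km:
print(f"{linear_size(1865, 384_400):.0f} km")  # ~3476 km, close to the Moon's diameter
```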