Wikipedia:Reference desk/Archives/Science/2014 June 30

Science desk
Welcome to the Wikipedia Science Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


June 30

Live and Neutral

Why does the resistance become almost zero when a live wire touches a neutral wire? — Preceding unsigned comment added by 182.66.60.246 (talk) 03:14, 30 June 2014 (UTC)[reply]

Resistance between what two points? But assuming a wire has little resistance in general, obviously connected wires still have little resistance. Do you mean potential (voltage)? That's a more surprising case. Again assuming low-resistance conductors, the voltage of a "high voltage" line drops when it touches a ground/neutral line. But then remember that voltage is a difference not an absolute value, and the ground/neutral is the usual reference point. And it's not surprising that connecting two wires causes them to have the same voltage as each other. Attaching a live wire to ground causes (almost) all the current to flow that way, so it doesn't seem that unusual that connecting a wire to neutral causes it to have little or no measurable voltage with respect to neutral. DMacks (talk) 03:21, 30 June 2014 (UTC)[reply]
See Short circuit. 24.5.122.13 (talk) 04:16, 30 June 2014 (UTC)[reply]
Rephrasing DMacks, the resistance is almost zero because the resistance of any wire (in this sense) is almost zero. Touching 2 wires together just completes the circuit, where previously there was an air gap (which has very high resistance). That one is live and the other neutral is irrelevant. cmɢʟeeτaʟκ 19:23, 1 July 2014 (UTC)[reply]
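To put rough numbers on the point DMacks and cmglee make, here is a minimal Ohm's-law sketch; the voltage and resistance values are illustrative assumptions, not measurements of any real circuit:

```python
# Minimal Ohm's-law sketch of why a live-to-neutral contact looks like "zero resistance".
# All numeric values below are illustrative assumptions.

V_SUPPLY = 230.0   # volts between live and neutral (typical mains, assumed)
R_WIRE   = 0.05    # ohms of copper wire in the loop (assumed small)
R_LOAD   = 500.0   # ohms of a normal appliance (assumed)

def loop_current(r_load):
    """Current that flows once the circuit is completed through r_load."""
    return V_SUPPLY / (R_WIRE + r_load)

# Normal operation: the load dominates, so a modest current flows.
i_normal = loop_current(R_LOAD)

# Live touching neutral directly: only the tiny wire resistance remains,
# so the current is limited by almost nothing (a short circuit).
i_short = loop_current(0.0)

print(f"normal load current: {i_normal:.2f} A")    # ~0.46 A
print(f"short-circuit current: {i_short:.0f} A")   # ~4600 A, until the fuse or breaker opens

# The voltage measured between the two touching wires is the current times the
# contact resistance, which is essentially zero: they sit at the same potential.
```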

Current Induction

What happens at the atomic level in a coiled wire when a bar magnet is moved in and out of the coil, inducing a current in the wire? — Preceding unsigned comment added by 182.66.60.246 (talk) 03:22, 30 June 2014 (UTC)[reply]

In the frame of the wire, the magnetic field strength changes as the magnet moves around; an electric field is produced according to Faraday's law of induction. In the frame of the magnet, it's just the Lorentz force on the charges moving along with the wire. --Tardis (talk) 03:52, 30 June 2014 (UTC)[reply]
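For reference, the two equivalent descriptions Tardis mentions take the standard textbook forms (N is the number of turns in the coil):

```latex
% Frame of the coil: Faraday's law of induction relates the induced EMF to the
% changing magnetic flux through the coil.
\mathcal{E} = -N \frac{d\Phi_B}{dt}, \qquad \Phi_B = \int_S \mathbf{B} \cdot d\mathbf{A}

% Frame of the magnet: the Lorentz force acts on each charge q carried along with
% the wire at velocity v through the field B (E is any electric field present).
\mathbf{F} = q\,(\mathbf{E} + \mathbf{v} \times \mathbf{B})
```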
Although your question specifically requests an explanation at the atomic level, a conduction electron in a piece of metal isn't actually closely associated with any particular atom at any given time. So it works reasonably well to think of the conduction electrons as forming an "electron gas" that's confined to the wire, independent of any atoms. For details, see Free electron model and Nearly free electron model, or if you're super interested study a beginning book on solid-state physics, which will require an understanding of quantum mechanics as a prerequisite. Red Act (talk) 05:50, 30 June 2014 (UTC)[reply]

Passenger Aeroplane

What happens to the carbon dioxide exhaled by the passengers of a passenger aeroplane, given that the plane is totally airtight and there is no accommodation for it? — Preceding unsigned comment added by 182.66.60.246 (talk) 03:39, 30 June 2014 (UTC)[reply]

Passenger aircraft aren't totally sealed. As our article cabin pressurization explains, in most passenger jet aircraft bleed air extracted from the engine compression stages is fed to the cabin, and subsequently exits via an outflow valve. Boeing have recently gone over to using electrical pumps to provide inlet air instead - reducing the risk of introducing contaminated air into the cabin. AndyTheGrump (talk) 03:51, 30 June 2014 (UTC)[reply]
That's true - but even if it were not, a person can live on about 16 cubic feet of air per hour (see calculations here, for example) - limited by the amount of exhaled CO2 rather than the amount of oxygen used. According to Boeing, the 747-8 has a cabin volume of about 30,000 cubic feet for around 450 passengers...which is 66 cubic feet each. So without the air being replaced at all, the passengers could survive for around 4 hours on just the air in the cabin.
Put another way, you only need to replace 16*450=7200 cubic feet of air per hour...which is 120 cfm. The extractor fan in my bathroom is rated at 110 cfm - so if the passengers are mostly just sitting quietly or sleeping, it could probably exchange enough air to keep the entire 747 ventilated.
SteveBaker (talk) 04:17, 30 June 2014 (UTC)[reply]
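Steve's arithmetic can be checked in a few lines; the figures are the ones quoted above (16 ft³/h per person, 30,000 ft³ cabin, 450 passengers), not independent measurements:

```python
# Back-of-envelope check of the figures quoted above.

AIR_PER_PERSON_PER_HOUR = 16.0     # cubic feet per hour, CO2-limited (figure cited above)
CABIN_VOLUME            = 30_000   # cubic feet, Boeing 747-8 cabin (as cited above)
PASSENGERS              = 450

volume_per_passenger = CABIN_VOLUME / PASSENGERS                        # ~66.7 ft^3 each
hours_on_cabin_air   = volume_per_passenger / AIR_PER_PERSON_PER_HOUR   # ~4.2 hours

fresh_air_per_hour = AIR_PER_PERSON_PER_HOUR * PASSENGERS   # 7200 ft^3/h
fresh_air_cfm      = fresh_air_per_hour / 60                # 120 ft^3/min

print(f"air per passenger: {volume_per_passenger:.0f} ft^3")
print(f"survivable on cabin air alone: ~{hours_on_cabin_air:.1f} hours")
print(f"ventilation needed: {fresh_air_cfm:.0f} cfm")
```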
For reference, a cubic foot is about 28 litres in proper units. 131.251.254.110 (talk) 07:45, 30 June 2014 (UTC)[reply]
The 16 cubic feet Steve cited would be at sea level. In a pressurized aircraft the internal air pressure is only around 3/4 to 4/5 of sea-level pressure, which suggests that a somewhat larger volume of air per hour would be needed. (Not necessarily 4/3 to 5/4 as much, because it depends on human response to oxygen and CO2 at those pressures. But probably something like that.) Of course this is just a detail and does not invalidate Steve's comment, or for that matter, Andy's. --70.49.171.225 (talk) 08:05, 30 June 2014 (UTC)[reply]
I don't trust first-principles calculations on matters like this because biology has a way of surprising us. But my guess (and anything other than literally looking at data from a submarine accident or something is just a guess) is that the partial pressure of CO2 ought to be the more limiting factor, and would not depend on atmospheric pressure. Wnt (talk) 11:26, 30 June 2014 (UTC)[reply]
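A rough ideal-gas sketch of the adjustment suggested above; the cabin-pressure fractions are the 3/4 to 4/5 quoted, and, as Wnt notes, if CO2 partial pressure is the real limit this volume scaling would not apply:

```python
# Rough Boyle's-law scaling of the 16 ft^3/h figure to cabin pressure.
# Cabin pressure fractions are the ones quoted in the comment above.

SEA_LEVEL_VOLUME_PER_HOUR = 16.0   # ft^3/h of sea-level air per person (figure used above)

for cabin_pressure_fraction in (0.75, 0.80):
    # The same number of gas molecules occupies a larger volume at lower pressure,
    # so the equivalent hourly volume at cabin pressure grows by 1/fraction.
    scaled = SEA_LEVEL_VOLUME_PER_HOUR / cabin_pressure_fraction
    print(f"at {cabin_pressure_fraction:.0%} of sea-level pressure: ~{scaled:.0f} ft^3/h")
# ~21 ft^3/h at 75%, ~20 ft^3/h at 80%: a detail, as noted, not a change in the conclusion.
```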
See also Environmental control system (aircraft). Red Act (talk) 04:13, 30 June 2014 (UTC)[reply]
"Totally airtight?" Passenger airplanes with pressurized cabins cannot be accurately described as totally airtight. For much of a flight, the air in the passenger cabin is at constant pressure but not because the cabin is airtight. The pressure is constant because air is allowed to escape from the cabin at the same rate as fresh air is introduced by the cabin pressurization system. There is a significant amount of fresh air being introduced to the cabin every hour, and the same amount escaping. The escaping air contains a slightly higher-than-normal amount of CO2 and water vapour, and a slightly lower-than-normal amount of oxygen; and the introduced air contains normal CO2, water vapour and oxygen. Dolphin (t) 12:43, 30 June 2014 (UTC)[reply]
Some comments:
1) People with breathing problems, like asthma, might die far sooner from excess carbon dioxide.
2) Everyone would be made uncomfortable far earlier.
3) Ironically, "bad air" is more of a problem for planes stuck on the tarmac, as the air exchange system depends on running the engines, and they don't want to run those on the ground for an extended period or they may need to return to refuel before take-off. StuRat (talk) 14:51, 30 June 2014 (UTC)[reply]
They can use the APU, or run the ventilation using external power. 24.5.122.13 (talk) 21:40, 1 July 2014 (UTC)[reply]

Interstitial tear of the Posterior cruciate ligament

What is an interstitial tear of the posterior cruciate ligament in the knee? Is it classified as a grade 1, 2, or 3 tear? Just to clarify, I do not have a knee injury and I am not asking for medical advice; I just like learning about the anatomy of the knee. --Sara202020 (talk) 07:13, 30 June 2014 (UTC)[reply]

See Posterior cruciate ligament. An interstitial tear is one which occurs in the body of the ligament, rather than a tear where one of its ends detaches from the bone. (See this article about the rotator cuff for an explanation of the terminology). The grade of a tear depends on its severity, rather than its location, so an interstitial tear can be any of the three grades. Tevildo (talk) 11:45, 30 June 2014 (UTC)[reply]

Trimix (injection)

Trimix (injection) talks about Trimix injections, but http://finance.yahoo.com/news/ed-patients-now-60-days-123000584.html talks about Trimix gel. Is Trimix gel even a thing, or is it a scam like "herbal Viagra"? Either way, the Wikipedia page should mention it. 76.194.214.123 (talk) 09:07, 30 June 2014 (UTC)[reply]

"Trimix gel" is a registered trademark of a company that can easily be located via your favourite search engine, and who have some ultra-yiffy videos on their website which appear to demonstrate their product's effectiveness. It seems to be a form of Trimix that's suitable for topical application. On the other hand, they don't give the impression of being a well-established pharmaceutical firm. Anyone suffering from erectile disfunction should contact his doctor for reliable advice, of course. Tevildo (talk) 21:47, 2 July 2014 (UTC)[reply]

Colour space and spectrum colours

What proportion of the colour space of human vision is covered by the spectrum colours (rainbow colours)? Since human-perceived colours are described by three numbers, and spectrum colours by two (wavelength and intensity), it seems to me that the mathematical answer to this question should be zero, but can that actually be correct? 86.179.117.18 (talk) 13:12, 30 June 2014 (UTC)[reply]

It's more-or-less true that the spectral colors, distinguishing different intensities, form a 2-D surface within the 3-D color space that humans perceive, and it's true that a 2-D surface has zero volume. However, in reality humans have a non-zero just-noticeable difference in color perception, such that there's a small but non-zero volume within the 3-D color space that's perceptually indistinguishable from the spectral colors. Red Act (talk) 14:47, 30 June 2014 (UTC)[reply]
One useful way of characterizing a color is in terms of hue, saturation, and intensity. In that scheme, the "spectrum colors" are the colors that have maximum saturation. Looie496 (talk) 15:20, 30 June 2014 (UTC)[reply]
An actual numeric answer to your question may be hard to come by, but I'd think that 1% would be a good order-of-magnitude estimate, given that the number of distinguishable colors increases by roughly a factor of 100 for each additional dimension in an animal's color space, according to Color vision#In other animal species. The Color difference article talks about some of the complexities involved in trying to come up with color space models such that just noticeable differences between two colors are equivalent to the same Euclidean difference between points in the color space model, anywhere within the color space. A big part of the difficulty with perceptual non-uniformities in the models is due to the human eye being more sensitive to some colors than others; see Color vision#Physiology of color perception. Red Act (talk) 16:11, 30 June 2014 (UTC)[reply]
Thanks for the replies. 86.179.117.18 (talk) 16:33, 30 June 2014 (UTC)[reply]
A color is a mixture of many frequencies, each at different intensities - as such, there are an enormous number of them. However, since our eyes are only really telling us how much each of three sensors is stimulated, we're really perceiving that as just three numbers. Some species of shrimp have twelve sensor types - so their color perception should be represented by 12 numbers...so right there, you know that we're missing out on a lot. If we could distinguish 100 different brightness levels (which is a very roughly reasonable statement), then we can see 100x100x100 = one million distinct colors. If the shrimp can also distinguish 100 different brightness levels per sensor, then it can hypothetically see 100x100x100x100x100x100x100x100x100x100x100x100 = 1,000,000,000,000,000,000,000,000 colors....and some hypothetical animal with yet more sensors would be able to distinguish even more. Of course whether its brain is equipped to use that information is very doubtful...but in principle, it's vastly more capable than we are.
However, you used the word "spectrum" - and that changes things. A spectrum (such as you'd see in a rainbow or as spread out by a prism) has at each point the intensity of light at one single frequency. There are definitely colors that we can easily distinguish (like the color we call "magenta" that is a mix of red and blue) that aren't in the spectrum at all. At each point in the spectrum, there is a single frequency at some particular intensity. But how many "colors" that is depends on how finely you measure that. If you sliced the spectrum into 100 frequency bands - and measured the intensity of each one to one part in one hundred - then there would be 10,000 distinct colors - and we could perhaps argue that humans (being able to see a million colors) can see 100 times more colors than there are in the spectrum. But that's a bogus argument. Suppose we sliced the spectrum into a hundred thousand bands - each with 100 intensity steps. Now we're saying that there are 10 million colors in the spectrum - and humans can only see a million colors (some of which, admittedly, aren't even IN the spectrum).
So I don't think this is a very meaningful question. It's all about precision of measurement - and that's not an apples-and-apples comparison because you have to decide the precision of frequency measurement and compare it to the precision of intensity measurement.
It's just like asking: Can you throw a ball further than the weight of an elephant? If you measure the distance in millimeters and the weight in kilograms - then yes. But if you measure the distance in meters and the weight in micrograms - then no. It's truly meaningless.
SteveBaker (talk) 19:31, 1 July 2014 (UTC)[reply]
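The counting argument above, spelled out; the 100 levels per sensor is the rough assumption used there, and the twelve-sensor animal is the shrimp mentioned above:

```python
# The combinatorial estimate from the discussion above, made explicit.

LEVELS_PER_SENSOR = 100   # distinguishable intensity levels per receptor type (rough assumption)

human_sensors  = 3
shrimp_sensors = 12       # receptor-type count quoted above for some shrimp species

print(LEVELS_PER_SENSOR ** human_sensors)    # 1,000,000
print(LEVELS_PER_SENSOR ** shrimp_sensors)   # 10**24
```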
The question isn't really comparing apples to oranges, because the question effectively does specify the measurement precision to be used. The OP explicitly specifies "human perceived colors", so dividing the visible spectrum into 100,000 frequency bands is clearly not what the OP is intending, because the human visual system can't distinguish that many different frequencies.
Conceptually, the product space of the set of all frequencies in the human-visible spectrum with the set of light intensities the human eye can handle is a two-dimensional manifold. There exists a map from each point on that 2-D "spectral manifold" to a point within the 3-D human color space. (Whether or not that map has a well-defined inverse is irrelevant.) The image of that map forms a 2-D submanifold of the 3-D color space. One can then define a 3-D "spectral subspace" of the color space as being the set of all points in the color space which humans are incapable of distinguishing from some point on the 2-D submanifold. You can then make an apples-to-apples division of the volume of the 3-D spectral subspace by the volume of the whole color space, if you first introduce on the color space a metric such that the color difference using that metric between any pair of colors that are just noticeably different to humans is the same no matter where that pair of colors are within the color space. That ratio is what the OP is asking for. The answer can't be very precise, because color perception involves biology and psychology, not just physics, but the question is indeed asking for a meaningful number. Red Act (talk) 23:26, 1 July 2014 (UTC)[reply]

regarding species

I always wondered: if a biological variety A of a species X can have offspring with variety B (of X), and B with C (of X), A need not be able to do so with C. So, if A and C don't produce offspring, is A a separate species with respect to C? - SCIdude (talk) 14:26, 30 June 2014 (UTC)[reply]

Our article ring species addresses this. -- ToE 14:51, 30 June 2014 (UTC)[reply]
Yes. As the article says, A and C would be considered the same species if they can have fertile common descendants, even if not directly. However in practice things can sometimes get a bit tricky. I read an article a couple of years ago (can't remember how to find it) about a case involving two varieties of fish that were classified as distinct species because they did not interbreed. However, when a third variety of fish was introduced into the environment, it was capable of breeding with both of them. What this sort of thing really points out is the arbitrariness of the "species" concept. Many modern biologists no longer take it very seriously. Looie496 (talk) 15:11, 30 June 2014 (UTC)[reply]
Well, we can take the concept of "species" seriously, even if we acknowledge the species problem. Systematics and cladistics seem to be replacing taxonomy, and there are ontological problems with the notion of defining species by ability to produce fertile offspring -- but for most purposes it's just fine to talk about Apis mellifera or Carduus nutans or other common species. Yes, there are ring species and cases where species don't make a lot of sense (e.g. archaea). But just because there are corner cases and tough calls doesn't mean species designations are completely arbitrary. Unless your goal is to resolve evolutionary relationships, species work just fine for most biological applications. SemanticMantis (talk) 15:51, 30 June 2014 (UTC) (p.s. Dialect continuum is a nice analogy for ring species.)[reply]
Beyond this, remember that some types of individuals are universally considered the same species, despite what's likely an inability to reproduce together; an Irish Wolfhound and a Chihuahua aren't really going to be able to produce puppies, but nobody's going to say that they're separate species. Nyttend (talk) 11:52, 1 July 2014 (UTC)[reply]
Members of the same species don't need to be able to reproduce together directly, they only need to be able to have overlapping trees of descendants. As a rather obvious example, a pair of males or a pair of females cannot reproduce together. Dogs, by the way, seem like an excellent example of a "ring species". Looie496 (talk) 12:18, 1 July 2014 (UTC)[reply]
I understand; I was attempting to provide a very simple example that everyone assumes. Not trying to denigrate your more scientific approach; it's just that considering the dogs a single species is much more intuitive to me, and it may be likewise for SCIdude and others as well. Nyttend (talk) 12:58, 1 July 2014 (UTC)[reply]

Micro SD types and how many GBs

How many types of micro SD cards are there, and how many gigabytes can they hold? — Preceding unsigned comment added by 162.219.184.233 (talk) 15:04, 30 June 2014 (UTC)[reply]

This question belongs on Wikipedia:Reference desk/Computing. Looie496 (talk) 15:13, 30 June 2014 (UTC)[reply]
Yes, and the questioner should ask a more specific question there if they don't find what they want in our Secure Digital article. -- ToE 21:08, 30 June 2014 (UTC)[reply]

Water plant in Dutch nature reserve

 
[Image: Unknown plant]

Hello, would anybody be able to identify the water plant shown in this picture? Someone found it in a wetland reserve in the north of the Netherlands. A quite long discussion in the nl.wiki Biology café gave no satisfying result, though Ludwigia and Salix were mentioned. Regards, Apdency (talk) 15:42, 30 June 2014 (UTC)[reply]

Are you sure it's a water plant at all ? Those usually don't grow up beyond the waterline much, but just float on the water and spread out, as that's a more effective way to increase the amount of sunlight that hits the leaves. See water lily or lotus flower for examples. I'm wondering if this isn't a terrestrial plant which is in an area that's been flooded. StuRat (talk) 16:08, 30 June 2014 (UTC)[reply]
Uhm, having read that, I was planning to propose Vaccinium corymbosum, but now I see that on Commons, EB Doulton already did the work. Apdency (talk) 16:55, 30 June 2014 (UTC)[reply]
But that's a North American plant, so it would have to be an invasive species, if present in the Netherlands. StuRat (talk) 17:08, 30 June 2014 (UTC)[reply]
I learned that it was imported there from North America in the 1940s. This Dutch language page is one of the sources; it also shows other pics of Vaccinium corymbosum in that particular reserve ('Fochteloërveen'). Apdency (talk) 19:07, 30 June 2014 (UTC)[reply]
Our article on that plant should be updated accordingly. Are you up to it ? StuRat (talk) 16:45, 1 July 2014 (UTC)[reply]
Uhm, I hope this will be a challenge to anyone more used to writing on biology topics than I am. Apdency (talk) 19:47, 1 July 2014 (UTC)[reply]

RGB colour triangle

In an RGB colour triangle, such as the one here, I understand, of course, that the corner points are R, G and B, and that the RGB values of the interior points are somehow related to the distances to the corners. However, this "somehow related" calculation can be done in numerous different ways, and the resulting triangles will contain different subsets of all the possible RGB colours. Even if it is desired to have (255, 255, 255) at the centre of the triangle, there will still be many ways to do it. What is the reason (if any) for choosing one method over any other? Is there one particular method that makes special sense? 86.179.117.18 (talk) 17:00, 30 June 2014 (UTC)[reply]


 
[Image: Points in the RGB color space corresponding to linear steps in each coordinate, i.e. 0%, 25%, 50%, 75%, 100%. The actual intensity steps depend on the gamma of your display.]


The corner points have [RGB] coordinates [255 0 0], [0 255 0] and [0 0 255]. The sides are the loci of coordinates with one component zero, i.e. [R G 0], [R 0 B] and [0 G B]. Ideally the intensity of each primary light is proportional to its coefficient 0...255 and the white point is at the Centroid of the triangle. However a typical CRT has a power-law luminance response, typically gamma = 2.2, so it will not respond proportionally to [RGB] coordinates. The solution is to apply Gamma correction. This color list attempts to show the difference gamma correction makes but will be constrained by the display you actually use. 84.209.89.214 (talk) 18:05, 30 June 2014 (UTC)[reply]
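A minimal sketch of the gamma correction just mentioned, assuming the typical gamma of 2.2 rather than a measurement of any particular display:

```python
# Minimal sketch of gamma correction for a gamma-2.2 display.
# The gamma value is the usual convention assumed above, not a measured figure.

GAMMA = 2.2

def encode(linear_intensity):
    """Map a desired linear light intensity (0..1) to the 0..255 code sent to the display."""
    return round(255 * linear_intensity ** (1 / GAMMA))

def displayed(code):
    """Linear light the display actually emits for a 0..255 code, given its power-law response."""
    return (code / 255) ** GAMMA

# Without correction, code 128 on a gamma-2.2 display emits only about 22% of full intensity:
print(f"raw code 128 emits {displayed(128):.2f} of full intensity")

# With gamma correction, 50% intensity is requested with a code of about 186:
print(f"code for 50% intensity: {encode(0.5)}")
print(f"check: {displayed(encode(0.5)):.2f}")
```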

Ultimately, the choice of color-coordinates (and the various ways that you can plot them) is a heuristic guided by data collected during several landmark perceptual psychology experiments. For example, CIE 1931 uses data that was collected by the world's preeminent psychologists in 1931; CIE 1976 uses essentially the same methodology and new data collected in 1976... Now, if I understand correctly, humans didn't evolve very much during those decades, so if the data was actually representative of the human population using sound scientific methodology, these color spaces should be exactly the same.
To be perfectly honest, the whole concept of projection into "perceptual" colorspace is, in my opinion, a lot of false precision puffed up by people who call themselves color scientists. (I have yet to meet a "color scientist" who is an optical physicist but I've met plenty who studied art or film or photography - they're really interested in the artistic side of science). Since I started off my career as a pedigreed optical physicist, I am incredibly frustrated by these silly color spaces: all I want is an imaging spectrometer and an intensity-versus-wavelength or frequency spectrum plot across the visible spectrum. That is a precise methodology: we can build machines (spectrum analyzers and photon counters) that exactly tell us everything that matters with respect to the color. We can build other machines (monochromators and tunable lasers, for example - or even just regular illuminants and calibrated color filters) to precisely reconstruct any combination of colored light, with arbitrary precision. Any sufficiently-well-sampled reconstruction of a scene that produces the same spectrum is guaranteed (as a side-effect) to also have the same coordinates when projected into a 3-element perceptual colorspace - which is, at the core, nothing but an obscenely under-sampled coordinate-transform of the frequency spectrum. With a bit of linear algebra, you can construct any color space you like, subject to any constraint you wish to impose, and produce a brand new n-element colorspace; and if your linear algebra exactly matches that used by the CIE or the ITU or any of the other prestigious consortiums, then you get one of their numerous "standard colorspaces." Nimur (talk) 18:21, 30 June 2014 (UTC)[reply]
Correct, but encoding a movie with a separate spectrum stored for each pixel would take a lot more storage/bandwidth, and making a display that reconstructs that spectrum at each pixel doesn't sound cheap. The reason three-dimensional color spaces work pretty well is that in the end that spectrum gets broken down into responses from three (technically four) types of receptors in the eyes, so, like you say, it's all linear algebra from there. Katie R (talk) 19:18, 30 June 2014 (UTC)[reply]
....linear algebra based on an approximate, implementation-dependent, somewhat-standards-compliant heuristic. I don't mind the imprecision; I don't mind the psychology-based perceptual heuristics... what irks me is the phony pretense of numerical accuracy - which is, I think, the same thing that the OP's question is getting at. Nimur (talk) 19:25, 30 June 2014 (UTC)[reply]
For tasks such as paint mixing and textile production, a precision no less than the ability of a critical eye to distinguish a difference between colours is justified and, in practice, is paid for in a commercial system such as Pantone's which samples thousand(s) of colours. According to Pantone, the colour of 2014 shall be Radiant Orchid! One sympathises with Nimur's disdain for the arty fashion designers and other consumer-oriented companies that don't care much about electromagnetic spectral analysis of that shade. 84.209.89.214 (talk) 00:19, 1 July 2014 (UTC)[reply]
  • Thanks for all the replies. Let me put it another way. Is there anything special or significant, either physically or perceptually, about the subset of RGB colours that are displayed in the colour triangle? I apologise if this question has actually been answered above, but if it has, I'm sorry, I can't pick it out. 86.179.117.18 (talk) 19:33, 30 June 2014 (UTC)[reply]
The color triangle has no utility until 3 practical phosphors (for a CRT) or dyes (for a positive film) are formulated that can reproduce a generous Gamut of colours. The article Phosphor shows the plethora of chemical formulations that have been investigated. Development of color TVs took a long time due to the long search for a red phosphor; for this purpose a rare earth phosphor, YVO4:Eu3+, was introduced in 1964. The RGB triangle is thus dictated by the state-of-the-art of additive colour reproduction, just like the subtractive CMY (Cyan, Magenta, Yellow) triangle for printing is limited by practical ink formulations. In this case, 3-ink colour printing is incapable of showing a deep black and in practice a fourth ink is needed for good quality images, see CMYK color model. 84.209.89.214 (talk) 23:49, 30 June 2014 (UTC)[reply]
Phrased another way: we know how to build tri-color-filter-arrays - "red/green/blue" combinations - for things like television displays and camera sensors and printer inks, such that the "greenish" color matches the average human eye's perception of green; and the "reddish" color filter matches the average human eye's perception of "red"... or can be emulated by some combination of those inks/phosphors/dyes. This lets us use only three numbers - we can call them "R,G,B" or "X,Y,Z" or "n1,n2,n3", or {z[0],z[1],z[2]} to approximate a full color spectrum. This is possible because researchers and industrialists have spent years experimenting with various printable inks, chemical dyes for phosphors and LCD panels, and so on. The color triangle is just a plot that idealizes the possible combinations.
The subset of RGB values inside the color triangle therefore correspond to all the possible combinations of (printer ink, or LCD pixel brightnesses on your computer screen, and so on). The "size" of the triangle, loosely speaking, shows the extent to which such inks/dyes/phosphors can reproduce all possible colors that humans can normally see. A standard colorspace can be compared, in triangle-form, to the calibrated colorspace of any machine, enabling designers to determine how one set of chemical dyes compares to another; or how one brand of television hardware corresponds to another, and so forth. Nimur (talk) 00:13, 1 July 2014 (UTC)[reply]
Inks for Color printing are normally the subtractive primaries cyan, magenta and yellow, not RGB. 84.209.89.214 (talk) 00:39, 1 July 2014 (UTC)[reply]
Different coordinates, different color limitations, exact same methodology. Let me emphatically reiterate: when you represent the complete spectrum, everything is incredibly simple. There's no such thing as "additive" or "subtractive" color. All that matters is the intensity at each wavelength. Nimur (talk) 01:37, 1 July 2014 (UTC)[reply]
"The subset of RGB values inside the color triangle therefore correspond to all the possible combinations of (printer ink, or LCD pixel brightnesses on your computer screen, and so on)." -- Now I am totally confused. How can this be true? Surely numerous RGB combinations -- in fact most combinations -- are not present in the colour triangle??? This is the whole basis of my question: Is there anything fundamentally significant about the subset of combinations that are present? 86.179.117.18 (talk) 02:13, 1 July 2014 (UTC)[reply]
Okay, let's disassemble this a different way; my intent was not to add confusion. When you look at one of these color-space gamut plots, you're looking at a two-dimensional plot of a three-dimensional color space. What's worse, three dimensions isn't really even enough to capture every possible combination of photons in the real, physical world!
So what the "color scientists" have done is take one slice out of the three-element model. Usually, they do this by selecting "constant brightness" - by decomposing their color model into a luminance and chrominance space. This means that they intentionally pick one coordinate to be a parameter that they think looks like brightness, (what-ever that means); and that leaves two coordinates to describe what they think looks like color. Now, the physicist deep within all of us is just fuming over this decomposition: "brightness" is a vague and fuzzy description of radiometric intensity. In other words, total number of photons. But, perceptual psychology experiments seem to indicate that not all photons look as bright. Green photons look brighter! It's weird, but it's a real effect that's mostly determined by the anatomy of the human eye (the cells in our eye that are more sensitive to green are shaped differently than the other types); consequently, this affects the way our brain's visual cortex interprets the signals that the retina produces. "Green" photons not only look "green," but they also look "bright."
When you look at this 2-D plot, you are seeing one single slice - usually a slice of constant "brightness" (Z) - and so the X,Y positions in the chart correspond to the "color" part of the photons. This is totally not-physical. It is a mathematical transformation: a projection onto a new set of coordinates, defined by integrating the incident light, premultiplied by a spectral sensitivity curve, to result in a number. That number is called "X" for one of the curves; and "Y" for another curve... you get the idea. And now, they graph the "X vs. Y" plot - and draw a triangle in it for the region that they believe corresponds to the visual range. By construction, there exists a gamut in this plot that corresponds to all possible nonnegative combinations of certain primary colors. Keep in mind that the position of the primaries in X,Y,Z space is determined by the same math, so it's definitionally true that the triangle (or the convex hull) between them must contain all possible nonnegative combinations of primaries - for this particular Z slice.
This is a whole lot of linear algebra, involving a whole lot of singular matrices and crude pseudoinverses... which is why, as I said earlier, things are so much easier if you stick to pure physics - total photon count, and optical spectrum - which can totally and uniquely define the color of any object in the scene. This is the purist, spectroscopic approach, and it is what chemists, optical scientists, and astronomers use. It is sometimes called radiometry. Contrast it to perceptual photometry, with all the XYZ color-spaces and weird gamuts... which is what photographers, filmmakers, and "color scientists" use (or, at least, they pretend to use it so they look busy, while a chemist designs their camera-film, or a physicist designs their digital camera sensor, or so on). Nimur (talk) 03:34, 1 July 2014 (UTC)[reply]
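As a sketch of the projection Nimur describes (integrating an intensity-versus-wavelength spectrum against three sensitivity curves to get three numbers), the following uses crude Gaussian stand-ins for the real, tabulated CIE colour-matching functions; it illustrates only the mathematics, not the CIE standard:

```python
# Sketch of collapsing a full intensity-vs-wavelength spectrum into three numbers
# by integrating it against three sensitivity curves. The Gaussian curves below are
# made-up stand-ins for the real (tabulated) CIE colour-matching functions.

import numpy as np

wavelengths = np.arange(380.0, 781.0)   # nm, 1 nm steps across the visible range

def bell(center_nm, width_nm):
    return np.exp(-0.5 * ((wavelengths - center_nm) / width_nm) ** 2)

# Hypothetical sensitivity curves (NOT the CIE 1931 data):
sensitivities = {"X": bell(600, 40), "Y": bell(555, 45), "Z": bell(450, 25)}

def project(spectrum):
    """Integrate spectrum * sensitivity over wavelength: one coordinate per curve."""
    return {name: float((spectrum * curve).sum()) for name, curve in sensitivities.items()}

# A narrow-band "spectral" light and a broad-band light both reduce to just three numbers;
# everything else about their spectra is discarded by this projection.
print(project(bell(580, 5)))                 # near-monochromatic yellow-ish light
print(project(np.ones_like(wavelengths)))    # flat (white-ish) spectrum
```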
...And Nimur's optical physics experience (and bias ;) shows clearly! Very good explanation, but I think you're a bit hard on some of the systems we use to describe color. I take your points that the "pure physics" approach to "color" is somewhat conceptually simpler. As you say, it's just "intensity at each wavelength" -- but that's not very useful for a person who uses colors for designing print, fabric, user interfaces, and other things that IP 84... mentions above. I mean, that's an infinite-dimensional Hilbert space you're talking about (or is it a Banach space? -- even worse). If I want to make a poster look similar on my monitor, your monitor, and when printed, shall we discuss it in terms of a specific Fourier series? And if so, to what end? As you also point out, human color perception does weird things, with "bright" greens etc. RGB, CMY(K) and other systems do have some problems, but they also serve many purposes that cannot be achieved with spectral descriptions and analysis (even if we could pass elements of L2 around, we'd still have to translate that information into instructions that a printer or monitor could use). It can be confusing to go back and forth between different systems, and RGB et al. can never suffice for representing all colors/wave forms, due to the projections, singular matrices and other lossy parts of the process that you mention. But I won't be disparaging RGB or CMYK for being "non-physical" -- it is not their goal to be physical, but to offer a systematic framework that can be used to describe and manipulate colors, in a variety of contexts. At that, they succeed, at the cost of using transformations that make it hard for the average person to follow the math of an RGB code, through display technology, back to the spectrum.
To answer OP's recent follow-up questions explicitly: in concept, all RGB combinations of a certain brightness are in that triangle. No monitor or printer can display all (infinitely many) of them, but as a mathematical concept (i.e. as a projection of the span of the basis vectors), they are all there. The significance of that triangle is that it represents the boundaries of everything describable in that system, even if that system cannot describe every wave-form of light that some human eye can distinguish. SemanticMantis (talk) 04:08, 1 July 2014 (UTC)[reply]
Nimur, physical spectra and human perceptual models are both essential in different contexts. You can't specify RGB primaries as physical spectra. An LCD or OLED display can't practically produce the exact spectrum of the YVO4:Eu3+ red phosphor in an old CRT, and shouldn't have to. You define primaries using CIE xy coordinates or something analogous so that the engineers are free to pick the cheapest method of producing that psychological color. It isn't realistically possible to capture or store images with complete physical spectra at visible wavelengths either. Color photography and color TV would be impossible in practice if we "stuck to pure physics". DVDs and streaming video wouldn't be possible without lossy video compression that's closely tied to properties of the human visual system.
You said "There's no such thing as 'additive' or 'subtractive' color" when you deal with physical spectra rather than XYZ coordinates. That's wrong. RGB displays produce light, so they work additively. Inks absorb light, so they work subtractively. Subtractive color is more complicated because it depends not only on the absorption spectra of the inks but also on the spectrum of the illuminant. This is all "pure physics".
The CIE xy plane is a 2D projective representation of the 3D space of perceptual colors. There's one point in the xy plane for every line through the origin in XYZ space. It's not a slice of constant brightness; points in the xy plane don't have a brightness.
Perceptual brightness is zero in the infrared and ultraviolet and positive in between. It has to have a maximum somewhere. The maximum happens to be at around 555 nm, which is perceptually green. This is no more strange than if it were anywhere else, as far as I can tell. -- BenRG (talk) 06:29, 1 July 2014 (UTC)[reply]
Photons don't remember how they were produced: when light hits your eye or your camera, the photon doesn't magically know that it came from an additive combination of spectral light sources, or a subtractive absorption/reflection of photons from a pigmented surface. By the time the light gets to you, the only state the photons carry that is relevant to their color is their wavelength. How the photons got that wavelength is irrelevant, because the photon doesn't carry that kind of information. What matters is the intensity and wavelength: these are the most general possible way to describe the color of a scene.
All the various color models exist to help approximate this completely general representation of color using as few numbers as possible. That is, again, why I call these color models "obscenely undersampled," but as BenRG and SemanticMantis point out, these models are good enough to fool most human eyes, and therefore they have some utility. Nimur (talk) 16:01, 1 July 2014 (UTC)[reply]
The cones in the eye don't know where the light came from either, so there's no difference between additive and subtractive psychological color either, from that perspective.
Low-frequency (radio) astronomy works by direct sampling of the electromagnetic field, but the frequency of visible light is on the order of 10^15 Hz. Even astronomers use color filters at those frequencies, much like animal eyes. I think it's unavoidable. -- BenRG (talk) 03:34, 2 July 2014 (UTC)[reply]
  • Thanks, at last we have got to the actual question that I asked. "all RGB combinations of a certain brightness are in that triangle." So, I take this to mean f(R, G, B) = const? What is the function and what is the constant? Thanks for your patience with this. 86.179.0.75 (talk) 11:17, 1 July 2014 (UTC)[reply]
In this case, that function would be one way to represent the color space conversion matrices. For example, if you were using MATLAB to perform your image processing, you could use this reference to review the matrix multiplication that would define f. If you were using CoreGraphics ColorSpace, you could create and pass in your own function and define your own colorspace. Many standard color-spaces exist - and they would therefore have a standard function: for example, Rec. 601, Rec. 709, and Rec. 2020 all standardize the format conversion mathematics, including a colorspace definition. You can represent them as 3x3 matrices of standard coefficients, and you should always look those coefficients up in a reliable reference (like the technical specification itself). Speaking from experience - many implementations claim to use one of these standards, but fail abysmally if measured against calibrated equipment. Many more times, I have seen software implementations that report a standard colorspace but use a home-made set of coefficients. So, tread with caution: the standardized colorspace might not be the one that's in common use and commonly called "standard." Nimur (talk) 16:07, 1 July 2014 (UTC)[reply]
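As a concrete instance of the 3x3 conversion described above, here is the widely published matrix for linear RGB with Rec. 709 / sRGB primaries (D65 white point) to CIE XYZ; per the caution above, verify any coefficients against the specification itself before relying on them:

```python
# One concrete 3x3 colour-space conversion: linear RGB (Rec. 709 / sRGB primaries, D65)
# to CIE XYZ. Coefficients are the widely published ones; check them against the spec.

import numpy as np

RGB_TO_XYZ = np.array([
    [0.4124, 0.3576, 0.1805],   # X row
    [0.2126, 0.7152, 0.0722],   # Y row (these are also the Rec. 709 luma weights)
    [0.0193, 0.1192, 0.9505],   # Z row
])

def rgb_to_xyz(rgb_linear):
    """Convert a linear-light RGB triple (0..1, already gamma-decoded) to XYZ."""
    return RGB_TO_XYZ @ np.asarray(rgb_linear, dtype=float)

print(rgb_to_xyz([1.0, 1.0, 1.0]))   # white -> approximately the D65 white point (0.9505, 1.0, 1.089)
print(rgb_to_xyz([1.0, 0.0, 0.0]))   # pure red primary -> its XYZ coordinates

# For comparison, the older Rec. 601 luma weighting uses different coefficients:
REC601_LUMA = np.array([0.299, 0.587, 0.114])
print(float(REC601_LUMA @ np.array([1.0, 0.0, 0.0])))   # red contributes more to Rec. 601 luma
```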
See Chromaticity - it has a good description of the diagram. The CIE color spaces attempt to make perceptual brightness one of the 3 axes in their system. For example the L in the Luv system is luminance, and by setting that constant and varying u,v you can create the CIELUV chromaticity diagram. CIE_1931#CIE_xy_chromaticity_diagram_and_the_CIE_xyY_color_space explains the transformation from the XYZ to the xyY color space that is used for the commonly seen CIE 1931 diagram. Katie R (talk) 13:24, 1 July 2014 (UTC)[reply]
  • "in concept, all RGB combinations of a certain brightness are in that triangle" -- This seemed to be the answer to my question, but now I am having doubts that it can be true. When I look at the triangle in question, it seems to me that the vertices are (255,0,0), (0,255,0) and (0,0,255), and the centre is (255,255,255). I can't understand how (255,255,255) can possibly have the same brightness as, say, (255,0,0) in any sensible physical or perceptual sense, since the former has all the brightness of the latter (R = 255) plus a whole lot extra. 86.167.19.242 (talk) 17:45, 1 July 2014 (UTC)[reply]
The vertices represent the chromaticity of each primary, independent of the luminosity. It is possible for all three primaries to have vastly different luminosities, but the slice of the color space represented by the diagram sets the luminosity value constant on all three. Katie R (talk) 18:30, 1 July 2014 (UTC)[reply]
A 3D rendering of the edges of an RGB color cube projected to the xyY color space with the CIE 1931 diagram drawn slicing through it could be interesting, showing how the R, G and B vertices are projected onto the plane. Unfortunately I have no idea where to find one, and I don't have the time to make one. — Preceding unsigned comment added by Katie Ryan A (talkcontribs) 18:37, 1 July 2014 (UTC)[reply]
Sorry, I don't understand how your reply resolves my question. Are you saying that (255,0,0) and (255,255,255) have the same luminosity? Is this the same as saying that they have the same brightness? 86.167.19.242 (talk) 19:24, 1 July 2014 (UTC)[reply]
Luminosity is pretty much the same as perceived brightness - it's the term the CIE uses when talking about how their standard observer perceives brightness. I'm saying that (255,0,0) and (255,255,255) might not both exist on that diagram. The red corner of the triangle has the same x,y coordinates as the (255,0,0) color, but the point on that plane of equal luminosity may actually be something like (200,0,0) or (300,0,0) which gets limited to (255,0,0) for display. Katie R (talk) 19:59, 1 July 2014 (UTC)[reply]
The first response in this thread, above, stated that "The corner points have [RGB] coordinates [255 0 0] [0 255 0] and [0 0 255]". I have been assuming that this information was correct. 86.167.19.242 (talk) 20:44, 1 July 2014 (UTC)[reply]
The animated diagram presents the only real color space that your screen can show. Identify the one bright spot that is not rotating. That is the (255,255,255) white point. The 3-D array is rotating about the "greys" axis from the white point to black (0,0,0). Looking along that axis towards the origin you see the rotating triangle of [RGB] points. We can speak of an analytical brightness variation only along that greys axis. Surfaces of constant perceptual brightness (PB) cut across the axis. The corner coordinates [RGB] need not lie on exactly the same PB plane and certainly do not lie on the same PB plane as the (255,255,255) white point. However the CIE 1931 color space and RGB triangles drawn on it are 2-D representations that hide the PB dimension (greys axis) that disappears "into the paper". 84.209.89.214 (talk) 00:07, 2 July 2014 (UTC)[reply]
The two-dimensional chromaticity diagram is a projection of the 3D color space, not a slice through it. Each point on the 2D diagram represents a line through the origin in the 3D space, not a point in the 3D space. The line through the origin consists of all possible brightnesses of a particular color (hue and saturation). The RGB values (1,0,0), (2,0,0), ..., (255,0,0) all map to the same point in the 2D diagram (one of the vertices of the triangle). Likewise the RGB values (1,1,1), ..., (255,255,255) all map to the same point (the "white point"). The white point is in the interior of the triangle, but it doesn't have to be at any of the centers of the triangle, and generally it isn't. You can project the 3D space into 2D in many different ways, and depending on which one you pick, the white point will be in a different location relative to the vertices. But in every projection, the same colors are inside the triangle and the same colors are outside. -- BenRG (talk) 02:34, 2 July 2014 (UTC)[reply]
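A short sketch of the projection BenRG describes, showing that every brightness of a given colour maps to the same (x, y) point; the RGB-to-XYZ matrix is the same widely published sRGB/Rec. 709 one quoted earlier and is illustrative only:

```python
# Every brightness of a given colour maps to the same (x, y) chromaticity point,
# because x and y depend only on the ratios of X, Y and Z.
# The RGB->XYZ matrix is the widely published one for linear sRGB/Rec. 709 primaries (D65).

import numpy as np

RGB_TO_XYZ = np.array([
    [0.4124, 0.3576, 0.1805],
    [0.2126, 0.7152, 0.0722],
    [0.0193, 0.1192, 0.9505],
])

def chromaticity(rgb_linear):
    """Project a linear RGB triple to the 2-D (x, y) chromaticity coordinates."""
    X, Y, Z = RGB_TO_XYZ @ np.asarray(rgb_linear, dtype=float)
    s = X + Y + Z
    return (X / s, Y / s)

# Dim red, medium red and full red all land on the same point (a vertex of the triangle):
for level in (0.1, 0.5, 1.0):
    print(level, chromaticity([level, 0.0, 0.0]))   # always about (0.64, 0.33)

# The white point (equal R, G, B) lands in the interior, but not at the centroid:
print("white:", chromaticity([1.0, 1.0, 1.0]))      # about (0.3127, 0.3290), i.e. D65
```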