Talk:Diffraction-limited system

Latest comment: 5 years ago by Gah4 in topic Implications for digital photography

Define variables

The article gives a formula without defining the variables.

  • Even worse, it doesn't even give the actual formula for the diffraction limit. It gives the first "approximation" of the limit. LesPaul75talk 21:22, 15 February 2011 (UTC)
I added the defs. I don't know of a better formula, but that's the one in the Abbe memorial; I'm not sure why it's called an approximation; maybe just that there's no exact answer. Dicklyon (talk) 06:37, 16 February 2011 (UTC)
It doesn't say what d is, though.

Suggestion

Could the article give some examples with which people might be more familiar, for instance photography? I read the article but really didn't understand what it was talking about. -mattbuck (Talk) 18:53, 15 August 2013 (UTC)

Implications for digital photography

This section has many faults, including not citing its sources, but the main one is that it is just wrong. The concept of a pixel-size-based 'diffraction limit' for a digital camera, as proposed here, is unsupported by any theory or, for that matter, by observational evidence. Over the next few days I'll try to rewrite the section, supposing such a section is needed. I suspect the source for this, had it been cited, is the 'Cambridge in Colour' web site, which expounds this theory. — Preceding unsigned comment added by Bobn2 (talk • contribs) 06:10, 11 July 2014 (UTC)

Edited this section; in the end, I couldn't keep any of the previous version. In fact, I'm not convinced this section is needed at all, but the new version is, I believe, right and does cite its sources. Bobn2 (talk) 16:24, 11 July 2014 (UTC)

I do think it seems odd that it suggests full-frame cameras only have about 10-megapixel sensors... and that's if you consider a monochrome / panchromatic B&W model, or a colour one that doesn't decimate the output of the Bayer-filtered array at all but does a best-effort interpolation (non-interpolated colour being more like 2.5mpx). SLRs can use apertures wider than f/8 (lower f-numbers), can't they?
*googles for "full frame camera"* ... so, Canon's flagship model as of now (the 2017 EOS 5DS R) offers a 50-megapixel sensor. Unless something very strange is going on, I would suggest the mathematics here are highly suspect. Canon aren't a company that often gives the impression of not knowing what they're doing. And I can't say my old Powershot A720 IS gives obviously blurred images even if automatically stopped down to f/8 or even f/11 when using full optical zoom in bright conditions (summer midday at lower latitudes), even though its sensor is much smaller (something like 1/2.5 inch, or about 10mm diagonal rather than 43mm - suggesting a maximum effective resolution in those conditions of 0.6mpx or even 0.4mpx, when it actually still produces decently sharp 8mpx ones). The EOS is also described as going up against Nikon rivals with 36, 20, 16 and 14.5mpx... There are also Canons available with 20, 22 and 30mpx, Sonys with 24 and 42mpx, Pentax with 36, Leica with 24... and with prices ranging between $1000 and $7000, even before you buy lenses, and the cameras being very much marketed towards professionals, I doubt they'd get away with selling cameras that claim 15 to 50 megapixel resolutions but actually only provide 10 megapixel-equivalent clarity or less, as the users would soon notice the blurring when zooming into the captured images. And beyond this regular pro grade, there are really high-end "medium format" models (STARTING at $7,000, and going up to $33,000 or more) with sensors that have up to 2.5x the image area and as much as 100 megapixels (so the equivalent of ~40mpx at full frame, or twice the spatial resolution of 10mpx on each axis... possibly there's a 2x or even 4x miscalculation in the equation somewhere?). Also, bear in mind that 4K video counts as just over 8mpx, and would work quite comfortably as a slight letterbox inside a ~9.8mpx (3840x2560) full-frame sensor.
Super hi-vision video would therefore need a sizeable quad-full-frame sensor at the same spatial resolution ("large format"?), as it has four times the pixel count of 4K... or essentially 3" x 2", making for a pretty unwieldy device that would certainly be larger than the ~40mpx models currently in production.
Obviously the uses for such cameras are a bit limited, as printing out a full 3:2 aspect 10mpx image at 200dpi would produce a medium-small 18x12 inch (or roughly A3) poster, while doing the same with a 40mpx image would produce a large 36x24 inch (~A1) print, at or beyond which size you can easily afford lower resolution as it won't often be subjected to super close scrutiny. The medium-format extended-resolution models would be essentially for billboard-size imaging... same as medium-format film was. But, all the same, there is a need to take pictures of that size for those uses, and the equation suggests that, if using an f/8 lens and the same 3:2 aspect, you'd need a huge 114 x 76mm sensor (4.5 x 3 inch) to support that.
Thinking about it, there must be some term missing from it, as the size of the image that comes out of the back of a lens also depends on the viewing angle out of the front of it (regular, wide, narrow, telephoto, fisheye...), how tightly it reduces that image down, the focal distance, and particularly how far the sensor is from the outlet of the lens - exactly the same as the effect you get when adjusting the zoom/focus of a projector and how far away it is from the target screen. The further the projector is from the screen (with the same lens strength and zoom setting), the larger the image... (It's hard to say much about spatial resolution given how most digital projector images are still far lower resolution than the potential optical maximum, with few 4K models available and even FHD ones being somewhat expensive vs WXGA or SVGA models whose pixels exhibit very sharp, straight-edged screendoor effects. However, the main thing that seems to change is the brightness; the image can still be made as sharp with tight zoom and close positioning as with wide zoom and a distant position, suggesting the potential maximum spatial resolution is much higher than currently achieved.)
After all, if the aperture is all that matters, the size of the sensor shouldn't matter much. You could have a small sensor with tiny pixels very close to the lens output, or a larger one with chunky pixels some distance from it, and get the same result. There's a term missing somewhere, or maybe the whole thing is just misguided. Wouldn't the aperture be more involved in the *angular* resolution of the image, and the diffraction limit for the sensor (which is spatial instead) be independent from that? So, issues of chromatic aberration aside, the wider the image you can get out of the lens at the same sensor/focal distance, the higher the resolution?
I feel *I'm* also misunderstanding things slightly here (diffraction etc was never my strongest suit in physics lessons...), but working on a purely empirical, prove-it-yourself basis, the theory here doesn't seem to match up with what can be observed with a cheap digicam with a known sensor size and resolution, and that shows or allows control of the aperture setting, plus the zoom function of MS Paint.
NB, you can get SLR lenses for full-frame cameras that can stop down to at least f/2.8... which changes the game somewhat, doesn't it? In fact, that's the lower bound of a very common, near-default zoom lens for the Canon EOS range, the 24-70mm model (clocking in at a cool $1700, fwiw, and pretty long... so again, not something that's exactly a point-and-shoot). There's also a cheaper f/4 model available (as well as other zoom options in both aperture ratings) - which still doubles the spatial and quadruples the areal resolution... though, both of those can also go to much tighter apertures, and that's something that would be used quite a lot to improve *focal* clarity (widening the depth of field and sharpening things up in the centre of it in extreme cases) when there's enough light for it and special soft-focus effects aren't desired.
And in fact there are fixed-focal-length 35, 50 and 85mm options at f/1.8, f/1.4 or even f/1.2 (as well as 18 or 21mm wide-angles at f/2.8), for maximum light collection and extreme DoF effects (bokeh, portraits with blurred backgrounds but a sharp foreground even in a fairly small studio, etc)... which suggests *really* high-resolution possibilities even under the bounds of this formula, more than 4x the spatial and 16x the areal density (so 150+ megapixel wouldn't be out of reach, or even about 40 if you averaged all the subpixels of a Bayer quad into a single whole... in fact at f/1.2, the latter could easily manage 50 at full frame and over 100 at medium format...).
Indeed the tightest aperture listed for any lens on a shortlist of "best EOS lenses" (linked from the page of full-frame SLR bodies I was looking at above) is f/4.5 thru f/5.6 for a super-telephoto (100 to 400mm), which is noted as having "poor low-light performance", though that's sort of expected as that kind of zoom cuts out a lot of the available light anyway. The implication being that you'd be unlikely to stop it down any tighter unless there was some special requirement for it, e.g. shooting directly into the sun or trying for an extra long exposure without using an NDF to restrict the light input still further. So essentially the lower bound on resolution - with green light - would be around 20mpx, with such a lens (somewhat higher for blue, lower for red, but shouldn't be noticeable in the main).
So, hmm... bit weird. Why is f/8 being used as the standard here in the first place, then? The usable aperture range for these cameras seems about the same as my pocket-size Powershot rangefinder model, and indeed my mobile phone's camera seems to use roughly the same, so the argument that a smaller sensor in a point-and-shoot usually comes with wider apertures doesn't really wash. 51.7.49.61 (talk) 18:45, 16 August 2017 (UTC)
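For anyone checking the ~10-megapixel figure debated above, here's a quick back-of-envelope sketch. The assumptions are mine, not from the thread: 550 nm green light, a 36 x 24 mm full-frame sensor, and the standard Airy-disk formula. The three pixel-pitch conventions are illustrative; part of the confusion above is exactly that different sources pick different ones, which shifts the answer by 4x or 16x.

```python
# Back-of-envelope check of the "10 megapixels at f/8" claim.
# Assumptions: 550 nm green light, 36 x 24 mm full-frame sensor.

WAVELENGTH_MM = 550e-6            # 550 nm expressed in mm
SENSOR_W, SENSOR_H = 36.0, 24.0   # full-frame dimensions, mm

def megapixels(f_number, pitch_rule):
    """Megapixel count for a given f-number and pixel-pitch convention."""
    airy_radius = 1.22 * WAVELENGTH_MM * f_number   # first Airy minimum, mm
    pitch = pitch_rule(airy_radius)                 # pixel pitch, mm
    return (SENSOR_W / pitch) * (SENSOR_H / pitch) / 1e6

# Three common (and mutually inconsistent) "diffraction-limited pitch" rules:
rules = {
    "pitch = Airy diameter":    lambda r: 2 * r,
    "pitch = Airy radius":      lambda r: r,
    "pitch = half Airy radius": lambda r: r / 2,   # roughly Nyquist sampling
}

for name, rule in rules.items():
    print(f"f/8, {name}: {megapixels(8, rule):.1f} MP")
```

At f/8 the three conventions give roughly 7.5, 30 and 120 MP respectively, so the quoted ~10 MP corresponds to the most conservative rule (one pixel per Airy diameter), and the suspected "2x or even 4x miscalculation" per axis is really an ambiguity of convention. Re-running with f_number=2.8 scales everything up by about 8x, consistent with the fast-lens comments below.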
Until recently, higher quality digital cameras have had a low-pass spatial filter in front of the sensor array. This is needed to prevent aliasing. More ordinary digital cameras have resolution limited by the lens, and don't need a low-pass filter. In an ideal sampled system, you low-pass filter with a sharp drop just below the Nyquist limit. Since actual filters aren't ideal, the cut is somewhat lower. For many lenses, spherical aberration is the limit at larger apertures (smaller f/ number), and diffraction at small apertures (large f/ number). Note also that in photography, stop down means small aperture or large f/ number. When the sensor resolution reaches the point where no lens can exceed its Nyquist limit, then the low-pass filter isn't needed. As noted, the highest resolution for most lenses is about f/8, where aberration and diffraction cross. Larger sensors are not used for resolution reasons, but for light-gathering sensitivity. Gah4 (talk) 20:14, 24 August 2017 (UTC)
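The crossing of aberration and diffraction near f/8 described above can be illustrated with a toy model. This is purely illustrative: the diffraction term is the real Airy-disk diameter, but the 1/N^3 aberration scaling is a textbook approximation for transverse spherical aberration, and the coefficient is made up, tuned so the optimum lands near f/8 as the comment says; real lenses vary widely.

```python
# Toy model: total blur = quadrature sum of a diffraction term that grows
# with f-number and a spherical-aberration term that shrinks when stopped down.
import math

WAVELENGTH_UM = 0.55  # green light, micrometres

def blur_um(n, aberration_coeff=3200.0):
    """Hypothetical blur-spot diameter (um) at f-number n.

    Diffraction: Airy-disk diameter, 2.44 * lambda * N.
    Aberration: transverse spherical aberration, roughly proportional to
    1/N^3; the coefficient is invented to put the crossover near f/8.
    """
    diffraction = 2.44 * WAVELENGTH_UM * n
    aberration = aberration_coeff / n**3
    return math.hypot(diffraction, aberration)

best = min(range(2, 23), key=blur_um)
print(f"sharpest f-number in this toy model: f/{best}")
```

With these made-up numbers the minimum total blur falls at f/8: opening up to f/4 lets aberration dominate, while stopping down to f/16 lets diffraction dominate, which is the trade-off the comment describes.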
Note also that there is a not-so-obvious factor of two that comes from Nyquist sampling theory. Before digital, resolution was in line (pairs)/mm, usually written as l/mm (sometimes l/in). Using pixels/mm or pixels/in, the number is twice as large. In any case, the goal with the high resolution sensors is to get past the diffraction limit. That is, such that the result is diffraction limited, and not sensor limited. Gah4 (talk) 20:22, 24 August 2017 (UTC)
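To put numbers on that factor of two: a diffraction-limited lens in incoherent light has a hard cutoff at 1/(lambda * N) cycles (line pairs) per mm, and Nyquist sampling needs two pixels per line pair. A sketch with my own figures (550 nm, f/8, 36 x 24 mm frame); note this is an upper bound, since the lens MTF falls to zero at the cutoff, which is why real anti-alias filters cut somewhat lower, as noted above.

```python
# Line pairs per mm vs pixels per mm at the diffraction cutoff.
# Assumptions: 550 nm green light, f/8, 36 x 24 mm full-frame sensor.

WAVELENGTH_MM = 550e-6
F_NUMBER = 8

# Incoherent diffraction cutoff: 1 / (lambda * N), in line pairs per mm.
cutoff_lp_mm = 1 / (WAVELENGTH_MM * F_NUMBER)

# Nyquist: two pixels per line pair, so pixels/mm is twice the lp/mm figure.
pixels_per_mm = 2 * cutoff_lp_mm
pitch_um = 1000 / pixels_per_mm

print(f"cutoff: {cutoff_lp_mm:.0f} lp/mm")
print(f"Nyquist pixel pitch: {pitch_um:.2f} um")
print(f"pixels to fully sample a 36 x 24 mm frame: "
      f"{36 * pixels_per_mm * 24 * pixels_per_mm / 1e6:.0f} MP")
```

This comes out near 227 lp/mm, a 2.2 um pitch, and roughly 180 MP for a full frame at f/8, so a sensor fine enough that no lens can exceed its Nyquist limit is well beyond current full-frame pixel counts, matching the point that such sensors are "past the diffraction limit".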

External links modified

Hello fellow Wikipedians,

I have just added archive links to one external link on Diffraction-limited system. Please take a moment to review my edit. You may add {{cbignore}} after the link to keep me from modifying it if I keep adding bad data, but formatting bugs should be reported instead. Alternatively, you can add {{nobots|deny=InternetArchiveBot}} to keep me off the page altogether, but this should be used as a last resort. I made the following changes:

When you have finished reviewing my changes, please set the checked parameter below to true or failed to let others know (documentation at {{Sourcecheck}}).

An editor has reviewed this edit and fixed any errors that were found.

  • If you have discovered URLs which were erroneously considered dead by the bot, you can report them with this tool.
  • If you found an error with any archives or the URLs themselves, you can fix them with this tool.

Cheers.—cyberbot II (Talk to my owner) 14:10, 29 March 2016 (UTC)

low pass filters

The digital photography section doesn't mention the optical low-pass filter that is on most DSLR cameras. (Some now have enough resolution to be diffraction limited with even the better lenses, and leave it out.) Gah4 (talk) 14:59, 14 October 2016 (UTC)

Implications for digital photography

For the section Implications for digital photography, it might be worth mentioning that many higher quality digital cameras have an optical low-pass filter in front of the sensor array. The usual one is made using birefringent material. Some now have high enough resolution, and don't use the LPF. Most non-interchangeable lens digital cameras have an optical system designed to be diffraction limited, and avoid the LPF. Gah4 (talk) 03:56, 16 September 2018 (UTC)