New framework
KYN, your new extensive discussion of a three-part image quality framework is interesting, and might be OK, but it's really not clear yet, as it's unsourced. Where did you get this framework? If you made it up, let's try to find sources that resemble it, and use those. I don't think it helps to have that much "authoritative sounding" framework if no source is cited for it. Dicklyon (talk) 05:11, 1 August 2009 (UTC)
- Calling it an authoritative framework is dressing it up in fancier words than it deserves; it's more like an attempt to categorize the various effects that are mentioned in the article into related categories. The current article is a more or less random list of things that relate to image quality, and my intention was to have those which belong to the image formation process, the measurement process, coding, and perception in separate sections. Does that really bring us close to original research? Anyway, your point about sources is taken, and I will try to find something useful next time I get to my office. --KYN (talk) 22:21, 1 August 2009 (UTC)
- That's about what I figured. And yes, I do think it's pretty much OR to split up the space that way, unless you find sources. Probably you'll find at least a couple of alternative ways of framing the area, and we can mention more than one and perhaps pick one to organize the rest of the article around. Dicklyon (talk) 22:25, 1 August 2009 (UTC)
- There are some sources that may be of help here:
- [1] Bernd Jähne, Horst Haußecker, and Peter Geißler (editors), Handbook of Computer Vision and Applications, Volume 1, Academic Press, 1999. Parts I and II are the most relevant for how images are produced by a camera.
- [2] Bernd Jähne and Horst Haußecker (editors), Computer Vision and Applications, Academic Press, 2000. Part I is most relevant for how images are produced by a camera.
- [3] Bernd Jähne, Digital Image Processing, Springer, 2005. Part II is most relevant for how images are produced by a camera.
- The first one is part of a three-volume series that also covers image processing/computer vision and applications, and the second one is a condensed version of these three volumes. Therefore both give more or less the same presentation of the process from light interaction with objects in the scene to how a digital image is produced by the camera. The third one picks up certain parts from the first two but goes more into the theoretical parts of computer vision.
- In [1,2], the authors make a distinction between optical image formation, i.e., what happens with the light between the light source and the detector, and what happens at the detector, referred to as image sensing, in the sense that the two are described in separate chapters. The term image formation is sometimes used to mean the full range from light source to digital image, and sometimes only what happens before the sensing. In Figure 1.1 in [2] there is a finer division into (1) interaction between object and light, producing radiance that is independent of any viewer or sensor, (2) interaction between light and the imaging system (e.g. lenses), producing irradiance on the detector, (3) interaction between light and the photo-sensor to produce an electrical signal, and (4) ADC sampling to produce a digital image.
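- A minimal sketch (in Python) of that four-stage chain; the function name and all parameter values are illustrative assumptions, not numbers taken from the sources:

    # Hypothetical linear model of the four-stage chain in Figure 1.1 of [2];
    # every parameter value here is made up for illustration.
    def capture(scene_radiance, optics_transmission=0.9, quantum_efficiency=0.5,
                exposure_time=0.01, gain=2.0, bit_depth=8):
        # (1)-(2): the optics map scene radiance to irradiance on the detector
        irradiance = optics_transmission * scene_radiance
        # (3): the photo-sensor converts light into an electrical signal
        electrons = quantum_efficiency * irradiance * exposure_time
        # (4): the ADC samples and quantizes the signal into a digital number
        return min(round(gain * electrons), 2**bit_depth - 1)

    print(capture(scene_radiance=20000.0))  # -> 180

Each stage can degrade the result (losses in the optics, noise at the sensor, quantization at the ADC), which is what ties the chain to image quality.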
- In [3], chapter 7, Image Formation, we learn that image formation includes three factors: (1) the geometric part that describes how a light ray reflected from an object in the scene is projected to the image plane, (2) how bright the light of this ray is when it reaches the image plane, and (3) how this brightness is digitized into a digital image. This chain excludes the sensing/measurement process that converts light into an electrical signal.
- My conclusion is that these sources provide support for describing what happens between the light source and the image that is read out from the camera in terms of a chain of processes that all affect the quality of the resulting image. On the other hand, none of the three sources specifically mentions image quality here; instead there are more or less detailed descriptions of the various effects that make the resulting image deviate from the ideal. --KYN (talk) 10:35, 3 August 2009 (UTC)
- I think that's not what we're looking for. There are many sources that describe the imaging process (I'm unclear on why you'd pick three by the same guy), but we need sources that specifically talk about image quality. I've done this myself, in reference to a conceptual "ideal camera", but I didn't include geometric distortion, since I hadn't usually thought of that as a quality issue; indeed, we design cameras with "controlled distortion" and render non-rectilinearly all the time, without a thought about "quality". Abbas El Gamal presented at a recent workshop a view of image quality based on an ideal camera model that differed slightly from mine, but still didn't have geometric distortion as an issue. Probably in some fields, like aerial mapping, it's a big deal, but in typical image quality discussions, the pinhole camera model is not a part of the ideal. So we'd want to see sources that include that in image quality if we're to do so here. Dicklyon (talk) 14:42, 3 August 2009 (UTC)
- Is your point that the pinhole camera is not the ideal model? Sure, there are other camera models for sensing the light field (omni-directional, etc.) and there are other image sensors that do not even sense light (MR, ultrasound, range, etc.), but for your typical (whatever that means) camera, I believe the pinhole model is the ideal. It is not just about geometric distortion: it describes light simply as a ray (in the form of a perfect line), so all objects at all distances appear in focus, there is no diffraction by the aperture, no chromatic aberration, no vignetting, etc.
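- As a minimal illustration of that ideal (the focal length below is an arbitrary choice, not from any source): under the pinhole model a scene point (X, Y, Z) projects to (f·X/Z, f·Y/Z), so every point along a ray lands on the same image point, at any depth, perfectly sharp:

    # Ideal pinhole projection: light is modelled as a perfect ray, so every
    # scene point maps through a single point and every depth is "in focus".
    def pinhole_project(X, Y, Z, f=0.05):  # f: focal length in metres, arbitrary
        return (f * X / Z, f * Y / Z)

    # Two points on the same ray, at very different depths, land on the
    # same image point, equally sharp:
    print(pinhole_project(1.0, 0.5, 2.0))    # -> (0.025, 0.0125)
    print(pinhole_project(10.0, 5.0, 20.0))  # -> (0.025, 0.0125)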
- I agree the geometric distortion may or may not be an issue for image quality. If we are just taking pictures to show where we have been during vacation, it is clearly not an issue, but if we want to do image stitching, or want to apply computer vision techniques to estimate ego-motion, or do 3D reconstruction of the scene from an image sequence, or do augmented reality (this is pretty much what the film industry is about nowadays), then being able to reduce or compensate for the geometric distortion is one of the key issues.
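- For illustration, one common way to model and compensate such distortion is a radial polynomial; a minimal sketch (the coefficients k1 and k2 below are made up, real values come from camera calibration):

    # Simple radial model: a distorted point (xd, yd) relates to the ideal
    # pinhole point (xu, yu) by a polynomial in the squared radius r2.
    def distort(xu, yu, k1=-0.25, k2=0.05):
        r2 = xu * xu + yu * yu
        s = 1 + k1 * r2 + k2 * r2 * r2
        return xu * s, yu * s

    # Compensation inverts the model, here by fixed-point iteration:
    def undistort(xd, yd, k1=-0.25, k2=0.05, iterations=20):
        xu, yu = xd, yd
        for _ in range(iterations):
            r2 = xu * xu + yu * yu
            s = 1 + k1 * r2 + k2 * r2 * r2
            xu, yu = xd / s, yd / s
        return xu, yu

    xd, yd = distort(0.8, 0.6)
    print(undistort(xd, yd))  # -> approximately (0.8, 0.6)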
- I use A. El Gamal and H. Eltoukhy, "CMOS Image Sensors", IEEE Circuits and Devices Magazine, Vol. 21, Iss. 3, May-June 2005, as literature in one of my courses; it provides an excellent overview of the sensing mechanisms for CMOS and can be used here, but it doesn't cover the rest. Is your (or Abbas El Gamal's) ideal camera model documented somewhere? --KYN (talk) 08:35, 4 August 2009 (UTC)
Another view of image quality
Hello,
I'm wondering whether image quality shouldn't be understood from a human-perception point of view, as psychologists do with, for example, Gestalt theory or the ecological approach to visual perception (for a good starting point on the subject, see Gaetano Kanizsa's book Grammatica del vedere). In that view, it is not the image formation technologies that are studied but, on the contrary, the brain's interpretation of image contents and the related image quality measurement.
Moreover, we could investigate the relationship between psychological models and digital information technology, i.e., the domain of image quality assessment (IQA). IQA is part of quality of experience (QoE), which puts the end-user at the center of any digital process (digitizing, compressing, broadcasting, and so on), unlike the now classical quality of service (QoS). A good starting reference is the paper "Mean squared error: Love it or leave it? A new look at signal fidelity measures" by Zhou Wang and Alan Bovik, IEEE Signal Processing Magazine, vol. 26, no. 1, January 2009.
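To make the contrast concrete: MSE is a pure signal-fidelity measure with no perceptual model, which is the starting point of Wang and Bovik's critique; perceptual metrics such as SSIM replace it with structure-aware comparisons. A minimal sketch (images flattened to plain lists for brevity):

    # Mean squared error between a reference image and a distorted one.
    # MSE weights every pixel error equally, whether or not a human would
    # notice it.
    def mse(reference, distorted):
        return sum((r - d) ** 2 for r, d in zip(reference, distorted)) / len(reference)

    # Two perceptually very different distortions, identical MSE:
    ref = [100, 100, 100, 100]
    print(mse(ref, [110, 110, 110, 110]))  # mild uniform shift        -> 100.0
    print(mse(ref, [120, 100, 100, 100]))  # one strong local artifact -> 100.0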
Regards
--Stéfane 14:29, 27 February 2014 (UTC) — Preceding unsigned comment added by Stefane.paris (talk • contribs)
Suggestions
Good article, if a bit confused. Two suggestions: 1) Contrast is not also known as gamma; in fact gamma is something entirely different. 2) Diffraction should be listed as a source of reduced IQ. --Jack Hogan (talk) 08:06, 5 July 2011 (UTC)
- Gamma has more than one use. (There are only so many Greek letters.) In silver halide photography, gamma is the slope of the characteristic or Hurter–Driffield curve, and is therefore related to contrast.[1]
- It is also used for gamma correction, regarding the non-linearities in electronic imaging systems. Specifically, the intensity of a CRT image is not linear with respect to the grid-cathode voltage.[2] Gah4 (talk) 13:36, 10 October 2016 (UTC)
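- To make the two uses concrete: film gamma is a slope on density vs. log-exposure axes, while video gamma is the exponent in a power law that gamma correction pre-compensates. A minimal sketch of the latter (2.2 is the conventional CRT-like value, used here only as an illustration):

    # Gamma correction: the signal is encoded with the inverse of the
    # display's power law so the end-to-end response is roughly linear.
    DISPLAY_GAMMA = 2.2  # conventional CRT-like value, for illustration only

    def encode(linear):       # camera/file side: V_out = V_in ** (1/gamma)
        return linear ** (1.0 / DISPLAY_GAMMA)

    def display(encoded):     # CRT side: intensity = voltage ** gamma
        return encoded ** DISPLAY_GAMMA

    print(display(encode(0.5)))  # -> ~0.5, the two non-linearities cancel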
References
- ^ "Basic Sensitometry and Characteristics of Film" (PDF). motion.kodak.com. Kodak. Retrieved 10 October 2016.
- ^ "Linearity and Gamma". www.arcsynthesis.org. Retrieved 10 October 2016.
Adding a tools section?
Does it make sense to add a tools section that mentions some of the available resources for measuring image quality? There are certainly a variety of both research-oriented & commercial ones around, so I'm thinking that starting to list them might be useful. GeekPhotog (talk) 17:41, 6 November 2015 (UTC)
Diffraction
I notice that the article uses the ideal (presumably not diffraction-limited) pinhole camera to start the discussion. Nowhere in the article is diffraction, a fundamental physical limit on image quality, mentioned. In actual pinhole cameras, there is an optimal pinhole size, balancing diffraction blur against the geometric blur from the size of the hole. Gah4 (talk) 13:02, 10 October 2016 (UTC)
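To make "optimal pinhole size" concrete: geometric blur grows roughly like the hole diameter d, while the diffraction (Airy) blur grows like 2.44·λ·f/d; setting the two equal gives d ≈ √(2.44·λ·f). A sketch of that standard back-of-envelope estimate (the example numbers are arbitrary):

    import math

    # Back-of-envelope optimum: set the geometric blur (about d) equal to
    # the Airy-disk diameter (about 2.44 * wavelength * f / d) and solve.
    def optimal_pinhole_diameter(focal_length_m, wavelength_m=550e-9):
        return math.sqrt(2.44 * wavelength_m * focal_length_m)

    d = optimal_pinhole_diameter(0.1)  # 100 mm pinhole-to-film distance, green light
    print(f"{d * 1e3:.2f} mm")         # -> about 0.37 mm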