Image compression is a type of data compression applied to digital images, to reduce their cost for storage or transmission. Algorithms may take advantage of visual perception and the statistical properties of image data to provide superior results compared with generic data compression methods which are used for other digital data.
Lossy and lossless image compression
Image compression may be lossy or lossless. Lossless compression is preferred for archival purposes and often for medical imaging, technical drawings, clip art, or comics. Lossy compression methods, especially when used at low bit rates, introduce compression artifacts. Lossy methods are especially suitable for natural images such as photographs in applications where minor (sometimes imperceptible) loss of fidelity is acceptable to achieve a substantial reduction in bit rate. Lossy compression that produces negligible differences may be called visually lossless.
Methods for lossy compression:
- Transform coding – This is the most commonly used method.
- Discrete Cosine Transform (DCT) – The most widely used form of lossy compression. It is a type of Fourier-related transform, and was originally developed by Nasir Ahmed, T. Natarajan and K. R. Rao in 1974. The DCT is sometimes referred to as "DCT-II" in the context of a family of discrete cosine transforms (see discrete cosine transform). It is generally the most efficient form of image compression (a minimal coding sketch follows this list).
- The more recently developed wavelet transform is also used extensively, followed by quantization and entropy coding.
- Reducing the color space to the most common colors in the image. The selected colors are specified in the color palette in the header of the compressed image, and each pixel references only the index of a color in that palette. This method can be combined with dithering to avoid posterization.
- Chroma subsampling. This takes advantage of the fact that the human eye perceives spatial changes of brightness more sharply than those of color, by averaging or dropping some of the chrominance information in the image (see the subsampling sketch after this list).
- Fractal compression.
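The transform coding pipeline above can be illustrated in a few lines. The following is a minimal sketch, assuming NumPy and SciPy are available, of the DCT-II plus quantization step on a single 8x8 block. The quantization table is the standard JPEG luminance table; a real codec would follow this with zigzag scanning and entropy coding.

```python
import numpy as np
from scipy.fft import dctn, idctn

# Standard JPEG luminance quantization table (quality ~50); larger
# divisors for high frequencies discard the detail the eye misses most.
Q = np.array([
    [16, 11, 10, 16,  24,  40,  51,  61],
    [12, 12, 14, 19,  26,  58,  60,  55],
    [14, 13, 16, 24,  40,  57,  69,  56],
    [14, 17, 22, 29,  51,  87,  80,  62],
    [18, 22, 37, 56,  68, 109, 103,  77],
    [24, 35, 55, 64,  81, 104, 113,  92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103,  99],
])

def encode_block(block):
    """Level-shift, apply the 2-D DCT-II, then quantize (the lossy step)."""
    coeffs = dctn(block - 128.0, norm="ortho")
    return np.round(coeffs / Q).astype(int)

def decode_block(quantized):
    """Dequantize and invert the DCT; reconstruction is only approximate."""
    return idctn(quantized * Q, norm="ortho") + 128.0

block = np.random.randint(0, 256, (8, 8)).astype(float)
restored = decode_block(encode_block(block))
print("max abs error:", np.abs(block - restored).max())
```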
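Chroma subsampling can likewise be sketched directly. The following assumes an image already converted to Y'CbCr, with the chroma planes held as NumPy arrays of even dimensions; the 4:2:0 scheme shown here, which averages each 2x2 chroma block, is one common choice among several subsampling ratios.

```python
import numpy as np

def subsample_420(cb, cr):
    """Average each 2x2 block of the chroma planes (4:2:0 subsampling)."""
    avg = lambda c: (c[0::2, 0::2] + c[0::2, 1::2] +
                     c[1::2, 0::2] + c[1::2, 1::2]) / 4.0
    return avg(cb), avg(cr)

def upsample_420(plane):
    """Nearest-neighbour upsampling back to full resolution for display."""
    return np.repeat(np.repeat(plane, 2, axis=0), 2, axis=1)
```

The luminance plane is kept at full resolution throughout; only the two chroma planes shrink to a quarter of their original size, cutting the raw data roughly in half before any further coding.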
Methods for lossless compression:
- Run-length encoding – used as the default method in PCX and as one of the possible methods in BMP, TGA, and TIFF (see the sketch after this list)
- Area image compression
- Predictive coding – used in DPCM
- Entropy encoding – the two most common entropy encoding techniques are arithmetic coding and Huffman coding (a Huffman sketch follows this list)
- Adaptive dictionary algorithms such as LZW – used in GIF and TIFF
- DEFLATE – used in PNG, MNG, and TIFF
- Chain codes
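As a concrete illustration of the simplest lossless method above, here is a minimal run-length encoding sketch over one row of pixel values. The tuple representation is illustrative only; PCX, BMP, TGA and TIFF each define their own packed binary run markers.

```python
def rle_encode(row):
    """Collapse consecutive equal pixel values into (count, value) runs."""
    runs = []
    for value in row:
        if runs and runs[-1][1] == value:
            runs[-1][0] += 1          # extend the current run
        else:
            runs.append([1, value])   # start a new run
    return [(count, value) for count, value in runs]

def rle_decode(runs):
    """Expand (count, value) runs back into the original row."""
    out = []
    for count, value in runs:
        out.extend([value] * count)
    return out

row = [7, 7, 7, 7, 0, 0, 255, 255, 255]
assert rle_decode(rle_encode(row)) == row
print(rle_encode(row))   # [(4, 7), (2, 0), (3, 255)]
```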
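Entropy encoding can be sketched as well. The following builds a Huffman code over the pixel values of a flattened greyscale image, using only Python's standard library; real codecs (e.g., JPEG's entropy stage) use canonical tables and pack the bits, which is omitted here.

```python
import heapq
from collections import Counter

def huffman_code(data):
    """Return a {symbol: bitstring} map built from symbol frequencies."""
    # Each heap entry is (frequency, tiebreaker, partial code table).
    heap = [(freq, i, {sym: ""}) for i, (sym, freq) in
            enumerate(Counter(data).items())]
    heapq.heapify(heap)
    i = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)   # two least frequent subtrees
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + b for s, b in c1.items()}
        merged.update({s: "1" + b for s, b in c2.items()})
        heapq.heappush(heap, (f1 + f2, i, merged))
        i += 1
    return heap[0][2]

data = [0, 0, 0, 0, 1, 1, 2, 3]
code = huffman_code(data)
bits = "".join(code[s] for s in data)
print(code, len(bits), "bits vs", 8 * len(data), "uncompressed")
```

Frequent pixel values receive short codes and rare ones long codes, which is exactly the property that makes entropy coding effective on the skewed symbol distributions typical of prediction or transform residuals.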
The best image quality at a given compression rate (or bit rate) is the main goal of image compression. However, there are other important properties of image compression schemes:
Scalability generally refers to a quality reduction achieved by manipulation of the bitstream or file (without decompression and re-compression). Other names for scalability are progressive coding or embedded bitstreams. Although it may seem at odds with the idea of a single exact reconstruction, scalability is also found in lossless codecs, usually in the form of coarse-to-fine pixel scans. Scalability is especially useful for previewing images while downloading them (e.g., in a web browser) or for providing variable-quality access to, e.g., image databases. There are several types of scalability:
- Quality progressive or layer progressive: The bitstream successively refines the reconstructed image.
- Resolution progressive: First encode a lower image resolution; then encode the difference to higher resolutions (see the sketch after this list).
- Component progressive: First encode a grey-scale version; then add full color.
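Resolution-progressive coding, mentioned in the list above, can be sketched as one level of a Laplacian-pyramid-style decomposition (compare the Burt and Adelson reference below): transmit a half-resolution image first, then a residual that refines it. This assumes NumPy and even image dimensions; a real scheme would entropy-code both layers.

```python
import numpy as np

def split(image):
    """Return (coarse half-resolution image, full-resolution residual)."""
    coarse = (image[0::2, 0::2] + image[0::2, 1::2] +
              image[1::2, 0::2] + image[1::2, 1::2]) / 4.0
    upsampled = np.repeat(np.repeat(coarse, 2, axis=0), 2, axis=1)
    return coarse, image - upsampled   # residual refines the preview

def merge(coarse, residual):
    """Reconstruct the full-resolution image from the two layers."""
    return np.repeat(np.repeat(coarse, 2, axis=0), 2, axis=1) + residual

image = np.random.rand(4, 4)
coarse, residual = split(image)
assert np.allclose(merge(coarse, residual), image)
```

A decoder that has received only the coarse layer can already display a usable preview; each further layer it receives refines the picture, which is the behaviour a web browser exploits when rendering a partially downloaded image.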
Region of interest coding. Certain parts of the image are encoded with higher quality than others. This may be combined with scalability (encode these parts first, others later).
Meta information. Compressed data may contain information about the image which may be used to categorize, search, or browse images. Such information may include color and texture statistics, small preview images, and author or copyright information.
Processing power. Compression algorithms require different amounts of processing power to encode and decode. Algorithms that achieve high compression ratios typically require more processing power.
The quality of a compression method is often measured by the peak signal-to-noise ratio (PSNR), which quantifies the amount of noise introduced by lossy compression of the image. However, the subjective judgment of the viewer is also regarded as an important measure, perhaps the most important one.
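As a worked example of the PSNR measure, the following sketch computes it for 8-bit images held as NumPy arrays, using PSNR = 10 * log10(MAX^2 / MSE) with MAX = 255.

```python
import numpy as np

def psnr(original, compressed, peak=255.0):
    """Peak signal-to-noise ratio in decibels; higher means less noise."""
    mse = np.mean((original.astype(float) - compressed.astype(float)) ** 2)
    if mse == 0:
        return float("inf")  # identical images: no noise was introduced
    return 10.0 * np.log10(peak ** 2 / mse)
```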
Notes and references
- "Image Data Compression".
- Nasir Ahmed, T. Natarajan and K. R. Rao, "Discrete Cosine Transform", IEEE Transactions on Computers, C-23: 90–93, January 1974.
- Burt, P.; Adelson, E. (1 April 1983). "The Laplacian Pyramid as a Compact Image Code". IEEE Transactions on Communications. 31 (4): 532–540. CiteSeerX 10.1.1.54.299. doi:10.1109/TCOM.1983.1095851.
- Shao, Dan; Kropatsch, Walter G. (February 3–5, 2010). Špaček, Libor; Franc, Vojtěch (eds.). "Irregular Laplacian Graph Pyramid" (PDF). Computer Vision Winter Workshop 2010. Nové Hrady, Czech Republic: Czech Pattern Recognition Society.
- Image compression – lecture from MIT OpenCourseWare
- Image Coding Fundamentals
- A study about image compression – with basics, comparing different compression methods like JPEG2000, JPEG and JPEG XR / HD Photo
- Data Compression Basics – includes comparison of PNG, JPEG and JPEG-2000 formats
- FAQ: What is the state of the art in lossless image compression? from comp.compression
- IPRG – an open group related to image processing research resources