Audio coding format


An audio coding format[1] (or sometimes audio compression format) is a content representation format for storage or transmission of digital audio (such as in digital television, digital radio and in audio and video files). Examples of audio coding formats include MP3, AAC, Vorbis, FLAC, and Opus. A specific software or hardware implementation capable of audio compression and decompression to/from a specific audio coding format is called an audio codec; an example of an audio codec is LAME, which is one of several different codecs that implement encoding and decoding of audio in the MP3 audio coding format in software.
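To make the format/codec distinction concrete, the following is a minimal sketch that drives the LAME command-line encoder from Python, assuming the lame tool is installed and on PATH; the file names are placeholders. It shows one codec (LAME) producing and consuming data in one coding format (MP3).

```python
# Minimal sketch: use the LAME codec to encode a PCM .wav file into the MP3
# coding format, then decode it back to PCM. Assumes the "lame" command-line
# tool is installed and on PATH; file names are placeholders.
import subprocess

subprocess.run(["lame", "-V", "2", "input.wav", "encoded.mp3"], check=True)        # VBR encode
subprocess.run(["lame", "--decode", "encoded.mp3", "roundtrip.wav"], check=True)   # decode to PCM
```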

Comparison of coding efficiency between popular audio formats

Some audio coding formats are documented by a detailed technical specification document known as an audio coding specification. Some such specifications are written and approved by standardization organizations as technical standards, and are thus known as an audio coding standard. The term "standard" is also sometimes used for de facto standards as well as formal standards.

Audio content encoded in a particular audio coding format is normally encapsulated within a container format. As such, the user normally doesn't have a raw AAC file, but instead has a .m4a audio file, which is an MPEG-4 Part 14 container containing AAC-encoded audio. The container also contains metadata such as title and other tags, and perhaps an index for fast seeking.[2] A notable exception is MP3 files, which are raw encoded audio without a container format. De facto standards for adding metadata tags such as title and artist to MP3s, such as ID3, are hacks which work by appending the tags to the MP3 and then relying on the MP3 player to recognize the chunk as malformed audio coding and therefore skip it. In video files with audio, the encoded audio content is bundled with video (in a video coding format) inside a multimedia container format.
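As an illustration of how such an appended tag can be read back, the sketch below parses an ID3v1 tag, which is stored as a fixed 128-byte record at the very end of an MP3 file beginning with the ASCII marker "TAG" (ID3v2 tags, by contrast, are prepended at the start of the file and are more complex). The function name and the choice of Latin-1 text decoding are illustrative assumptions.

```python
# Minimal sketch: read an ID3v1 tag appended to the end of an MP3 file.
# ID3v1 is a fixed 128-byte record: "TAG", then 30-byte title, 30-byte artist,
# 30-byte album, 4-byte year, 30-byte comment, 1-byte genre.
def read_id3v1(path):
    with open(path, "rb") as f:
        f.seek(-128, 2)            # 2 = seek relative to the end of the file
        block = f.read(128)
    if block[:3] != b"TAG":
        return None                # no ID3v1 tag appended
    return {
        "title":  block[3:33].rstrip(b"\x00 ").decode("latin-1", "replace"),
        "artist": block[33:63].rstrip(b"\x00 ").decode("latin-1", "replace"),
        "album":  block[63:93].rstrip(b"\x00 ").decode("latin-1", "replace"),
        "year":   block[93:97].decode("latin-1", "replace"),
    }
```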

An audio coding format does not dictate all algorithms used by a codec implementing the format. An important part of how lossy audio compression works is by removing data in ways humans can't hear, according to a psychoacoustic model; the implementer of an encoder has some freedom of choice in which data to remove (according to their psychoacoustic model).
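As a deliberately crude illustration of that freedom (not a real psychoacoustic model), the toy sketch below simply discards spectral components that fall far below the frame's strongest component. An actual encoder would instead compute masking thresholds per critical band and allocate quantization bits accordingly; the 60 dB figure here is an arbitrary illustrative choice.

```python
# Toy illustration only: drop spectral components more than 60 dB below the
# frame's strongest component, on the simplified assumption that they are
# unlikely to be audible. Real codecs use far more sophisticated models.
import numpy as np

def prune_frame(frame, threshold_db=60.0):
    spectrum = np.fft.rfft(frame * np.hanning(len(frame)))
    magnitude_db = 20 * np.log10(np.abs(spectrum) + 1e-12)
    keep = magnitude_db > magnitude_db.max() - threshold_db
    return spectrum * keep          # zeroed bins need not be transmitted
```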

Lossless, lossy, and uncompressed audio coding formats


A lossless audio coding format reduces the total data needed to represent a sound but can be decoded to its original, uncompressed form. A lossy audio coding format additionally reduces the bit resolution of the sound on top of compression, which results in far less data at the cost of irretrievably lost information.
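The defining property of lossless coding, exact recovery of the original samples, can be illustrated with a generic byte compressor standing in for a dedicated format such as FLAC (which achieves much better ratios by using prediction models tuned to audio). The sketch below only demonstrates the bit-for-bit round trip; the signal and sample rate are arbitrary.

```python
# Minimal sketch of the lossless round trip: compress raw 16-bit PCM samples,
# decompress them, and verify the result is bit-for-bit identical. zlib stands
# in here for a dedicated lossless audio format such as FLAC.
import zlib
import numpy as np

pcm = (np.sin(2 * np.pi * 440 * np.arange(48000) / 48000) * 32767).astype(np.int16)
compressed = zlib.compress(pcm.tobytes())
restored = np.frombuffer(zlib.decompress(compressed), dtype=np.int16)

assert np.array_equal(pcm, restored)          # decoding recovers the original exactly
print(f"compressed to {len(compressed) / pcm.nbytes:.2f} of original size")
```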

Transmitted (streamed) audio is most often compressed using lossy audio codecs as the smaller size is far more convenient for distribution. The most widely used audio coding formats are MP3 and Advanced Audio Coding (AAC), both of which are lossy formats based on modified discrete cosine transform (MDCT) and perceptual coding algorithms.

Lossless audio coding formats such as FLAC and Apple Lossless are sometimes available, though at the cost of larger files.

Uncompressed audio formats, such as pulse-code modulation (PCM, or .wav), are also sometimes used. PCM was the standard format for Compact Disc Digital Audio (CDDA).

History

Solidyne 922: The world's first commercial audio bit compression sound card for PC, 1990

In 1950, Bell Labs filed the patent on differential pulse-code modulation (DPCM).[3] Adaptive DPCM (ADPCM) was introduced by P. Cummiskey, Nikil S. Jayant and James L. Flanagan at Bell Labs in 1973.[4][5]

Perceptual coding was first used for speech coding compression, with linear predictive coding (LPC).[6] Initial concepts for LPC date back to the work of Fumitada Itakura (Nagoya University) and Shuzo Saito (Nippon Telegraph and Telephone) in 1966.[7] During the 1970s, Bishnu S. Atal and Manfred R. Schroeder at Bell Labs developed a form of LPC called adaptive predictive coding (APC), a perceptual coding algorithm that exploited the masking properties of the human ear, followed in the early 1980s with the code-excited linear prediction (CELP) algorithm which achieved a significant compression ratio for its time.[6] Perceptual coding is used by modern audio compression formats such as MP3[6] and AAC.

The discrete cosine transform (DCT), developed by Nasir Ahmed, T. Natarajan and K. R. Rao in 1974,[8] provided the basis for the modified discrete cosine transform (MDCT). The MDCT was proposed by J. P. Princen, A. W. Johnson and A. B. Bradley in 1987,[10] following earlier work by Princen and Bradley in 1986,[11] and is used by modern audio compression formats such as Dolby Digital,[12][13] MP3,[9] and Advanced Audio Coding (AAC).[14]
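The following is a minimal NumPy sketch of the MDCT and its inverse, using a direct O(N²) evaluation rather than the fast algorithms real codecs use. It also checks the time-domain aliasing cancellation property: 50%-overlapped frames reconstruct exactly when analysis and synthesis both use a window satisfying the Princen–Bradley condition. The frame size and test signal are arbitrary illustrative choices.

```python
# Minimal sketch: direct (O(N^2)) MDCT/IMDCT pair and a check that
# 50%-overlapped frames reconstruct exactly (time-domain aliasing cancellation)
# when both analysis and synthesis use a Princen-Bradley-compliant window.
import numpy as np

def mdct(frame, window):
    """Forward MDCT: one windowed frame of 2N samples -> N coefficients."""
    two_n = len(frame)
    half = two_n // 2
    n = np.arange(two_n)
    k = np.arange(half)
    basis = np.cos(np.pi / half * (n[None, :] + 0.5 + half / 2) * (k[:, None] + 0.5))
    return basis @ (window * frame)

def imdct(coeffs, window):
    """Inverse MDCT: N coefficients -> 2N windowed, time-aliased samples."""
    half = len(coeffs)
    n = np.arange(2 * half)
    k = np.arange(half)
    basis = np.cos(np.pi / half * (n[:, None] + 0.5 + half / 2) * (k[None, :] + 0.5))
    return window * (2.0 / half) * (basis @ coeffs)

N = 8                                                        # coefficients per frame
window = np.sin(np.pi / (2 * N) * (np.arange(2 * N) + 0.5))  # sine window: w[n]^2 + w[n+N]^2 = 1
rng = np.random.default_rng(0)
x = rng.standard_normal(4 * N)

# Analyse 50%-overlapped frames, then overlap-add the inverse transforms.
y = np.zeros_like(x)
for start in range(0, len(x) - 2 * N + 1, N):
    frame = x[start:start + 2 * N]
    y[start:start + 2 * N] += imdct(mdct(frame, window), window)

# The aliasing introduced by each frame cancels where two frames overlap,
# so the fully covered middle samples are recovered exactly.
assert np.allclose(x[N:-N], y[N:-N])
```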

List of lossy formats


General

Basic compression algorithm                          | Audio coding standard                    | Abbreviation | Introduction | Market share (2019)[15] | Ref
Modified discrete cosine transform (MDCT)            | Dolby Digital (AC-3)                     | AC3          | 1991         | 58%                     | [12][16]
MDCT                                                 | Adaptive Transform Acoustic Coding       | ATRAC        | 1992         | Unknown                 | [12]
MDCT                                                 | MPEG Layer III                           | MP3          | 1993         | 49%                     | [9][17]
MDCT                                                 | Advanced Audio Coding (MPEG-2 / MPEG-4)  | AAC          | 1997         | 88%                     | [14][12]
MDCT                                                 | Windows Media Audio                      | WMA          | 1999         | Unknown                 | [12]
MDCT                                                 | Ogg Vorbis                               | Ogg          | 2000         | 7%                      | [18][12]
MDCT                                                 | Constrained Energy Lapped Transform      | CELT         | 2011         |                         | [19]
MDCT                                                 | Opus                                     | Opus         | 2012         | 8%                      | [20]
MDCT                                                 | LDAC                                     | LDAC         | 2015         | Unknown                 | [21][22]
Adaptive differential pulse-code modulation (ADPCM)  | aptX / aptX-HD                           | aptX         | 1989         | Unknown                 | [23]
ADPCM                                                | Digital Theater Systems                  | DTS          | 1990         | 14%                     | [24][25]
ADPCM                                                | Master Quality Authenticated             | MQA          | 2014         | Unknown                 |
Sub-band coding (SBC)                                | MPEG-1 Audio Layer II                    | MP2          | 1993         | Unknown                 |
SBC                                                  | Musepack                                 | MPC          | 1997         |                         |

Speech


List of lossless formats


See also


References

  1. ^ The term "audio coding" can be seen in e.g. the name Advanced Audio Coding, and is analogous to the term video coding
  2. ^ "Video – Where is synchronization information stored in container formats?".
  3. ^ US patent 2605361, C. Chapin Cutler, "Differential Quantization of Communication Signals", issued 1952-07-29 
  4. ^ Cummiskey, P.; Jayant, N. S.; Flanagan, J. L. (1973). "Adaptive Quantization in Differential PCM Coding of Speech". Bell System Technical Journal. 52 (7): 1105–1118. doi:10.1002/j.1538-7305.1973.tb02007.x.
  5. ^ Cummiskey, P.; Jayant, Nikil S.; Flanagan, J. L. (1973). "Adaptive quantization in differential PCM coding of speech". The Bell System Technical Journal. 52 (7): 1105–1118. doi:10.1002/j.1538-7305.1973.tb02007.x. ISSN 0005-8580.
  6. ^ a b c Schroeder, Manfred R. (2014). "Bell Laboratories". Acoustics, Information, and Communication: Memorial Volume in Honor of Manfred R. Schroeder. Springer. p. 388. ISBN 9783319056609.
  7. ^ Gray, Robert M. (2010). "A History of Realtime Digital Speech on Packet Networks: Part II of Linear Predictive Coding and the Internet Protocol" (PDF). Found. Trends Signal Process. 3 (4): 203–303. doi:10.1561/2000000036. ISSN 1932-8346.
  8. ^ Nasir Ahmed; T. Natarajan; Kamisetty Ramamohan Rao (January 1974). "Discrete Cosine Transform" (PDF). IEEE Transactions on Computers. C-23 (1): 90–93. doi:10.1109/T-C.1974.223784. S2CID 149806273. Archived from the original (PDF) on 2016-12-08. Retrieved 2019-10-20.
  9. ^ a b c Guckert, John (Spring 2012). "The Use of FFT and MDCT in MP3 Audio Compression" (PDF). University of Utah. Retrieved 14 July 2019.
  10. ^ Princen, J.; Johnson, A.; Bradley, A. (1987). "Subband/Transform coding using filter bank designs based on time domain aliasing cancellation". ICASSP '87. IEEE International Conference on Acoustics, Speech, and Signal Processing. Vol. 12. pp. 2161–2164. doi:10.1109/ICASSP.1987.1169405. S2CID 58446992.
  11. ^ Princen, J.; Bradley, A. (1986). "Analysis/Synthesis filter bank design based on time domain aliasing cancellation". IEEE Transactions on Acoustics, Speech, and Signal Processing. 34 (5): 1153–1161. doi:10.1109/TASSP.1986.1164954.
  12. ^ a b c d e f Luo, Fa-Long (2008). Mobile Multimedia Broadcasting Standards: Technology and Practice. Springer Science & Business Media. p. 590. ISBN 9780387782638.
  13. ^ Britanak, V. (2011). "On Properties, Relations, and Simplified Implementation of Filter Banks in the Dolby Digital (Plus) AC-3 Audio Coding Standards". IEEE Transactions on Audio, Speech, and Language Processing. 19 (5): 1231–1241. doi:10.1109/TASL.2010.2087755. S2CID 897622.
  14. ^ a b Brandenburg, Karlheinz (1999). "MP3 and AAC Explained" (PDF). Archived (PDF) from the original on 2017-02-13.
  15. ^ "Video Developer Report 2019" (PDF). Bitmovin. 2019. Retrieved 5 November 2019.
  16. ^ Britanak, V. (2011). "On Properties, Relations, and Simplified Implementation of Filter Banks in the Dolby Digital (Plus) AC-3 Audio Coding Standards". IEEE Transactions on Audio, Speech, and Language Processing. 19 (5): 1231–1241. doi:10.1109/TASL.2010.2087755. S2CID 897622.
  17. ^ Stanković, Radomir S.; Astola, Jaakko T. (2012). "Reminiscences of the Early Work in DCT: Interview with K.R. Rao" (PDF). Reprints from the Early Days of Information Sciences. 60. Retrieved 13 October 2019.
  18. ^ Xiph.Org Foundation (2009-06-02). "Vorbis I specification - 1.1.2 Classification". Xiph.Org Foundation. Retrieved 2009-09-22.
  19. ^ Terriberry, Timothy B. Presentation of the CELT codec. Presentation (PDF).
  20. ^ Valin, Jean-Marc; Maxwell, Gregory; Terriberry, Timothy B.; Vos, Koen (October 2013). High-Quality, Low-Delay Music Coding in the Opus Codec. 135th AES Convention. Audio Engineering Society. arXiv:1602.04845.
  21. ^ Darko, John H. (2017-03-29). "The inconvenient truth about Bluetooth audio". DAR__KO. Archived from the original on 2018-01-14. Retrieved 2018-01-13.
  22. ^ Ford, Jez (2015-08-24). "What is Sony LDAC, and how does it do it?". AVHub. Retrieved 2018-01-13.
  23. ^ Ford, Jez (2016-11-22). "aptX HD - lossless or lossy?". AVHub. Retrieved 2018-01-13.
  24. ^ "Digital Theater Systems Audio Formats". Library of Congress. 27 December 2011. Retrieved 10 November 2019.
  25. ^ Spanias, Andreas; Painter, Ted; Atti, Venkatraman (2006). Audio Signal Processing and Coding. John Wiley & Sons. p. 338. ISBN 9780470041963.