MPEG Surround (ISO/IEC 23003-1[1] or MPEG-D Part 1[2][3]), also known as Spatial Audio Coding (SAC),[4][5][6][7] is a lossy compression format for surround sound that extends mono or stereo audio services to multi-channel audio in a backward-compatible fashion. The total bit rate of the (mono or stereo) core plus the MPEG Surround data is typically only slightly higher than the bit rate used for coding the (mono or stereo) core alone. MPEG Surround adds a side-information stream containing spatial image data to the (mono or stereo) core bit stream. Legacy stereo playback systems ignore this side information, while players supporting MPEG Surround decoding output the reconstructed multi-channel audio.

The Moving Picture Experts Group (MPEG) issued a call for proposals on MPEG Spatial Audio Coding in March 2004. The group decided that the starting point for the standardization process would be a combination of the submissions from two proponents: Fraunhofer IIS / Agere Systems and Coding Technologies / Philips.[5] The MPEG Surround standard was developed by the Moving Picture Experts Group (ISO/IEC JTC 1/SC29/WG11) and published as ISO/IEC 23003-1 in 2007.[1] It was the first part of the MPEG-D group of standards, formally known as ISO/IEC 23003 – MPEG audio technologies.

MPEG Surround was also defined as one of the MPEG-4 Audio Object Types in 2007.[8] There is also the MPEG-4 Low Delay MPEG Surround object type (LD MPEG Surround), published in 2010.[9][10] Spatial Audio Object Coding (SAOC) was published as MPEG-D Part 2 – ISO/IEC 23003-2 in 2010; it extends the MPEG Surround standard by re-using its spatial rendering capabilities while retaining full compatibility with existing receivers. The MPEG SAOC system allows users on the decoding side to interactively control the rendering of each individual audio object (e.g. individual instruments or vocals).[2][3][11][12][13][14][15] There is also Unified Speech and Audio Coding (USAC), defined in MPEG-D Part 3 – ISO/IEC 23003-3 and ISO/IEC 14496-3:2009/Amd 3.[16][17] The MPEG-D MPEG Surround parametric coding tools are integrated into the USAC codec.[18]

The (mono or stereo) core can be coded with any audio codec, lossy or lossless. Particularly low bit rates (64–96 kbit/s for 5.1 channels) are possible when using HE-AAC v2 as the core codec.

Perception of sounds in space

MPEG Surround coding exploits the human capacity to perceive sound in three dimensions and captures that perception in a compact set of parameters. Spatial perception is primarily attributed to three parameters, or cues, describing how humans localize sound in the horizontal plane: interaural level difference (ILD), interaural time difference (ITD) and interaural coherence (IC). Direct, or first-arrival, sound from a source to one side of the listener reaches the nearer ear first, while the sound reaching the far ear is diffracted around the head and arrives with an associated time delay and level attenuation; these two effects give rise to the ITD and ILD cues for the main source. Finally, in a reverberant environment, reflected sound from the source, sound from diffuse sources, and otherwise uncorrelated sound reach both ears; the degree of similarity between the two ear signals is captured by the IC cue.
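As a rough illustration (not part of the standard), the three cues can be estimated from a pair of ear signals with a few lines of NumPy; the function name and the broadband, full-signal analysis are simplifications for illustration only:

```python
import numpy as np

def interaural_cues(left, right, fs):
    """Estimate broadband ILD (dB), ITD (s) and IC from two ear signals."""
    e_l, e_r = np.sum(left ** 2), np.sum(right ** 2)
    # ILD: ratio of the signal energies at the two ears, in decibels
    ild = 10.0 * np.log10(e_l / e_r)
    # Cross-correlate the two ear signals over all lags
    corr = np.correlate(left, right, mode="full")
    lags = np.arange(-len(right) + 1, len(left))
    # ITD: the lag of the strongest correlation, in seconds
    # (the sign convention depends on which ear is taken as reference)
    itd = lags[np.argmax(np.abs(corr))] / fs
    # IC: the correlation maximum, normalized by the signal energies
    ic = np.max(np.abs(corr)) / np.sqrt(e_l * e_r)
    return ild, itd, ic

# Toy example: the far ear receives a delayed, attenuated copy of the sound
fs = 48000
rng = np.random.default_rng(0)
src = rng.standard_normal(fs // 10)    # 100 ms of noise
left = src
right = 0.5 * np.roll(src, 24)         # 24 samples = 0.5 ms delay
ild, itd, ic = interaural_cues(left, right, fs)
```

In this toy case the 6 dB attenuation shows up as the ILD, the 0.5 ms shift as the ITD, and the near-identical waveforms yield an IC close to 1.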

Description

MPEG Surround uses interchannel differences in level, phase and coherence, equivalent to the ILD, ITD and IC parameters, to capture the spatial image of a multichannel audio signal relative to a transmitted downmix signal. These parameters are encoded in a very compact form, so that a decoder can combine them with the transmitted signal to synthesize a high-quality multichannel representation.

The MPEG Surround encoder receives a multichannel audio signal x1 to xN, where N is the number of input channels. The key aspect of the encoding process is that a downmix signal, xt1 and xt2, which is typically stereo, is derived from the multichannel input signal, and it is this downmix signal, rather than the multichannel signal, that is compressed for transmission over the channel. The encoder may exploit the downmix process to advantage: it can create not only a faithful mono or stereo equivalent of the multichannel signal, but also a downmix that allows the best possible multichannel reconstruction from the downmix and the encoded spatial cues. Alternatively, the downmix can be supplied externally (an "artistic downmix"). The MPEG Surround encoding process is agnostic to the compression algorithm used for the transmitted channels; it could be any of a number of high-performance compression algorithms such as MPEG-1 Layer III, MPEG-4 AAC or MPEG-4 High Efficiency AAC, or it could even be PCM.
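For illustration, a static 5.1-to-stereo downmix of the kind a simple encoder might apply can be sketched as below. The -3 dB weights follow the common ITU-R BS.775 convention; an actual MPEG Surround encoder may instead use adaptive, time- and frequency-dependent downmix weights, and the function name is hypothetical:

```python
import numpy as np

def downmix_5_1(L, R, C, LFE, Ls, Rs):
    """Static 5.1 -> stereo downmix; the LFE channel is often omitted."""
    g = 1.0 / np.sqrt(2.0)    # -3 dB weight for centre and surround channels
    xt1 = L + g * C + g * Ls  # left downmix channel
    xt2 = R + g * C + g * Rs  # right downmix channel
    return xt1, xt2

# A centre-only input appears equally in both downmix channels
z = np.zeros(4)
c = np.ones(4)
xt1, xt2 = downmix_5_1(z, z, c, z, z, z)
```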

The spatial parameters are generated and the multichannel signal recovered in two types of elementary modules. The reverse one-to-two (R-OTT) element generates one downmix signal, one level-difference parameter, one coherence parameter and an optional residual signal from a pair of input signals. The reverse two-to-three (R-TTT) element generates two downmix signals, two level-difference parameters, one coherence parameter and an optional residual signal from three input signals. Arranging these elements in a tree structure, in both the encoding (reverse) and decoding (forward) directions, allows arbitrary downmix and upmix configurations.[19]
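A heavily simplified sketch of such a two-to-one encoding element and its one-to-two decoding counterpart is shown below, operating on whole broadband signals; the real codec works on time/frequency tiles, quantizes the parameters and uses a decorrelator during upmix, all of which are omitted here (function names are illustrative):

```python
import numpy as np

def r_ott(x1, x2):
    """Two-to-one: downmix a signal pair into one channel plus parameters."""
    e1, e2 = np.sum(x1 ** 2), np.sum(x2 ** 2)
    cld = 10.0 * np.log10(e1 / e2)            # channel level difference (dB)
    icc = np.sum(x1 * x2) / np.sqrt(e1 * e2)  # inter-channel coherence
    return x1 + x2, cld, icc

def ott(down, cld, icc):
    """One-to-two: recreate a signal pair whose level ratio matches the CLD."""
    r = 10.0 ** (cld / 10.0)      # energy ratio e1 / e2
    g1 = np.sqrt(r / (1.0 + r))
    g2 = np.sqrt(1.0 / (1.0 + r))
    # A real decoder also mixes in a decorrelated signal controlled by icc
    return g1 * down, g2 * down

# Round trip: the reconstructed pair preserves the encoded level difference
s = np.sin(np.linspace(0.0, 2.0 * np.pi, 100))
down, cld, icc = r_ott(2.0 * s, s)
y1, y2 = ott(down, cld, icc)
```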

Legacy compatibility

The MPEG Surround technique allows for compatibility with existing and future stereo MPEG decoders by having the transmitted downmix (e.g. stereo) appear to stereo MPEG decoders to be an ordinary stereo version of the multichannel signal. Compatibility with stereo decoders is desirable since stereo presentation will remain pervasive due to the number of applications in which listening is primarily via headphones, such as portable music players.

MPEG Surround also supports a mode in which the downmix is compatible with popular matrix surround decoders, such as Dolby Pro Logic.[19]

Applications

Digital Audio Broadcasting

Because of limited channel bandwidth, the high cost of transmission equipment and licenses, and the desire to maximize user choice by offering many programs, the majority of existing or planned digital broadcasting systems cannot provide multichannel sound.

DRM+ was designed[20] to be fully capable of transmitting MPEG Surround and such broadcasting was also successfully demonstrated.[21]

MPEG Surround's backward compatibility and relatively low overhead provide one way to add multichannel sound to DAB without severely reducing audio quality or impacting other services.

Digital TV Broadcasting

Currently, the majority of digital TV broadcasts use stereo audio coding. MPEG Surround could be used to extend these established services to surround sound, as with DAB.

Music download service

Currently, a number of commercial music download services operate with considerable commercial success. Such services could be seamlessly extended to provide multichannel presentations while remaining compatible with stereo players: on computers with 5.1-channel playback systems the compressed sound files are presented in surround sound, while on portable players the same files are reproduced in stereo.

Streaming music service / Internet radio

Many Internet radio stations operate with severely constrained transmission bandwidth, such that they can offer only mono or stereo content. MPEG Surround coding technology could extend this to a multichannel service while still remaining within the permissible range of bit rates. Since efficiency is of paramount importance in this application, compression of the transmitted audio signal is vital. Using recent MPEG compression technology (MPEG-4 High Efficiency Profile coding), full MPEG Surround systems have been demonstrated at bit rates as low as 48 kbit/s.

References

  1. ^ a b ISO (2007-01-29). "ISO/IEC 23003-1:2007 - Information technology -- MPEG audio technologies -- Part 1: MPEG Surround". ISO. Archived from the original on 2011-06-06. Retrieved 2009-10-24.
  2. ^ a b MPEG. "MPEG standards - Full list of standards developed or under development". chiariglione.org. Archived from the original on 2010-04-20. Retrieved 2010-02-09.
  3. ^ a b MPEG. "Terms of Reference". chiariglione.org. Archived from the original on 2010-02-21. Retrieved 2010-02-09.
  4. ^ "Preview of ISO/IEC 23003-1, First edition, 2007-02-15, Part 1: MPEG Surround" (PDF). 2007-02-15. Archived (PDF) from the original on 2011-06-14. Retrieved 2009-10-24.
  5. ^ a b ISO/IEC JTC 1/SC29/WG11 (July 2005). "Tutorial on MPEG Surround Audio Coding". Archived from the original on 2010-04-30. Retrieved 2010-02-09.
  6. ^ "Working documents, MPEG-D (MPEG Audio Technologies)". MPEG. Archived from the original on 2010-02-21. Retrieved 2010-02-09.
  7. ^ MPEG Spatial Audio Coding / MPEG Surround: Overview and Current Status (PDF), Audio Engineering Society, 2005, archived (PDF) from the original on 2011-07-18, retrieved 2009-10-29
  8. ^ ISO (2007). "BSAC extensions and transport of MPEG Surround, ISO/IEC 14496-3:2005/Amd 5:2007". ISO. Archived from the original on 2011-06-06. Retrieved 2009-10-13.
  9. ^ AES Convention Paper 8099 - A new parametric stereo and Multi Channel Extension for MPEG-4 Enhanced Low Delay AAC (AAC-ELD) (PDF), archived from the original (PDF) on 2011-09-28, retrieved 2011-07-18
  10. ^ ISO/IEC JTC 1/SC29/WG11 (October 2009), ISO/IEC 14496-3:2009/FPDAM 2 – ALS simple profile and transport of SAOC, N11032, archived from the original (DOC) on 2014-07-29, retrieved 2009-12-30
  11. ^ ISO (2010-10-06). "ISO/IEC 23003-2 - Information technology -- MPEG audio technologies -- Part 2: Spatial Audio Object Coding (SAOC)". Archived from the original on 2012-02-01. Retrieved 2011-07-18.
  12. ^ Spatial Audio Object Coding (SAOC) – The Upcoming MPEG Standard on Parametric Object Based Audio Coding (PDF), 2008, archived (PDF) from the original on 2012-03-12, retrieved 2011-07-19
  13. ^ Manfred Lutzky, Fraunhofer IIS (2007), MPEG low delay audio codecs (PDF), archived (PDF) from the original on 2011-09-27, retrieved 2011-07-19
  14. ^ MPEG (October 2009). "91st WG11 meeting notice". chiariglione.org. Archived from the original on 2010-02-17. Retrieved 2010-02-09.
  15. ^ ISO/IEC JTC 1/SC 29 (2009-12-30). "Programme of Work (Allocated to SC 29/WG 11) - MPEG-D". Archived from the original on 2013-12-31. Retrieved 2009-12-30.
  16. ^ "ISO/IEC DIS 23003-3 - Information technology -- MPEG audio technologies -- Part 3: Unified speech and audio coding". 2011-02-15. Archived from the original on 2012-01-28. Retrieved 2011-07-18.
  17. ^ "ISO/IEC 14496-3:2009/PDAM 3 - Transport of unified speech and audio coding (USAC)". 2011-06-30. Archived from the original on 2012-01-29. Retrieved 2011-07-18.
  18. ^ "Unified Speech and Audio Coder Common Encoder Reference Software". March 2011. Archived from the original on 2011-08-06. Retrieved 2011-07-18.
  19. ^ a b Herre, Jürgen; Kjörling, Kristofer; Breebaart, Jeroen; Faller, Christof; Disch, Sascha; Purnhagen, Heiko; Koppens, Jeroen; Hilpert, Johannes; Rödén, Jonas; Oomen, Werner; Linzmeier, Karsten; Chong, Kok Seng (8 December 2008). "MPEG Surround - The ISO/MPEG Standard for Efficient and Compatible Multichannel Audio Coding". Journal of the Audio Engineering Society. 56 (11): 932–955.
  20. ^ "DRM system enhancement approved by ETSI" (Press release). DRM Consortium. 2 September 2009. Archived from the original on 15 October 2009. Retrieved 2009-10-20.
  21. ^ "DRM+ in Band I promoted as a most suitable technology to complement other digital radio standards in countries like France" (Press release). DRM Consortium. 16 July 2009. Archived from the original on 15 October 2009. Retrieved 2009-10-20.