In coding theory, decoding is the process of translating received messages into codewords of a given code. There are many common methods of mapping messages to codewords. These are often used to recover messages sent over a noisy channel, such as a binary symmetric channel.

Notation

$C \subset \mathbb{F}_2^n$ is considered a binary code with length $n$; $x$ and $y$ shall be elements of $\mathbb{F}_2^n$; and $d(x, y)$ is the Hamming distance between those elements.

Ideal observer decoding

Given a received message $x \in \mathbb{F}_2^n$, ideal observer decoding generates the codeword $y \in C$ that maximises

$$\mathbb{P}(y \text{ sent} \mid x \text{ received}),$$

that is, the codeword $y$ that is most likely to have been sent, given that the message $x$ was received.
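
For a small code, this maximisation can be carried out by brute force. The following is a minimal illustrative sketch only (the function, the toy repetition code, and the explicit prior are assumptions of this example, not part of the article), for a binary symmetric channel with crossover probability $p$:

```python
def hamming_distance(a, b):
    """Number of positions in which two equal-length binary tuples differ."""
    return sum(u != v for u, v in zip(a, b))

def ideal_observer_decode(x, codewords, prior, p):
    """Return the codeword y maximising P(y sent | x received).

    Over a binary symmetric channel with crossover probability p,
    P(x received | y sent) = p**d * (1 - p)**(n - d) with d = d(x, y);
    by Bayes' theorem the posterior is proportional to prior[y] times that.
    """
    n = len(x)
    def posterior_score(y):
        d = hamming_distance(x, y)
        return prior[y] * (p ** d) * ((1 - p) ** (n - d))
    return max(codewords, key=posterior_score)

# Toy example: length-3 repetition code with a skewed prior and p = 0.2.
C = [(0, 0, 0), (1, 1, 1)]
prior = {(0, 0, 0): 0.9, (1, 1, 1): 0.1}
print(ideal_observer_decode((1, 1, 0), C, prior, p=0.2))  # -> (0, 0, 0)
```

Here the skewed prior outweighs the channel evidence: the decoder returns (0, 0, 0) even though (1, 1, 1) is nearer to the received word in Hamming distance.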

Decoding conventions

Decoding is not always unique: there may be more than one codeword with an equal likelihood of mutating into the received message. In such a case, the sender and receiver(s) must agree ahead of time on a decoding convention. Popular conventions include:

  1. Request that the codeword be resent – automatic repeat-request.
  2. Choose any codeword at random from the set of most likely codewords.
  3. If another code follows, mark the ambiguous bits of the codeword as erasures and hope that the outer code disambiguates them.

Maximum likelihood decoding

Given a received vector $x \in \mathbb{F}_2^n$, maximum likelihood decoding picks a codeword $y \in C$ that maximises

$$\mathbb{P}(x \text{ received} \mid y \text{ sent}),$$

that is, the codeword $y$ that maximises the probability that $x$ was received, given that $y$ was sent. If all codewords are equally likely to be sent then this scheme is equivalent to ideal observer decoding. In fact, by Bayes' theorem,

$$\mathbb{P}(x \text{ received} \mid y \text{ sent}) = \frac{\mathbb{P}(x \text{ received}, y \text{ sent})}{\mathbb{P}(y \text{ sent})} = \mathbb{P}(y \text{ sent} \mid x \text{ received}) \cdot \frac{\mathbb{P}(x \text{ received})}{\mathbb{P}(y \text{ sent})}.$$

Upon fixing $x$, $\mathbb{P}(x \text{ received})$ is constant, and $\mathbb{P}(y \text{ sent})$ is constant because all codewords are equally likely to be sent. Therefore, $\mathbb{P}(x \text{ received} \mid y \text{ sent})$ is maximised as a function of $y$ precisely when $\mathbb{P}(y \text{ sent} \mid x \text{ received})$ is maximised, and the claim follows.
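
As a concrete illustration (a sketch under the binary symmetric channel assumption, with hypothetical names and a toy code, not code from any referenced source), maximum likelihood decoding can be performed by exhaustive search, scoring each codeword by $p^d (1-p)^{n-d}$:

```python
def ml_decode(x, codewords, p):
    """Brute-force maximum likelihood decoding over a binary symmetric channel.

    Picks a codeword y maximising P(x received | y sent)
    = p**d * (1 - p)**(n - d), where d is the Hamming distance d(x, y).
    """
    n = len(x)
    def likelihood(y):
        d = sum(u != v for u, v in zip(x, y))
        return (p ** d) * ((1 - p) ** (n - d))
    return max(codewords, key=likelihood)

C = [(0, 0, 0), (1, 1, 1)]             # length-3 repetition code
print(ml_decode((1, 0, 1), C, p=0.1))  # -> (1, 1, 1): 0.081 vs 0.009
```

Enumerating all of $C$ is only feasible for small codes; practical decoders exploit the structure of the code instead.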

As with ideal observer decoding, a convention must be agreed to for non-unique decoding.

The maximum likelihood decoding problem can also be modeled as an integer programming problem.[1]

The maximum likelihood decoding algorithm is an instance of the "marginalize a product function" problem which is solved by applying the generalized distributive law.[2]

Minimum distance decoding

Given a received vector $x \in \mathbb{F}_2^n$, minimum distance decoding picks a codeword $y \in C$ to minimise the Hamming distance:

$$d(x, y) = \#\{i : x_i \neq y_i\},$$

i.e. choose the codeword $y$ that is as close as possible to $x$.

Note that if the probability of error on a discrete memoryless channel $p$ is strictly less than one half, then minimum distance decoding is equivalent to maximum likelihood decoding, since if

$$d(x, y) = d,$$

then:

$$\mathbb{P}(x \text{ received} \mid y \text{ sent}) = (1 - p)^{n - d} \cdot p^d = (1 - p)^n \cdot \left(\frac{p}{1 - p}\right)^d,$$

which (since $p$ is less than one half) is maximised by minimising $d$.
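
For a quick numerical illustration (an example of mine, not from the source): with $p = 0.1$ and $n = 7$,

$$(1 - p)^n \left(\frac{p}{1 - p}\right)^d \approx 0.478,\ 0.0531,\ 0.0059,\ \ldots \quad \text{for } d = 0, 1, 2, \ldots,$$

so each additional disagreement multiplies the likelihood by $p/(1-p) = 1/9 < 1$, and the nearest codeword is always the most likely one.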

Minimum distance decoding is also known as nearest neighbour decoding. It can be assisted or automated by using a standard array. Minimum distance decoding is a reasonable decoding method when the following conditions are met:

  1. The probability $p$ that an error occurs is independent of the position of the symbol.
  2. Errors are independent events – an error at one position in the message does not affect other positions.

These assumptions may be reasonable for transmissions over a binary symmetric channel. They may be unreasonable for other media, such as a DVD, where a single scratch on the disk can cause an error in many neighbouring symbols or codewords.

As with other decoding methods, a convention must be agreed to for non-unique decoding.
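
A minimal nearest-neighbour decoder can be sketched as follows (illustrative names and a toy code, as in the earlier sketches):

```python
def minimum_distance_decode(x, codewords):
    """Nearest-neighbour decoding: pick a codeword minimising d(x, y)."""
    return min(codewords, key=lambda y: sum(u != v for u, v in zip(x, y)))

# For p < 1/2 this agrees with maximum likelihood decoding on a binary
# symmetric channel: each extra disagreement multiplies the likelihood
# by p/(1-p) < 1.
C = [(0, 0, 0, 0), (1, 1, 1, 1)]
print(minimum_distance_decode((1, 1, 0, 1), C))  # -> (1, 1, 1, 1)
```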

Syndrome decoding

Syndrome decoding is a highly efficient method of decoding a linear code over a noisy channel, i.e. one on which errors are made. In essence, syndrome decoding is minimum distance decoding using a reduced lookup table. This is allowed by the linearity of the code.[3]

Suppose that $C \subset \mathbb{F}_2^n$ is a linear code of length $n$, dimension $k$, and minimum distance $d$ with parity-check matrix $H$. Then clearly $C$ is capable of correcting up to

$$t = \left\lfloor \frac{d - 1}{2} \right\rfloor$$

errors made by the channel (since if no more than $t$ errors are made then minimum distance decoding will still correctly decode the incorrectly transmitted codeword).

Now suppose that a codeword $x \in C$ is sent over the channel and the error pattern $e \in \mathbb{F}_2^n$ occurs. Then $z = x + e$ is received. Ordinary minimum distance decoding would look up the vector $z$ in a table of size $|C|$ for the nearest match - i.e. an element (not necessarily unique) $c \in C$ with

$$d(c, z) \leq d(y, z)$$

for all $y \in C$. Syndrome decoding takes advantage of the property of the parity-check matrix that

$$Hx = 0$$

for all $x \in C$. The syndrome of the received $z = x + e$ is defined to be:

$$Hz = H(x + e) = Hx + He = 0 + He = He.$$

To perform ML decoding in a binary symmetric channel, one has to look up a precomputed table of size $2^{n-k}$, mapping $He$ to $e$.

Note that this is already significantly less complex than standard array decoding.

However, under the assumption that no more than $t$ errors were made during transmission, the receiver can look up the value $He$ in a further reduced table of size

$$\sum_{i=0}^{t} \binom{n}{i}$$

only (for a binary code). The table maps the pre-computed values of $He$ to the corresponding error patterns $e$ of weight at most $t$.

Knowing what $e$ is, it is then trivial to decode $x$ as:

$$x = z - e$$

(over $\mathbb{F}_2$, subtraction coincides with addition, so equivalently $x = z + e$).
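
The following sketch applies this to the standard [7,4] Hamming code, for which $d = 3$ and $t = 1$, so the reduced table has $\binom{7}{0} + \binom{7}{1} = 8 = 2^{n-k}$ entries. The use of NumPy and all variable names are choices made for this illustration:

```python
import numpy as np

# A parity-check matrix of the [7,4] Hamming code (d = 3, so t = 1).
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def syndrome(z):
    return tuple(H @ z % 2)

# Reduced table: He -> e for the zero pattern and all 7 single-bit errors.
table = {syndrome(np.zeros(7, dtype=int)): np.zeros(7, dtype=int)}
for i in range(7):
    e = np.zeros(7, dtype=int)
    e[i] = 1
    table[syndrome(e)] = e

def syndrome_decode(z):
    """Correct up to one error: look up e from He, then x = z - e = z + e."""
    e = table[syndrome(z)]
    return (z + e) % 2

x = np.array([1, 0, 1, 1, 0, 1, 0])           # a codeword: H @ x % 2 == 0
z = x.copy(); z[4] ^= 1                       # the channel flips bit 4
print(np.array_equal(syndrome_decode(z), x))  # True
```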

For binary codes, if both $k$ and $n-k$ are not too big, and assuming the code's generator matrix is in standard form, syndrome decoding can be computed using two precomputed lookup tables and two XORs only.[4]

Let $z$ be the received noisy codeword, i.e. $z = x + e$. Using the encoding lookup table of size $2^k$, the codeword $x'$ that corresponds to the first $k$ bits of $z$ is found.

The syndrome is then computed as the last $n-k$ bits of $z + x'$ (the first $k$ bits of the XOR are zero, since the generator matrix is in standard form, and are discarded). Using the syndrome, the error $e$ is computed from the syndrome lookup table of size $2^{n-k}$, and the decoding is then computed via $x = z + e$ (for the codeword, or the first $k$ bits of $z + e$ for the original message word).

The number of entries in the two lookup tables is $2^k + 2^{n-k}$, which is significantly smaller than the $2^n$ entries required for standard array decoding, although the standard array needs only a single lookup. Additionally, the precomputed encoding lookup table can be reused for encoding, and is thus often useful to have.
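
An illustrative sketch of the two-table scheme for the [7,4] Hamming code in standard form follows; the specific matrices and names are assumptions of this example, not taken from Wolf's notes:

```python
import numpy as np
from itertools import product

# [7,4] Hamming code in standard form: G = [I_4 | A], H = [A^T | I_3].
A = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 1, 1],
              [1, 1, 1]])
G = np.hstack([np.eye(4, dtype=int), A])
H = np.hstack([A.T, np.eye(3, dtype=int)])
k, n = 4, 7

# Table 1 (size 2^k): message -> codeword; also reusable for encoding.
encode = {m: tuple(np.array(m) @ G % 2) for m in product((0, 1), repeat=k)}

# Table 2 (size 2^(n-k)): syndrome -> correctable error pattern.
syndrome_to_error = {(0, 0, 0): np.zeros(n, dtype=int)}
for i in range(n):
    e = np.zeros(n, dtype=int)
    e[i] = 1
    syndrome_to_error[tuple(H @ e % 2)] = e

def decode(z):
    """Two lookups and two XORs, assuming at most one channel error."""
    x_prime = np.array(encode[tuple(z[:k])])  # lookup 1: re-encode first k bits
    s = tuple((z ^ x_prime)[k:])              # XOR 1: the first k bits cancel
    e = syndrome_to_error[s]                  # lookup 2: syndrome -> error
    return z ^ e                              # XOR 2: the corrected codeword

m = (1, 0, 1, 1)
x = np.array(encode[m])
z = x.copy(); z[6] ^= 1              # one channel error in the last bit
print(np.array_equal(decode(z), x))  # True
```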

Partial response maximum likelihood

Partial response maximum likelihood (PRML) is a method for converting the weak analog signal from the head of a magnetic disk or tape drive into a digital signal.

Viterbi decoder

A Viterbi decoder uses the Viterbi algorithm for decoding a bitstream that has been encoded using forward error correction based on a convolutional code. The Hamming distance is used as a metric for hard decision Viterbi decoders. The squared Euclidean distance is used as a metric for soft decision decoders.
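
The following is a compact hard-decision sketch (an illustration, not a reference implementation) for the classic rate-1/2, constraint-length-3 convolutional code with generator polynomials (7, 5) in octal, using the Hamming distance as the branch metric:

```python
G1, G2 = 0b111, 0b101  # generator polynomials (7, 5) in octal

def conv_encode(bits):
    """Encode a list of bits at rate 1/2, appending two zero tail bits."""
    state, out = 0, []
    for b in bits + [0, 0]:
        reg = (b << 2) | state  # register: [current bit, two previous bits]
        out += [bin(reg & G1).count("1") % 2,
                bin(reg & G2).count("1") % 2]
        state = reg >> 1
    return out

def viterbi_decode(received):
    """Hard-decision Viterbi decoding with Hamming-distance branch metrics."""
    n_states, INF = 4, float("inf")
    metrics = [0] + [INF] * (n_states - 1)  # the encoder starts in state 0
    paths = [[] for _ in range(n_states)]
    for t in range(0, len(received), 2):
        r = received[t:t + 2]
        new_metrics = [INF] * n_states
        new_paths = [None] * n_states
        for state in range(n_states):
            if metrics[state] == INF:
                continue
            for b in (0, 1):
                reg = (b << 2) | state
                expected = [bin(reg & G1).count("1") % 2,
                            bin(reg & G2).count("1") % 2]
                branch = sum(u != v for u, v in zip(r, expected))
                nxt, m = reg >> 1, metrics[state] + branch
                if m < new_metrics[nxt]:  # keep the survivor path
                    new_metrics[nxt] = m
                    new_paths[nxt] = paths[state] + [b]
        metrics, paths = new_metrics, new_paths
    best = min(range(n_states), key=lambda s: metrics[s])
    return paths[best][:-2]  # drop the two tail bits

msg = [1, 0, 1, 1, 0]
tx = conv_encode(msg)
tx[3] ^= 1                        # one channel bit error
print(viterbi_decode(tx) == msg)  # True
```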


References

  1. ^ Feldman, Jon; Wainwright, Martin J.; Karger, David R. (March 2005). "Using Linear Programming to Decode Binary Linear Codes". IEEE Transactions on Information Theory. 51 (3): 954–972. doi:10.1109/TIT.2004.842696.
  2. ^ Aji, Srinivas M.; McEliece, Robert J. (March 2000). "The Generalized Distributive Law". IEEE Transactions on Information Theory. 46 (2): 325–343. doi:10.1109/18.825794.
  3. ^ Beutelspacher, Albrecht; Rosenbaum, Ute (1998). Projective Geometry. Cambridge University Press. p. 190. ISBN 0-521-48277-1.
  4. ^ Wolf, Jack Keil (2008). An Introduction to Error Correcting Codes. Course notes for Communication Systems III, UCSD. http://circuit.ucsd.edu/~yhk/ece154c-spr17/pdfs/ErrorCorrectionI.pdf