In telecommunication and information theory, the code rate (or information rate[1]) of a forward error correction code is the proportion of the data stream that is useful (non-redundant). That is, if the code rate is k/n, then for every k bits of useful information the coder generates a total of n bits of data, of which n − k are redundant.

Figure: Different code rates (Hamming code).

If R is the gross bit rate or data signalling rate (inclusive of redundant error coding), the net bit rate (the useful bit rate exclusive of error correction codes) is ≤ R · k/n.
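As a rough illustration of these relationships, the following Python sketch computes the code rate and the corresponding upper bound on the net bit rate; the function names are purely illustrative and not part of any standard library.

    def code_rate(k: int, n: int) -> float:
        """Code rate of a code that maps k useful bits onto n transmitted bits."""
        return k / n

    def max_net_bit_rate(gross_bit_rate: float, k: int, n: int) -> float:
        """Upper bound on the useful (net) bit rate: gross rate scaled by k/n."""
        return gross_bit_rate * code_rate(k, n)

    # Example: a rate-1/2 code halves the useful throughput of a 10 Mbit/s channel.
    print(max_net_bit_rate(10e6, 1, 2))  # 5000000.0 bit/s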

For example: the code rate of a convolutional code will typically be 1/2, 2/3, 3/4, 5/6, 7/8, etc., corresponding to one redundant bit inserted after every single, second, third, etc., bit. The code rate of the octet-oriented Reed–Solomon block code denoted RS(204,188) is 188/204, meaning that 204 − 188 = 16 redundant octets (or bytes) are added to each block of 188 octets of useful information.
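Applying the same relationship to the RS(204,188) example: with a hypothetical gross bit rate of 40 Mbit/s, the net bit rate is at most 40 × 188/204 ≈ 36.9 Mbit/s, the remaining ≈ 3.1 Mbit/s being taken up by the 16 parity octets in each 204-octet block.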

A few error correction codes, such as rateless erasure codes, do not have a fixed code rate.

Note that bit/s is a more widespread unit of measurement for the information rate, in which case the term is synonymous with the net bit rate or useful bit rate exclusive of error-correction codes.

References

  1. Huffman, W. Cary; Pless, Vera. Fundamentals of Error-Correcting Codes. Cambridge University Press, 2003.