Talk:64b/66b encoding

DC Balance

As I understand it, the statement "This means that there are just as many 1s as 0s in a string of two symbols, and that there are not too many 1s or 0s in a row" is not correct. This is true for 8b/10b encoding, where the rigid encoding ensures DC balance over two symbols. My understanding is that 64b/66b encoding will have DC balance due to the averaging effect of the scrambling, but it is not guaranteed over two symbols. —Preceding unsigned comment added by 128.222.37.58 (talk) 16:25, 24 August 2010 (UTC)
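For anyone who wants to check this, here is a quick Python sketch (my own illustration, not from the article; it ignores the unscrambled 2-bit sync headers) that runs random data through the self-synchronizing x^58 + x^39 + 1 scrambler specified in IEEE 802.3 Clause 49 and measures the 1s/0s disparity over two-block windows. It shows that balance holds only statistically, not per pair of symbols:

    import random

    def scramble(bits, state):
        """Self-synchronizing scrambler G(x) = 1 + x^39 + x^58.
        'state' holds the last 58 scrambled bits, newest first."""
        out = []
        for b in bits:
            s = b ^ state[38] ^ state[57]  # taps at x^39 and x^58
            out.append(s)
            state = [s] + state[:-1]       # shift the scrambled bit in
        return out, state

    random.seed(1)
    state = [random.getrandbits(1) for _ in range(58)]
    worst = 0
    for _ in range(10000):
        payload = [random.getrandbits(1) for _ in range(128)]  # two 64-bit blocks
        scrambled, state = scramble(payload, state)
        disparity = abs(2 * sum(scrambled) - len(scrambled))   # |ones - zeros|
        worst = max(worst, disparity)
    print("worst two-block disparity:", worst)  # comes out well above 0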

Intentional DC balance violations

The initial state of the scrambler is known, and the transformation function of the scrambler is also known. So it should be possible to send a chosen payload that modifies the scrambler state in such a way that it outputs only 0x00000000 or 0xFFFFFFFF, badly violating DC balance. Are there any known attacks based on this? What would happen on a 10GbE link in this case? --RokerHRO (talk) 09:53, 16 March 2015 (UTC)
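The construction is indeed straightforward on paper. With the Clause 49 self-synchronizing scrambler, each transmitted bit is s = d XOR s[-39] XOR s[-58], so a sender who knows the state can choose d = s[-39] XOR s[-58] and force every scrambled payload bit to 0. A minimal Python sketch of the idea (my own illustration; the function name is made up):

    def malicious_payload(state, nbits):
        """Choose payload bits so the scrambled output is all zeros.
        'state' is the last 58 scrambled bits, newest first."""
        chosen, out = [], []
        for _ in range(nbits):
            d = state[38] ^ state[57]      # equal to the feedback, so it cancels
            s = d ^ state[38] ^ state[57]  # scrambler output: always 0
            chosen.append(d)
            out.append(s)
            state = [s] + state[:-1]
        return chosen, out

    state = [1, 0] * 29                    # any known 58-bit scrambler state
    payload, line_bits = malicious_payload(state, 64)
    assert all(b == 0 for b in line_bits)  # the wire sees a 64-bit run of zeros

Note that the unscrambled sync headers ('01' or '10') still force a transition every 66 bits, so run lengths stay bounded; the DC imbalance, however, persists for as long as the attacker keeps sending such payloads.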

Scrambling polynomial

"128b/130b (...) uses a different scrambling polynomial: x23 + x21 + x16 + x8 + x5 + x2 + 1" – different to what? No other LFSR polynomial is mentioned in the article. Maybe it should be specified which polynomial is used, or at least its length. —Cousteau (talk) 05:39, 12 May 2015 (UTC)Reply

Statistical chance of 65-bit run

The current text is:

Most clock recovery circuits designed for SONET OC-192 and 64b/66b are specified to tolerate an 80-bit run length. Such a run cannot occur in 64b/66b because transitions are guaranteed at 66-bit intervals, and in fact long runs are very unlikely. Although it is theoretically possible for a random data pattern to align with the scrambler state and produce a long run of 65 zeroes or 65 ones, the probability of such an event is equal to flipping a fair coin and having it come up in the same state 64 times in a row. At 10 Gigabits per second, the expected event rate of a 66-bit block with a 65-bit run-length, assuming random data, is 66×2⁶⁴/(2×10¹⁰) seconds, or about once every 1900 years.

It's the last sentence that I'm concerned about: the statement on the odds of producing a 65-bit sequence of all '0's or all '1's. Within the 64b/66b scheme, the first two preamble bits must be '01' or '10'. So the 65-bit run-length problem being discussed can only occur if the following 64 bits all match the second bit of the preamble. For each preamble pattern, there is only one 64-bit pattern out of the 2^64 possibilities where all bits are the same value.

So the odds are 1 in 2^64, meaning the last sentence should read, "At 10 Gigabits per second, the expected event rate of a 66-bit block with a 65-bit run-length, assuming random data, is 2^64/(10^10) seconds or about once every 131.5 years."

Any comments please, before I look to making this change. ToaneeM (talk) 07:42, 10 January 2018 (UTC)

ToaneeM, I agree that this section could be better explained.
I agree that, for each preamble pattern, there is only one 64-bit data pattern that exactly matches the scrambler output and the last bit of the preamble, producing a 65-bit run length.
So I agree that for random data the odds are 1 in 2^64 per block.
At 10 Gigabits per second, each bit takes 1/(10^10) of a second.
Each block takes 66 bits to transmit, or 66/(10^10) of a second.
So I estimate the expected time to first event is (time per block) * (1/odds that a block gives a run) = ( 66 / (10^10) seconds ) * ( 2^64 ) ≈ 3858 years.
Which is still a factor of 2 off from the current version of the article. Why? --DavidCary (talk) 03:12, 5 May 2020 (UTC)
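Without taking a side, here is a quick numeric check (my own, in Python) of the three expressions being discussed, which at least makes the discrepancies concrete:

    SECONDS_PER_YEAR = 365.25 * 24 * 3600  # ~3.156e7
    BIT_RATE = 10e9                        # 10 Gbit/s

    expressions = {
        "article":   66 * 2**64 / (2 * BIT_RATE),  # 66 x 2^64 / (2 x 10^10)
        "ToaneeM":   2**64 / BIT_RATE,             # 2^64 / (10^10)
        "DavidCary": (66 / BIT_RATE) * 2**64,      # time per block x 2^64 blocks
    }
    for name, seconds in expressions.items():
        print(f"{name}: about {seconds / SECONDS_PER_YEAR:,.0f} years")
    # article:   about 1,929 years
    # ToaneeM:   about 58 years  (so the 131.5 above looks like an arithmetic slip)
    # DavidCary: about 3,858 years

The remaining factor of 2 between the article's figure and DavidCary's is exactly the division by 2 in the article's expression.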

Overhead section

The opening section says:

"The overhead can be reduced further by doubling the payload size to produce the 128b/130b encoding used by PCIe 3.0 and 128b/132b encoding used by USB 3.1 and Display Port 2.0."

But the overhead is not reduced with 128b/132b (4/128 = 2/64, i.e. still 3.125%).

94.137.103.165 (talk) 13:22, 30 September 2020 (UTC)
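Confirmed by arithmetic: defining overhead as added bits per payload bit (see the section below), 128b/130b halves it relative to 64b/66b, but 128b/132b only matches it. A quick Python check (my own illustration):

    schemes = {"8b/10b": (8, 10), "64b/66b": (64, 66),
               "128b/130b": (128, 130), "128b/132b": (128, 132)}
    for name, (payload, total) in schemes.items():
        print(f"{name}: {(total - payload) / payload:.4%}")
    # 8b/10b:    25.0000%
    # 64b/66b:    3.1250%
    # 128b/130b:  1.5625%
    # 128b/132b:  3.1250%  (same as 64b/66b)

So "reduced further" is accurate for 128b/130b but not for 128b/132b.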

Overhead Calculations

The overhead discussion near the beginning of the article is incorrect. Here is the original text:

The protocol overhead of a coding scheme is the ratio of the number of raw payload bits to the number of raw payload bits plus the number of added coding bits. The overhead of 64b/66b encoding is 2 coding bits for every 64 payload bits or 3.125%. This is a considerable improvement on the 25% overhead of the previously-used 8b/10b encoding scheme, which added 2 coding bits to every 8 payload bits.

That paragraph should read like this:

The protocol overhead of a coding scheme is the ratio of the number of added coding bits to the number of raw payload bits. The overhead of 64b/66b encoding is 2 coding bits for every 64 payload bits or 3.125%. This is a considerable improvement on the 25% overhead of the previously-used 8b/10b encoding scheme, which added 2 coding bits to every 8 payload bits.

134.238.168.56 (talk) 17:16, 2 April 2023 (UTC)
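Agreed that the definition as written doesn't match the 3.125% figure it is supposed to justify. A quick numeric check of both readings (my own illustration):

    payload, coding = 64, 2
    as_written = payload / (payload + coding)  # 64/66 ≈ 0.9697, an efficiency, not an overhead
    corrected  = coding / payload              # 2/64 = 3.125%, matches the article
    print(f"as written: {as_written:.4f}")
    print(f"corrected:  {corrected:.4%}")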