Blog: FEC in 100G networks and beyond

April 23, 2019
Forward error correction (FEC) is used in a variety of contexts to ensure reliable data transmission over “noisy” communication channels. The idea behind the technique is to encode the original message with redundant data prior to transmission. This redundancy takes the form of an error-correcting code (ECC), generated by an FEC algorithm, that is forwarded along with the data and decoded by the receiver. On the receiving end, this affords an opportunity to correct errors, thus reducing the bit-error rate (BER) and increasing reliability.
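
As a concrete illustration, the sketch below runs the full encode, corrupt, decode cycle using a deliberately simple repetition code, far weaker than anything deployed in optical transport; all function names here are hypothetical.

```python
import random

def encode(bits, r=3):
    """Repetition code: transmit each bit r times (the redundant ECC data)."""
    return [b for b in bits for _ in range(r)]

def noisy_channel(bits, flip_prob):
    """Model a noisy channel by flipping each bit with probability flip_prob."""
    return [b ^ (random.random() < flip_prob) for b in bits]

def decode(coded, r=3):
    """Majority vote over each group of r copies corrects isolated flips."""
    return [int(sum(coded[i:i + r]) > r // 2) for i in range(0, len(coded), r)]

random.seed(1)
message = [random.randint(0, 1) for _ in range(10_000)]
received = decode(noisy_channel(encode(message), flip_prob=0.05))
errors = sum(m != v for m, v in zip(message, received))
# Raw BER is ~5e-2; after decoding, the residual BER falls to roughly
# 3p^2(1-p) + p^3, i.e. about 7e-3 for p = 0.05.
print(f"residual BER: {errors / len(message):.4f}")
```

Note the cost: three transmitted bits per data bit, which is exactly the tradeoff discussed next.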

Because the redundant bits travel across the same paths as the data they are designed to protect, there is a tradeoff between bit-error rate and data rate. More reliable codes tend to be more complex, with more redundant bits in play. By taking up more space in the transmission channel, such codes can lower the net rate of data transmission, even as they deliver an effective gain in signal-to-noise ratio (SNR) at the receiver.
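
To put numbers on that tradeoff: if the line rate is held fixed, every redundant bit displaces a payload bit, so the net information rate scales with the code rate k/n. A back-of-the-envelope sketch with a hypothetical helper (the two overhead figures anticipate the FEC schemes discussed later):

```python
def net_info_rate(line_rate_gbps, overhead_fraction):
    """Net payload rate when parity overhead shares a fixed line rate.
    Code rate = k/n = 1 / (1 + overhead)."""
    return line_rate_gbps / (1 + overhead_fraction)

# ~7% overhead (classic hard-decision FEC) vs ~20% (soft-decision FEC)
for oh in (0.07, 0.20):
    print(f"{oh:.0%} OH -> {net_info_rate(100, oh):.1f} Gb/s net on a 100 Gb/s line")
```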

A key concept related to this tradeoff is the Shannon limit, also known as channel capacity. Named for information theory pioneer Claude Shannon, this is the theoretical maximum rate at which information can be transferred error-free over a channel with a given noise level.
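
The limit itself comes from the Shannon-Hartley theorem, C = B * log2(1 + SNR): for a channel of bandwidth B and a given linear signal-to-noise ratio, no coding scheme can push error-free throughput beyond C. A quick calculation (the specific bandwidth and SNR figures below are illustrative only):

```python
import math

def shannon_capacity_gbps(bandwidth_ghz, snr_db):
    """Shannon-Hartley: C = B * log2(1 + SNR), with SNR converted from dB."""
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_ghz * math.log2(1 + snr_linear)

# e.g., a 50 GHz channel at 15 dB SNR
print(f"{shannon_capacity_gbps(50, 15):.0f} Gb/s")  # ~251 Gb/s
```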

ECC types

A wide variety of ECCs have been developed, but they generally can be classified into two main types: block and convolutional. Block codes operate on fixed-size message blocks, appending redundant parity bits to each block. They are typically decoded by hard-decision algorithms, which decide whether each received bit is a one or a zero based on which side of a decision threshold the signal falls.

Convolutional codes, by contrast, add redundant bits continuously and can operate on messages of arbitrary length. They are paired with soft-decision algorithms, in which additional bits supply a “confidence factor” indicating how far the signal lies from that same threshold. This allows for much higher error-correction performance, but it also adds greatly to decoding complexity.
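
The difference between the two decision styles can be made concrete. For a BPSK-like signal (nominally -1 for a 0 bit, +1 for a 1 bit), a hard decision keeps only which side of the threshold a sample fell on, while a soft decision keeps a log-likelihood ratio (LLR) whose magnitude is the confidence factor. A hypothetical sketch:

```python
def hard_decision(sample):
    """Keep only which side of the 0.0 threshold the sample fell on."""
    return 1 if sample > 0 else 0

def soft_decision(sample, noise_var=0.5):
    """LLR for BPSK in Gaussian noise (2y / sigma^2): the sign suggests the
    bit, and the magnitude gives the decoder a confidence factor to weigh."""
    return 2 * sample / noise_var

for s in (0.9, 0.1, -0.05, -1.2):
    print(f"sample {s:+.2f}: hard={hard_decision(s)}, soft LLR={soft_decision(s):+.1f}")
```

A sample at +0.90 and one at +0.10 both hard-decide to 1, but the soft decoder knows to trust the first far more, which is where the extra correction performance comes from.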

A best-of-both-worlds approach combines the two types in concatenated coding schemes, in which the convolutional code performs the primary correction work and the block code subsequently catches leftover errors. Such schemes can perform within about 1 to 1.5 dB of the Shannon limit.
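
A toy version of the idea, using simple stand-ins for both stages (a repetition code playing the inner role that a convolutional code would fill in practice, and a Hamming(7,4) block code as the outer stage), just to show the decode order: the inner decoder runs first, and the outer code catches what leaks through. Purely illustrative.

```python
import random

# --- Outer block code: Hamming(7,4), corrects one bit error per block ---
def hamming_encode(d):                         # d = 4 data bits
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

def hamming_decode(c):
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3            # position of a single flip
    if syndrome:
        c = c[:]
        c[syndrome - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]

# --- Inner code: repetition-3 stand-in for the convolutional stage ---
def inner_encode(bits):
    return [b for b in bits for _ in range(3)]

def inner_decode(bits):
    return [int(sum(bits[i:i + 3]) > 1) for i in range(0, len(bits), 3)]

random.seed(7)
data = [random.randint(0, 1) for _ in range(4)]
tx = inner_encode(hamming_encode(data))            # encode outer, then inner
rx = [b ^ (random.random() < 0.1) for b in tx]     # noisy channel
decoded = hamming_decode(inner_decode(rx))         # decode inner first
# Usually True: the inner stage fixes most flips, the outer stage the stragglers.
print(decoded == data)
```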

Going the distance

In the context of fiber-optic networking, FEC is used to address optical SNR (OSNR), one of the key parameters that determines how far a wavelength can travel before it needs regeneration. FEC is especially important at high data rates, where advanced modulation schemes are required to mitigate dispersion and keep signals within the frequency grid. Without FEC, 100G transport would be limited to extremely short distances.

The first standard for optical FEC, employed in both 2.5G and 10G networks, was a Reed-Solomon (RS) block code. Employing RS-FEC added a byte overhead (OH) of slightly less than 7% and produced a net OSNR improvement of around 6 dB, approximately quadrupling wavelength travel distance. Upon discovering that adding stronger FEC was a highly cost-effective path to better 10G results, vendors began offering more complex algorithms branded as enhanced FEC (EFEC). These enabled gains of approximately two more decibels without expanding the overhead.
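
For reference, the RS code standardized for optical transport in ITU-T G.709 is RS(255,239): each 255-byte block carries 239 payload bytes and 16 parity bytes, which is where the "slightly less than 7%" figure comes from.

```python
n, k = 255, 239          # RS(255,239), the ITU-T G.709 code
parity = n - k           # 16 parity bytes per block
overhead = parity / k    # redundancy relative to payload
t = parity // 2          # an RS code corrects up to (n - k) / 2 symbol errors
print(f"overhead = {overhead:.2%}, corrects up to {t} byte errors per block")
# -> overhead = 6.69%, corrects up to 8 byte errors per block
```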

One ECC that can be employed in this context is the low-density parity check (LDPC) code. Designed for near-capacity performance, LDPC is a block code composed of multiple single parity check (SPC) codes that are decoded in parallel using iterative soft-decision decoding. Another option, the turbo code, is built from two or more relatively simple convolutional codes plus an interleaver that spreads errors more uniformly. Turbo codes can perform within a fraction of a decibel of the Shannon limit.
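
To give a flavor of the SPC building block inside an LDPC decoder, the snippet below implements a single parity-check ("check node") update using the common min-sum approximation: each bit receives an extrinsic LLR whose sign restores even parity and whose magnitude is limited by the least reliable of the other bits. This is one step of one iteration, not a full decoder.

```python
def min_sum_check_update(llrs):
    """One SPC check-node update (min-sum approximation).

    For each bit, the extrinsic LLR is the product of the signs of all
    *other* LLRs times the minimum magnitude among those other LLRs.
    """
    extrinsic = []
    for i in range(len(llrs)):
        others = llrs[:i] + llrs[i + 1:]
        sign = 1
        for v in others:
            if v < 0:
                sign = -sign
        extrinsic.append(sign * min(abs(v) for v in others))
    return extrinsic

# Three confident bits and one uncertain bit sharing a parity check:
print(min_sum_check_update([4.0, -3.5, 5.0, 0.2]))
```

Note how the three confident bits pin down the uncertain one (it receives a strong -3.5), while the uncertain bit limits how strongly any confident bit can be reinforced in return.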

One of the newer ECCs to be introduced is the polar code, a block code that uses recursive concatenation to transform the physical channel into a set of virtual channels. After enough recursions, the virtual channels become polarized: each exhibits either very high or very low reliability. Data bits can then be allocated to the most reliable channels. In theory, polar codes can achieve full channel capacity, but the block sizes needed to do so present a practicality challenge for real-world applications.
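
The recursion at the heart of polar coding is small enough to show directly. Each stage applies the 2x2 kernel (u1 XOR u2, u2); applying it recursively yields the full transform (shown here without the usual bit-reversal reordering). Frozen positions, corresponding to the low-reliability virtual channels, are pinned to zero while data rides on the reliable ones. An illustrative sketch:

```python
def polar_transform(u):
    """Recursively apply the kernel F = [[1, 0], [1, 1]] over GF(2)."""
    if len(u) == 1:
        return u
    half = len(u) // 2
    upper = [a ^ b for a, b in zip(u[:half], u[half:])]  # u1 XOR u2 branch
    return polar_transform(upper) + polar_transform(u[half:])

# Length-8 block: positions 3, 5, 6, 7 are the more reliable channels
# (a typical ranking at this size; real designs compute it per channel).
frozen = {0, 1, 2, 4}
data = [1, 0, 1, 1]
u, bits = [], iter(data)
for i in range(8):
    u.append(0 if i in frozen else next(bits))
print(polar_transform(u))  # the 8-bit codeword placed on the channel
```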

Looking forward

As the push for ever-higher transmission rates has continued, soft-decision forward error correction (SD-FEC) schemes have grown in popularity. Although these can require a byte overhead around 20% — nearly three times as large as the original RS coding scheme — the gains they produce in the context of high-speed networking are substantial. FEC that results in a 1 to 2 dB gain on a 100G network, for instance, translates to a 20% to 40% greater reach.

Another influential factor for FEC deployment is the emergence of software-defined optical networking (SDON), which has produced components that can adapt to physical channel parameters and thus provision resources more effectively. Both the FEC scheme and FEC overhead are among the many factors optimized in SDON. A configurable FEC core, for instance, may be switchable between 7% and 20% — the two OHs associated with hard- and soft-decision algorithms, respectively. When other configurable parameters such as baud rate and quadrature amplitude modulation (QAM) are factored in, it becomes increasingly possible to optimize channel capacities for accommodating wavelengths of 200G, 400G, and beyond.
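
In an SDON controller, that switchable core might be driven by logic along these lines. Everything here (names, thresholds, gain figures) is hypothetical; the point is only that the FEC mode becomes one more provisioned parameter alongside baud rate and QAM order.

```python
from dataclasses import dataclass

@dataclass
class FecMode:
    name: str
    overhead: float      # parity overhead as a fraction of payload
    net_gain_db: float   # approximate net coding gain

# Illustrative figures only; real cores publish their own gain numbers.
HD_FEC = FecMode("hard-decision, 7% OH", 0.07, 6.0)
SD_FEC = FecMode("soft-decision, 20% OH", 0.20, 11.0)

def pick_fec(required_margin_db: float) -> FecMode:
    """Prefer the lighter code when its gain covers the link's OSNR margin;
    fall back to soft-decision FEC (and its larger overhead) otherwise."""
    return HD_FEC if HD_FEC.net_gain_db >= required_margin_db else SD_FEC

print(pick_fec(4.5).name)   # short link -> hard-decision, 7% OH
print(pick_fec(9.0).name)   # long haul  -> soft-decision, 20% OH
```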

Jerry Colachino is principal systems engineer at Precision Optical Transceivers, Inc.