SFP+ and EDC: The 10-Gigabit Ethernet game changer

Dec. 13, 2008
By Oswin Schreiber, Ph.D., ClariPhy Communications -- The combination of SFP+ and EDC has enabled the deployment of high-density, low-cost 10GbE equipment, making fiber (and more recently, twinaxial copper cable) the medium of choice in the data center versus twisted-pair cable.
Figure 1. Evolution of pluggable optical module form factors


For the first time since the late 1990s, Internet capacity is being strained by explosive growth in bandwidth demand from applications such as IP video. At the same time, network traffic in corporate data centers is surging as they provide greater and greater amounts of media-rich content to the Internet.

In 2007 Ethernet port shipments reached 300 million ports, with a majority of shipments occurring at rates of Gigabit Ethernet (1GbE) and 10-Gigabit Ethernet (10GbE) for the first time. Shipments are projected to reach 500 million ports per year in 2010, with growth driven entirely by 1GbE and 10GbE ports. While it currently represents only a sliver of the total Ethernet market, 10GbE has achieved critical mass and is expected to follow a growth pattern similar to that of 1GbE a decade earlier. If this trend holds, annual 10GbE port shipments will grow to tens of millions of units over the next five years.

Initially an expensive technology relegated to niche applications, 10GbE has taken hold as a mainstream technology in corporate data centers thanks to a dramatic drop in costs. Major factors in this transition have been the standardization of the SFP+ transceiver form factor, as well as the development of electronic dispersion compensation (EDC) silicon in low-cost, low-power CMOS technology. The combination of SFP+ and EDC has for the first time enabled the deployment of high-density, low-cost 10GbE equipment, backwards compatible with the existing infrastructure and capable of transmission distances from 1 m to 80 km. Additionally, SFP+ and EDC have made fiber (and more recently, twinaxial copper cable) the medium of choice in the data center versus twisted-pair cable.

In essence, they have changed the game in 10GbE.

The evolution of SFP+

Available for each of the 10GbE PMDs (10GBase-SR, LR, etc.), optical transceivers have undergone a dramatic evolution in cost, power, and size since the advent of the first 10GbE standard (see Figure 1). The XENPAK form factor emerged first in 2002 and uses a XAUI host interface consisting of four differential signals running at 3.125 Gbps each, including coding overhead. This requires the use of both a CDR as well as a XAUI SerDes inside the module, consuming valuable space and power budget. Next came X2, which reduced size and power dissipation relative to XENPAK, but kept the XAUI interface.

XFP made a leap forward by moving to a 10.3125-Gbps serial host interface, known as XFI, which eliminated the need for the XAUI SerDes inside the module. However, the signal integrity requirements of the XFI interface required XFP to maintain a significant amount of electronics inside the module, including a retimer in the transmit direction and a CDR in the receive direction.
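The interface rates quoted above follow directly from the coding overhead of each scheme: XAUI carries the 10-Gbps payload over four lanes with 8b/10b coding, while XFI carries it serially with 64b/66b coding. A quick arithmetic sanity check (illustrative only):

```python
# Line-rate check for the two 10GbE host interfaces discussed above.
payload_gbps = 10.0  # 10GbE MAC data rate

# XAUI: 8b/10b coding (10 coded bits per 8 payload bits), four lanes
xaui_aggregate = payload_gbps * 10 / 8   # 12.5 Gbps total coded rate
xaui_per_lane = xaui_aggregate / 4       # 3.125 Gbps per lane

# XFI/SFI: 64b/66b coding on a single serial lane
xfi_serial = payload_gbps * 66 / 64      # 10.3125 Gbps

print(xaui_per_lane)  # 3.125
print(xfi_serial)     # 10.3125
```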

SFP+ is the most recent and state-of-the-art optical module form factor, and contains a fundamental and significant improvement over XFP: It uses an interface known as SFI that requires no signal integrity electronics inside the module. Signal integrity is guaranteed by the PHY on the line card. In the transmit direction, the PHY uses pre-emphasis to compensate for the data-dependent jitter that accumulates between the PHY and the module. On the receive side, the PHY contains a high-performance CDR with EDC that compensates not only for dispersion in the fiber (or twinax), but also for the dispersion in the copper trace (up to 8 inches) between the module and the PHY. Because no signal integrity electronics are needed, SFP+ modules dissipate less than 1 W of power and have a small footprint that enables 48 ports on a single line card.
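Transmit pre-emphasis of the kind the PHY applies is, in essence, a short FIR filter that boosts symbol transitions to counter the low-pass response of the copper trace. A minimal sketch with hypothetical tap weights (the actual values are tuned per channel):

```python
# Illustrative 2-tap transmit pre-emphasis on NRZ symbols (+1/-1).
# Tap weights are hypothetical, chosen only to show the effect:
# transitions get a larger launch swing than runs of identical symbols.
main_tap, post_tap = 1.0, -0.25

symbols = [1.0, 1.0, -1.0, -1.0, 1.0]
driven = [
    main_tap * symbols[n] + (post_tap * symbols[n - 1] if n > 0 else 0.0)
    for n in range(len(symbols))
]
print(driven)  # [1.0, 0.75, -1.25, -0.75, 1.25]
```

Note that the repeated symbol at index 1 is de-emphasized to 0.75, while the transition at index 2 is boosted to -1.25, pre-distorting the waveform so it arrives at the module with less data-dependent jitter.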

Each successive generation of 10GbE optical modules has resulted in lower power dissipation, smaller footprint, and therefore higher port density. Most importantly for widespread deployment, the cost of these modules has declined by two orders of magnitude since 2002, as shown in Figure 1. As a result, 10GbE is nearing the price points required for mass adoption in corporate enterprise networks. Typically, adoption of a new generation of Ethernet technology occurs when IT managers can buy a 10X increase in bandwidth for a 3X-4X increase in price. With the advent of SFP+ optical modules, that point should be reached by the end of 2009.

SFP+ vs. 10GBase-T

While SFP+ is rapidly gaining traction because of its compelling value proposition for 10GbE data center and enterprise LAN applications, the alternative 10GBase-T technology for 10GbE transmission over Category 6/7 twisted-pair copper cabling has not gained wide market acceptance. 10GBase-T has proven to be an extremely challenging technology to implement in a cost- and power-effective way, because of the complexity of the signal processing required to overcome the bandwidth limitations and noise characteristics of the twisted-pair medium.

10GBase-T suffers from several inherent disadvantages relative to SFP+. The most important is power. Even in advanced 65-nm CMOS process technology, 10GBase-T PHYs dissipate around 6 W of power. This effectively rules out 10GBase-T as a viable high-density switch technology, relegating it to low-port-count adapter cards and uplinks. By contrast, the combination of an SFP+ module and PHY for data center applications dissipates 1-1.5 W of power, depending on reach.

Another significant disadvantage of 10GBase-T is its latency of approximately 2 µsec. This severely limits its applicability in the server and storage applications found in corporate data centers. The latency of SFP+ is less than 0.1 µsec for all 10GbE standards, and much less than that for data center applications.

Another disadvantage of 10GBase-T is its inability to work over most installed cable. The 10GBase-T standard specifies a maximum reach of 65 m for Category 6 cables, and requires the more expensive and bulky Category 6A (A = augmented) or Category 7 shielded cable for 100-m reach. Legacy Category 5 and 5e cables, which make up the bulk of enterprise 1GbE installations, are not supported by 10GBase-T. The need to re-cable makes installing 10GBase-T technology a very expensive proposition for IT departments. On the other hand, SFP+ combined with effective EDC can support 10-Gbps transport over installed fiber at distances up to 300 m.

EDC enables affordable 10GbE designs

The fundamental role of EDC in a 10GbE SFP+ system is to eliminate intersymbol interference (ISI) that arises from dispersion in the media (fiber or twinax) as well as distortion in the module and its connection to the PHY.

The first generation of EDC products utilized a feed-forward equalizer (FFE) in combination with a decision feedback equalizer (DFE) as shown in Figure 2a. The FFE shapes the channel so that the ISI trails the main symbol and the DFE subtracts off this trailing ISI.

The key advantage of FFE-DFE is its simplicity, which enables implementation in either analog or digital architecture. However, the drawback of the FFE-DFE is that useful signal energy is thrown away in the process of subtracting off trailing ISI, thus reducing signal-to-noise ratio (SNR). Additionally, FFE-DFE architectures suffer known problems with error propagation that further reduce effective SNR.
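The FFE-DFE structure described above can be sketched in a few lines. This is an illustrative model, not the actual silicon: it assumes the FFE has already shaped the channel to a single trailing (post-cursor) ISI tap, which the DFE then subtracts using the previous decision. Note the error-propagation hazard mentioned above: a wrong decision feeds back and corrupts the next subtraction.

```python
# Minimal DFE sketch: NRZ symbols (+1/-1) through a noiseless channel
# with one post-cursor ISI tap of 0.5 (the shape an FFE would leave).
import random

random.seed(0)
symbols = [random.choice([-1.0, 1.0]) for _ in range(1000)]

h = [1.0, 0.5]  # main cursor + trailing ISI
received = [
    sum(h[k] * symbols[n - k] for k in range(len(h)) if n - k >= 0)
    for n in range(len(symbols))
]

decisions = []
prev = 0.0
for r in received:
    y = r - 0.5 * prev            # subtract trailing ISI of last decision
    d = 1.0 if y >= 0 else -1.0   # slicer
    decisions.append(d)
    prev = d                      # feed the decision back

errors = sum(d != s for d, s in zip(decisions, symbols))
print(errors)  # 0 in this noiseless example
```

With noise added, a slicer error would propagate into the next subtraction, which is exactly the error-propagation penalty noted above.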

Next-generation EDC

The second generation of EDC products utilizes Maximum Likelihood Sequence Detection (MLSD) as shown in Figure 2b. MLSD is well known to be the optimal receiver for the uncoded NRZ modulation used in 10GbE (both fiber and twinax).

The MLSD architecture includes an FFE to shape the channel, much like the FFE-DFE. However, the FFE is followed by a Viterbi detector, which selects, out of all 2^N possible transmitted sequences (N being the total number of bits transmitted), the one closest to the received signal. An MLSD EDC approach offers significantly better link budget performance (typically 3-4 dB of SNR) than FFE-DFE architectures. This advantage is particularly important for the 10GbE SFP+ applications with the most challenging channel characteristics: twinax and 10GBase-LRM.
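The Viterbi detector does not enumerate the 2^N sequences by brute force; dynamic programming keeps one survivor path per channel state, so complexity grows linearly with sequence length. A minimal, illustrative sketch for NRZ over the same 1-tap ISI channel used above (not the production architecture):

```python
# Viterbi (MLSD) sketch: channel h = [1.0, 0.5], state = previous symbol.
import random

random.seed(1)
N = 200
tx = [random.choice([-1.0, 1.0]) for _ in range(N)]
h0, h1 = 1.0, 0.5
rx = [h0 * tx[n] + (h1 * tx[n - 1] if n > 0 else 0.0) for n in range(N)]

states = [-1.0, 1.0]
# Initialize survivors with the first symbol (no ISI precedes it)
paths = {s: ([s], (rx[0] - h0 * s) ** 2) for s in states}

for n in range(1, N):
    new_paths = {}
    for s in states:                          # candidate current symbol
        best = None
        for p, (path, c) in paths.items():    # p = previous symbol (state)
            branch = c + (rx[n] - h0 * s - h1 * p) ** 2  # squared error
            if best is None or branch < best[1]:
                best = (path + [s], branch)
        new_paths[s] = best                   # one survivor per state
    paths = new_paths

detected, _ = min(paths.values(), key=lambda pc: pc[1])
errors = sum(d != t for d, t in zip(detected, tx))
print(errors)  # 0 in this noiseless example
```

Because the detector scores whole sequences rather than slicing symbol by symbol, it recovers the signal energy that a DFE discards, which is the source of the SNR advantage discussed above.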

MLSD is ubiquitous in lower-speed communications applications such as analog modems and hard disk drives, which have collectively shipped billions of units. However, it is only recently that low-power realizations of MLSD have become available for 10GbE applications, driven by both innovative circuit design techniques as well as advances in CMOS technology according to Moore's Law. As CMOS technology continues to advance to geometries of 40 nm and below, MLSD will become the prevalent 10GbE EDC architecture, much as it has become the prevalent architecture in so many other communications applications.

Oswin Schreiber, Ph.D., is director of product marketing at ClariPhy Communications (www.clariphy.com).
Figure 2. EDC implementations: (a) FFE-DFE and (b) MLSD based
