Scaling to 100GbE: Drivers and implementation

May 1, 2007

by Vijay Vusirikala

The sustained high growth of Internet traffic has important scaling implications for IP backbone network capacity, individual core router size, router-to-router link bandwidth, and the optical transport networks used to carry this traffic across the WAN. A number of new applications in a variety of industry verticals are feeding the continued traffic growth in the core. Rapid consumer adoption of downloadable video content, facilitated by ultra-broadband access networks, is driving the need for higher speeds in carrier, cable multiple-system operator (MSO), and content delivery networks. And in enterprise networks and data centers, new trends such as file and storage virtualization, combined with new data sets like high-resolution imagery and video, require higher-speed interfaces.

Scaling techniques currently used to meet such growth typically are based on bundling n × 10-Gbit/sec links using Layer 2 link aggregation groups (LAGs) or Layer 3 equal-cost multipath (ECMP) forwarding. These techniques have a number of drawbacks, including the high cost and complexity associated with multiple ports and links. Moreover, high-bandwidth flows are handled inefficiently: because all packets of a flow must stay on one member link to preserve packet ordering, a single flow cannot be striped effectively across multiple links. Thus, the limitations of current scaling techniques and continued traffic growth are underlining the strong need to develop an ecosystem for the next higher speed of Ethernet: 100-Gigabit Ethernet (100GbE).
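To make the single-flow limitation concrete, here is a minimal sketch (illustrative only, not any vendor's implementation) of the hash-based link selection that LAG and ECMP typically perform; the flow key, link count, and hash function are assumptions chosen for the example:

```python
import hashlib

NUM_LINKS = 10  # a 10 x 10-Gbit/sec bundle standing in for one 100GbE port

def pick_link(src_ip: str, dst_ip: str, src_port: int, dst_port: int) -> int:
    """Hash a flow key to one member link of the bundle.

    Every packet of a given flow maps to the same link so packets stay
    in order, which is exactly why one large flow can never use more
    than a single link's capacity.
    """
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}".encode()
    digest = hashlib.sha1(key).digest()
    return int.from_bytes(digest[:4], "big") % NUM_LINKS

# A single 40-Gbit/sec flow: every call returns the same link index,
# so the flow is pinned to one 10-Gbit/sec member.
print(pick_link("10.0.0.1", "10.0.0.2", 5000, 80))
```

Because every packet of the flow hashes to the same index, one high-bandwidth flow is capped at a single member link's 10 Gbits/sec no matter how many links are in the bundle.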

The introduction of any new Ethernet standard requires broad industry consensus in definition, implementation, and adoption. Several standards organizations currently are developing different pieces of the end-to-end 100GbE puzzle. The Institute of Electrical and Electronics Engineers (IEEE) 802.3 committee, through its Higher Speed Study Group (HSSG), is developing the MAC layer parameters and LAN physical interface specifications for various application distances. The International Telecommunication Union (ITU), meanwhile, is developing recommendations for 100GbE transport over the WAN using the currently defined Optical Transport Network (OTN) rates as well as a new, higher-speed OTN rate. The physical interfaces for optoelectronic data modules are typically standardized via multisource agreements (MSAs) among optical component vendors.

A crucial piece of the 100GbE ecosystem is the ability to cost-effectively transport a 100GbE signal across the WAN while easily provisioning, managing, troubleshooting, and protecting the service. The 100GbE signal from the client equipment (e.g., router) is connected to the optical transport system using a short-reach physical interface, currently being standardized by the IEEE. This signal is then transported over WDM networks spanning metro, regional, or long-haul distances, after which it is handed off to the receiving client.

The industry has proposed two approaches for this 100GbE-over-WDM transport: a serial approach, with a native line rate in the 120- to 130-Gbit/sec range, and a concatenated approach, in which the 100-Gbit/sec signal is inverse multiplexed over multiple wavelengths or optical data units (ODUs) in the OTN framework.

In the serial approach, the 100GbE signal is transmitted on a single wavelength using serial transmission at a rate that is typically higher than 100 Gbits/sec. For example, the ITU's serial transmission proposal specifies a payload container of 122 Gbits/sec and, with forward error correction (FEC) overhead added, a transmission rate of 130 Gbits/sec.
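As a rough check (my arithmetic, not part of the ITU proposal), these figures imply an FEC overhead ratio close to the roughly 7% overhead of the standard G.709 Reed-Solomon RS(255,239) code:

```latex
\frac{130\ \text{Gbits/sec}}{122\ \text{Gbits/sec}} - 1 \approx 6.6\%
\qquad \text{vs.} \qquad
\frac{255}{239} - 1 \approx 6.7\%
```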

While the serial transport option looks attractive from an ease-of-management point of view, the technologies needed for serial 120- to 130-Gbit/sec transmission are currently immature and very expensive. Serial transport can be accomplished using TDM-only transmission or by techniques that use advanced signal coding (e.g., duobinary transmission) or modulation formats (e.g., QPSK, DQPSK, or QAM) to transmit 100 Gbits/sec at a lower effective baud rate. However, both serial transmission approaches require sophisticated and expensive optical components that are still a few years away from widespread commercial deployment.
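The appeal of multilevel modulation is easy to quantify: a format carrying log2(M) bits per symbol reduces the symbol (baud) rate proportionally. Using the article's own roughly 130-Gbit/sec serial line rate as an illustrative input, QPSK at 2 bits per symbol would run at half the baud rate of binary transmission:

```latex
R_{\text{symbol}} = \frac{R_{\text{bit}}}{\log_2 M}
\quad\Longrightarrow\quad
R_{\text{symbol}}^{\text{QPSK}} = \frac{130\ \text{Gbits/sec}}{\log_2 4} = 65\ \text{Gbaud}
```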

Moreover, with these techniques, scaling to 100 Gbits/sec would continue to be limited by a wide range of optical impairments. Tolerance to chromatic dispersion, for example, shrinks with the square of the bit rate, while polarization-mode dispersion tolerance and optical receiver sensitivity also degrade as speeds rise, limiting the unregenerated reach unless complex and costly compensation techniques are used. Carriers scaling their networks to 40 Gbits/sec today already face significant compensation challenges, and the dispersion penalty is magnified by a further factor of 6.25 when the network scales from 40 to 100 Gbits/sec.
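The 6.25 figure is the quadratic scaling at work: if dispersion-limited reach falls as the inverse square of the bit rate B, then

```latex
\left(\frac{100\ \text{Gbits/sec}}{40\ \text{Gbits/sec}}\right)^{2} = 2.5^{2} = 6.25
```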

Figure: Photonic integrated circuits operating with 10 wavelengths at 10 Gbits/sec per wavelength (top) enable transmitter and receiver modules with a total DWDM capacity of 100 Gbits/sec (bottom).

In the concatenated approach, the 100-Gbit/sec packet stream is inverse multiplexed over multiple ODU channels and deterministically remultiplexed at the receiver. A number of concatenation schemes are possible depending on the type of transport unit chosen. For example, a 100-Gbit/sec signal can be transported using three ODU3 (ODU3-3v), 11 ODU2 (ODU2-11v), or 10 ODU2e (ODU2e-10v). (ODU2 and ODU3 refer to 10- and 40-Gbit/sec rates, respectively, as defined by ITU OTN. ODU2e refers to the overclocked version of ODU2, with a line rate of 11.1 Gbits/sec, that some optical transport equipment providers have implemented to support the transparent transport of the 10GbE LAN PHY.) An implementation based on ODU3-3v results in significant bandwidth inefficiency and does not support compatibility with currently deployed 10-Gbit/sec networks. ODU2e-10v, by contrast, is attractive because of its seamless compatibility with the 10GbE LAN PHY transport paradigm. Moreover, the use of an OTN framework for transporting 100GbE signals provides the additional benefit of a “digital wrapper” for performance monitoring, fault isolation, and protection of the circuit.
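The striping idea behind a scheme like ODU2e-10v can be sketched in a few lines. This is a toy model under loud assumptions: real OTN virtual concatenation operates on framed ODU payloads with multiframe alignment, not raw byte chunks, and the chunk size here is arbitrary:

```python
NUM_MEMBERS = 10  # e.g., 10 ODU2e channels carrying one 100GbE client

def stripe(payload: bytes, chunk: int = 16):
    """Round-robin the client stream across members, tagging each chunk
    with a sequence number so the far end can restore the original order."""
    members = [[] for _ in range(NUM_MEMBERS)]
    seq = 0
    for offset in range(0, len(payload), chunk):
        members[seq % NUM_MEMBERS].append((seq, payload[offset:offset + chunk]))
        seq += 1
    return members

def reassemble(members) -> bytes:
    """Deterministically remultiplex: merge every member's chunks by sequence."""
    tagged = [item for member in members for item in member]
    return b"".join(piece for _, piece in sorted(tagged))

data = bytes(range(256)) * 4             # stand-in for the 100GbE client stream
assert reassemble(stripe(data)) == data  # lossless round trip
```

The same structure also illustrates the graceful-degradation benefit noted below: losing one member wavelength removes a tenth of the capacity rather than taking down the whole link.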

This type of “super-wavelength” transport through the bonding of multiple 10-Gbit/sec channels over WDM provides a number of benefits compared with the serial approach. The use of concatenated 10-Gbit/sec wavelengths enables 100GbE signals to be transported using existing optical line systems without cumbersome reengineering and ultraprecise compensation techniques. The rapid adoption of 10-Gbit/sec technologies also has propelled 10-Gbit/sec optical components along a steep volume-price curve, thus enabling low-cost equipment. Finally, the use of concatenated transport containers enables the transport link to function (with reduced capacity) even under single-wavelength failure conditions.

The use of large-scale optoelectronic photonic integrated circuits (PICs) enables efficient, high data rate transmission using a “super-wavelength” approach. A PIC can integrate the functionality of dozens of optical components, including lasers, modulators, detectors, attenuators, multiplexers/demultiplexers, and optical amplifiers, into a single device.

The figure depicts a commercially available implementation of PIC technology with a 100-Gbit/sec transmit PIC and a 100-Gbit/sec receive PIC, each incorporating multiple optical devices onto a chip of roughly 5 mm². PIC technology may enable significant improvements in the size, power consumption, reliability, and cost of ultrahigh-bandwidth optical interfaces at 100 Gbits/sec or higher. PICs operating with 10 wavelengths at 10 Gbits/sec per wavelength for a total DWDM capacity of 100 Gbits/sec have been widely deployed. By comparison, each 10-Gbit/sec channel in a conventional system requires up to a half-dozen discrete optoelectronic components (e.g., lasers, modulators, wavelength lockers, detectors, attenuators, and WDM multiplexers/demultiplexers).

Large-scale photonic integration may enable even greater capacity and functional integration. Recent R&D efforts have demonstrated PICs capable of total aggregate data rates of 400 Gbits/sec and 1.6 Tbits/sec per device pair. The consolidation of integration and packaging that monolithic integration offers may also enable future optical component costs to follow a cost-reduction curve defined by volume manufacturing efficiencies, greater functional integration, and increased device density.

Service providers want a cost-effective approach to 100GbE that does not require significantly rearchitecting their existing transport networks. Serial 100-Gbit/sec transmission may face technical and economic hurdles that limit its viability for a number of years. Hence, the most viable approach may be to bond multiple wavelengths operating at 10 Gbits/sec into a “super-wavelength” service. Such super-wavelength services are possible today over optical transport infrastructures flexible enough to carry sub-lambda, lambda, and super-lambda services, allowing service providers to deliver next-generation services in a manner that is software- and protocol-based rather than infrastructure- or network-based.

Vijay Vusirikala is director of technical marketing at Infinera (www.infinera.com). He may be reached via e-mail at [email protected].
