40 Gbits/sec: applied physics, enabled economics

Dec. 1, 2001
SPECIAL REPORTS: Annual Technology Forecast

Successful 40-Gbit/sec optical-networking platforms will cost-effectively integrate new technology into existing 10-Gbit/sec optical networks widely deployed today.

BRIAN LAVALLÉE, Nortel Networks

When bandwidth demand experienced voracious growth rates, 40-Gbit/sec optical networking was an obvious choice to maintain pace and increase revenues. At the time, the main goal for carriers was simply to keep up with rapid growth rather than optimize the network to reduce the overall cost of ownership. That was quite understandable, since simultaneously sustaining spectacular growth and reducing network costs was extremely difficult. With bandwidth demands stabilizing, network deployments are increasingly dependent on the lowest cost per connected bit, which incorporates costs associated with the line and switch portions of a managed network.

One method of cost-effectively improving spectral efficiency over long distances is to increase the line rate to 40 Gbits/sec, yielding improved economies of scale by carrying significantly more information on a given wavelength. However, platforms supporting 40-Gbit/sec line rates must also support 10-Gbit/sec line rates, since 10-Gbit/sec rates will better serve ultra-long-haul applications for the foreseeable future. Networks based on 40 Gbits/sec will not replace existing 10-Gbit/sec networks but will coexist with them due to significantly different network economics based on available technologies and target applications.

There are two main choices available to the system designer to keep pace with growing bandwidth demands: deploy more 10-Gbit/sec channels or increase the line rate. In essence, the actual bit rate is inconsequential to the service provider. If service providers are to deploy networks based on 40-Gbit/sec technology, a more cost-effective total network solution with improved reliability, flexibility, and manageability must first be offered. In other words, the driving factor for successful 40-Gbit/sec optical-networking commercialization is not simply increased bandwidth, but rather an improved business case over the 10-Gbit/sec optical networks of today and tomorrow.

Fortunately, rapid advances in 40-Gbit/sec enabling technologies have reached the point where they are no longer solely the focus of successful lab experiments. Rather, they will soon enable optical equipment vendors to offer commercially available 40-Gbit/sec optical-networking solutions that are cost-effective and forecast-tolerant enough to meet the bandwidth needs and services of today and tomorrow.

The seemingly obvious advantages of quadrupling the line rate in terms of reduced equipment must be weighed against the need for, and availability of, reliable and cost-effective components, modules, and systems. New technology mandates new processes and materials that are initially more expensive but whose costs fall rapidly when coupled with invention in key areas. The caveat for equipment vendors is to ensure that increased individual 40-Gbit/sec component costs are sufficiently offset by the reduction in network equipment required due to improved spectral efficiencies.

The successful technical operation of 40-Gbit/sec line rates alone does not guarantee commercial success. Instead, 40-Gbit/sec optical networks must allow service providers to enjoy reduced overall network costs. Reliability and survivability are the foundation of any network that is to carry mission-critical business applications. Since four times the information is carried on a single 40-Gbit/sec wavelength compared with today's 10-Gbit/sec networks, extreme reliability of the entire network solution is mandatory.

The primary and obvious difference between networks based on 40 Gbits/sec and 10 Gbits/sec is the quadrupled line rate, which results in bit unit intervals four times narrower. It is precisely this significantly reduced pulse spacing that yields systems much less tolerant of pulse spreading and/or distortion, because dispersion susceptibility grows with the square of the line rate. Consequently, 40-Gbit/sec-based networks are actually 16 times more susceptible to pulse spreading and/or distortion than networks based on 10-Gbit/sec line rates. There is also four times less energy per pulse available for detection at the optical receiver. Therefore, 40-Gbit/sec pulse shape, spacing, and power levels require very precise management techniques to allow receivers to successfully distinguish between incoming optical pulses.
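The scaling relationships above can be sketched in a few lines of arithmetic (an illustrative calculation, not measured data):

```python
# Illustrative arithmetic: how key impairment margins scale when the
# line rate is quadrupled from 10 to 40 Gbits/sec.

def scaling(rate_old_gbps: float, rate_new_gbps: float) -> dict:
    """Return relative impairment factors for a line-rate increase."""
    r = rate_new_gbps / rate_old_gbps
    return {
        "bit_interval_ratio": 1 / r,      # each bit slot is 1/r as wide
        "cd_susceptibility": r ** 2,      # dispersion penalty grows as rate^2
        "energy_per_pulse_ratio": 1 / r,  # same average power, r times more bits
    }

s = scaling(10, 40)
# 4x narrower bit slots, 16x chromatic-dispersion susceptibility,
# 4x less energy per pulse at the receiver.
```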

The primary challenge is to fully understand what causes a transmitted optical pulse to distort, then implement reliable, cost-effective management techniques. The ultimate goal of the 40-Gbit/sec system designer is to ensure that a pulse received at the network egress point is similar in both shape and timing to when it was originally launched into the network ingress point.

Precise chromatic dispersion. Light pulses representing information each have a definite spectral width. The intrinsic properties of an optical fiber dictate that different wavelengths will propagate at different speeds, thereby resulting in chromatic dispersion (CD). If left unmanaged, pulse spreading results in intersymbol interference as adjacent pulses eventually overlap, leading to subsequent bit errors. Fortunately, this relationship is near-linear in nature and lends itself to relatively "simple" management techniques in most applications. The de facto accepted method of managing CD is to install inline fixed passive dispersion compensation modules (DCMs) constructed of specifically doped optical fibers, effectively reversing the incurred dispersion. This well-understood solution is sufficient for most 10-Gbit/sec optical networks.
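As a rough illustration of how fixed DCMs cancel accumulated dispersion, the sketch below assumes typical (not vendor-specified) values of roughly +17 ps/nm/km for standard singlemode transmission fiber and -90 ps/nm/km for dispersion-compensating fiber:

```python
# Back-of-envelope fixed dispersion compensation. The coefficients are
# illustrative assumptions, not specifications for any particular fiber.

D_LINE = 17.0   # ps/nm/km, assumed CD of the transmission fiber
D_DCM = -90.0   # ps/nm/km, assumed CD of the compensating fiber

def dcm_length_km(span_km: float) -> float:
    """Length of compensating fiber that cancels the span's accumulated CD."""
    accumulated = D_LINE * span_km   # ps/nm accumulated over the span
    return accumulated / -D_DCM      # km of DCM fiber required

# An 80-km span accumulates 1,360 ps/nm and needs ~15.1 km of DCM fiber.
```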

Most dispersion compensation (DC) deployed today is coarse in nature, in that all wavelengths are compensated for simultaneously, resulting in an averaging approach where wavelengths at the spectrum extremities receive either too little or too much compensation. Recently deployed sloped DCMs have improved this situation, resulting in improved link budgets and more cost-effective networks. But for 40-Gbit/sec transmission, more precise compensation is required, primarily due to the acute susceptibility of 40-Gbit/sec optical networks to dispersion.
Figure 1. Achievable coding gains for various forward error correction coding schemes.

Several approaches exist for achieving finer compensation, from grouping bands of wavelengths down to individual wavelength compensation. Using fixed compensation modules to perform very fine compensation requires numerous modules, resulting in a very lossy solution that, although workable, is not optimal.

Active DC techniques allow for a more elegant and effective solution. By combining active and passive sloped dispersion compensation, an optimal solution is obtained. The residual dispersion resulting from imperfect slope matching between the fiber plant and DC fibers is actively corrected. Increases in network robustness are also achieved by actively correcting for changes in dispersion resulting from ambient temperature and/or physical changes to the fiber plant. Optical networks based on 40-Gbit/sec line rates must respect existing hut spacing, so the approximate amount of required CD compensation is already known. Readily available fixed sloped DCMs perform coarse compensation, while dynamic compensation modules tune out the remaining residual dispersion.

Enhanced optical amplification. Attenuation is the reduction of optical signal power as it propagates down an optical fiber. Fortunately, the regions of lowest attenuation, namely the C-band (1530-1565 nm) and L-band (1565-1620 nm), are also the gain regions of erbium-doped fiber amplifiers (EDFAs). It is precisely this beneficial relationship that enabled the current DWDM optical-networking industry.

Using EDFAs for optical amplification in 40-Gbit/sec optical networks is not a novel concept. However, the abrupt nature of optical amplification in today's EDFAs yields relatively high amounts of optical noise and non-linear effects detrimental to 40-Gbit/sec optical networking. Each EDFA adds cumulative noise to the optical link, while amplified power reaches very high levels, giving rise to debilitating non-linear effects. Thus, EDFAs with lower noise figures are required for reliable 40-Gbit/sec transmission. Alternative network topologies combining distributed Raman amplification (DRA) and EDFAs yield reduced overall noise figures. DRA has essentially the same effect as lowering the noise figure of an EDFA or lowering the span loss of the link itself, further enabling 40-Gbit/sec optical networking.

An EDFA performs an abrupt amplification of incoming signals to very high power levels. Unfortunately, it is precisely these high optical powers that cause high fiber core intensities that quickly incur debilitating non-linear effects, limiting the allowable bit rates or distances.

EDFAs with reduced optical power and noise figures used in conjunction with DRA ultimately yield improved optical signal-to-noise ratio (OSNR) levels while simultaneously reducing the optical intensity at any point along the core of the optical fiber. This improved OSNR is attainable since Raman amplification is distributed in nature and uses the fiber plant itself to amplify the signals over very long effective lengths of fiber. Thus, DRA allows for the use of low-noise, low-power EDFAs by amplifying the incoming signal at the receiving end in a backward pumping architecture.

Adding multiple Raman pump lasers operating at different frequencies with precisely controlled output powers ensures flatter gain profiles. That is the Raman functional equivalent of the dynamic gain-flattening filters required in EDFAs to combat amplifier gain tilt and unequal channel powers. Since DRA exploits a non-linear (optical intensity-related) effect, it is better suited to fibers with smaller internal cores; however, it is also achievable in fibers with larger effective areas through the use of higher-power Raman pump-laser sources.

The very nature of distributed Raman gain enables three key benefits:

  • It provides a manner of gently amplifying signals along a given length of the fiber plant, without physically inserting an amplifier mid-span, resulting in lower overall noise.
  • By implementing a backward-pumping architecture, lower power levels are achieved at any point along the fiber span, reducing the prevalence of non-linear effects while achieving maximum gain. If forward pumped instead, the combined EDFA and Raman pump output powers would be very high, yielding increased non-linear effects and little Raman gain due to rapid pump depletion. Overall, DRA used in conjunction with the EDFA leads to lower noise floors and improved OSNR, thus enabling 40-Gbit/sec line rates and longer allowable distances.
  • Distributed Raman amplification may also be used to amplify the S-band (1450-1530 nm) and open this untapped region of transmission where conventional EDFAs cannot amplify due to their inherent gain characteristics.

There are two methods of achieving the reliable transmission of information over a given fiber-optic link:

  • Ensuring there are few or no bit errors incurred during transmission, regardless of the debilitating effects present. That mandates extremely precise optical management techniques that quickly become cost- and technology-prohibitive, especially at 40-Gbit/sec line rates.
  • Allowing and readily accepting a certain number of received bit errors during transmission, then dynamically detecting and correcting them in real time at the receiving end. This latter method leads to a much more cost-effective mathematical solution compared to purely optical solutions, given recent significant advancements in FEC and ASIC design.

The strictly mathematical process of forward error correction (FEC) encodes data to allow FEC-enabled receivers to dynamically detect and correct a given number of received bit errors. The actual error-correcting capability of FEC is determined by the chosen coding scheme as well as the method chosen to transport the generated FEC codes. More coding data leads to improved error-correcting ability but also requires that more codes be transported alongside the data signal itself. That essentially increases the effective line rate, resulting in a line-rate "tax." This increase in overall line rate must be offset by the improved net effective coding gain of the FEC scheme. An important benefit of FEC is that optical-component specifications may be slightly relaxed, improving manufacturability and resulting in higher-volume production and lower-cost components.
Figure 2. The traditional method of representing digital information uses the simpler and less costly non-return-to-zero format.
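The line-rate "tax" can be illustrated with the Reed-Solomon (255,239) overhead ratio of roughly 7% that is common in out-of-band FEC; the actual scheme a given vendor implements may differ:

```python
# Sketch of the FEC line-rate "tax." The RS(255,239) ratio is a common
# out-of-band overhead assumption, not a statement about any one product.

def fec_line_rate_gbps(payload_gbps: float, k: int = 239, n: int = 255) -> float:
    """Line rate after adding FEC parity: payload scaled by n/k."""
    return payload_gbps * n / k

# 10 Gbits/sec of payload becomes ~10.67 Gbits/sec on the line;
# 40 Gbits/sec becomes ~42.68 Gbits/sec.
```

The extra ~7% of line rate is the cost that the net effective coding gain must more than repay.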

Two methods of FEC are available: in-band (IB) FEC and out-of-band (OOB) FEC. IB FEC maps the generated FEC codes into undefined SONET/SDH overhead bytes for transport from the network ingress point (transmitter) to the network egress point (receiver). Since SONET/SDH has a limited number of undefined overhead bytes available, the ultimate FEC error-correcting capability is limited and insufficient for 40-Gbit/sec optical transmission. It is, however, quite sufficient for most 10-Gbit/sec-based optical networks today, due to their decreased susceptibility to optical transmission issues compared with 40-Gbit/sec transmission. As an added advantage, IB FEC can interoperate with certain non-FEC-enabled systems.

OOB FEC increases the line rate by adding generated FEC codes to the original transmitted data, without using SONET/SDH overhead. Although the line rate is increased, significant coding gains are achievable. Existing ultra-long-haul 10-Gbit/sec optical equipment is capable of correcting a raw bit-error rate (BER) of 10^-3 to a corrected BER below 10^-15 using strong OOB FEC schemes. This significant increase in error detection/correction capability enables longer spans and higher line rates. 40-Gbit/sec networks will exploit OOB FEC rather than IB FEC to achieve significant coding gains. The remaining decision is which specific coding scheme to implement, given the tradeoff between coding gain and increased line rate. FEC coding based on BCH-30 (the Bose-Chaudhuri-Hocquenghem coding scheme) at 10 Gbits/sec and 40 Gbits/sec achieves very impressive results (see Figure 1).

Due to the direct relationship between OSNR and BER, a higher OSNR leads to a lower BER and vice versa. Thus, using FEC schemes to correct the actual BER results in an improved effective system OSNR. FEC mathematically overcomes incurred transmission impairments such as attenuation, dispersion, and noise that yield bit errors, maintaining reliable, cost-effective link performance even at 40-Gbit/sec line rates. Since FEC is a mathematical rather than purely optical technique, it is cost-effectively embedded into discrete ASIC devices.
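Under the standard Gaussian approximation, BER = 0.5 erfc(Q/sqrt(2)), the quoted raw and corrected BERs can be translated into an effective coding gain. This is a textbook approximation, not a measurement of any particular FEC implementation:

```python
import math

# Gaussian Q-factor model of the BER/OSNR relationship (an approximation).

def ber(q: float) -> float:
    """BER for a given Q-factor under the Gaussian noise model."""
    return 0.5 * math.erfc(q / math.sqrt(2))

def q_for_ber(target: float) -> float:
    """Invert ber(q) by bisection; ber() is monotonically decreasing in q."""
    lo, hi = 0.0, 20.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if ber(mid) > target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

q_raw = q_for_ber(1e-3)     # ~3.09: Q needed for a raw BER of 10^-3
q_corr = q_for_ber(1e-15)   # ~7.94: Q needed for a corrected BER of 10^-15
gain_db = 20 * math.log10(q_corr / q_raw)   # ~8.2 dB gross coding gain
```

Subtracting the line-rate tax from this gross figure gives the net effective coding gain the scheme actually delivers.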

Expanding the error-correcting capability of FEC coding is achieved using a technique called interleaving that matches the error-correcting capabilities of FEC coding to the actual error characteristics of the transmission environment. Interleaving enhances the random error-correcting abilities of FEC, ultimately increasing efficiency in handling burst error environments such as polarization-mode dispersion (PMD)-impaired fiber links. Interleaving rearranges encoded bits over separate block lengths. The interleaver span length is determined by the amount of error protection desired and is based on expected burst-error lengths encountered during transmission. The ultimate goal of interleaving is to distribute long bursts of bit errors that appear to the decoder as independent random bit errors or shorter, more manageable burst errors. Interleaving performance is typically dictated by proprietary schemes. Receiver decoding must be matched to coding used at the transmitter for proper error detection and correction to occur.
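A minimal block interleaver illustrates the idea; the depth and symbol count here are arbitrary, and production schemes are proprietary as noted above:

```python
# Minimal block interleaver: write symbols row-by-row into a depth-row
# matrix, read them out column-by-column. A burst of line errors then
# lands on widely separated positions after deinterleaving.

def interleave(symbols: list, depth: int) -> list:
    width = len(symbols) // depth
    rows = [symbols[r * width:(r + 1) * width] for r in range(depth)]
    return [rows[r][c] for c in range(width) for r in range(depth)]

def deinterleave(symbols: list, depth: int) -> list:
    width = len(symbols) // depth
    cols = [symbols[c * depth:(c + 1) * depth] for c in range(width)]
    return [cols[c][r] for r in range(depth) for c in range(width)]

data = list(range(16))           # stand-in symbols
tx = interleave(data, depth=4)
tx[4:8] = ["X"] * 4              # a 4-symbol burst error on the line
rx = deinterleave(tx, depth=4)
# The burst lands on positions 1, 5, 9, 13: isolated single errors
# that a random-error-correcting FEC decoder handles easily.
```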

There are numerous coding formats available, which are divided into two main categories: return-to-zero (RZ) and non-return-to-zero (NRZ) (see Figure 2). The traditional method of representing digital information uses the simpler and less costly NRZ format.

In the NRZ optical domain, a "0" bit is represented by the absence of an optical pulse of light, while a "1" bit is represented by the presence of an optical pulse of light. Although simpler to implement, this coding format has drawbacks with regard to 40-Gbit/sec transmission. An NRZ bit sequence carries a relatively high average power level compared to RZ coding, making it more susceptible to non-linear effects, which are pronounced at higher optical power intensities. Since NRZ bit transitions do not return to zero after each pulse interval, they are inherently more susceptible to transmission impairments. These issues make NRZ non-optimal for 40-Gbit/sec transmission.

RZ is a much more effective coding format for 40-Gbit/sec transmission, especially as optical links extend to 1,000 km and beyond. In the optical-network domain, a "0" bit is represented by the absence of an optical pulse of light, while a "1" bit is represented by the presence of an optical pulse of light during the first half of the bit and absence of light during the second half. The RZ coding format enables certain key benefits for 40-Gbit/sec transmission. When a data stream contains long sequences of 1s and 0s, transitions are still present, enabling improved clock recovery. RZ pulse formats are also inherently more immune to non-linear effects and PMD that are detrimental to 40-Gbit/sec transmission. The primary challenge related to RZ coding is the increased bandwidth incurred due to a higher number of bit transitions compared to NRZ coding, requiring faster transmitters and receivers. However, recent advancements in transmitter and receiver materials make RZ transmitters technically and economically feasible.
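The two formats can be sketched with two samples per bit (peak power normalized to 1; a simplified model that ignores pulse shaping):

```python
# Simplified NRZ vs. 50%-duty-cycle RZ waveforms, two samples per bit.

def nrz(bits: list) -> list:
    return [level for b in bits for level in (b, b)]   # hold for the full slot

def rz(bits: list) -> list:
    return [level for b in bits for level in (b, 0)]   # return to zero mid-slot

pattern = [1, 1, 0, 1]
nrz_wave = nrz(pattern)   # [1, 1, 1, 1, 0, 0, 1, 1]
rz_wave = rz(pattern)     # [1, 0, 1, 0, 0, 0, 1, 0]

# For the same peak power, RZ carries half the average power of NRZ and
# guarantees a transition in every "1" slot, aiding clock recovery.
avg_nrz = sum(nrz_wave) / len(nrz_wave)   # 0.75
avg_rz = sum(rz_wave) / len(rz_wave)      # 0.375
```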

The technologies required to enable 40-Gbit/sec optical links that extend to more than 1,000 km will soon be deployed. To achieve an optimal solution, the core optical switch performing bandwidth-management functions must also operate at 40-Gbit/sec line rates to avoid costly and redundant electrical multiplexer/demultiplexer stages that increase network complexity and reduce reliability. Embedding DWDM transceivers directly into the core optical switch further reduces overall network cost and complexity by preventing unnecessary optical-electrical-optical (OEO) conversions, which at 40 Gbits/sec quickly become cost-prohibitive. Eliminating all possible redundancies allows for more rapid adoption of 40-Gbit/sec networks.

Since 40-Gbit/sec line rates are not yet available on client service network switches and routers, traffic grooming up to and down from 40 Gbits/sec is required. That mandates an electrical switch core scalable up to several terabits of switching capacity within a single switch fabric to ensure a forecast-tolerant solution that achieves significant economies of scale to reduce network costs. Any size of switch fabric can potentially support 40-Gbit/sec line rates, but smaller switch fabrics mean fewer available ports. Only multiterabit switch fabrics can effectively exploit future 40-Gbit/sec line rates.
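The port arithmetic behind that claim is straightforward; the fabric capacities below are illustrative assumptions, not product specifications:

```python
# Illustrative port counts for switch fabrics of assumed capacity,
# showing why only multiterabit fabrics give useful 40-Gbit/sec port counts.

def ports(fabric_gbps: float, line_rate_gbps: float) -> int:
    """Maximum number of full-rate ports a fabric of given capacity supports."""
    return int(fabric_gbps // line_rate_gbps)

# A 640-Gbit/sec fabric offers only 16 ports at 40 Gbits/sec,
# while a 2.56-Tbit/sec fabric offers 64.
```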

Next-generation optical switches supporting 40-Gbit/sec line rates must also support today's 10-Gbit/sec line rates and services in the same core switch. In fact, the unified support for line rates from 2.5 Gbits/sec to 40 Gbits/sec and beyond in a single optical switch allows service providers to gracefully grow the network. Crossconnect granularity down to the STS-1 level will allow for a flexible level of bandwidth management. The sheer volume of capacity passing through the core switch mandates a highly effective network-management solution tightly integrated into optical line network elements.

The topology of the network should also be considered when deploying 40-Gbit/sec networks. Most of today's core network comprises proven SONET/SDH rings, offering such key benefits as robust protection and manageability. For maximum SONET protection, carriers have deployed four-fiber bidirectional line-switched rings able to sustain multiple simultaneous fiber/equipment failures and still prevent the loss of mission-critical services. However, emerging high-speed data services such as leased lambda services and optical virtual private networks are better served using mesh network topologies controlled by intelligent optical control planes.

Global mesh networks mean interoperability between different service providers, dictating that standards-based signaling and routing schemes are implemented. Proprietary signaling and routing schemes would create networks that are essentially isolated islands of bandwidth and prevent rapid global adoption.

Significant capital expenditures have allowed service providers to deploy robust SONET/SDH rings that cost-effectively serve given applications. The 40-Gbit/sec optical switch of tomorrow must support the existing base of SONET/SDH rings as well as emerging mesh networks in the same core optical switch to allow service providers to leverage their network investments.

Recent advancements in key areas of ASIC design and VCSEL-based optics will allow for the advent of flexible multiterabit switching cores connected to client services via optical backplanes. Rather than manage separate mesh networks and SONET rings, it makes sense to cost-effectively combine the two networks into a single 40-Gbit/sec network. Intelligent optical switches containing switch fabrics scalable to terabits can support the traffic demands of existing SONET/SDH rings and the growing demand for agile lambda services, all in a unified 40-Gbit/sec network solution.

Technological hurdles inherent in 40-Gbit/sec optical networks are quite substantial; however, they are only part of the commercialization process. The challenge is to cost-effectively integrate new technology into existing 10-Gbit/sec optical networks widely deployed today. A flexible platform able to consolidate diverse topologies (mesh and rings) and 10-Gbit/sec and 40-Gbit/sec line rates will lead to substantial cost savings due to a reduced duplication of network infrastructure.

The economic advantages of consolidating mesh and ring network topologies, coupled with the flexibility of supporting line rates from 10 Gbits/sec to 40 Gbits/sec, should not be underestimated. Scalable multiterabit optical switches achieve forecast tolerance while providing enhanced flexibility, manageability, and reliability.

As 40-Gbit/sec technologies improve and costs fall, penetration into today's network will accelerate. The line rate itself is inconsequential to the service provider; rather, it is the lowest cost per connected bit in a fully reliable and managed network that will ensure commercial success.

Brian Lavallée is senior manager of systems engineering in Nortel Networks' Optical Internet business in Montreal. He can be reached at [email protected].