by Carsten J. Videcrantz and Benny Mikkelsen
Although deployment has been delayed, 40-Gbit/s transmission will enable service providers to deliver bandwidth at a significantly lower cost per bit per kilometer than at 10 Gbit/s. The authors argue that new transponder designs, modulation formats, and dispersion-compensating techniques have proved the viability of such long-haul DWDM systems.
Over the past decade, service providers have adapted to fundamental changes in their optical networks, such as the introduction of in-line optical amplifiers, wavelength-division multiplexing, and the transition from 2.5- to 10-Gbit/s transmission speeds, which in turn required in-line dispersion compensation. These developments demanded major upgrades of installed systems as well as a new approach to network planning.
One could argue that the upgrade to 40-Gbit/s transmission will be easy by comparison, involving only a change of the transponders at the terminal sites. Recent advances in optical and electronic technologies suggest that these transponders are now moving from research laboratories into production-ready solutions suitable for deployment in core optical networks in the near future.
The key to 40-Gbit/s developments is the transponder, which essentially performs the necessary multiplexing/demultiplexing of client-side data streams from equipment such as routers or switches into the 40-Gbit/s data stream on the transport/dense wavelength-division multiplexing (DWDM) side (see Fig. 1). Initially, such a transponder will be assembled with discrete building blocks that over time will be integrated.
Some of the first transponders will be used to aggregate and transport 10-Gbit/s services over long-haul DWDM transport networks. Consequently, the client side of the transponder will contain four 10-Gbit/s transceivers to convert between optical and electrical in combination with SDH/SONET pointer processing and framing functionality before being multiplexed to 40 Gbit/s. The main challenge for this client interface module of the transponder is achieving both high density (small form factor) and low cost.
At the heart of the transponder reside the building blocks that perform the electrical multiplexing and demultiplexing between the client data rates and the 40-Gbit/s transport data rate. Support for SDH/SONET as well as ITU G.709 framing enables monitoring of well-known SONET/SDH transport overhead as well as several optical parameters. Additionally, with G.709 comes forward error correction (FEC) that will be added to the 40-Gbit/s data to increase receiver sensitivity and the robustness of the transport system. Typically, 7% overhead is added, resulting in a 43-Gbit/s line rate. Numerous companies now offer these components, which, from a cost and technology perspective, have taken much of the risk out of building commercial 40-Gbit/s systems.
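The overhead arithmetic behind the 43-Gbit/s figure is straightforward. A minimal sketch, using the approximate 7% figure quoted above (the exact OTU3 line rate defined in ITU-T G.709 differs slightly):

```python
# Back-of-the-envelope line-rate calculation for G.709 FEC overhead.
# Figures follow the approximations in the text; the exact OTU3 rate
# defined in ITU-T G.709 is slightly different.

client_rate_gbps = 4 * 10.0      # four 10-Gbit/s client signals multiplexed
fec_overhead = 0.07              # ~7% FEC overhead, per the text

line_rate_gbps = client_rate_gbps * (1 + fec_overhead)
print(f"transport line rate = {line_rate_gbps:.1f} Gbit/s")  # 42.8, i.e. ~43
```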
On the transport side, high-performance optical transmitters and receivers are critical components in a 43-Gbit/s transponder. The transmitter uses an optical modulator to convert the 43-Gbit/s non-return-to-zero (NRZ) electrical data stream into an optical NRZ signal format. High-performance and cost-effective modulators are currently available in the form of Mach-Zehnder modulators based on lithium niobate or electroabsorption modulators based on indium phosphide. While the former is characterized by low insertion loss, the latter exhibits relatively low drive-voltage requirements.
Although the straightforward and conventional NRZ modulation format is adequate for transmitting over relatively short unregenerated distances, special formats may be required for longer routes. This is because longer distances of fiber typically expose the signal more severely to impairments inherent to the fiber and to factors such as noise from optical amplifiers. For example, the impact of fiber nonlinearities increases as the optical launch power is increased to bridge a larger fiber distance. Often called Kerr nonlinearities, these effects include four-wave mixing, self-phase modulation, and cross-phase modulation.
Using fiber with high effective mode area and relatively high chromatic dispersion can reduce the impact of these impairments. Equally important, the use of return-to-zero (RZ) modulation instead of NRZ alleviates the effect of fiber nonlinearities. The RZ format is generated by cascading two modulators (see Fig. 2). The first modulator is modulated by the NRZ data, while the second is driven by a sinusoidal clock signal.
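A minimal numerical sketch of the two-stage RZ generation described above, assuming an idealized Mach-Zehnder pulse carver driven by a sinusoidal clock at the bit rate (a 50%-duty-cycle carver; all parameters are illustrative, not a specific device model):

```python
import math

# Idealized sketch of RZ generation by cascading two modulators:
# stage 1 imprints the NRZ data; stage 2 (the "pulse carver") is an
# ideal Mach-Zehnder driven by a sinusoidal clock at the bit rate.
# All parameters are illustrative, not taken from a real device.

bit_rate = 43e9              # 43-Gbit/s line rate (with FEC)
samples_per_bit = 64
bits = [1, 0, 1, 1, 0, 1]    # arbitrary example data

dt = 1.0 / (samples_per_bit * bit_rate)

rz_power = []
for i, bit in enumerate(bits):
    for s in range(samples_per_bit):
        t = (i * samples_per_bit + s) * dt
        # Stage 1: ideal NRZ data modulator (unit power for a '1')
        nrz = float(bit)
        # Stage 2: sinusoidally driven carver; its transmission nulls
        # at the bit edges and peaks at the bit center, carving the
        # constant NRZ '1' level into a 50%-duty-cycle RZ pulse
        carver = math.cos(0.25 * math.pi *
                          (1 + math.cos(2 * math.pi * bit_rate * t))) ** 2
        rz_power.append(nrz * carver)
```

The carved waveform returns to zero inside every bit slot, which is what reduces the signal's exposure to the nonlinear effects described above.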
Another advantage of the RZ format is its higher tolerance to polarization-mode dispersion (PMD). Polarization-mode dispersion is caused by the refractive index not exhibiting perfect rotational symmetry around the fiber axis, which may be the result of numerous factors, including fiber eccentricities and imperfect installation as well as transient stresses such as vibration and temperature change. Consequently, the two orthogonal polarization modes of the fiber propagate light at slightly different speeds. This difference in propagation speed between the slow and fast fiber axes (called differential group delay) leads to a broadening of the transmitted bits.
Today, all types of single-mode optical fibers are fabricated with very low PMD (below 0.05 ps/√km), and extensive studies of fiber in the ground indicate that the vast majority of post-1995 deployed fiber has sufficiently low PMD to allow 40 Gbit/s to be transmitted over several thousand kilometers of fiber (at 40 Gbit/s the tolerance to differential group delay is about 10 ps). However, on routes with older fibers, on very long routes, or on routes with transient stresses such as vibration, PMD compensation or mitigation techniques may be required. Note that one can regard RZ modulation as a simple mitigation method since it renders the signal more robust to PMD compared to conventional NRZ signals.
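Because PMD accumulates statistically, the mean differential group delay grows with the square root of fiber length. A back-of-the-envelope check against the figures quoted above:

```python
import math

# PMD budget sketch using the figures quoted in the text.
pmd_coeff = 0.05        # ps/sqrt(km), modern fiber per the text
dgd_tolerance = 10.0    # ps, approximate tolerance at 40 Gbit/s

# Mean DGD grows as sqrt(length) because PMD accumulates statistically
def mean_dgd_ps(length_km):
    return pmd_coeff * math.sqrt(length_km)

# Distance at which the *mean* DGD would reach the 10-ps tolerance
max_reach_km = (dgd_tolerance / pmd_coeff) ** 2

print(f"mean DGD after 4000 km: {mean_dgd_ps(4000):.1f} ps")  # ~3.2 ps
print(f"mean DGD hits tolerance at ~{max_reach_km:.0f} km")
```

Note that the instantaneous DGD fluctuates around this mean, so real system designs keep the mean well below the tolerance; the calculation simply illustrates why several-thousand-kilometer routes are feasible on modern fiber.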
Finally, it should be noted that an RZ signal format also relaxes the bandwidth requirements of the electronic driver amplifier for the optical modulator and of the modulator itself. In addition, compared to NRZ, RZ is more tolerant to tight optical filtering by optical multiplexers/demultiplexers and add/drop filters.
FIGHTING CHROMATIC DISPERSION
In addition to the 40-Gbit/s PIN photodiode and transimpedance amplifier, a high-speed receiver might contain a tunable dispersion compensator or adaptive dispersion-compensating module (A-DCM) to compensate for any residual chromatic dispersion at the end of the route. Chromatic dispersion in optical fibers leads to pulse broadening of the transmitted signal.
As the bit rate increases, the negative impact on signal quality mounts significantly. This fact has led many to endorse the most persistent myth surrounding 40-Gbit/s transmission—that chromatic dispersion in optical fibers prohibits cost-effective long-haul 40-Gbit/s systems.
Instead, the reality is that chromatic dispersion reduces nonlinear effects in the fiber and as such facilitates 40-Gbit/s DWDM over long distances when combined with well-established in-line dispersion-compensating modules (DCMs). These modules are widely used in 10-Gbit/s DWDM systems and simply consist of a single-mode dispersion-compensating fiber with a dispersion that is of opposite sign relative to the transmission fiber.
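Sizing the compensating fiber for a span is simple arithmetic. A sketch with typical textbook fiber parameters (the dispersion values below are assumptions for illustration, not figures from the article):

```python
# Sizing an in-line dispersion-compensating module (DCM) for one span.
# Fiber parameters are typical textbook values, assumed for illustration.

span_km = 100.0
d_smf = 17.0     # ps/(nm*km), standard single-mode fiber near 1550 nm
d_dcf = -100.0   # ps/(nm*km), dispersion-compensating fiber (opposite sign)

accumulated = d_smf * span_km              # dispersion accumulated per span
dcf_km = -accumulated / d_dcf              # DCF length for full compensation
residual = accumulated + d_dcf * dcf_km    # 0 ps/nm if perfectly matched

print(f"DCF needed: {dcf_km:.0f} km, residual: {residual:.1f} ps/nm")
```

In practice the DCF length is fabricated to practical tolerances rather than this exact value, which is one reason residual dispersion remains to be cleaned up at the receiver.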
These conventional compensating fibers can compensate both the dispersion and the dispersion slope—that is, the wavelength dependency of the dispersion—of some types of transmission fiber such as standard single-mode fiber. For fiber types such as Corning's LEAF and Lucent Technologies' TrueWave, new types of DCMs have been developed using technologies such as dispersion-compensating fiber with higher relative dispersion slope and higher-order-mode fiber.
The real challenge with chromatic dispersion at 40 Gbit/s is the fact that the dispersion of the fiber is slightly dependent upon temperature variations. Thus, the accumulated dispersion in long-haul systems might exceed the dispersion tolerances of the receiver as the environmental temperatures of the transmission fiber and dispersion-compensating fiber change over time. Although this may seem like a significant obstacle, the reality is different: implementation of A-DCMs is a practical solution for both this problem and problems of insufficient dispersion-slope compensation.
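The scale of the temperature problem can be illustrated with a rough calculation. The thermal coefficient of dispersion below is an assumed ballpark for silica fiber, and the route length and temperature swing are likewise illustrative assumptions:

```python
# Why accumulated dispersion drifts with temperature on long routes.
# All three figures below are assumed ballpark values for illustration,
# not figures from the article.

route_km = 1000.0
dD_dT = 0.0025        # ps/(nm*km*degC), assumed magnitude of dD/dT
temp_swing_c = 40.0   # assumed seasonal temperature swing

dispersion_drift = dD_dT * route_km * temp_swing_c   # ps/nm
print(f"dispersion drift ~ +/-{dispersion_drift:.0f} ps/nm over the route")
```

A drift of this order is comparable to the dispersion tolerance of an uncompensated 40-Gbit/s receiver, which is why a fixed dispersion map alone is not enough and an adaptive element at the receiver becomes attractive.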
Techniques such as those based on fiber Bragg gratings can provide optimum dispersion compensation (see Fig. 3). Without the adaptive dispersion compensator the tolerable residual dispersion is relatively low; however, deploying the A-DCM significantly increases the tolerance to at least ±200 ps/nm. The effect of chromatic dispersion is further illustrated by the related eye diagrams. With a residual dispersion of -50 ps/nm the uncompensated 43-Gbit/s eye diagram is severely distorted because of intersymbol interference caused by the dispersion-induced pulse broadening. With the A-DCM enabled, the eye is recovered almost to perfection.
An added advantage of such an A-DCM scheme is that it relaxes the requirement for the design accuracy of the DCMs elsewhere in the system, essentially allowing the length of dispersion-compensating fiber to be fabricated with a focus on practical design and economic considerations. Furthermore, with the right design the introduction of the A-DCM can enable plug-and-play optimization when the transponder is installed in the network. Compact A-DCMs are commercially available from several vendors as single-channel devices. In addition, multichannel devices are emerging.
What makes 40-Gbit/s transmission such a promising alternative to 2.5 or 10 Gbit/s is the prospect of substantial cost savings for service providers. Current cost estimates show that a single 40-Gbit/s transponder will be significantly less costly than four legacy 10-Gbit/s transponders. However, the economic measure that matters is the cost of transmitting bits over a given distance—the total cost per bit per kilometer. Consequently, the supported fiber transmission distance without electronic regeneration is very important for a 40-Gbit/s DWDM transport implementation. Although a great deal of interest is focused on 40 Gbit/s for the ultralong-haul market (greater than 1500 km), initial deployment of 40-Gbit/s solutions will be cost-effective primarily at distances less than 1000 km.
When it comes to actually demonstrating long-haul performance of 40-Gbit/s systems, most trials have been conducted in the lab, where noncommercial, one-off equipment has been set up to test and, to some degree, demonstrate the capabilities of 40-Gbit/s technologies, including amplifiers, electronics, and optoelectronic components. We have recently conducted 40-Gbit/s experiments demonstrating straight-line fiber transmission with as many as 40 DWDM channels using field-deployable 40-Gbit/s components and commercially available in-line equipment (such as multiplexers, amplifiers, and dispersion compensators).
The 40 channels were transmitted over 1000 km of Corning LEAF fiber (10 spans of 100 km), separated by 100 GHz and covering the entire C-band. Impressive performance was noted for each of the 40 channels at a bit rate of 43 Gbit/s after 1000-km transmission, with margins of approximately 3 to 4 dB at a bit-error rate of 10⁻¹⁵ (see Fig. 4). Such results further demonstrate that 40-Gbit/s technology is becoming a deployable alternative to 10-Gbit/s systems.
Carsten J. Videcrantz is director of product marketing and Benny Mikkelsen is vice president and cofounder of Mintera, 847 Rogers Street, One Lowell Research Center, Lowell, MA 01852. Carsten Videcrantz may be reached at firstname.lastname@example.org.