Integration of 10 and 40 Gbit/s offers economies for future growth

Nov. 1, 2002

In today's economic climate, new core-network builds must come in below $5/Gbit·km to be viable for a carrier. This cost target applies not only to a specific core market segment, such as ultralong haul (ULH), but to all reach demands in the core network. "All-reach" networking refers to the ability of the system to cost-effectively support a wide range of customer demand distances on a single fiber pair and set of common equipment. It is made possible by engineering a single amplifier platform with multiple DWDM transponder types at the edge to service a wide range of traffic demands. Integrating 10- and 40-Gbit/s transponder technology in the same amplifier band enables the lowest-cost all-reach solution, achieved by matching the DWDM line rate to the specific city-pair reach and capacity requirement.

The cost advantages of increasing the DWDM line rate to 40 Gbit/s have both a capital and an operational expense component. The reduction in capital equipment cost is due mainly to economies of scale in electronics and optoelectronic components. As the component technology matures and yields become practical, the volume cost of 40-Gbit/s optoelectronics falls below that of four 10-Gbit/s equivalents. However, 40-Gbit/s wavelengths have a shorter reach than 10-Gbit/s wavelengths (approximately 1500 vs. 4000 km) and are deployed in larger bandwidth increments. This implies that the first use of the 40-Gbit/s line rate will be for links characterized by high capacity and reach of less than 1500 km.

In the United States, 40-Gbit/s technology maps well onto traffic demands running up and down the east and west coasts. Cross-country interconnection is more economically supported with 10-Gbit/s ultralong haul, which avoids the need for high-cost intermediate optical-electrical-optical (OEO) regeneration points. In the long-haul segment, 10 Gbit/s also offers a more cost-effective solution for supporting low-capacity demands, where using 40-Gbit/s wavelengths would result in lower bandwidth utilization. In this case, a lower-cost 10-Gbit/s long-haul transponder can be used because ultralong-haul reach is not required. The optimal solution is to support 10-Gbit/s long-haul, 10-Gbit/s ultralong-haul, and 40-Gbit/s long-haul wavelengths on a single system, with the line rate optimized to match the city-pair demand profile.
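As a rough illustration of this line-rate matching, the sketch below encodes the reach figures quoted above (approximately 1500 km for 40 Gbit/s and 4000 km for 10-Gbit/s ultralong haul) as selection thresholds. The function name, the capacity cutoff, and the decision order are assumptions made purely for illustration, not a vendor planning rule.

```python
# Illustrative sketch only: pick a transponder class for a city-pair demand
# using the approximate reach figures quoted above (~1500 km for 40 Gbit/s,
# ~4000 km for 10-Gbit/s ultralong haul). Thresholds, the capacity cutoff,
# and the function name are assumptions made for illustration.

def select_line_rate(reach_km: float, demand_gbps: float) -> str:
    if reach_km <= 1500 and demand_gbps >= 40:
        return "40-Gbit/s long-haul"       # high capacity, reach within 1500 km
    if reach_km <= 1500:
        return "10-Gbit/s long-haul"       # low capacity; ULH reach not needed
    if reach_km <= 4000:
        return "10-Gbit/s ultralong-haul"  # cross-country without OEO regens
    return "10-Gbit/s ultralong-haul with regeneration"

print(select_line_rate(800, 120))   # coastal high-capacity route -> 40G long-haul
print(select_line_rate(3500, 20))   # cross-country, low capacity -> 10G ULH
```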

Mixing 10- and 40-Gbit/s wavelengths in the same band means 40 Gbit/s is used for N × 40-Gbit/s capacity increments, while any "remainder" traffic (less than 40 Gbit/s) is packed into 10-Gbit/s wavelengths. This approach provides 10-Gbit/s granularity where needed without requiring an additional system overlay, minimizing stranded bandwidth and maximizing wavelength utilization on the line side. When demand growth is high, 40-Gbit/s wavelengths should be deployed; when demand growth is low, or the system is close to full fill, 10-Gbit/s wavelengths should be deployed so that the 40-Gbit/s wavelengths already in service achieve high bandwidth utilization.
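A minimal sketch of this packing rule follows; it is one possible way to express the N × 40-Gbit/s plus 10-Gbit/s remainder split, not a published algorithm.

```python
import math

# Minimal sketch of the packing rule described above (one possible
# implementation, not a published algorithm): fill whole 40-Gbit/s
# increments first, then carry the remainder on 10-Gbit/s wavelengths.

def pack_demand(demand_gbps: float) -> tuple:
    n40 = int(demand_gbps // 40)             # whole 40-Gbit/s increments
    remainder = demand_gbps - 40 * n40
    n10 = math.ceil(remainder / 10)          # remainder at 10-Gbit/s granularity
    return n40, n10

print(pack_demand(70))    # (1, 3): one 40-Gbit/s plus three 10-Gbit/s wavelengths
print(pack_demand(120))   # (3, 0): a pure N x 40-Gbit/s increment
```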

Another key to minimizing 40-Gbit/s cost is to allow a mix of different "hot-swappable" service-interface plug-ins (OC-48, OC-192, and 10 Gigabit Ethernet) into a single 40-Gbit/s wavelength, thus allowing OC-48-level grooming and aggregation into 40-Gbit/s wavelengths. This avoids restricting the 40-Gbit/s TDM multiplexer to only one service-interface type, enabling high 40-Gbit/s wavelength utilization.

Allowing a mix of service interfaces into the 40-Gbit/s reconfigurable multiplexer is critical to reducing network start-up costs and achieving high 40-Gbit/s wavelength utilization (see Fig. 1). In the example node, the carrier is assumed to need to add/drop four OC-48s, two OC-192s, and a single 10 Gigabit Ethernet service at the site.
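Using rounded nominal rates (about 2.5 Gbit/s per OC-48 and 10 Gbit/s per OC-192 and per 10 Gigabit Ethernet interface), the example node's traffic adds up as shown below; the rounded rates are an illustrative simplification of the exact SONET and Ethernet line rates.

```python
# Rough arithmetic for the example node above, using rounded nominal rates
# (2.5 Gbit/s per OC-48; 10 Gbit/s per OC-192 and per 10 Gigabit Ethernet).
# The rounded rates are an illustrative simplification of the exact line rates.

services = {"OC-48": (4, 2.5), "OC-192": (2, 10.0), "10GbE": (1, 10.0)}
total_gbps = sum(count * rate for count, rate in services.values())

print(total_gbps)                # 40.0 Gbit/s of add/drop traffic
print(f"{total_gbps / 40:.0%}")  # 100% fill of a single 40-Gbit/s wavelength
```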

The economic benefits of using 40-Gbit/s reconfigurable multiplexers are a reduced initial cost and a pay-as-you-grow structure for 40-Gbit/s network deployments, as well as higher bandwidth utilization, because mixed service interfaces can be efficiently groomed and aggregated into 40-Gbit/s wavelengths.

Network stranded bandwidth is minimized by using a 40-Gbit/s reconfigurable multiplexer and by integrating 10- and 40-Gbit/s wavelengths within the same amplifier band (see Fig. 2). This simple network model assumes that OC-48, OC-192, and 10 Gigabit Ethernet service-interface demands grow at an equal rate. The x-axis shows the service bandwidth terminated by the carrier at the add/drop node and the y-axis shows the DWDM line capacity that must be deployed to aggregate the traffic. The high start-up cost of 40-Gbit/s conventional multiplexing is clearly shown, as only 30 Gbit/s of service traffic requires the deployment of 120 Gbit/s of line capacity (25% utilization). Using a 40-Gbit/s reconfigurable multiplexer reduces this to 40 Gbit/s of line capacity (75% utilization). By combining the 40-Gbit/s reconfigurable multiplexer with 10-Gbit/s transponders for "remainder" traffic, the wavelength utilization is 100% for any add/drop requirement, demonstrating the inherent bandwidth efficiency of integrated 10- and 40-Gbit/s DWDM networking.
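The utilization figures above can be reproduced with a small calculation. The sketch below assumes, as one interpretation of the model rather than a statement from Fig. 2, that a conventional multiplexer dedicates one 40-Gbit/s wavelength to each service type, while the hybrid case packs remainder traffic onto 10-Gbit/s wavelengths.

```python
import math

# Reproduce the three utilization figures quoted above for 30 Gbit/s of
# terminated service traffic (10 Gbit/s each of OC-48, OC-192 and 10GbE
# demand). The "one 40G wavelength per service type" rule for the
# conventional multiplexer is an assumed interpretation of the model.

service_gbps = {"OC-48": 10, "OC-192": 10, "10GbE": 10}
total = sum(service_gbps.values())                      # 30 Gbit/s terminated

conventional_line = 40 * len(service_gbps)              # 120 Gbit/s deployed
reconfigurable_line = 40 * math.ceil(total / 40)        # 40 Gbit/s deployed
hybrid_line = 40 * (total // 40) + 10 * math.ceil((total % 40) / 10)  # 30 Gbit/s

for name, line in [("conventional", conventional_line),
                   ("reconfigurable", reconfigurable_line),
                   ("hybrid 10/40", hybrid_line)]:
    print(f"{name}: {total / line:.0%} utilization")
# conventional: 25%, reconfigurable: 75%, hybrid 10/40: 100%
```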

One of the great challenges in designing a single platform that supports two quite different transmission technologies is to ensure the more stringent constraint (in this case, support for 40-Gbit/s propagation) does not place too great an "over-engineering tax" on the less stringent (in this case, 10-Gbit/s) channels. The optical specifications for polarization-mode-dispersion and chromatic-dispersion tolerance are particularly tight for 40-Gbit/s transmission. Polarization-dependent loss must also be minimized, both for 40-Gbit/s transmission over 1500 km and even more so for 10-Gbit/s transmission over 4000 km. The result is that the common-equipment design for amplifiers, filters, and other components is closer to submarine-system specifications than to conventional terrestrial systems.
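As background, a standard rule of thumb (not a figure taken from the article) shows why the 40-Gbit/s specifications dominate: the bit period shrinks by a factor of four, tightening the dispersion and PMD budgets accordingly.

```latex
% Rule-of-thumb comparison (not figures from the article): the bit period
% shrinks from 100 ps at 10 Gbit/s to 25 ps at 40 Gbit/s.
T_{10\mathrm{G}} = \frac{1}{10\ \mathrm{Gbit/s}} = 100\ \mathrm{ps},
\qquad
T_{40\mathrm{G}} = \frac{1}{40\ \mathrm{Gbit/s}} = 25\ \mathrm{ps}
% Taking PMD tolerance as roughly 10% of the bit period, the mean-DGD budget
% drops from ~10 ps to ~2.5 ps, while chromatic-dispersion tolerance scales
% roughly as 1/B^2, i.e. a factor of 16 tighter at 40 Gbit/s.
```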

The DWDM filtering technology and multiplexing hierarchy must also account for the difference in spectral bandwidth of the 10- and 40-Gbit/s signals. The channel spacing for 40-Gbit/s transmission is typically 100 GHz, so if all channel slots were limited to this spacing the 10-Gbit/s channels would achieve a spectral efficiency of only 0.1 bit/s/Hz, which is bandwidth-inefficient. The spectral bandwidth, laser accuracy, and filter center-frequency accuracy of 10-Gbit/s channels permit a channel spacing of 50 GHz, evolving to 25 GHz as laser and filter specifications improve. One method of permitting mixed 10 and 40 Gbit/s in the same band is the use of sub-bands that can be configured as either 10-Gbit/s or 40-Gbit/s sub-bands, each with its own associated filtering technology at the sub-band level.
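The spectral-efficiency arithmetic behind these spacings is simply the line rate divided by the channel spacing, consistent with the 0.1 bit/s/Hz figure above:

```latex
\eta = \frac{R}{\Delta f}:\qquad
\frac{10\ \mathrm{Gbit/s}}{100\ \mathrm{GHz}} = 0.1\ \mathrm{bit/s/Hz},\quad
\frac{10\ \mathrm{Gbit/s}}{50\ \mathrm{GHz}}  = 0.2\ \mathrm{bit/s/Hz},\quad
\frac{40\ \mathrm{Gbit/s}}{100\ \mathrm{GHz}} = 0.4\ \mathrm{bit/s/Hz}
```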

Operational cost savings are also associated with moving to a 40-Gbit/s line rate: reduced power (approximately a 40% reduction in watts per gigabit), reduced space (a 50% reduction in footprint), faster service velocity (75% fewer wavelengths to commission), and lower maintenance costs (75% fewer wavelengths to manage), since one 40-Gbit/s wavelength carries the traffic of four 10-Gbit/s wavelengths.

On the other hand, increasing the DWDM line rate to 40 Gbit/s increases propagation penalties from chromatic dispersion, polarization-mode dispersion, and fiber nonlinear distortion. Although 40-Gbit/s systems must be designed to tolerate, or automatically compensate for, these impairments, they can make the system harder to commission and maintain. These impairments create new test-and-measurement requirements in the field for which current 40-Gbit/s test equipment is not designed. The 40-Gbit/s DWDM system should therefore include integrated test capabilities such as a bit-error-rate test set, eye-diagram analyzer, dispersion-tolerance measurement, and Q measurement. These tools are essential for rapid installation, turn-up, and ongoing management of 40-Gbit/s DWDM systems and will be critical to delivering the operating-expenditure benefits of migrating to a 40-Gbit/s line rate.
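For context, Q measurements map to an estimated bit-error rate through the standard textbook relation below (not a formula given in the article), which is why an integrated Q measurement can substitute for long BER soak tests during turn-up:

```latex
\mathrm{BER} \approx \tfrac{1}{2}\,\operatorname{erfc}\!\left(\frac{Q}{\sqrt{2}}\right),
\qquad \text{e.g. } Q \approx 7\ (\approx 16.9\ \mathrm{dB}) \;\Rightarrow\; \mathrm{BER} \approx 10^{-12}
```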

Ross Saunders is director of product line management at Ceyba, 450 March Rd., Ottawa, Ontario K2K 3K2, Canada. He can be reached at [email protected].
