Bandwidth virtualization enables a programmable optical network

March 1, 2008

by Serge Melle, Rick Dodd, Chris Liou, and Vijay Vusirikala

Annual Internet Protocol (IP) traffic growth averaging 75% has significant implications for today's networks. These diverse challenges include accommodating higher-rate 40-Gbit/sec interfaces from IP core routers, which are expected to further scale to 100 Gbits/sec in the near term; supporting a wide range of services from 1 Gbit/sec to 100 Gbits/sec, including SONET/SDH, OTN, Fibre Channel, and video; and maximizing market competitiveness through speed of service turn-up. All the while, network operators must ensure operational simplicity for service planning, engineering, deployment, and operation.

In many cases, addressing these challenges with current technology imposes significant constraints and costs on network operators. Current optical transport networks are often built using WDM and all-optical reconfigurable optical add/drop multiplexing (ROADM) technologies to maximize fiber capacity and service reconfigurability, respectively. In a typical WDM network, the service interface – say, a 10-Gbit/sec input from a router – is directly coupled to a specific wavelength and then transmitted across the optical transport network (see Fig. 1).

Figure 1. Conventional WDM systems directly couple service provisioning to wavelength engineering and turn-up.

In this case, the commercialization of ultrahigh-bandwidth services at 40 Gbits/sec and, in the near future, 100-Gigabit Ethernet (100GbE) forces operators to run their WDM networks at significantly higher bit rates. WDM networks, in turn, must accommodate and compensate for optical impairments that grow rapidly with bit rate – chromatic dispersion tolerance, for example, falls with the square of the bit rate. As a result, existing WDM transport systems, typically designed for transmission at 10 Gbits/sec per wavelength, must be overbuilt or upgraded to support transmission at 40 Gbits/sec per wavelength.
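
To put that scaling in rough numbers, the short Python sketch below applies the common rule of thumb that chromatic dispersion tolerance falls with the square of the bit rate. The 10-Gbit/sec baseline reach is an assumed figure chosen purely for illustration, not a property of any particular system.

```python
# Illustrative scaling sketch: how dispersion-limited reach shrinks as the
# per-wavelength bit rate grows. The 10G baseline figure is an assumption
# for illustration, not a measurement from any specific system.

BASELINE_RATE_GBPS = 10.0
BASELINE_CD_REACH_KM = 80.0  # assumed uncompensated reach of a 10G NRZ signal

def cd_limited_reach_km(rate_gbps: float) -> float:
    """Chromatic dispersion tolerance falls roughly with the square of the
    bit rate, so reach scales as (baseline rate / new rate) squared."""
    return BASELINE_CD_REACH_KM * (BASELINE_RATE_GBPS / rate_gbps) ** 2

for rate in (10, 40, 100):
    print(f"{rate:>3} Gbit/s: ~{cd_limited_reach_km(rate):.1f} km before "
          "dispersion compensation or link re-engineering is needed")
```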

But network overbuilds can be costly and time-consuming, and upgrading existing capacity, though cost effective in the short term, requires extensive optical link re-engineering and often imposes limitations on the system's transmission distance. These issues are expected to be even more challenging for 100-Gbit/sec transmission and will need to be revisited once 100GbE services become a reality.

At the same time, once a high-bandwidth service is transponded onto a wavelength, it must then be routed end-to-end across an all-optical ROADM-based network. This results in additional network complexities, including:

  • The addition of regenerators and dispersion compensation when service demand distance exceeds end-to-end optical link budgets.
  • Wavelength conversion using regenerators to change the service wavelength when blocking occurs due to contention with other wavelengths.
  • The management of planning and engineering complexity associated with multiplexing lower-rate services onto a wavelength using muxponders.

As these challenges accumulate, operational complexity and capital outlay increase due to the additional truck rolls required for manual intervention and the significantly lengthened provisioning cycles for deploying new capacity. Moreover, these challenges can limit an operator's flexibility to support new service orders from existing or potential customers. As a result, speed of service can no longer be used as a competitive differentiator.

Building a programmable optical network

Ideally, network operators should be able to deploy additional capacity and support new service types quickly and easily, with a minimum of manual intervention, hardware deployment, and engineering complexity. In such cases, service deployment should be a matter of software-enabled network reconfiguration (see Fig. 2).


Figure 2. Key elements of a programmable optical network include a pool of WDM bandwidth, integrated digital switching, multiservice client interfaces, and software intelligence.

A new network architecture called Bandwidth Virtualization enables this type of "programmable" optical network. End-to-end service provisioning is decoupled from the link-by-link optical wavelength engineering of WDM systems, enabling support for a variety of services, ranging from sub-wavelength data rates to super-wavelength data rates, over a common WDM network operating at a data rate optimized for the lowest network cost.

Here, sub-wavelength service refers to service data rates that are a fraction of the nominal data rate of a wavelength in the WDM line, whereas a super-wavelength service has a data rate higher than the wavelength data rate on the WDM line. In practice, electrical or "digital" multiplexing is used to map either multiple sub-wavelength services into a common wavelength or a super-wavelength service across multiple wavelengths that are "bonded" to provide the required bandwidth to support end-to-end transmission.
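
To make the two cases concrete, the following sketch (illustrative Python with an invented data model, not any vendor API) packs sub-wavelength services into a shared 10-Gbit/sec wavelength and bonds idle wavelengths to carry a super-wavelength service.

```python
# Illustrative sketch of digital multiplexing onto a pool of 10G wavelengths.
# The data model is invented for this example; it is not a product API.

from dataclasses import dataclass, field

WAVELENGTH_RATE_GBPS = 10.0  # nominal line rate per wavelength

@dataclass
class Wavelength:
    channel: int
    used_gbps: float = 0.0
    services: list = field(default_factory=list)

    @property
    def free_gbps(self) -> float:
        return WAVELENGTH_RATE_GBPS - self.used_gbps

def map_service(name: str, rate_gbps: float, pool: list) -> list:
    """Map a service onto the wavelength pool.

    Sub-wavelength services (<= 10G) are packed into a wavelength with spare
    capacity; super-wavelength services (> 10G) are carried across as many
    bonded wavelengths as needed."""
    if rate_gbps <= WAVELENGTH_RATE_GBPS:
        for wl in pool:
            if wl.free_gbps >= rate_gbps:
                wl.used_gbps += rate_gbps
                wl.services.append(name)
                return [wl.channel]
        raise RuntimeError("no spare line capacity for " + name)
    # Super-wavelength: bond whole idle wavelengths until the rate is covered.
    bonded, remaining = [], rate_gbps
    for wl in pool:
        if remaining <= 0:
            break
        if wl.used_gbps == 0.0:
            wl.used_gbps = WAVELENGTH_RATE_GBPS
            wl.services.append(name)
            bonded.append(wl.channel)
            remaining -= WAVELENGTH_RATE_GBPS
    if remaining > 0:
        raise RuntimeError("not enough free wavelengths to bond for " + name)
    return bonded

pool = [Wavelength(ch) for ch in range(1, 11)]       # 10 x 10G = 100G pool
print(map_service("GbE-customer-A", 1.0, pool))       # sub-wavelength
print(map_service("FC-400-customer-B", 4.25, pool))   # sub-wavelength, same wave
print(map_service("40G-router-link", 40.0, pool))     # super-wavelength (bonded)
```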

Bandwidth Virtualization requires the convergence of several key architectural elements within the WDM network, including:

  • WDM line capacity between nodes that is cost-optimized, scalable, pretested, and ready for service. Large-scale photonic integration provides an ideal platform for this form of consolidated, cost-effective WDM capacity deployment.
  • Integrated bandwidth management that consolidates high-capacity WDM transport with digital switching to enable remote, reconfigurable mapping of any service to any available line capacity.
  • Multiservice/protocol client interfaces that are independent and decoupled from the WDM line optics, thereby enabling any sub-wavelength or super-wavelength service to be mapped into the available WDM line capacity.
  • Software intelligence using a GMPLS control plane to allow automated, remote, and reconfigurable end-to-end service provisioning and routing without the need for manual intervention or truck rolls at intermediate sites (illustrated in the sketch following this list).
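
As a simplified illustration of the software-intelligence element above, the sketch below computes and provisions an end-to-end path subject to available line capacity, the kind of decision a GMPLS control plane automates. The topology, distances, and capacities are invented for this example; a real control plane also handles signaling, restoration, and far richer constraints.

```python
# Minimal sketch of constraint-aware path computation and provisioning.
# Topology, capacities, and the service request are invented for illustration.

import heapq

# links: (node_a, node_b) -> (distance_km, free_capacity_gbps)
links = {
    ("FRA", "PAR"): (600, 60),
    ("PAR", "LON"): (500, 20),
    ("FRA", "LON"): (900, 100),
    ("LON", "NYC"): (5600, 40),
}

def neighbors(node):
    for (a, b), (dist, cap) in links.items():
        if a == node:
            yield b, dist, cap
        elif b == node:
            yield a, dist, cap

def compute_path(src, dst, required_gbps):
    """Shortest path (by distance) using only links with enough free capacity."""
    heap, seen = [(0, src, [src])], set()
    while heap:
        dist, node, path = heapq.heappop(heap)
        if node == dst:
            return dist, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, hop_km, cap in neighbors(node):
            if nxt not in seen and cap >= required_gbps:
                heapq.heappush(heap, (dist + hop_km, nxt, path + [nxt]))
    return None

def provision(src, dst, required_gbps):
    """Reserve capacity along the computed path, end to end, in software."""
    result = compute_path(src, dst, required_gbps)
    if result is None:
        raise RuntimeError("request blocked: no feasible path")
    dist, path = result
    for a, b in zip(path, path[1:]):
        key = (a, b) if (a, b) in links else (b, a)
        d, cap = links[key]
        links[key] = (d, cap - required_gbps)
    return dist, path

print(provision("FRA", "NYC", 40))   # e.g. (6500, ['FRA', 'LON', 'NYC'])
```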

Fundamentally, Bandwidth Virtualization enables services of any type or bit rate to be delivered from a "pool" of WDM line-side bandwidth, rather than coupling each service to a specific wavelength and line rate as in a conventional WDM network. Figure 3 illustrates how Bandwidth Virtualization enables a decoupling of service provisioning from optical link engineering to allow any service to be mapped to an available pool of WDM line-side bandwidth.

Figure 3. WDM with Bandwidth Virtualization decouples service provisioning from the underlying WDM link capacity, enabling rapid turn-up of any service.

The recent development and widespread adoption of large-scale photonic integrated circuits (PICs) are key enablers of Bandwidth Virtualization. Conceptually similar to an electronic IC, large-scale PICs integrate dozens to hundreds of optical components such as lasers, modulators, detectors, attenuators, multiplexers/demultiplexers, and optical amplifiers into a single device.

PICs operating with 10 wavelengths at 10 Gbits/sec per wavelength, for a total WDM capacity of 100 Gbits/sec per device, have been widely deployed in optical transport networks since 2004. Moreover, recent R&D efforts have demonstrated PICs capable of total aggregate data rates up to 1.6 Tbits/sec per device, highlighting the potential for large-scale photonic integration to enable even greater capacity and functional integration in the future.

PICs play two key roles in enabling Bandwidth Virtualization. First, bandwidth consolidation of multiple wavelength channels into a WDM system-on-a-chip with 100 Gbits/sec of aggregate capacity provides the required pool of line-side capacity that can be economically predeployed and over which any service can be mapped.

Second, PICs allow system designers to economically perform optical-electrical-optical (OEO) conversion at any node in the network. Digital optics, rather than analog optics, can then be used to perform feature-rich, value-added functions, including reconfigurable sub-wavelength add/drop multiplexing and bandwidth management as well as digital protection and performance monitoring, which enable new service features such as fast digital protection, GMPLS restoration, and Layer 1 optical virtual private networks (O-VPNs).
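
As a simple illustration of what digital access to the signal makes possible, the sketch below models a 1+1-style protection selection driven by monitored pre-FEC error rates. The thresholds and the monitoring data model are invented for illustration and do not describe any particular product.

```python
# Minimal sketch of a per-service digital protection decision of the kind
# that OEO access to the signal enables. Thresholds are assumed values.

from dataclasses import dataclass

SIGNAL_DEGRADE_BER = 1e-6   # assumed switch threshold
SIGNAL_FAIL_BER = 1e-3      # assumed hard-failure threshold

@dataclass
class PathState:
    name: str
    pre_fec_ber: float
    loss_of_frame: bool = False

def select_path(working: PathState, protect: PathState) -> PathState:
    """1+1-style selection: switch to protect on signal fail or degrade,
    provided the protect path is itself healthy."""
    def healthy(p):
        return not p.loss_of_frame and p.pre_fec_ber < SIGNAL_DEGRADE_BER
    if not healthy(working) and healthy(protect):
        return protect
    return working

working = PathState("working", pre_fec_ber=5e-4)   # degraded span
protect = PathState("protect", pre_fec_ber=1e-9)
print(select_path(working, protect).name)          # -> "protect"
```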

The combination of service-ready WDM capacity, integrated digital bandwidth management, and embedded software intelligence to automate end-to-end service provisioning results in a new programmable optical network paradigm. This network paradigm is characterized by a simple plug-and-play "any service, anywhere" approach to optical networking driven by software-initiated reconfiguration rather than hardware-driven engineering, installation, and manual intervention. And it provides a range of benefits to network operators, including:

  • New service support: Bandwidth Virtualization enables the transmission of ultrahigh-bandwidth 40- and 100-Gbit/sec services over the same line system using a common, cost-effective bit rate on the WDM line, without the need to re-engineer the deployed line system. Transponder-based WDM systems, by contrast, require the WDM line to be engineered in response to the highest supported service rate, thereby increasing network costs and complexity, and potentially delaying the operator's ability to offer the new service.
  • Fast service deployment: By leveraging software intelligence and eliminating the dependency between service deployment and optical network re-engineering, Bandwidth Virtualization allows new services to be quickly provisioned over existing infrastructure through the connection of a client interface at each end of the service path. This enables operators to quickly respond to new bandwidth requests, provide market-leading support for new services, and use these offerings as competitive differentiators in a marketplace that is otherwise characterized by price-based competition and commoditization.
  • Operational ease: Bandwidth Virtualization enables rapid and simple service activation. Service interfaces are added only at the network endpoints, regardless of service type, without the need to upgrade network resources or implement truck rolls to handle wavelength blocking or link re-engineering.
  • Capital efficiency: Bandwidth Virtualization also allows operators to select the most cost-efficient WDM line capacity and bit rate, independent of service type, and operate these over existing infrastructures. This eliminates the need to re-engineer the optical line (i.e., extra regenerators, complex chromatic or polarization-mode dispersion compensation, wavelength conversion, back-to-back muxponders, etc.) when provisioning new high-bandwidth services. It also improves capital efficiency by avoiding line overbuilds, enabling better wave fill, and avoiding stranded capacity for sub-wavelength services. By decoupling offered services from the underlying optical media, Bandwidth Virtualization allows operators to build heterogeneous "optical engines" (for instance, using 10G wavelengths in one area of the network and 40G in another) while providing a consistent product offering of all services across all locations.

Transport of 40- and 100-Gbit/sec services

Bandwidth Virtualization implemented using PIC-based WDM transport systems has been used successfully in several field trials to transmit both 40- and 100-Gbit/sec services over long-distance networks spanning thousands of kilometers.

In the first field trial demonstration of Bandwidth Virtualization, a 40-Gbit/sec service was transmitted over a record distance of 8,477 km on a trans-oceanic network. This trans-oceanic network comprises terrestrial links from Frankfurt via Paris and London, a submarine cable system connecting the U.K. to the U.S., and a final terrestrial link from the U.S. cable landing site to New York City. In this case, Bandwidth Virtualization was used to map a 40-Gbit/sec interface from a Juniper T640 router in Frankfurt across four wavelengths transmitted from a 100-Gbit/sec PIC, which was then reassembled as a 40-Gbit/sec service in New York City, resulting in the first 40-Gbit/sec IP link across the Atlantic.
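
The sketch below illustrates, in simplified form, the inverse multiplexing involved in such a mapping: the client stream is striped across several lower-rate lanes at the ingress node and reassembled in order at the egress node. The block size and framing are invented for this example and do not reflect the actual mapping used in the trial.

```python
# Illustrative sketch of carrying one high-rate service over four 10G
# wavelengths: stripe the client stream round-robin across the lanes at the
# ingress and reassemble it in order at the egress. Framing is invented here.

BLOCK_BYTES = 16
LANES = 4

def stripe(client_bytes: bytes):
    """Split the client stream into fixed-size blocks and deal them out
    round-robin to the lanes (wavelengths), tagging each with a sequence number."""
    lanes = [[] for _ in range(LANES)]
    blocks = [client_bytes[i:i + BLOCK_BYTES]
              for i in range(0, len(client_bytes), BLOCK_BYTES)]
    for seq, block in enumerate(blocks):
        lanes[seq % LANES].append((seq, block))
    return lanes

def reassemble(lanes):
    """Merge the lanes back into the original stream, using the sequence
    numbers to restore order even though the lanes arrive independently."""
    tagged = [item for lane in lanes for item in lane]
    return b"".join(block for _, block in sorted(tagged))

payload = bytes(range(256)) * 4           # stand-in for the high-rate client stream
lanes = stripe(payload)
assert reassemble(lanes) == payload       # lossless end-to-end reassembly
print(len(lanes), "lanes,", sum(len(l) for l in lanes), "blocks total")
```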

Bandwidth Virtualization was also used to enable the first-ever transmission of prestandard 100GbE service over a WAN as part of a field trial that took place in November 2006, during the SC06 International Conference on High-Performance Computing, Networking, Storage, and Analysis. In this case, the 100GbE signal was mapped over 10 wavelengths transmitted from a PIC-based WDM transport system and transported from Tampa, FL, to Houston, TX, and back across a fiber network provided by Level 3 Communications, over a total distance of 4,000 km.

The demonstration represented the first time a prestandard 100GbE signal was successfully transmitted through a live production network across the WAN. It showed that 100GbE technology is not only viable but can be implemented in existing WDM networks engineered to support 10-Gbit/sec data rates per wavelength.

The emerging demand for transport of high-capacity services at 40 and 100 Gbits/sec, along with the need to provide rapid service delivery, is driving operators to look for options that meet these needs in a capital- and operationally efficient manner. New architecture concepts such as Bandwidth Virtualization allow the end-to-end provisioning and management of high-bandwidth services to be decoupled from the underlying optical transmission engineering. Thus, Bandwidth Virtualization helps network operators transform new service provisioning from what used to be a hardware- and engineering-driven operation into a software-driven activity, resulting in greater speed and flexibility and lower costs. Bandwidth Virtualization effectively enables the practical implementation of a "programmable" optical layer.

Serge Melle is vice president of technical marketing; Rick Dodd is vice president, product marketing; Chris Liou is vice president, product management; and Vijay Vusirikala is technical marketing director at Infinera Corp.
