by Paul Morkel
New technologies that enable the dynamic delivery of bandwidth to any location at any time are entering the marketplace. The use of on-demand provisioning and automated resource management has already proved popular in research and education networks both in Europe and the United States.
Carriers have considered the viability of these capabilities for their mainstream networks for several years and now sense that the revenue opportunities exist to justify deployment. Business, research, and consumer applications, including high-speed data services, grid computing, and “triple play,” are driving a surge in data traffic. To capitalise profitably on these trends, carriers’ networks must become more dynamic and make the most effective use of resources, ensuring that none are left stranded. Dynamically reconfigurable switched optical networks offer carriers a compelling range of short-, mid-, and long-term benefits, from reducing the data bottleneck in the metropolitan area to easing the transition to an automated, demand-responsive, and self-healing infrastructure that ultimately supports wavelength-on-demand services.
The key questions facing carriers today are how and when: How should they go about deploying dynamic reconfigurability? And when is the right time to do so?
Conceptual conversations about dynamic reconfigurability in optical networks date back at least a decade, but it is only in the last few years that we’ve started to see real-world deployments. The research and education community is the leader in this area, and carriers are paying close attention to the model that these institutions and their networks are providing.
Grid computing, in which users with huge bandwidth demands are connected across geographically dispersed computational grids, provides one such model. Grid-computing networks have undergone a rapid surge in development, driven by research organisations.
One of the most successful examples of a dynamically reconfigurable switched optical network is the U.S.-based Dynamic Resource Allocation over Generalized Multiprotocol Label Switching (GMPLS) Optical Networks, or “DRAGON,” infrastructure. It’s a dynamic, deterministic, and manageable transport service that supports collaborative e-science and grid-computing applications. Collaborators on the National Science Foundation (NSF)-funded DRAGON network include the Massachusetts Institute of Technology (MIT) Haystack Observatory, National Aeronautics and Space Administration’s Goddard Space Flight Center and Ames Research Center, the U.S. Naval Observatory, and the Internet2 Hybrid Optical Packet Infrastructure (HOPI).
This revolutionary architecture represents the first in-service deployment of commercially available multidegree reconfigurable optical add/drop multiplexer (ROADM) technology, controlled end-to-end by GMPLS. Internal traffic management policies can be defined unilaterally within each autonomous network domain; at the same time, bilateral peering arrangements can be quickly and securely established with external domains.
Meanwhile, in Europe, the GÉANT2 network connects 30 independent research and education networks spanning 34 countries across a 10 Gbit/s backbone. Astronomers are using the network in interferometry exercises that link radio telescopes to create images of space with unprecedented coverage and resolution. Grid computing is essential because these applications sometimes generate data streams exceeding 1 Gbit/s. Before grid computing’s advent, interferometry relied on storing images on magnetic tape and physically shipping them among collaborating institutions. Medicine, climate studies, and high-energy particle physics are also developing as application areas for grid computing.
The research and education community is historically more accustomed to the “buy” model of acquiring services, but early adopters of grid computing and similarly sophisticated services have had no alternative but to build their own capabilities. This indicates a revenue opportunity for carriers, and it’s not the only one. Enterprises are following the research and education community’s lead in adopting grid-computing applications. There is also increasing customer demand for powerful storage-area network (SAN) capabilities, as well as IPTV and other triple-play services.
Carriers are gradually building out highly scalable reconfigurable switched optical networks in support of these new applications as well as established ones. Advanced optical functionality has been driven by the need for substantial bandwidth growth with lower cost-per-bit transport and with simple operational procedures for wavelength management and service provisioning. These networks contain many of the necessary components for full dynamic operation at the wavelength level, enabling time-of-day or day-of-week provisioning of network capacity for event-driven services, such as broadcasts of sporting events, and for temporary services requiring very high network capacity.
Carriers have settled on WDM technologies to meet their requirements for inherent transport scalability and native support for high-growth services. Now they are deploying optical switching and control technologies for reconfiguration and balancing of wavelength connections across the WDM-enhanced optical network.
ROADMs enable cost-effective, simplified setup and reconfiguration of optical connections via software control and provide automated power balancing. Automated optical power balancing is a key feature of the new generation of ROADM and DWDM networks: it eliminates the need for manual, error-prone tuning of per-wavelength optical power, for retuning as components age, and for fixed attenuators.
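As a rough illustration of what such automation replaces, the sketch below steps each channel's variable optical attenuator (VOA) toward a common target launch power. The target, step limit, channel names, and control law are invented for illustration; real ROADM balancing algorithms are vendor-specific.

```python
# Illustrative per-wavelength power-balancing loop (all values assumed):
# each channel's VOA setting is nudged so its monitored power converges
# on a common target, with the per-iteration change capped to avoid
# disturbing neighbouring channels.

TARGET_DBM = -2.0    # assumed per-channel target at the monitor point
MAX_STEP_DB = 0.5    # assumed cap on per-iteration attenuation change

def balance(channels):
    """channels: dict of wavelength -> (measured_dbm, voa_atten_db).
    Returns the updated dict after one control iteration."""
    updated = {}
    for wl, (measured, atten) in channels.items():
        error = measured - TARGET_DBM                   # positive => too hot
        step = max(-MAX_STEP_DB, min(MAX_STEP_DB, error))
        # More attenuation lowers the measured power (simulated here).
        updated[wl] = (measured - step, atten + step)
    return updated

chans = {"1550.12nm": (-0.8, 3.0), "1550.92nm": (-3.6, 5.0)}
print(balance(chans))
```

Iterating the loop drives every channel to within one step of the target, which is the manual tuning-and-retuning task the text describes being automated away.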
Integrating this array of capabilities in a single manageable entity, ROADMs eliminate many carriers’ primary concern about the emerging world of wavelength-on-demand services: finding a cost-effective operational manner of building, adjusting, and tearing down circuits for high-bandwidth services of sometimes short duration. For the first time, carriers gain the ability to deploy bandwidth when and where it is required and even dynamically provision wavelengths on demand. Network operation is greatly simplified, and carriers avoid risky guesswork about unknown traffic demands, eliminate the possibility of stranding equipment, and avoid capital expenditures for resources that they may not ultimately need.
ROADMs are now established for provisioning of wavelengths across a ring-based subnetwork. To extend the capability across the entire network, ring interconnection and ultimately full mesh-based optical networking require the use of multidegree ROADMs. Figure 1 shows how a carrier might employ both two-degree and multidegree ROADMs to enable optical ring interconnection. Two-degree ROADMs enable the provision of working wavelengths and wavelength protection via two network ports. Recently developed multidegree ROADMs enable any-port connectivity, which is necessary for multiple ring interconnects, wavelength mesh-based networking, and the promise of true end-to-end reconfigurable networking.
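The connectivity difference between two-degree and multidegree nodes can be sketched with a toy topology (all node names invented): two rings share a single multidegree hub, two-degree nodes expose only two line ports, and a wavelength can cross between rings only at the hub.

```python
# Toy model of Figure 1's idea: two optical rings joined at a
# multidegree ROADM "hub". Node names and topology are illustrative.
from collections import deque

ring_a = [("A1", "A2"), ("A2", "hub"), ("hub", "A3"), ("A3", "A1")]
ring_b = [("B1", "B2"), ("B2", "hub"), ("hub", "B3"), ("B3", "B1")]

adj = {}
for u, v in ring_a + ring_b:
    adj.setdefault(u, set()).add(v)
    adj.setdefault(v, set()).add(u)

def degree(node):
    """Number of line ports (network-facing fibre directions) at a node."""
    return len(adj[node])

def reachable(src, dst):
    """Breadth-first search: is there an optical path from src to dst?"""
    seen, queue = {src}, deque([src])
    while queue:
        n = queue.popleft()
        if n == dst:
            return True
        for m in adj[n] - seen:
            seen.add(m)
            queue.append(m)
    return False

print(degree("hub"))          # 4: a multidegree ROADM
print(degree("A1"))           # 2: a two-degree ROADM
print(reachable("A1", "B1"))  # True: the path crosses rings at the hub
```

Without the four-port hub, the two rings would be disjoint at the optical layer, which is why ring interconnection and mesh networking require degrees greater than two.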
Emerging “colourless” ROADM operation promises to provide carriers with another level of flexibility for dynamic networks. Today’s ROADM network elements commonly implement fixed wavelength assignments to add/drop fibre ports, which requires manual fibre connection at the end points of the provisioned wavelength service. Colourless operation obviates this requirement and will allow unrestricted wavelength assignment to fibre ports. Colourless operation requires wavelength switching or selection and wavelength tunability, both attributes of the latest ROADM technologies. For carriers, colourless ROADMs will mean even faster service provisioning and will further reduce wavelength preplanning.
With the implementation of optical switching and control technologies for reconfiguration and balancing of wavelength connections across the network comes a need for network intelligence to manage and control the optical layer. To effectively switch and manage wavelengths, knowledge of the network topology (how nodes are connected to each other) and an inventory of available wavelength paths and hardware resources are required. The network also needs a means to calculate routes and to signal the dynamic or reconfigurable components to set up or tear down paths without impact to other traffic.
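The control functions just listed, a topology database, a wavelength inventory, and a route computation, are the classic routing-and-wavelength-assignment (RWA) problem. The sketch below is a minimal illustration, not any product's algorithm: shortest-path routing plus first-fit wavelength selection under the wavelength-continuity constraint. Topology, link costs, and wavelength counts are assumptions.

```python
# Minimal RWA sketch: the controller knows the topology and which
# wavelengths are free per link, computes a shortest route, then picks
# the lowest wavelength index free on every hop (first-fit, assuming no
# wavelength conversion along the path).
import heapq

links = {  # (a, b) -> (cost, set of free wavelength indices); assumed data
    ("X", "Y"): (1, {0, 1, 2}),
    ("Y", "Z"): (1, {1, 2}),
    ("X", "Z"): (3, {0, 1, 2}),
}

def neighbors(node):
    for (a, b), (cost, free) in links.items():
        if a == node:
            yield b, cost
        elif b == node:
            yield a, cost

def shortest_path(src, dst):
    """Dijkstra over the link costs; returns the node list src..dst."""
    dist, prev, heap = {src: 0}, {}, [(0, src)]
    while heap:
        d, n = heapq.heappop(heap)
        if n == dst:
            break
        if d > dist[n]:
            continue
        for m, cost in neighbors(n):
            nd = d + cost
            if nd < dist.get(m, float("inf")):
                dist[m], prev[m] = nd, n
                heapq.heappush(heap, (nd, m))
    path, n = [dst], dst
    while n != src:
        n = prev[n]
        path.append(n)
    return path[::-1]

def assign_wavelength(path):
    """First wavelength free on every hop, or None if blocked."""
    hops = zip(path, path[1:])
    free_sets = [(links.get((a, b)) or links[(b, a)])[1] for a, b in hops]
    common = set.intersection(*free_sets)
    return min(common) if common else None

route = shortest_path("X", "Z")
print(route, assign_wavelength(route))  # ['X', 'Y', 'Z'] 1
```

Here wavelength 0 is busy on the Y–Z hop, so the continuity constraint forces wavelength 1 end to end, exactly the kind of decision the network intelligence must make before signalling the components.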
Centralised network management tools may be used for this in the near term, in particular where legacy network integration is mandatory. However, GMPLS control-plane technology has matured to the point where these functions are implemented in many networks today with embedded and distributed network intelligence. The GMPLS control plane enables bandwidth-based guaranteed services, priority-based bandwidth allocation, and pre-emption services across dissimilar networks.
Organisations such as the Internet Engineering Task Force and the ITU-T have developed standards that extend familiar IP protocols, such as Resource Reservation Protocol signaling and Open Shortest Path First routing, with traffic-engineering functionality to enhance circuit-setup and -teardown capabilities. The GMPLS control plane is similar to the MPLS control plane operating in the packet world in that label switched paths (LSPs) are established across the network, although in the GMPLS case LSPs are TDM circuits, such as SONET synchronous transport signal (STS) paths, or optical light paths.
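The two-pass pattern GMPLS borrows from RSVP-TE can be caricatured as follows: a Path message travels downstream recording state, and a Resv message returns upstream, with each node learning the label its downstream neighbour allocated. This toy omits nearly everything in the real protocol (refresh timers, error handling, the actual object formats); node names and labels are invented.

```python
# Toy two-pass LSP setup in the spirit of RSVP-TE signalling.
# Downstream ("Path") pass: the route is carried ingress -> egress.
# Upstream ("Resv") pass: labels flow egress -> ingress; each node
# records the label allocated by its downstream neighbour.

def setup_lsp(route, allocate):
    """allocate(node) -> label that node chooses for its incoming side.
    Returns each node's outgoing label toward the egress."""
    out_label = {}
    downstream_label = None
    for node in reversed(route):          # Resv travels upstream
        if downstream_label is not None:
            out_label[node] = downstream_label
        downstream_label = allocate(node)
    return out_label

labels = iter([17, 23, 31])               # assumed per-node label choices
table = setup_lsp(["ingress", "transit", "egress"], lambda n: next(labels))
print(table)  # {'transit': 17, 'ingress': 23}
```

In GMPLS the “label” may denote a timeslot or a wavelength rather than a packet header value, which is how the same signalling machinery sets up TDM circuits and light paths.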
Figure 2 shows the logical relationship between the GMPLS control plane and the dynamic optical components of the network. Key components are optical switching and transport in the optical (forwarding) plane and IP routing and signaling functions in the control plane.
A connection controller is responsible for control-plane routing and signaling. Connection controllers and dynamic optical components generally coexist and physically share the same equipment. User network interfaces (UNIs) to client equipment enable automated resource requests. External and internal network-network interfaces (E-NNIs and I-NNIs) enable intranetwork signaling and routing within or between carrier domains. UNIs and NNIs may be carried in-band, for example in G.709 general communication channel overhead or on the optical supervisory channel, or out-of-band, for example over a parallel Ethernet connection. Network management systems may be used for end-to-end (A-to-Z) path initiation requests across the network in the absence of UNIs and will also typically be used for network surveillance.
Carriers may implement various elements of control-plane technology in a manner suited to a graceful evolution of their management frameworks. In addition, interworking between an MPLS control plane operating in packet-forwarding equipment and GMPLS operating in the optical transport network offers significant possibilities for further automation of network operations for the future.
The circuit-based technology that still forms the core of many carriers’ infrastructures will need to evolve to capture the developing revenue opportunities for high-capacity services. The research and education community is providing carriers with valuable models of how to move forward: A carrier must be able to unify data, voice, and video on a scalable, flexible infrastructure that automatically adapts to fluctuating demands. The possible benefits are varied and unprecedented:
- Improved network utilisation, with greater efficiency and less redundancy.
- Reduced operations, administration, and maintenance (OAM) costs through resource optimisation and automation.
- Enhanced network reliability through automation of error-prone manual processes.
- Minimised risk by eliminating the need to precisely forecast bandwidth needs.
- Creation of new revenue possibilities through fast turn-up and rapid bandwidth provisioning.
The dynamically reconfigurable switched optical network is set to emerge as the contemporary carrier’s most important strategic asset.
Paul Morkel is director, business management, carrier WDM, at ADVA Optical Networking (www.advaoptical.com).