Optimal grooming for the edge and core

By STEPHEN FRENCH, JEAN-FRANCOIS LABOURDETTE, and KRISHNA BALA, Tellium--History and economics have set the stage for grooming at 2.5-Gbit/sec granularity in the optical mesh-network core.

Feb 3rd, 2003

Large networks have always been organized in multilevel hierarchies. It has been a service provider's dream to accommodate all services at all rates with a single box that is scalable, manageable, and low-cost. However, practical considerations, such as hardware and software scalability and manageability, have always led to hierarchical network architectures, taking advantage of optimizing each layer independently. "All-purpose boxes" may be well suited for enterprises and some metro applications but not for core applications that require specialized carrier-grade products.

In the past few years, we have witnessed tremendous growth in traffic volume, driven primarily by an exponential growth in data traffic. To accommodate this growth in traffic volume and shift in traffic mix, service providers are upgrading their core transport infrastructure from traditional SONET/SDH rings to optical mesh networks.

As carriers begin to build their next-generation transport infrastructure, they need to ask themselves, "What is the right granularity of grooming for the core of my optical transport network?" More specifically, should carriers continue to groom traffic at OC-1/STS-1 (52 Mbits/sec) granularity in the core, as they have since 1988, when core transport speeds were at DS-3 (45 Mbits/sec)/STS-1, or is it time to move to core-level grooming at OC-48/STS-48 (2.5 Gbits/sec) granularity now that core transport speeds are at 10 Gbits/sec?

We argue that the right granularity for grooming in the next-generation core optical network is STS-48, while STS-1 grooming is better suited for the edge of the network and legacy applications. In today's TDM world, both wideband (VT1.5--1.5-Mbit/sec) and broadband (STS-1) grooming are required in particular segments of the network to manage different traffic levels, serving distinct but complementary roles in the network. Similarly, as transport speeds scale up to 40 Gbits/sec and DWDM channel counts continue to increase, we expect STS-1 and STS-48 grooming switches to also perform complementary roles at the network edge and core, respectively (see Figure 1).

History of network grooming

Although grooming received significant marketing attention in 2001, grooming devices have been used in networks for more than 20 years. Grooming has been and will continue to be a vital requirement in carrier networks, and it has evolved steadily over the past two decades (see Figure 2).

Electronic narrowband crossconnects (64-kbit/sec granularity) were first introduced in 1981. By the mid-1980s, as the core network bandwidth grew, it became clear that narrowband (1/0) crossconnects were not suitable for backbone traffic engineering, since the backbone was running at DS-3 rates. Electronic wideband crossconnects (1.5-Mbit/sec granularity) were introduced in the late 1980s to replace the functionality of DS-1/DS-3 multiplexers, whose connections were hand-wired to both the narrowband DS-0 and broadband DS-3 core networks. Electronic broadband crossconnects (45-Mbit/sec granularity) were also introduced in the late 1980s to replace manual patch panels as the vehicle for routing office circuits onto long-haul (LH) transmission equipment and routing pass-through circuits off of one LH system and onto the next span. Finally, ultra-broadband (2.5-Gbit/sec) crossconnects, referred to as optical core switches, were introduced in 2000.

As history has shown, grooming granularity has increased to support traffic engineering of backbone networks and to offer higher service rates in the network. When DS-3 and lower rates dominated the network, carriers originally questioned the economics of broadband crossconnects. Broadband crossconnects gained popularity as carriers accounted for the operational penalties of sending unchannelized DS-3 services through subrate grooming devices and as their customers began to experience the superior performance of network-based restoration services.

As shown in Figure 2, broadband-crossconnect grooming at the DS-3 level, or the equivalent SONET rate of STS-1, was introduced in the late 1980s when core transport speeds were at a similar rate. With core transport speeds now at 10 Gbits/sec--nearly 200 times faster--continuing to groom the core at the same level as in the late 1980s seems quite inadequate.

History shows that grooming speeds increase as transport speeds, average circuit size, and overall network traffic increase. Carriers will need to evolve their core grooming speeds to 2.5 Gbits/sec for the same reasons they moved from narrowband to wideband and broadband crossconnects. Those reasons include:

• Emerging application drivers--what feeds the core?
• Evolution of transport speeds.
• Scalability and manageability.
• Performance--provisioning, restoration, and management.
• Total network cost.

Driving core traffic growth

The dominant traffic carried in today's network is evolving from legacy voice and leased line services to data services, predominantly IP. According to a recent Dell'Oro Group (Redwood City, CA) report, OC-48 and higher-speed router port shipments are expected to more than quadruple between 2001 and 2004.

TDM aggregation switches--optimized for legacy voice and leased-line services and acting as edge devices--groom signals at lower bit rates: VT-1.5 and STS-1. They typically feed into the core at rates of OC-48 and above. Furthermore, high-speed trunking between edge service platforms (IP routers, ATM switches, digital crossconnect systems, SONET/SDH add/drop multiplexers, multiservice provisioning platforms, and Gigabit Ethernet platforms) for sub-OC-48 services is usually done at OC-48 and OC-192 (10-Gbit/sec) speeds. Consequently, the natural rate of grooming granularity in the core is STS-48.

Data traffic is statistically groomed and packed by IP routing devices into concatenated OC-48 and OC-192 trunks, without the need for STS-1 grooming. There is no benefit gained in sending a concatenated 2.5- or 10-Gbit/sec circuit through an STS-1 grooming broadband switch. In fact, doing so may harm the circuit quality and drive up operational costs. The same argument applies to emerging Gigabit Ethernet (GbE) and 10-GbE services. With the majority of network traffic being data and with Internet traffic still growing at over 100% per year, it has become increasingly apparent that STS-1 grooming broadband switches do not belong in the core.

Transmission evolution

Transmission equipment at the core of the network has experienced a dramatic increase in channel counts per fiber and transmission speed per channel over the last few years. The core of the transmission network is evolving from carrying tens of OC-48 wavelengths per fiber to carrying more than 100 OC-192 channels per fiber using LH DWDM systems.

Furthermore, the TDM hierarchy is not at its end. It is now clear that the network will migrate to 40 Gbits/sec per wavelength. At STS-48 granularity, a 40-Gbit/sec signal comprises 16 blocks, as opposed to 768 blocks per 40-Gbit/sec signal with STS-1 granularity. This difference illustrates the complexity of grooming at the STS-1 layer--48 times that of managing at the STS-48 layer.
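The block-count arithmetic above follows directly from the SONET hierarchy: an OC-n signal carries n STS-1s, so the number of switchable units on a wavelength is n divided by the STS level of the grooming granularity. A minimal sketch (our illustration, not the authors' tooling):

```python
# Number of grooming "blocks" one wavelength presents to the switch
# fabric. An OC-n signal carries n STS-1s, so the count is simply
# n divided by the STS level of the grooming granularity.

def sts_blocks(oc_level: int, grooming_sts: int) -> int:
    """Grooming blocks on one OC-`oc_level` wavelength."""
    return oc_level // grooming_sts

# A 40-Gbit/sec (OC-768) wavelength:
print(sts_blocks(768, 48))  # 16 blocks at STS-48 granularity
print(sts_blocks(768, 1))   # 768 blocks at STS-1 granularity
```

The same ratio (768/16 = 48) recurs throughout the article: every per-connection task at STS-1 granularity is repeated 48 times per OC-48 equivalent.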

A KMI Corp. (Providence) research report reinforces the fact that the network core operates at 2.5-Gbit/sec and higher speeds, showing that sales of OC-48 and higher-speed DWDM systems will represent over 95% of shipments between 2000 and 2005. With the evolution in transport speeds to OC-48 and higher, core grooming speeds are naturally driven to the STS-48 level.

Core scalability and manageability

History has repeatedly demonstrated that core grooming speeds increase in step with core transport speeds because of scalability and manageability. As traffic volume grows and the size of the network increases, core grooming granularity naturally increases to keep network complexity under control. The complexity of managing the core of a large optical mesh network at the edge grooming rate of STS-1 is at least 48 times that of an optical core that supports STS-48 grooming (see Figure 3). Grooming at STS-48 granularity allows operators to scale their network, taking advantage of higher-port-count STS-48 switches and dealing with the right-size "block" as total traffic and network size increase.

Scaling limitations of STS-1 switches are mostly due to the tremendous increase in software complexity involved in handling STS-1 signals. That includes path computation across the fabric and subsequent management of those paths, which becomes at least 48 times more complex for an OC-48-equivalent connection through an STS-1 fabric compared to an STS-48 fabric.

Maintaining the synchronization among all the STS-1 signals traversing the fabric also becomes extremely difficult as the size of the switch increases. The challenge for OC-192 is at least four times higher (192 STS-1s), which is why STS-1 switch vendors have had great difficulty in developing and deploying OC-192 interfaces that support STS-1 grooming granularity. OC-192 interfaces for STS-48-based switches have been shipping to customers since the first quarter of last year.

Today, there are core STS-48 grooming products that are shipping with 512 OC-48 ports--twice the size of any STS-1 switch currently on the market. Furthermore, these core STS-48 grooming switches can scale to 8,192 ports of equivalent OC-48 capacity. That is a factor of two to three times larger than the scalability of STS-1 switches. Therefore, as traffic scales beyond STS-1 switch capabilities, carriers will be forced to deploy several STS-1 granularity switches per office in a one-tier STS-1 architecture, wasting large portions of effective capacity and losing potential revenue streams due to intermachine ties.

As overall traffic grows and the traffic mix becomes more heavily weighted toward data rates of OC-48 and above, a two-tier architecture made of STS-1-based switches at the edge and STS-48-based switches at the core will be more efficient and cost-effective than a single-tier STS-1-based architecture. Based on internal network studies, the overall network capital savings of a two-tier STS-1/STS-48 architecture over a single-tier STS-1 architecture range from 15% to 25% under reasonable assumptions of network size and traffic.

Provisioning, restoration, management

Critical real-time network functions such as mesh provisioning, restoration, and network management can be performed much more efficiently and with much better performance at STS-48 granularity. Core grooming at the STS-48 level allows operators to support fast, capacity-efficient, shared-mesh provisioning, restoration, and management.

Provisioning. Mesh routing of light paths requires diversity of primary and restoration paths as well as appropriate sharing restrictions on restoration capacity to offer restoration guarantees against link and node failures. Optimal mesh routing of thousands of OC-48 light paths can be handled effectively in a shared-mesh optical STS-48 granularity core network.

On the other hand, routing hundreds of thousands of STS-1-equivalent signals in a single-tier STS-1 network architecture while trying to satisfy diversity and sharing constraints is a much larger task. Time and memory requirements are 48 times greater per connection, pushing the computation of a light path's route (primary and restoration paths) from seconds to minutes and requiring gigabytes of memory. Having to sequentially route all the STS-1 connections that constitute an OC-48 could potentially increase provisioning times by a factor of 48.

Restoration. Fast, capacity-efficient, shared-mesh restoration is a key benefit of an optical core mesh network. The performance of mesh restoration is greatly impacted by the number of signals that must be restored during a link or node failure event. Where an STS-48 core network would restore all services by handling the tens of failed OC-48 connections on a link or node (hundreds of connections for large points of presence), an STS-1 granularity switch would have to restore 48 times more connections. Such an increase in the number of signals to be restored would drive the restoration time for shared-mesh restored services from less than 200 msec in an STS-48 network to several seconds in an STS-1 network, as demonstrated by simulation studies.

In fact, the difficulty of doing shared-mesh restoration in an STS-1-based network could force connections to be protected in a dedicated mesh (1+1) configuration. That would drive the required network capacity at least 50% higher than the amount that would be required when shared-mesh restoration is used. With an STS-48 granularity core network, carriers can provide better performance using less dedicated protection than would be needed with an STS-1 core, resulting in significant cost savings (see Figure 4).
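The "at least 50% higher" figure can be checked with simple arithmetic: 1+1 protection reserves a dedicated backup for every working circuit, while shared-mesh restoration lets several failure scenarios reuse the same spare pool. A back-of-envelope sketch (the sharing factor of 3 is an assumed illustrative value, not a figure from the article):

```python
# Total capacity needed under dedicated 1+1 protection vs shared-mesh
# restoration. The sharing factor -- how many failure scenarios can
# reuse the same restoration channel -- is an assumed parameter.

def total_capacity(working: float, scheme: str, sharing_factor: float = 3.0) -> float:
    if scheme == "1+1":
        return 2.0 * working                       # dedicated backup per circuit
    return working * (1.0 + 1.0 / sharing_factor)  # working + shared spare pool

w = 100.0  # units of working capacity
print(total_capacity(w, "1+1") / total_capacity(w, "mesh"))  # 1.5: 1+1 needs 50% more
```

With a sharing factor of 3, dedicated protection needs exactly 50% more total capacity than shared mesh; higher sharing factors widen the gap, which is why the article's "at least" qualifier is apt.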

Network management. Network management scaling limitations and performance issues arise when managing a large number of STS-1 connections in a mesh architecture. An STS-48 core network provides a more scalable network management solution than an STS-1 network. For topology and configuration management, an element management system (EMS) is typically required to maintain a port-state database, which stores the status and availability information for all of the physical termination points (PTPs) and connection termination points (CTPs) within the core network.

If the core is operated at the STS-1 level, there are 48 times more CTPs compared to an equivalent STS-48 network, resulting in 48 times more storage overhead. Similarly, excessive communications overhead would also be incurred for service management (i.e., light-path and mesh-sharing databases) and performance monitoring (PM) information, resulting in an extraordinary amount of storage capacity and potentially impacting performance. Analysis indicates that 250 Gbytes of PM data would be sent to and stored in the EMS every day in a 100-node STS-1 network, compared to 5 Gbytes for a core network that uses STS-48 grooming switches.
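The storage comparison above scales linearly with the number of CTPs. The sketch below illustrates that scaling; the per-CTP daily figure and the ports-per-node count are assumed illustrative values chosen so the STS-48 case lands near the article's 5-Gbyte estimate--only the 48x ratio between the two cases follows from the structure of the problem (the article's 250-Gbyte figure rounds that ratio up slightly):

```python
# Rough model of daily EMS storage for performance-monitoring (PM) data:
# storage grows linearly with the number of connection termination
# points (CTPs), which in turn depends on the grooming granularity.

def daily_pm_gbytes(nodes: int, oc48_ports_per_node: int, grooming_sts: int,
                    mbytes_per_ctp_per_day: float = 1.0) -> float:
    ctps_per_port = 48 // grooming_sts                 # CTPs on one OC-48 port
    ctps = nodes * oc48_ports_per_node * ctps_per_port
    return ctps * mbytes_per_ctp_per_day / 1000.0

# 100-node network, 50 OC-48-equivalent ports per node (both assumed):
print(daily_pm_gbytes(100, 50, 48))  # STS-48 grooming: 5.0 Gbytes/day
print(daily_pm_gbytes(100, 50, 1))   # STS-1 grooming: 240.0 Gbytes/day
```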

2.5-Gbit/sec core grooming

The evolution of core grooming speeds to the STS-48 granularity level is a continuation of a cycle that began with the introduction of narrowband crossconnects into the network more than 20 years ago. Since then, crossconnect grooming granularity has gradually increased from 64 kbits/sec to 2.5 Gbits/sec with the introduction of ultra-broadband crossconnects in 2000. The following key factors are driving the need to migrate the core grooming speed to STS-48 granularity:

• Bandwidth requirements continue to grow, with Internet traffic growth rates exceeding 100% per year.
• Access bandwidth grooming at STS-1 granularity is best suited for the edge of the network and legacy TDM applications.
• Emerging applications and services such as IP are feeding the core at concatenated OC-48 and OC-192 rates and do not require STS-1 grooming.
• Core transport speeds are at OC-48 and OC-192 today and will be at OC-768 in a few years.
• Using STS-48 granularity switches reduces equipment and network complexity, increases reliability, and provides the necessary equipment and network scalability and manageability.
• Reduced complexity improves equipment and network (provisioning, restoration, and management) performance.

History has shown that core grooming granularity has increased in step with application and transport speeds and network capacity. The reasons are clear and simple: reliability, scalability, performance, and most important, cost. For carriers planning to build out their optical networks, STS-48 is the right-size "block" for grooming in the core.

Stephen French is senior manager of product marketing, Jean-Francois Labourdette is manager of network routing and design, and Krishna Bala is chief technology officer at Tellium (Oceanport, NJ). They can be reached via the company's Web site at www.tellium.com.
