Packet transport at 100 Gbits/sec, no waiting

Aug. 1, 2007

by Moran Roth, Luis Aguirre-Torres, and Mannix O'Connor

As new on-demand, high-definition content distribution applications drive demand for higher network capacity, the pressure mounts for service providers to increase the network’s packet-based transmission capacity beyond 40 Gbits/sec and up to 100 Gbits/sec in some instances. Most of today’s deployment-ready 40-Gbit/sec systems are both sophisticated and inherently expensive. Components and systems for achieving 100 Gbits/sec are still being defined and, in most cases, are not even at the prototype stage, so when they’ll hit the market is anyone’s guess.

However, there is a way to scale packet transport networks today from 10 to 100 Gbits/sec, incrementally as needed, without taking the network out of service and while using common, commercially available optical components. This approach uses advanced bonding techniques based on the concept of Ethernet Link Aggregation as defined in IEEE 802.3ad. Although the obvious benefit of this technology to service providers today would be the ability to scale their networks as the demand for bandwidth-intensive applications increases, the resultant high-capacity packet transport networks also would provide other benefits such as enhanced resiliency and load balancing for ring architectures.

Multiple initiatives for high-capacity, packet-based transmission beyond 10 Gbits/sec are being considered by different standards bodies, including the IEEE High Speed Study Group (HSSG) and the ITU-T, where efforts are currently focused on defining a “greater than 10 Gbits/sec Ethernet MAC data rate and related physical capability” for IEEE Standard 802.3. Today, while progress has been made at both the IEEE and the ITU-T, the multiple alternatives are still subjects of debate, which means service providers cannot realistically expect industry-wide consensus and the availability of affordable systems within the time frame that consumer demand itself is setting.

However, a solution for high-capacity packet-based transmission rates beyond 10 Gbits/sec is available today through a combination of technologies such as Ethernet, Resilient Packet Ring (RPR), and link aggregation. These technologies make up the building blocks of a highly scalable packet transport network approach capable of delivering bandwidth-intensive applications such as high-definition video-on-demand (VoD) and online gaming. Figure 1 provides an example of such high-capacity packet transport, in which the packet-based transmission and distribution network delivers triple-play services, including video, voice, and broadband Internet access.

The benefits of such high-capacity packet transport networks come not only from their simplicity and enhanced resiliency, but also from their unrivaled economics. With off-the-shelf components, standard protocols, and independence from the physical layer (the approach works with both SONET/SDH and 10-Gigabit Ethernet physical layers), this kind of high-capacity packet transport makes packet-based 100-Gbit/sec transmission viable today.

The high-capacity (HC) packet transport approach employs advanced bonding techniques similar to those used for Ethernet Link Aggregation. In a ring configuration, multiple 10-Gbit/sec RPR instances may be combined to create a single, bonded logical link (referred to as HC RPR), which then functions as shared media over which packet and TDM services can be statistically multiplexed, taking full advantage of the overall bandwidth capacity available across multiple physical links. While each RPR instance may be mapped directly onto a 10-Gigabit Ethernet or an OC-192/STM-64 SONET/SDH physical link, the available bandwidth is logically aggregated onto a single logical HC RPR interface over which RPR packets may be transmitted according to a flow-aware hashing mechanism. Perhaps the greatest benefit of this approach is that service providers can implement it without negatively affecting existing services or taking down any segment of the network at any point.

Figure 1. The demand for aggregation of triple-play services is an example of the growing need for high-capacity packet-based networks.

As mentioned, HC RPR relies on a flow-aware hashing algorithm that is used for load balancing and distributing RPR packets over multiple parallel physical links. The hashing algorithm is also used to guarantee traffic integrity by uniquely identifying each flow based on information in the TCP, IP, or Ethernet header, which ensures packets belonging to the same traffic flow are always sent over the same physical link. Among other benefits, this approach gives service providers the flexibility to support multiple applications simultaneously and over the same network infrastructure, ensuring that frames of a particular flow are always delivered in order.
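As a rough illustration of how such a flow-aware hashing scheme might behave, the following Python sketch hashes the fields that identify a flow (addresses and ports) and maps the result onto one of the member links of the bonded interface. The class names, fields, and the use of a CRC-32 hash are assumptions made purely for illustration; the article does not specify the actual algorithm or data model.

```python
import zlib
from dataclasses import dataclass, field
from typing import List, Tuple

# Illustrative sketch only: the article does not specify the hashing
# algorithm or data structures used in an actual HC RPR implementation.

@dataclass
class MemberLink:
    name: str          # e.g., an OC-192/STM-64 or 10-Gigabit Ethernet link
    rate_gbps: float   # nominal link rate
    up: bool = True    # operational state

@dataclass
class HcRprInterface:
    """A bonded HC RPR interface made up of parallel physical links."""
    links: List[MemberLink] = field(default_factory=list)

    def active_links(self) -> List[MemberLink]:
        return [l for l in self.links if l.up]

    def select_link(self, flow: Tuple) -> MemberLink:
        """Pick one physical link for a flow.

        Hashing the flow identifier (e.g., IP addresses and TCP/UDP ports)
        means every packet of the same flow takes the same link, so frames
        are delivered in order.
        """
        active = self.active_links()
        key = "|".join(str(f) for f in flow).encode()
        return active[zlib.crc32(key) % len(active)]

# Example: a 40-Gbit/sec bonded interface built from four 10-Gbit/sec RPR instances.
hc = HcRprInterface([MemberLink(f"RPR-{i}", 10.0) for i in range(4)])
flow = ("10.0.0.1", "192.168.1.7", 49152, 554)   # src addr, dst addr, src port, dst port
print(hc.select_link(flow).name)                  # same flow always yields the same link
```

A production system would perform this selection in hardware at line rate; the point of the sketch is simply that every packet of a given flow lands on the same member link, preserving delivery order.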

Another benefit of HC RPR is its increased resiliency, which builds on the high availability that characterizes packet transport networks today. Upon failure of any of the physical links, the available bandwidth for the affected HC RPR segment is updated accordingly and traffic is redistributed over the remaining physical links, thereby adding an extra layer of resiliency to the packet transport network (see Fig. 2). Specifically, this mechanism guarantees that service-level agreements are maintained even upon physical link or equipment failure.

HC packet transport enables service providers to upgrade network capacity according to customer demand. When more capacity is needed in the network, service providers may add it without having to tear down any active service; they simply define a new RPR instance and its associated link capacity as being part of the HC RPR shared media, which automatically increases the available bandwidth of the bonded packet-based transmission network.
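Continuing the hypothetical HcRprInterface sketch above, both behaviors described in the two preceding paragraphs, redistributing traffic when a physical link fails and growing capacity by adding a new RPR instance, reduce to changing the set of member links and letting the flow hash operate over whatever links are currently active:

```python
# Hypothetical usage of the HcRprInterface sketch shown earlier.

# Link failure: mark one member link down; flows are rehashed over the
# remaining 10-Gbit/sec links and the segment shrinks to 30 Gbits/sec.
hc.links[2].up = False
print(sum(l.rate_gbps for l in hc.active_links()))   # 30.0

# Capacity upgrade: restore the link and add a new RPR instance; the bonded
# interface grows to 50 Gbits/sec without tearing down any active service.
hc.links[2].up = True
hc.links.append(MemberLink("RPR-4", 10.0))
print(sum(l.rate_gbps for l in hc.active_links()))   # 50.0
```

Note that a naive modulo hash moves some existing flows to different links whenever membership changes; a real implementation would presumably use a more stable mapping, but that detail is beyond what the article describes.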

Today, to calculate the required transport capacity for a triple-play services network, service providers take into account the bandwidth required for the aggregation and distribution of broadcast video and voice services as well as Internet access. These calculations can be very straightforward, allowing service providers to further optimize bandwidth use, especially for the distribution of broadcast video traffic.

However, the introduction of VoD services, in standard or high definition, adds an element to the equation that can only be estimated by assuming a certain number of concurrent customers at any one time. This presents service providers with the difficult task of forecasting the potential success of these services and building their networks accordingly. HC packet transport enables service providers to minimize their initial investment in this type of deployment by allowing them to limit their initial spending to the most conservative estimates. They can then simply scale network capacity as needed, consistent with the incremental success of their service offering.
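To make the concurrency assumption concrete, a back-of-the-envelope estimate might look like the sketch below. Every figure in it (subscriber count, peak take rate, HD share, per-stream bitrates) is assumed purely for illustration and is not taken from the article.

```python
# Back-of-the-envelope VoD capacity estimate; all figures are assumed.
subscribers      = 50_000
peak_concurrency = 0.10        # assumed share of subscribers watching VoD at peak
hd_share         = 0.40        # assumed share of those streams in high definition
sd_mbps, hd_mbps = 3.5, 12.0   # assumed per-stream bitrates

streams  = subscribers * peak_concurrency
vod_gbps = (streams * hd_share * hd_mbps +
            streams * (1 - hd_share) * sd_mbps) / 1000.0
print(f"Unicast VoD load at peak: {vod_gbps:.1f} Gbits/sec")
# ~34.5 Gbits/sec with these assumptions; doubling the take rate pushes the
# total well past a single 40-Gbit/sec pipe, which is where incremental
# HC RPR scaling helps.
```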

The hashing algorithm previously discussed helps guarantee the integrity of independent flows and provide load balancing. In doing its work, the hashing algorithm takes into consideration the available capacity of each span independently. This feature provides service providers with the added benefit of being able to define asymmetric HC RPR rings.

One example of asymmetric HC RPR is a ring in which each span comprises a different number of links. In this scenario, not all nodes are accessible through all links, and the hashing mechanism at the source node makes use of only those links that are available for a particular destination. This is of particular relevance for applications such as content distribution in which traffic is distributed from a single service node toward multiple local offices in a hub-and-spoke configuration.

For example, network architects dimensioning an on-demand content-distribution network must balance the need to guarantee bandwidth availability against the need to use the transmission network’s available bandwidth as efficiently as possible. This may prove particularly challenging if the demand for such services is not uniform across the entire network. Asymmetric HC RPR therefore gives network architects the opportunity to add network capacity as the demand for bandwidth increases in specific network segments.

Another example is a ring built out of multiple links of different rates. In this example, the hashing mechanism takes into account the rate of each link to efficiently use the total ring capacity. In this way, working packet transport networks may be expanded with new links of higher rates as the technology becomes available.
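One plausible way to make the flow hash aware of both situations described above, spans with different numbers of reachable links and links of different rates, is to weight the link choice by rate and to restrict the candidate set to links that actually reach the destination node. The sketch below is again only an assumed illustration; the article does not define the selection mechanism.

```python
import zlib
from typing import Dict, List, Tuple

# Assumed illustration of rate-weighted, destination-aware link selection
# for an asymmetric HC RPR ring; not a specification of the actual scheme.

def select_link(flow: Tuple,
                link_rates: Dict[str, float],
                reachable: List[str]) -> str:
    """Pick a link for a flow, weighting faster links proportionally and
    considering only links that reach the destination."""
    candidates = [l for l in reachable if l in link_rates]
    total = sum(link_rates[l] for l in candidates)
    key = "|".join(str(f) for f in flow).encode()
    point = (zlib.crc32(key) / 2**32) * total   # map hash onto [0, total)
    acc = 0.0
    for link in candidates:                      # walk the cumulative rates
        acc += link_rates[link]
        if point < acc:
            return link
    return candidates[-1]

# A span built from one 40-Gbit/sec and two 10-Gbit/sec links: the faster
# link should attract roughly two-thirds of the flows.
rates = {"RPR-A": 40.0, "RPR-B": 10.0, "RPR-C": 10.0}
print(select_link(("10.0.0.1", "172.16.5.9", 40000, 80),
                  rates, ["RPR-A", "RPR-B", "RPR-C"]))
```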

HC RPR is a key building block in HC packet transport networks. Ring configurations can be scaled beyond 10 Gbits/sec by defining a logical HC RPR network over multiple SONET/SDH or 10-Gigabit Ethernet physical links. Load balancing and traffic integrity are guaranteed through a flow-aware hashing algorithm similar to that used in Ethernet link aggregation. These functions, however, are not implemented on the “network side” interfaces; instead, they reside on the “user side,” before traffic is switched over the universal packet fabric. Among other things, this implementation enables the aggregation and distribution of traffic from multiple user interfaces into a single logical HC RPR interface built of RPR instances on separate network cards, providing equipment and facility protection.

Figure 2. Among other benefits, the flow-aware hashing algorithm adds an extra layer of resiliency to the high-capacity packet transport network.

HC packet transport interfaces may be implemented today using low-cost, commercially available 10-Gbit/sec DWDM transceivers. There is no need to wait for the availability of 100-Gbit/sec optics to build transport networks scalable to 100 Gbits/sec. Each physical interface in an HC packet transport network can in turn be matched to a specific wavelength for transmission over a WDM network. For example, a network-side interface in an HC RPR network can connect directly and inexpensively to a WDM system (integrated or not) using standard ITU-T grid optics in a “transponderless” mode. This effectively reduces the per-wavelength transmission cost of the HC packet transport network while facilitating full management integration and guaranteeing interoperability between HC packet transport and WDM systems.

The ability to carry HC RPR links in a cost-effective and efficient manner over DWDM infrastructure enables a fiber pair to carry 100 Gbits/sec of services over a single logical link. Over this link, service bandwidth can be scaled on demand using network control protocols with minimal user intervention to simplify network management and lower operational costs.

In comparison, each wavelength in a common WDM network is a separate logical link, and scaling service bandwidth so the total capacity required exceeds the capacity of a single WDM optical channel requires complex network-wide procedures. This involves finding new routes for existing services to make capacity available for the service in question. This may also result in suboptimal use of network resources due to bandwidth fragmentation, as certain WDM channels may not be used to full capacity. Since HC RPR is managed as a single logical link, bandwidth fragmentation is of no concern and network resources may be used in full.
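As a small numeric illustration of the fragmentation point, using assumed figures: with 6-Gbit/sec services carried over 10-Gbit/sec wavelengths, per-wavelength provisioning strands 4 Gbits/sec on every channel, whereas the same ten channels bonded into one 100-Gbit/sec logical link can be packed much more fully.

```python
# Assumed illustration of bandwidth fragmentation; all figures are examples.
wavelength_gbps = 10.0
service_gbps    = 6.0
wavelengths     = 10    # ten 10-Gbit/sec channels, 100 Gbits/sec in total

# Per-wavelength provisioning: each channel fits only one 6-Gbit/sec service.
per_lambda_services = int(wavelength_gbps // service_gbps) * wavelengths

# Bonded HC RPR: services are packed onto a single 100-Gbit/sec logical pipe.
bonded_services = int((wavelength_gbps * wavelengths) // service_gbps)

print(per_lambda_services, bonded_services)   # 10 services vs. 16 services
```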

Tunable DWDM transceivers may be employed for enhanced flexibility in adding new HC RPR instances over optical networks. This enables carriers to scale the capacity of the HC packet transport network by adding links that will transit a node or add/drop within a node based on the HC RPR topology. The overlay model guarantees the HC RPR and its underlying DWDM topologies are independent of each other and can, therefore, be scaled as needed without affecting live traffic. For example, adding a link to a node on an HC ring may be as simple as reconfiguring a ROADM to add/drop a specific wavelength.

The wait for a high-speed packet-based transmission system can be a long one given the current state of the work at the different standards bodies and industry forums. A method for enabling HC packet-based transmission rates beyond 10 Gbits/sec is available today through the combination of existing, field-proven technologies such as Ethernet, RPR, and link aggregation.

HC packet transport gives service providers an alternative means of achieving transmission rates exceeding 10 and 40 Gbits/sec. It enables them to scale the transmission network to accommodate the impending increase in bandwidth demand driven by on-demand content-distribution applications, and it does so in a way that minimizes initial investment and optimizes the use of available network resources.

Hence, while the wait for high-speed Ethernet technology continues, HC packet transport is available today.

Moran Roth is senior manager of technical marketing, Luis Aguirre-Torres is director of product marketing, and Mannix O'Connor is director of corporate marketing at Corrigent Systems (www.corrigent.com).
