Ensuring service quality in optical Ethernet networks

Jan. 1, 2003

Until recently, service providers attempting to deliver next-generation multimedia services to enterprise customers in metropolitan-area networks were at a major disadvantage. The only transport methods available to ensure delivery of mission-critical voice, video, and data applications were beyond the budgets of many businesses, making a large portion of the lucrative market for Ethernet services unavailable.

But with the introduction of optical Ethernet technology, service providers finally discovered a way to enter a previously untapped market sector. Optical Ethernet-based networks enable a broad range of attractively priced transport services, making it possible for a wide variety of service providers to deliver their offerings at a price enterprises of all sizes can afford.

The technology not only offers a more cost-effective approach than legacy alternatives, but it also provides all the elements necessary for true carrier-class deployment, including the ability to create highly differentiated services and ensure the quality of those offerings. That makes it possible to offer new high-margin services, enabling providers to generate new revenues and achieve a rapid return on investment.

But before this high level of service differentiation and control becomes possible, providers must deploy the right software/hardware combination: high-performance edge devices and highly scalable core equipment paired with a comprehensive management system that enforces strict service-level-agreement (SLA) conformance. This type of management allows providers to control networked core, edge, and WDM devices. More important, it lets them manage the flow of individual services and applications, ensuring bandwidth availability for all customers and eliminating the potential for network bottlenecks. In addition, it gives providers a way to tailor services to enterprise customers' specific budget and network needs.

Such a management system must be adaptable to today's multiservice metro networks, which transport a broad mix of multimedia and IP-based applications, ranging from low-bandwidth, latency-tolerant services such as instant messaging and Web browsing to high-bandwidth, delay-sensitive applications such as video on demand and multimedia conferencing. It must also deliver enterprise communications and applications to thousands of employees, business partners, and customers spread across multiple, geographically dispersed sites.

To satisfy these diverse requirements, providers must be prepared to offer an equally varied blend of transport services, including virtual leased line, transparent LAN, backhaul, Internet access, TDM, and SAN transport capabilities. Such a service portfolio provides a powerful foundation that can meet the telecommunications requirements of a broad customer base, including thousands of price-sensitive small- and medium-enterprise customers requiring basic services and large Internet service providers and applications service providers that need customized offerings with dedicated resources.

Figure 1. Traffic management is one of three key components required to make standard enterprise Ethernet suitable for carrier networks.

A key requirement for delivery of customized, tiered services is the ability to control quality. The variety of transport services necessary in today's metro optical Ethernet networks must be able to coexist, with the assurance that all traffic types, regardless of the resources and bandwidth they consume, will be delivered to every customer's satisfaction. To deliver this quality assurance, providers must deploy a comprehensive provisioning system that enables network operators to configure services and establish customer-specific SLA attributes. In addition, traffic must be monitored on an ongoing basis, measuring characteristics such as packet loss, throughput, delay, and jitter to ensure SLAs are being met.

Optical Ethernet traffic-management solutions accomplish these tasks using a connection-oriented approach in which end-to-end connections are established across the network for each customer's traffic. Connections are created with the aid of Open Shortest Path First with traffic-engineering extensions (OSPF-TE) and Resource Reservation Protocol-Traffic Engineering (RSVP-TE): OSPF-TE automatically discovers the network topology and available link resources, while RSVP-TE signals the most efficient route across the network and configures the devices and resources along it.
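As a rough illustration of the route computation these protocols enable, here is a minimal constrained-shortest-path sketch in Python. The topology format and function name are hypothetical, not any vendor's implementation; real systems run this kind of Dijkstra search over the OSPF-TE link-state database before RSVP-TE signals the chosen path.

```python
import heapq

def cspf_route(links, src, dst, demand_mbps):
    """Constrained shortest-path-first: Dijkstra restricted to links
    with enough unreserved bandwidth for the requested connection.
    `links` maps node -> list of (neighbor, cost, available_mbps)."""
    dist = {src: 0}
    prev = {}
    heap = [(0, src)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == dst:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        for nbr, cost, avail in links.get(node, []):
            if avail < demand_mbps:
                continue  # prune links that cannot carry the demand
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(heap, (nd, nbr))
    if dst not in dist:
        return None  # no feasible path: the request must be rejected
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path))
```

Note that the bandwidth constraint can force traffic off the lowest-cost route: when the cheapest link lacks capacity, the search automatically selects a longer but feasible detour.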

These mechanisms streamline and speed the provisioning process, in contrast to alternative methods that involve manual calculation and time-consuming interaction with multiple management systems. With a few clicks of a mouse, a single network operator at a single workstation can provision services in seconds.

Connections can be defined using MPLS and virtual LAN (VLAN) tunnels. Separating traffic into individual tunnels enables a specific customer's traffic and services to be segregated from those of other subscribers as they traverse the network. This technique accomplishes two things: It provides more efficient and faster traffic delivery than alternative solutions and enables bandwidth and other resources to be reserved, eliminating the possibility that they will be consumed or degraded by other services. The individual tunneling method differs greatly from those used in many of today's Ethernet and IP networks in which services can only be guaranteed on a best-effort basis.

Once traffic has been separated into individual tunnels, it becomes easy to assign and apply unique attributes for quality and protection corresponding to individual paths. These attributes form a contract, or SLA, defining the service level according to predefined or customized values.

All parameters are variable and can be changed on command, enabling providers and customers to alter SLAs as business and market requirements develop. During the provisioning process, bandwidth is reserved in 1-1,000-Mbit/sec increments. Parameters corresponding to the following classifications are entered into the management interface:

  • Committed information rate (CIR). The transmission rate the network commits to deliver for the customer's connection under all conditions. A very low or zero CIR gives the traffic lower priority.
  • Excess information rate (EIR). Sets the burst rate or the amount over the CIR that the customer is allowed to send during a limited period of time. A low EIR designation is used for services such as voice that do not burst traffic.
  • Tunnel priority. Sets the queue number. Traffic can be assigned to one of five queues, giving it higher or lower priority relative to other traffic. Queues are also used to manage delay and jitter.
  • Class of service (CoS). Defines the customer's traffic according to five levels: normal, business-critical, TDM, delay-sensitive, and control traffic.
  • Protection level. Ensures traffic will reach its destination with little or no disruption in service in the event of a span cut or network failure. Network operators can select the restoration level that fits customer needs—from no protection to SONET-like sub-50-msec restoration. Protection schemes use MPLS and/or VLAN tagging to create backup tunnels. A hardware mechanism is used to detect link and node outages.
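The parameter set above can be pictured as a simple per-connection record. The sketch below is purely illustrative; the field names, value ranges, and protection tiers are assumptions for the example, not any vendor's actual provisioning schema.

```python
from dataclasses import dataclass

# Illustrative value sets drawn from the parameter descriptions above.
COS_LEVELS = ("normal", "business-critical", "TDM", "delay-sensitive", "control")
PROTECTION_TIERS = ("none", "best-effort", "sub-50ms")  # hypothetical tier names

@dataclass(frozen=True)
class SLA:
    cir_mbps: int     # committed information rate (guaranteed)
    eir_mbps: int     # excess information rate (burst above CIR)
    queue: int        # tunnel priority: one of five queues
    cos: str          # class of service
    protection: str   # restoration level

    def __post_init__(self):
        # Bandwidth is reserved in 1-1,000-Mbit/sec increments.
        if not 0 <= self.cir_mbps <= 1000:
            raise ValueError("CIR must be between 0 and 1,000 Mbits/sec")
        if not 1 <= self.queue <= 5:
            raise ValueError("traffic is assigned to one of five queues")
        if self.cos not in COS_LEVELS:
            raise ValueError(f"unknown class of service: {self.cos!r}")
        if self.protection not in PROTECTION_TIERS:
            raise ValueError(f"unknown protection level: {self.protection!r}")
```

Because all parameters are variable, a record like this would simply be replaced when a provider and customer renegotiate the SLA.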

Figure 2. During optical Ethernet's provisioning process, service-level agreements can be guaranteed using a set of parameters for prioritizing and classifying traffic, including committed information rate and excess information rate.

With an optical Ethernet solution, customer connections are mapped onto MPLS label-switched paths (LSPs), which are created in accordance with service-provider traffic engineering guidelines. These guidelines are determined after a careful study of the provider's network, equipment, and capacity.

The network operator assigns priorities that dictate how traffic will traverse the network. That ensures each service will always have the bandwidth and resources it needs and will not conflict with or be degraded by others. Because the service provider can offer guaranteed bandwidth and subsequently burst bandwidth contingent on availability, oversubscription can be done with confidence, ensuring maximum revenue generation without jeopardizing bandwidth delivery to all customers.
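The oversubscription logic described here reduces to a simple admission check: only the committed rates must fit within link capacity, while excess (burst) traffic may oversubscribe the link because it is served only when spare capacity exists. A hypothetical sketch:

```python
def admit(link_capacity_mbps, existing_cirs, new_cir):
    """Admission control under controlled oversubscription: the sum of
    committed rates (CIRs) must not exceed link capacity. EIR burst
    traffic is not counted, since it is carried only on spare capacity."""
    return sum(existing_cirs) + new_cir <= link_capacity_mbps
```

On a 1,000-Mbit/sec link already carrying 400- and 300-Mbit/sec commitments, a new 200-Mbit/sec CIR would be admitted, while a 400-Mbit/sec CIR would be refused, even though the total of CIRs plus EIRs on the link may far exceed 1,000 Mbits/sec.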

Also essential to the TE process is the establishment of restoration paths per customer connection that not only ensure SLA enforcement, but also guarantee services will continue in the event of a network failure. Once the path has been assigned, TE can maintain optimum traffic routing and service-level conformance.

TE involves much more than careful traffic network design and capacity planning. It also monitors the network on an ongoing basis to ensure peak performance of all aggregated services and customer connections. When certain conditions are detected, the optical Ethernet management system makes automatic adjustments to ensure that all traffic continues along to its destination without delay, unhampered by network bottlenecks or other problems that can affect performance. For example, after detecting a service disruption, the system reroutes traffic along the prescribed restoration path in accordance with the customer's SLA parameter for protection.

In addition, the network is managed in relation to each customer's SLA. As live traffic traverses the network, the optical Ethernet system checks SLA CIR and EIR values using separate dual token-bucket policing mechanisms for each connection. During this process, traffic is allowed or discarded, depending on established parameters. For instance, as packets exit the ingress node, they are held in queues corresponding to the traffic priority established for the SLA. If a connection is below the CIR, the packets always will be forwarded and will not be impacted during periods of peak network congestion. But if the CIR is 0, the customer's traffic always will be in excess and therefore handled on a best-effort basis.
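A dual token-bucket policer of the kind described can be sketched as follows. This is a simplified software illustration, loosely modeled on two-rate policing; the class name and parameters are hypothetical, and real equipment performs this metering in line-rate hardware. Traffic within the CIR bucket is guaranteed ("green"), traffic within the EIR bucket is carried best-effort ("yellow"), and the remainder is discarded ("red").

```python
class DualTokenBucketPolicer:
    """Per-connection CIR/EIR policing sketch. Rates are in
    bytes/sec; burst sizes (bucket depths) are in bytes."""

    def __init__(self, cir, cbs, eir, ebs):
        self.cir, self.cbs = cir, cbs  # committed rate and burst size
        self.eir, self.ebs = eir, ebs  # excess rate and burst size
        self.c_tokens = cbs            # committed bucket starts full
        self.e_tokens = ebs            # excess bucket starts full
        self.last = 0.0

    def police(self, size, now):
        """Classify one packet of `size` bytes arriving at time `now`."""
        elapsed = now - self.last
        self.last = now
        # Refill both buckets independently, capped at their burst sizes.
        self.c_tokens = min(self.cbs, self.c_tokens + self.cir * elapsed)
        self.e_tokens = min(self.ebs, self.e_tokens + self.eir * elapsed)
        if size <= self.c_tokens:
            self.c_tokens -= size
            return "green"   # within CIR: always forwarded
        if size <= self.e_tokens:
            self.e_tokens -= size
            return "yellow"  # within EIR burst allowance: best effort
        return "red"         # in excess of both: discarded
```

Note how this matches the behavior described above: a connection with CIR of 0 has an empty committed bucket, so all of its traffic is "yellow" at best and is handled on a best-effort basis.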

After the connection and SLA have been established, traffic can be delivered in accordance with customer-specific quality of service guarantees. VLAN tunnels can be created for transport locally, placing traffic that has been engineered according to the customer's SLA within a tunnel, then sending it to the core network. At the core, the VLAN tunnel is mapped to an MPLS LSP. After leaving the core, the packet is converted back into a VLAN tunnel and sent to the egress device.
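The VLAN-to-LSP mapping at the core boundary can be sketched as a pair of table lookups. The tables, labels, and field names below are illustrative only; in practice, the label bindings are distributed by RSVP-TE signaling rather than configured statically.

```python
# Hypothetical bindings between customer VLAN tunnels and core LSP labels.
VLAN_TO_LSP = {100: 7001, 200: 7002}
LSP_TO_VLAN = {label: vlan for vlan, label in VLAN_TO_LSP.items()}

def enter_core(frame):
    """Core ingress: swap the customer's VLAN tunnel identifier for
    the MPLS label of the LSP provisioned for that connection."""
    frame = dict(frame)  # leave the caller's frame untouched
    frame["mpls_label"] = VLAN_TO_LSP[frame.pop("vlan")]
    return frame

def exit_core(packet):
    """Core egress: pop the MPLS label and restore the VLAN tunnel
    for delivery to the egress device."""
    packet = dict(packet)
    packet["vlan"] = LSP_TO_VLAN[packet.pop("mpls_label")]
    return packet
```

A frame entering with VLAN 100 is carried across the core under label 7001 and emerges with its original VLAN tunnel restored, so the SLA engineered at the edge follows the traffic end to end.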

The ability to engineer and manage traffic is an essential requirement of any next-generation infrastructure. Today's service networks are becoming increasingly complex as they grow to support a broader array of applications, technologies, and vendor solutions as well as higher speeds and capacities. As a result, providers need a management system that assures services do not become degraded as they compete for bandwidth. They also need the capacity to create highly defined, tiered services with guaranteed quality that will attract new clients and satisfy existing customers. An optical Ethernet management solution offers an unprecedented level of control and configuration flexibility.

Nan Chen is director of product marketing and standards at Atrica Corp. (Santa Clara, CA) and founding president and a board member of the Metro Ethernet Forum. He can be reached via the company's Website, www.atrica.com.
