QoS challenges, opportunities for next-generation optical Internet

April 1, 2001
Adding QoS to optical networks

Several alternatives are being offered to provide quality of service for today's and tomorrow's networks.

BY GREG TREECE, DAVID STEINMAN, and BEN A. BITTLE, VHB Technologies Inc.

In the changing global e-commerce economy, corporations need to provide services and solutions to their customer base as expeditiously and securely as possible. Users are demanding more features and options in their everyday applications and networks.

The convergence of voice, video, and data (VVD) into a single network infrastructure is creating new demands and challenges for network vendors, carriers, and service providers. Network bandwidth is growing exponentially, from the home, where DSL and cable-modem connections are in high demand, to businesses, where LAN connections are growing to gigabit speeds.

As Internet demands grow, driven by new applications supporting VVD and ever-expanding e-commerce in the global marketplace, many standards bodies and groups are providing the standards that will allow the current Internet and the next-generation optical Internet to support these sophisticated applications. The existing Internet backbones currently sit at OC-48c/OC-192c and will undergo another bandwidth upgrade to OC-768c, with pilot projects beginning this year to accommodate this exponential future growth. Figure 1 outlines the bandwidth growth and supporting technologies.

The challenges of implementing quality of service (QoS) in such an environment derive from how each routing methodology handles resources. A QoS scheme for one data type may impose heavy burdens on other traffic types. QoS can be difficult to manage because applications often cannot specify what resources they need. A well-defined set of rules is needed for QoS; until such a set exists, QoS will be satisfied temporarily by massive over-provisioning of bandwidth. QoS services will also need to be widely deployed, with monitoring hooks installed for pricing, billing, and customer monitoring. QoS will also have to penetrate virtual private networks (VPNs), network address translation, dynamic host configuration protocol, and other dynamic networks.

One of the easiest ways to implement QoS within any network infrastructure is through content-distribution/content-intelligent networks. Today's Internet is reshaping how information is delivered to a client's desktop, from static passive delivery to intelligent content delivery based on rules, network policies, and specific user or company identities. The static passive delivery methods are built using Layer 2/3 switching technology focused on predictable network and server utilization.
Figure 1. Growth in bandwidth demand has been a fact of life for carriers in recent years. A variety of technologies has emerged to meet this demand, with more solutions on the way.

In today's aggressive e-commerce world, that is simply not good enough. The phrases "Please wait, server not responding" and "Server not found" will cost any company e-commerce dollars. In e-commerce, speed rules: survivors must deliver content and information to clients faster than the competition. Today's content-distribution/content-intelligent delivery methods intelligently distribute traffic based on the content within the packet. These purpose-built devices provide intelligent switching based on all seven layers, instead of just Layers 2/3 as in legacy switches.

Layer 4 switches direct traffic based on Internet Protocol (IP) destination addresses and HTTP port-80 traffic. That means all HTTP requests are routed either to content-caching devices or directly to the appropriate Web server. This method can introduce processing overhead when requests are switched directly to Web servers, possibly resulting in slower response times: "Please wait while we load the information for you."
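The limitation of the port-only view can be sketched in a few lines. This is an illustrative sketch, not any vendor's product logic; the pool names are invented. At Layer 4, every HTTP request looks identical, so the switch cannot distinguish one port-80 request from another:

```python
def l4_dispatch(dst_port: int) -> str:
    """Pick a server pool using only transport-layer information.

    Hypothetical sketch: at Layer 4, all HTTP traffic is
    indistinguishable, so every port-80 request lands in one pool.
    """
    if dst_port == 80:
        return "web-pool"      # all HTTP, static or dynamic alike
    return "default-pool"      # everything else
```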

Layers 5-7 content-aware switches look deep into the HTTP port-80 traffic, down to the Uniform Resource Locator (URL) and associated cookies. Content-intelligent networking technology looks deep into the same HTTP port-80 traffic but gives Web managers the ability to apply intelligent rules to manage their Websites. That allows Websites to forward information to designated servers based on specific URL information or cookies, thus enhancing delivery and decreasing response times to the client.
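By contrast, a content-aware dispatcher can branch on the URL path and cookie values. A minimal sketch under assumed rules (the pool names and the "tier" cookie are invented, not any vendor's rule syntax):

```python
def l7_dispatch(url_path: str, cookies: dict) -> str:
    """Choose a server pool from Layers 5-7 information (URL and cookies)."""
    if url_path.startswith("/images/"):
        return "cache-pool"        # static objects go to caching devices
    if cookies.get("tier") == "gold":
        return "premium-pool"      # policy rule keyed on a cookie value
    return "web-pool"              # default Web-server pool
```

The same request that a Layer 4 switch could only send to a generic pool can now be steered by policy, which is the enhanced delivery the text describes.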

Several challenges exist for content-aware/content-intelligent vendors. The overall goal is to provide a technology platform that looks deep into every packet at wire-speed versus a small subset or portion of the packet. Many products available today can only read 64, 128, or 256 bytes into each packet. Content-aware technology vendors currently shipping products include Alteon, Cisco Systems, F5, Extreme Networks, Foundry, and VHB Technologies.

Several standards-based protocols have emerged from Internet standards bodies such as the Internet Engineering Task Force (IETF) to provide the QoS definition required by next-generation applications. QoS offers an opportunity for Internet service providers (ISPs) and carriers to provide additional service-level agreements (SLAs) to their existing and future customer base. These emerging QoS protocols include the Resource Reservation Protocol (RSVP), differentiated services (DiffServ), and Multiprotocol Label Switching (MPLS).

RSVP is a control protocol that enables applications to obtain QoS for their data. It is not a routing protocol; it works in conjunction with routing protocols, reserving resources along the routes they calculate. RSVP originated at the University of Southern California's Information Sciences Institute and the Xerox Palo Alto Research Center. The IETF is charged with evolving the protocol through specific RSVP working groups.

RSVP is a Layer 3 protocol, in which applications request end-to-end, per-conversation resources to ensure QoS. These resources typically reside in routers. It is anticipated that RSVP will be used for very sensitive applications requiring resource reservation and admission control. Voice over IP would be one such application. One potential hurdle for achieving RSVP QoS is that it requires total network deployment and control to realize guaranteed service. Broad support of RSVP has been waning because of concerns over its high degree of complexity and lack of scalability.
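The per-conversation reservation and admission-control idea behind RSVP can be illustrated with a toy admission controller. Real RSVP signaling (PATH/RESV messages, soft state in every router along the path) is far richer than this sketch, and the capacity figures are invented:

```python
class AdmissionControl:
    """Toy RSVP-style admission control on a single link."""

    def __init__(self, capacity_kbps: int):
        self.capacity_kbps = capacity_kbps
        self.reserved = {}   # flow id -> reserved kbps

    def reserve(self, flow_id: str, kbps: int) -> bool:
        """Admit the flow only if the link can still honor all reservations."""
        if sum(self.reserved.values()) + kbps > self.capacity_kbps:
            return False     # admission control rejects the request
        self.reserved[flow_id] = kbps
        return True

    def release(self, flow_id: str) -> None:
        """Tear down a reservation, freeing its bandwidth."""
        self.reserved.pop(flow_id, None)
```

Because this state must exist per flow in every router on the path, the scaling concern mentioned above follows directly: core routers carrying millions of flows would hold millions of reservations.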
Figure 2. The 32-bit Multiprotocol Label Switching (MPLS) header falls between the data link-layer information and network-layer data. MPLS can significantly aid quality-of-service provisioning.

Early adopters' experience with RSVP's complexity has cleared the path for DiffServ, an IETF protocol that provides a scalable, lightweight foundation for QoS. DiffServ attempts to overcome the end-to-end hurdles of RSVP by tagging packets within existing packet headers. Where RSVP is flow-based and does not scale, DiffServ is a more scalable traffic-engineering mechanism.

The goal of DiffServ is to push complexity to the network boundary (application hosts and leaf and edge routers, which police and "mark" particular local traffic based on filter specifications). Another goal of DiffServ is to separate control policy from the support mechanisms. The argument is that since the network boundary has a small number of flows, it should be able to perform detailed operations with little impact, leaving the busy core routers to handle their larger number of flows. By defining only a few per-hop behaviors (PHBs) separate from the control policy, DiffServ achieves a high degree of flexibility. Its PHBs are relatively stable, and the control policy can be modified independently as needed.

The DiffServ theory is based on a simple model in which traffic entering a network is classified and conditioned at the edge of the network and assigned to different behavioral aggregates. Each aggregate is identified by a single DiffServ code-point. Within the core of the network infrastructure, packets are forwarded according to the PHB associated with the DiffServ code-point. A DiffServ domain consists of a contiguous set of nodes that operate with a common service-provisioning policy and a set of PHB groups implemented on each node.
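The edge classification step amounts to a mapping from flow attributes to a 6-bit DiffServ code-point. In the sketch below, the EF (46) and AF11 (10) values are the standard code points, but the port-based rules themselves are invented for illustration; real classifiers match full filter specifications, not just a destination port:

```python
def classify(dst_port: int) -> int:
    """Edge-router marking: map a flow to a 6-bit DiffServ code-point.

    Illustrative policy only; the EF and AF11 code points are
    standard, but these port-to-PHB rules are assumptions.
    """
    if dst_port == 5060:   # voice signaling, for example
        return 46          # EF: expedited-forwarding PHB
    if dst_port == 80:
        return 10          # AF11: assured-forwarding PHB
    return 0               # default PHB (best effort)
```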

MPLS, although not a QoS protocol, can be groomed into a QoS tool. Originally conceived as a means of improving the forwarding speed of routers, MPLS can be used in QoS for information-flow engineering. Label switching provides a way to route data based on its type, as identified in the label.

Where DiffServ is implemented by use of the type-of-service byte in the IP header, MPLS QoS is achieved by using the class-of-service (CoS) field of the MPLS header. An MPLS cloud can be embedded in a DiffServ configuration to achieve a lower level of routing QoS control. In such a configuration, when a DiffServ packet enters the MPLS cloud, the MPLS labeling scheme takes over, routing efficiently within the cloud. Upon exiting the cloud, the DiffServ QoS hook is still in place and proceeds in its regular manner.
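One common (though not mandated) convention for carrying a DiffServ marking into an MPLS cloud is to copy the code-point's three class-selector bits into the 3-bit CoS field, as this small sketch assumes:

```python
def dscp_to_cos(dscp: int) -> int:
    """Map a 6-bit DiffServ code-point to the 3-bit MPLS CoS field.

    Assumes the class-selector convention: keep the DSCP's top 3 bits.
    Other operator-defined mappings are equally valid.
    """
    return (dscp >> 3) & 0b111
```

Under this convention, EF traffic (DSCP 46) enters the cloud with CoS 5, and best-effort traffic (DSCP 0) with CoS 0, so the relative priority survives the transit.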

MPLS applied to IP networks provides QoS where no circuit statistics or traffic matrices exist. By creating label-switched paths (LSPs) from a source to its destinations, the information flows can be quantified for better flow management. The LSPs are constructed using a signaling or label-distribution protocol (LDP) such as RSVP-TE or CR-LDP, or using Simple Network Management Protocol or common open-policy services.

MPLS packets have a header slipped in between the IP header and the link-layer header, which label-switching routers (LSRs) can examine and act on. Upon receipt of an MPLS packet, the LSR extracts the label, performs a lookup in a forwarding table, updates the label, and sends the packet on its way.
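That per-hop operation reduces to a table lookup and a label swap. A minimal sketch, with an invented label-forwarding table (the interface names are illustrative):

```python
def lsr_forward(label_in: int, lfib: dict) -> tuple:
    """One label-switching hop: look up the incoming label, swap it,
    and return the outgoing label and next-hop interface."""
    label_out, interface = lfib[label_in]
    return label_out, interface

# Hypothetical label forwarding table: in-label -> (out-label, interface)
LFIB = {100: (200, "ge-0/0/1"), 101: (300, "ge-0/0/2")}
```

Because the lookup is an exact match on a fixed-length label rather than a longest-prefix match on an IP address, this is the forwarding-speed gain MPLS was originally conceived for.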

With proper planning of topology and capacity, a network can be designed to route information efficiently based on current capacities (via LSP statistics), instead of relying on load-blind shortest-path schemes that can cause congestion on those paths. A layer of backup LSPs can also provide a rollover plan in the event of a link failure. In addition, MPLS can be used to define explicit routes for specific flow types.

MPLS can provide better price/performance ratios in network routing, improved scalability of the network layer, and greater flexibility in routing services. MPLS in general has no QoS specifics built-in, but can be used as a tool for providing a higher QoS. It is in the planning and configuring of LSPs with regard to QoS constraints that the final QoS requirements are met.

Figure 2 illustrates the MPLS header structure. The 20-bit field is the MPLS label. The CoS value affects queuing and discard algorithms within the MPLS cloud. The stack bit supports hierarchical labels. And the time-to-live (TTL) field is used just like the TTL in the IP header. The entire 32 bits is "shimmed" into the packet after the Layer 2 header and before the IP header.
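The 32-bit shim described above can be packed and unpacked directly from its field widths (label 20 bits, CoS 3, stack 1, TTL 8). This sketch simply follows that layout:

```python
import struct

def pack_mpls(label: int, cos: int, stack: int, ttl: int) -> bytes:
    """Pack label(20) | CoS(3) | S(1) | TTL(8) into 4 network-order bytes."""
    word = (label << 12) | (cos << 9) | (stack << 8) | ttl
    return struct.pack("!I", word)

def unpack_mpls(shim: bytes) -> tuple:
    """Reverse of pack_mpls: recover (label, cos, stack, ttl)."""
    (word,) = struct.unpack("!I", shim)
    return ((word >> 12) & 0xFFFFF,  # 20-bit label
            (word >> 9) & 0x7,       # 3-bit CoS
            (word >> 8) & 0x1,       # 1-bit stack flag
            word & 0xFF)             # 8-bit TTL
```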

The Internet2 movement seeks to enable next-generation applications and recreate the leading-edge research network, which will allow a full technology transfer to the commercial Internet. It comprises 135 member universities, 44 commercial corporations, and numerous applications and engineering initiatives.

The efforts toward Internet2 include a focus on QoS mechanics that will support measurability by the user on an end system and by operators on transit networks and provide flexible administrative tools for reservation requests, admissions control, accounting and billing, and monitoring flows. The mechanisms for QoS are being defined to interoperate across different vendor products and administrative domains and are being evaluated to assure they will not starve normal information flows of resources.

There are several approaches to providing QoS solutions for the next wave of high-bandwidth connections. One approach, the over-provisioning theory, says that if there is an excessive amount of bandwidth, why should there be any concern for QoS? With no bandwidth contention, everything should arrive on time. That seems to be the typical approach today.

Another approach is to use the underlying QoS built into the existing protocols and systems since DWDM is transparent to their implementation. In other words, let's continue with the existing QoS structure currently in use, and when we transform to the higher bandwidths, our QoS policies are already in place.

A third approach under consideration is to standardize an assignment or allocation of wavelengths to certain applications or information flows. A good way to think of that is a color or color group (wavelength) associated with certain criteria or policies so that it is predetermined to have a reduced latency. This approach will require a significant increase of intelligence and flexibility in the network devices. No longer will simple header labeling be sufficient to provide prioritization to data; content-intelligent devices must be deployed to include data content into the decision-making process. The next generation should be an exciting and dynamic era for future computing and networking.

Regardless, as bandwidth rates increase and their associated costs decrease, service providers must continue to offer new features, functions, and benefits to their customers to remain competitive.

Greg Treece is senior software engineer, David Steinman is director of federal sales, and Ben A. Bittle is senior vice president of product development at VHB Technologies Inc. (Richardson, TX). They can be reached at the company's Website, www.vhbtech.com.